Withdrawal Design Interactive Model with Confounds
WHAT IS IT?
This model is an extension of the Withdrawal Design Interactive Model. In this extension, we have added two switches, CONFOUND1? and CONFOUND2?, to represent confounds that might affect results. The original description of the model is provided below, and a more complete description of the CONFOUND? switches is provided under the "HOW TO USE IT" section.
This model is designed to be used as a "hands-on" instructional supplement for teaching students to analyze the effects of an independent variable using a withdrawal design. In the model, the agent engages in two behaviors according to separate schedules of reinforcement. Behaviors are plotted session by session, allowing students to use visual inspection to determine when to move to the next condition (set of reinforcement schedules).
For a comprehensive set of instructions on how to use the model, see https://app.box.com/s/wmtp0hs41st2jb5sp4dohq0wdtn2x2jl
REINFORCEMENT
This model includes only one type of reinforcement schedule: the fixed-ratio schedule. In fixed-ratio schedules, a reinforcer is delivered after a specific number of responses. For example, a vending machine that only accepts quarters and vends items for 1 dollar could be said to reinforce inserting quarters on a fixed-ratio 4 schedule. In the simulation, setting ratio1 to 4 means that every fourth time the agent engages in behavior 1, reinforcement will be delivered.
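In NetLogo terms, a fixed-ratio check can be written with the mod operator. The sketch below assumes the model's RATIO1 slider and the behavior1 tally described under HOW IT WORKS (the actual procedure also consults the EXTINCTION1? and CONFOUND1? switches):

  behave1                                      ;; the agent performs behavior 1, incrementing behavior1
  if behavior1 mod ratio1 = 0 [ reinforce1 ]   ;; every ratio1-th response is reinforced

With ratio1 set to 4, responses 4, 8, 12, and so on are reinforced.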
In the simulation, behavior 1 is represented by drawing an orange square, and behavior 2 is represented by drawing a violet circle. Reinforcement is represented by the shape being filled in and (if the SOUND? switch is available and on) an audible tone (a note from a grand acoustic piano for behavior 1, and a note from a glockenspiel for behavior 2).
HOW IT WORKS
An agent is randomly seeded on the screen and given the following properties:
behavior1 - a running tally of the number of times the agent draws a square
behavior2 - a running tally of the number of times the agent draws a circle
reinforcement1 - a running tally of the number of reinforcements that have followed behavior1
reinforcement2 - a running tally of the number of reinforcements that have followed behavior2
b1set - a tally of behavior1 that resets each new session
b2set - a tally of behavior2 that resets each new session
After the investigator adjusts the sliders to select the session-length, the ratio schedules, and the extinction settings, the investigator is ready to click the SETUP button and GO.
Each tick, the agent engages in behavior1 or behavior2. Which behavior the agent engages in is based upon the ratio of reinforcement for engaging in each of the behaviors (see NETLOGO FEATURES below). If both EXTINCTION? switches are on (and both CONFOUND? switches are off), neither behavior is engaged in.
When an agent draws one of the shapes, its behavior is reinforced based upon the selected schedule. Reinforcement is increased by 1 if the count of the behavior is divisible by the ratio.
At the beginning of each new session, b1set and b2set are reset to 0, and reinforcement1 and reinforcement2 are both set to 1.
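A minimal sketch of that session bookkeeping, based on the GO procedure listed at the end of this page:

  ask turtles [ set b1set 0  set b2set 0 ]                     ;; per-session tallies start over
  ;; ... session-length ticks of behavior and reinforcement ...
  ask turtles [ set reinforcement1 1  set reinforcement2 1 ]   ;; reset the reinforcement history so the agent can "discriminate" the new schedules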
HOW TO USE IT
SESSION-LENGTH: Use this slider to set the session-length (how many ticks the session will last).
RATIO sliders: Set these sliders to determine the schedule of reinforcement for behavior1 and behavior2. Remember, the lower the number of the ratio, the more often the behavior will be followed by reinforcement. A ratio of 1 means every behavior will be reinforced.
EXTINCTION switches: Setting these to "ON" will prevent reinforcement from being delivered, regardless of the RATIO slider setting.
CONFOUND switches: Setting these to "ON" overrides the RATIO and EXTINCTION switches. Setting CONFOUND1? to ON results in reinforcement being delivered for every occurrence of behavior1; setting CONFOUND2? to ON results in reinforcement being delivered for every occurrence of behavior2. (A short code sketch of how this works appears after the SOUND? switch description below.)
SETUP: resets the simulation.
GO: advances the simulation one session at a time.
EVER-RUN?: Setting this switch to ON keeps the simulation running continuously until the switch is turned off or GO is clicked again.
SOUND?: If available, this switch allows you to turn sound on or off.
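As referenced under the CONFOUND switches above, here is a sketch of how CONFOUND1? overrides the other settings for behavior1 (excerpted from the GO procedure listed at the end of this page; the behavior2 branch is symmetrical):

  behave1
  if behavior1 mod ratio1 = 0 [ if not extinction1? or confound1? [ reinforce1 ] ]   ;; scheduled reinforcement, forced on by the confound even under extinction
  if confound1? and behavior1 mod ratio1 != 0 [ reinforce1 ]                         ;; the confound also reinforces responses the schedule would skip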
WHAT TO LOOK FOR AND TO TRY
There are two main ways to use the model.
Point by point analysis. To use the model this way, make sure the EVER-RUN? switch is set to OFF. Clicking GO will advance the simulation one session. The top plot shows the frequency of selections of behavior1 and behavior2 per session. The smaller plots show the proportion of selections of each behavior, respectively, per session. Visual inspection can then be used to decide whether to continue with the same schedules of reinforcement.
Running multiple sessions at once. To use the model this way, turn the EVER-RUN? switch on and let the simulation continue to run until you are ready to make a change. Then click the EVER-RUN? switch off.
To create an ABAB withdrawal design, try the following:
- Set RATIO1 and RATIO2 to 1.
- Set EXTINCTION1? to OFF and EXTINCTION2? to ON. This is Condition A.
- Set EVER-RUN? to OFF.
- Click SETUP.
- Click GO at least three times.
- Determine if responding is stable enough to change schedules. If not, continue clicking GO until it is.
- Once responding is stable, change the schedule by turning EXTINCTION1? ON and EXTINCTION2? OFF. This is Condition B.
- Click GO at least three times. Analyze. Keep clicking GO until you are ready to move on.
- Reset the schedule to Condition A (EXTINCTION1? OFF, EXTINCTION2? ON).
- Click GO until responding is steady.
- Return to Condition B (EXTINCTION1? ON, EXTINCTION2? OFF).
- Click GO until stable.
Following the instructions above should result in an ABAB withdrawal design that is easy to see in the top plot.
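If you would rather change conditions from the Command Center than by clicking switches, a hypothetical helper such as the one below could be added to the Code tab (set-condition is not part of the model; it simply flips the two EXTINCTION? switches):

  to set-condition [ phase ]   ;; hypothetical helper, not included in the model
    ifelse phase = "A"
      [ set extinction1? false  set extinction2? true  ]   ;; Condition A: behavior1 is reinforced
      [ set extinction1? true   set extinction2? false ]   ;; Condition B: behavior2 is reinforced
  end

For example, typing set-condition "B" and then clicking GO a few times reproduces the switch flips described above.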
Setting the RATIOs to 1 and adjusting the schedules by turning the EXTINCTION? switches on and off results in the most stable and differentiated levels of behavior. If you wish to investigate more variable behavior, keep the EXTINCTION? switches off and try different ratio values. Keep in mind that if you set the ratios very high there will be few opportunities for reinforcement and responding will be fairly random. For this reason, the ratio sliders are set to range from 1 to 10.
EXTENDING THE MODEL
Some suggestions for extending the model include the following:
adding in additional schedules of reinforcement (a rough sketch of one possibility appears after this list)
allowing more precise user-control over variability, level, and trend
simulating other types of single-subject designs
making the effect of the confounds adjustable
simulating different types of interventions in addition to reinforcement
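For instance, here is a rough sketch of the first suggestion, a variable-ratio schedule that reinforces on average every ratio1-th response (this is an assumption about how such an extension might look, not code from the model):

  ;; hypothetical variable-ratio check for behavior1
  if (random ratio1) = 0 [ if not extinction1? [ reinforce1 ] ]

Because random ratio1 reports an integer from 0 to ratio1 - 1, the check succeeds with probability 1 / ratio1.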
NETLOGO FEATURES
The logic for determining which behavior the agent engages in is as follows:
...
ask turtles [
  ifelse extinction1? and extinction2? [ set b1set b1set  set b2set b2set ]
  [
    ifelse random-float 1 < reinforcement-ratio
    [
      behave1
      if behavior1 mod ratio1 = 0 [ if not extinction1? [ reinforce1 ] ]
    ]
    [
      behave2
      if behavior2 mod ratio2 = 0 [ if not extinction2? [ reinforce2 ] ]
    ]
  ]
]
...
where the "reinforcement-ratio" is determined by dividing reinforcement1 by the sum of reinforcement1 and reinforcement2.
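For example, if reinforcement1 is 6 and reinforcement2 is 2, then reinforcement-ratio = 6 / (6 + 2) = 0.75, so on each tick the agent engages in behavior1 with probability 0.75 and behavior2 with probability 0.25.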
RELATED MODELS
A version of this model without confounds can be found at https://app.box.com/s/y8z1yp9a6z9cbushw0un4z0hvlys4x02 and on the NetLogo Commons at http://modelingcommons.org/browse/onemodel/5954#modeltabsbrowseinfo. Subsequent models for use in interactive exercises exploring single-subject experimental designs will be linked here as they are developed.
CREDITS AND REFERENCES
This model can be downloaded from the web at https://app.box.com/s/74azchtqhr9n2n6k7gjfm594jiidrk45
It is also available from the NetLogo Commons at http://modelingcommons.org/browse/onemodel/5958#modeltabsbrowseinfo
COPYRIGHT AND LICENSE
Copyright 2019 Allen Karsina
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. To view a copy of the license, visit https://creativecommons.org/licenses/by-nc-sa/3.0.
MODEL CODE
turtles-own [           ;; these variables reflect selection and reinforcement
  behavior1             ;; counts all moves to black patches from any patch
  behavior2             ;; counts all moves to white patches from any patch
  reinforcement1        ;; tracks all reinforcement for moves to black patches
  reinforcement2        ;; tracks all reinforcement for moves to white patches
  b1set                 ;; depicts sets of 10 trials/time units
  b2set                 ;; depicts sets of 10 trials/time units
  reinforcement-ratio   ;; this represents reinforcement1/(reinforcement1 + reinforcement2)
]

to setup
  clear-all
  setup-turtles
  reset-ticks
end

to setup-turtles
  create-turtles 1   ;; only 1 agent in the base model
  ask turtles [
    setxy random-xcor random-ycor   ;; each turtle starts on a random patch
    set shape "face neutral"
    set size 6
    set color yellow
    ;; set initial parameters
    set behavior1 0
    set reinforcement1 1
    set behavior2 0
    set reinforcement2 1
    set reinforcement-ratio (reinforcement1 / (reinforcement1 + reinforcement2))
  ]
end

to go
  ask turtles [
    set b1set 0
    set b2set 0
  ]
  repeat session-length [
    cd
    ask turtles [
      set shape "face neutral"
      ifelse extinction1? and extinction2?
      [
        if confound1? [ behave1 reinforce1 ]
        if confound2? and not confound1? [ behave2 reinforce2 ]
      ]
      [
        ifelse random-float 1 < reinforcement-ratio
        [
          behave1
          if behavior1 mod ratio1 = 0 [ if not extinction1? or confound1? [ reinforce1 ] ]
          if confound1? and behavior1 mod ratio1 != 0 [ reinforce1 ]
        ]
        [
          behave2
          if behavior2 mod ratio2 = 0 [ if not extinction2? or confound2? [ reinforce2 ] ]
          if confound2? and behavior2 mod ratio2 != 0 [ reinforce2 ]
        ]
      ]
    ]
    ask turtles [
      ;; if R1 + R2 do not equal 0, then
      ifelse reinforcement1 + reinforcement2 != 0
      ;; set reinforcement-ratio equal to R1/(R1+R2) adjusted for history
      [ set reinforcement-ratio (reinforcement1) / (reinforcement1 + reinforcement2) ]
      ;; otherwise set reinforcement-ratio to chance
      [ set reinforcement-ratio 0.5 ]
    ]
    tick
  ]
  ;; reset reinforcement ratios at end of session to help "discriminate" new schedules
  ask turtles [
    set reinforcement1 1
    set reinforcement2 1
  ]
  if not ever-run? [ stop ]   ;; stops simulation at intervals specified by session-length
  ;; if ever-run? = false and ticks mod session-length = 0 [stop]
end

to behave1   ;; executes behavior1 and updates settings for behavior1 and b1set
  pu
  set color orange
  rt random 360
  fd random 20
  pd
  repeat 4 [ fd 10 rt 90 ]
  set behavior1 behavior1 + 1
  set b1set b1set + 1
end

to behave2   ;; executes behavior2 and updates settings for behavior2 and b2set
  pu
  set color violet
  rt random 360
  fd random 20
  pd
  repeat 9 [ fd 12 left 40 ]
  set behavior2 behavior2 + 1
  set b2set b2set + 1
end

to reinforce1
  set reinforcement1 reinforcement1 + 1
  set shape "face happy"
  let n 10
  loop [
    set n n - 0.25
    repeat 4 [ fd n rt 90 ]
    if n = 0.25 [ stop ]
  ]
end

to reinforce2
  set reinforcement2 reinforcement2 + 1
  set shape "face happy"
  pd
  let n 12
  loop [
    set n n - 0.25
    repeat 9 [ fd n left 40 ]
    if n = 0.25 [ stop ]
  ]
end