Cyberslug 2.0

Written in NetLogo 6.0.1.

## WHAT IS IT?

Cyberslug™ reproduces the approach-avoidance decisions of the predatory sea slug Pleurobranchaea californica. It applies relations discovered in the real animal's nervous system that underlie its decisions in foraging for prey; those decisions are based on motivation and reward learning. The approach-avoidance decision is basic to foraging, as well as to most other economic behavior.

## HOW IT WORKS

Approach-avoidance choice is organized around appetitive state: how the Cyberslug agent feels in terms of hunger, the savory qualities of prey odor, and what it remembers about earlier experience with that prey. From moment to moment the agent sums sensation, motivation (satiation/hunger), and memory into its appetitive state, and appetitive state controls the switch between approach and avoidance turn responses to prey.
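
Concretely, the model combines these terms every tick in its go procedure; the lines below are excerpted from the code further down (all names are the model's own):

    set Satiation 1 / ((1 + 0.7 * exp(-4 * Nutrition + 2)) ^ (2))    ; motivation: satiation rises with nutritional state
    set Incentive Reward - Reward_neg                                 ; sensation plus memory: betaine sensation and learned prey-odor values, rewarding minus aversive
    set App_State 0.01 + (1 / (1 + exp(- (Incentive * 0.6) + 10 * Satiation)) + 0.1 * ((App_State_Switch - 1) * 0.5))   ; rises with incentive, falls with satiation
    set App_State_Switch (((-2 / (1 + exp(-100 * (App_State - 0.245)))) + 1))   ; thresholded switch near App_State = 0.245 that selects approach vs. avoidance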

The difference in odor sensation between two sensors on Cyberslug's head is used to estimate the probable location of prey for the turn response. The sensors respond to betaine, an odor representing the energy value of the prey (like the taste of sugar to the human tongue), and to the learned identifying odors of Hermi and Flab.
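
In the code, the left-right odor differences are folded into Somatic_Map, the estimate of odor source direction, which then sets the turn (again excerpted from the go procedure):

    let H (sns_hermi - sns_flab)   ; which learned odor currently dominates
    let F (sns_flab - sns_hermi)
    ; left-right sensor differences for each odor, gated by which odor dominates
    set Somatic_Map (- ((sns_flab_left - sns_flab_right) / (1 + exp (-50 * F)) + (sns_hermi_left - sns_hermi_right) / (1 + exp (-50 * H))))
    ; turn toward or away from the estimated source, depending on the sign of App_State_Switch
    set turn-angle (2 * App_State_Switch) / (1 + exp (3 * Somatic_Map)) - App_State_Switch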

## HOW TO USE IT

The user can populate the world with valuable and dangerous prey, Hermis and Flabs respectively, using the slider bars. The simulation starts with 3 Hermis and 10 Flabs. Cyberslug learns to prefer or avoid the different prey.

The progress of learning is shown in the interface displays V_hermi and V_flab. Other displays show important quantities used in calculating the decision: the nutritional and satiation states, the summed appetitive state (App_State), and the positive and negative rewards and incentives sensed for prey. There is also a display for Cyberslug's estimate of the odor source direction (Somatic_Map). Other displays on the left show the strengths of the three odors sensed at the two sensors and the averaged strengths of the odors (sns_odor).

Another prey animal, Faux-Flab, can be introduced; it carries the odor of the noxious Flab but is not dangerous. It is a "Batesian mimic": a harmless species that has evolved to imitate the warning signals a harmful species directs at a predator of them both. The mimic may receive protection if the predator learns from the real Flab that the odor can signal danger. Three displays on the right record the numbers of each prey type eaten.
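
The mimic's effect on learning can be seen in the code's prey-consumption section: eating a real Flab moves the learned value Vf toward lambda_flab, while eating a Faux-Flab (same odor, no harm) moves Vf back toward 0, an extinction mechanism that determines how much protection the mimic receives:

    ; real Flab eaten: Rescorla-Wagner update toward lambda_flab
    set delta_Vf alpha_flab * beta_flab * (lambda_flab - Vf)
    set Vf Vf + delta_Vf
    ; Faux-Flab eaten: same odor but no aversive outcome, so Vf is extinguished toward 0
    set delta_Vf alpha_flab * beta_flab * (0 - Vf)
    set Vf Vf + delta_Vf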

The program is set to run for 150,000 software cycles (ticks). This can be changed in the code.
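
The stopping condition sits at the end of the go procedure; editing the number there changes the run length:

    tick
    if ticks = 150000 [stop]   ; end of an epoch of play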

## THINGS TO NOTICE

What happens to the approach-avoidance decision when Cyberslug is not hungry? What happens to the decision about the noxious Flab prey when Cyberslug is very hungry?

## THINGS TO TRY

What is the effect on prey selection when Faux-Flab is introduced?

What are the effects of altering the densities of the different prey?

At what different prey densities does Faux-Flab receive protection or not?

Is the Cyberslug always accurate in prey choice? Why or why not, do you think?

Learning here follows the Rescorla-Wagner rule for classical conditioning. What happens if you go into the code and alter the values in the Rescorla-Wagner equation? What are the effects on the Batesian mimic of altering the densities of itself, the prey it mimics, and the predator?
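
For reference, the update used in the code has the standard Rescorla-Wagner form delta_V = alpha * beta * (lambda - V). The parameters are assigned in setup and the update is applied in go each time a prey item is eaten; for Hermi, for example:

    ; setup: alpha = odor salience, beta = learning rate, lambda = learning asymptote (0 to 1)
    set alpha_hermi 0.5
    set beta_hermi 1
    set lambda_hermi 1
    ; go: update the learned value Vh when a Hermi is eaten
    set delta_Vh alpha_hermi * beta_hermi * (lambda_hermi - Vh)
    set Vh Vh + delta_Vh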

## CREDITS AND REFERENCES

The relations are discussed in detail in the following technical references:

1. Brown JW, Caetano-Anollés D, Catanho M, Gribkova E, Ryckman N, Tian K, Voloshin M, and Gillette R. Implementing Goal-Directed Foraging Decisions of a Simpler Nervous System in Simulation. In preparation, 2017.

2. Gillette R, Brown JW (2015) The sea slug, Pleurobranchaea californica: A signpost species in the evolution of complex nervous systems and behavior. Integrative and Comparative Biology, v. 55, pages 1058-1069.

3. Pleurobranchaea. Scholarpedia article, curated by R Gillette, published November 13, 2014. http://www.scholarpedia.org/article/Pleurobranchaea

4. Hirayama K and others (2012) A core circuit module for cost/benefit decision. Frontiers in Neuroscience, v. 6, pages 123-128.

5. Gillette R and others (2000) Cost-benefit analysis potential in feeding behavior of a predatory snail by integration of hunger, taste, and pain. Proceedings of the National Academy of Sciences USA, v. 97, pages 3585-3590.


;;CYBERSLUG

;;Create animats and assign qualities

breed [Cslugs Cslug]
breed [probos proboscis]
breed [flabs flab]
breed [hermis hermi]
breed [fauxflabs fauxflab]
probos-own [parent phase]
Cslugs-own [sns_hermi Reward Reward_neg App_State_Switch sns_flab_left sns_flab_right sns_hermi_left sns_hermi_right sns_betaine_left sns_betaine_right speed turn-angle Nutrition Satiation
 App_State Incentive Somatic_Map Vf Vh alpha_hermi beta_hermi lambda_hermi alpha_flab beta_flab lambda_flab delta_Vh delta_Vf hermcount flabcount
fauxflabcount]
patches-own [odor_flab odor_hermi odor_betaine]

to startup
  setup
end 

to setup
  clear-all

  create-Cslugs 1 [
    set shape "Cslug"
    set color orange - 2
    set size 16
    set heading 0

    set Nutrition 0.5
    set Incentive 0
    set Somatic_Map 0
    set Satiation 0.5

;Preliminary Rescorla-Wagner parameters for learning Hermi & Flab odors. V is learned value of an odor, alpha is the salience
;(or noticeability) of an odor, beta is the learning rate, and lambda sets the maximum value of learning (between 0 and 1).
    set Vf 0
    set Vh 0
    set alpha_hermi 0.5
    set beta_hermi 1
    set lambda_hermi 1
    set alpha_flab 0.5
    set beta_flab 1
    set lambda_flab 1

;Give Cslug a feeding apparatus for decorative effect
    hatch-probos 1 [
      set shape "airplane"
      set size size / 2
      set parent myself
    ]

; Track Cslug's path
    pen-down
  ]

  create-flabs flab-populate [
    set shape "circle"
    set size 1
    set color red + 2
    setxy random-xcor random-ycor
  ]

  create-hermis hermi-populate [
    set shape "circle"
    set size 1
    set color green + 2
    setxy random-xcor random-ycor
  ]

  create-fauxflabs fauxflab-populate [
    set shape "circle"
    set size 1
    set color blue
    setxy random-xcor random-ycor
  ]

  reset-ticks
end 

to go

;; allow user to drag things around
  if mouse-down? [
    ask Cslugs [
      if distancexy mouse-xcor mouse-ycor < 3 [setxy mouse-xcor mouse-ycor]
    ]
    ask flabs [
      if distancexy mouse-xcor mouse-ycor < 3 [setxy mouse-xcor mouse-ycor]
    ]
    ask hermis [
      if distancexy mouse-xcor mouse-ycor < 3 [setxy mouse-xcor mouse-ycor]
    ]
  ]

; Initialize, deposit, diffuse, and evaporate odors
  ask hermis [set odor_hermi 0.5]
  ask hermis [set odor_betaine 0.5]
  ask flabs [set odor_flab 0.5]
  ask flabs [set odor_betaine 0.5]
  ask fauxflabs [set odor_flab 0.5]
  ask fauxflabs [set odor_betaine 0.5]

;; diffuse odors
  diffuse odor_hermi 0.5
  diffuse odor_flab 0.5
  diffuse odor_betaine 0.5

;; evaporate odors
  ask patches [
    set odor_hermi 0.95 * odor_hermi
    set odor_flab 0.95 * odor_flab ; changed from 0.98 to 0.95
    set odor_betaine 0.95 * odor_betaine
    recolor-patches
  ]

;; Cslug actions

  ask Cslugs [

    update-sensors
    update-proboscis
    set speed 0.06
    set turn-angle -1 + random-float 2


    ;; Detecting prey
    set sns_hermi (sns_hermi_left + sns_hermi_right ) / 2
    let sns_betaine (sns_betaine_left + sns_betaine_right) / 2
    let sns_flab (sns_flab_left + sns_flab_right ) / 2
    let H (sns_hermi - sns_flab)
    let F (sns_flab - sns_hermi)

    set Reward sns_betaine / (1 + (0.5 * Vh * sns_hermi) ) + 1.32 * Vh * sns_hermi ; R
    set Reward_neg 1.32 * Vf * sns_flab ; R-



    set Nutrition Nutrition - 0.0005 * Nutrition ; Nutritional state declines with time
    set Satiation 1 / ((1 + 0.7 * exp(-4 * Nutrition + 2)) ^ (2))
    set Incentive Reward - Reward_neg;
    set Somatic_Map (- ((sns_flab_left - sns_flab_right) / (1 + exp (-50 * F)) + (sns_hermi_left - sns_hermi_right) / (1 + exp (-50 * H))))
    set App_State 0.01 + (1 / (1 + exp(- (Incentive * 0.6) + 10 * satiation)) + 0.1 * ((App_State_Switch - 1) * 0.5)); + 0.25
    set App_State_Switch (((-2 / (1 + exp(-100 * (App_State - 0.245)))) + 1)) ; The switch for approach-avoidance

    set turn-angle (2 * App_State_Switch) / (1 + exp (3 * Somatic_Map)) - App_State_Switch

    set speed 0.1

    rt turn-angle
    fd speed

;; PREY CONSUMPTION AND ODOR LEARNING

    let hermitarget other (turtle-set hermis) in-cone (0.4 * size) 45
    if any? hermitarget [
      set Nutrition Nutrition + count hermitarget * 0.3
      set hermcount hermcount + 1
      ask hermitarget [setxy random-xcor random-ycor]
      set delta_Vh alpha_hermi * beta_hermi * (lambda_hermi - Vh)
      set Vh Vh + delta_Vh ; The Rescorla-Wagner Learning Algorithm
    ]

    let flabtarget other (turtle-set flabs) in-cone (0.4 * size) 45
    if any? flabtarget [
      set Nutrition Nutrition + count flabtarget * 0.3;
      set flabcount flabcount + 1
      ask flabtarget [setxy random-xcor random-ycor]
      set delta_Vf alpha_flab * beta_flab * (lambda_flab - Vf)
      set Vf Vf + delta_Vf ; The Rescorla-Wagner Learning Algorithm
    ]

    let fauxflabtarget other (turtle-set fauxflabs) in-cone (0.4 * size) 45
    if any? fauxflabtarget [
      set Nutrition Nutrition + count fauxflabtarget * 0.3
      set fauxflabcount fauxflabcount + 1
      ask fauxflabtarget [setxy random-xcor random-ycor]

      set delta_Vf alpha_flab * beta_flab * (0 - Vf)
      set Vf Vf + delta_Vf; Odor_flab is linked to Reward, a virtual extinction mechanism
    ]

  ]


;; Hermi and Flab actions

  ask flabs [
    rt -1 + random-float 2
    fd 0.02
  ]

  ask hermis [
    rt -1 + random-float 2
    fd 0.02
  ]

  ask fauxflabs [
    rt -1 + random-float 2
    fd 0.02
  ]


  tick
  if ticks = 150000 [stop] ; definite end of an epoch of play
end 

to update-proboscis
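  ; Pin the proboscis to its parent Cslug and, when betaine sensation is strong, cycle it through extension phases as a feeding display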
 ask probos [
    set heading [heading] of parent
    setxy ([xcor] of parent) ([ycor] of parent)
    ifelse ([sns_betaine_left] of parent > 5.5) or ([sns_betaine_right] of parent > 5.5)
      [set phase (phase + 1) mod 20]
      [set phase 0]
    fd (0.15 * size) + (0.1 * phase)
  ]
end 

to update-sensors
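  ; Sample each odor at two patches 40 degrees left and right of the heading, 0.4 body-lengths ahead,
  ; and convert concentration to a log-scaled sensation (zero below a 1e-7 threshold)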

  let odor_flab_left [odor_flab] of patch-left-and-ahead 40 (0.4 * size)
  ifelse odor_flab_left > 1e-7
    [set sns_flab_left 7 + (log odor_flab_left 10)]
    [set sns_flab_left 0]

  let odor_flab_right [odor_flab] of patch-right-and-ahead 40 (0.4 * size)
  ifelse odor_flab_right > 1e-7
    [set sns_flab_right 7 + (log odor_flab_right 10)]
    [set sns_flab_right 0]

  let odor_hermi_left [odor_hermi] of patch-left-and-ahead 40 (0.4 * size)
  ifelse odor_hermi_left > 1e-7
    [set sns_hermi_left 7 + (log odor_hermi_left 10)]
    [set sns_hermi_left 0]

  let odor_hermi_right [odor_hermi] of patch-right-and-ahead 40 (0.4 * size)
  ifelse odor_hermi_right > 1e-7
    [set sns_hermi_right 7 + (log odor_hermi_right 10)]
    [set sns_hermi_right 0]

  let odor_betaine_left [odor_betaine] of patch-left-and-ahead 40 (0.4 * size)
  ifelse odor_betaine_left > 1e-7
    [set sns_betaine_left 7 + (log odor_betaine_left 10)]
    [set sns_betaine_left 0]

  let odor_betaine_right [odor_betaine] of patch-right-and-ahead 40 (0.4 * size)
  ifelse odor_betaine_right > 1e-7
    [set sns_betaine_right 7 + (log odor_betaine_right 10)]
    [set sns_betaine_right 0]
end 

to recolor-patches
    ifelse odor_flab > odor_hermi [
      set pcolor scale-color red odor_flab 0 1
    ][
      set pcolor scale-color green odor_hermi 0 1
    ]
end 

to show-sensors
  ask Cslugs [
    ask patch-left-and-ahead 40 (0.4 * size) [set pcolor yellow]
    ask patch-right-and-ahead 40 (0.4 * size) [set pcolor yellow]
  ]
end 
