Penalty Kicks in Soccer

Author: Matt Hong

Tags: adaptive cognition, game theory, goal-based learning, sports

Model written in NetLogo 3D 5.0.4.


## WHAT IS IT?

This is a model of penalty kicks in soccer. Being a zero-sum game with a finite number of players and pure strategies, the penalty kick has a Nash equilibrium in mixed strategies. A Nash equilibrium is a set of strategies, one per player, from which no player can profit by unilaterally changing the proportions in which he mixes his pure strategies. The agents in the model employ different learning schemes in an attempt to play at that Nash equilibrium. By default, the equilibrium sits at around [40%, 60%] for all players over the pure strategies [L, R], the non-natural and natural sides. Note that the natural side for a right-footed kicker is his left, which is the keeper's right, indicated by R.
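As a toy illustration of such a mixed equilibrium (the numbers here are hypothetical, not the model's calibrated rates): suppose a shot to the natural side scores with probability 0.70 when the keeper guesses correctly and 0.95 when he does not, while a shot to the non-natural side scores with probability 0.60 and 0.90 respectively. In equilibrium each player mixes so as to make the other indifferent:

    kicker indifference fixes the keeper's natural-side rate q:
      0.70 q + 0.95 (1 - q) = 0.90 q + 0.60 (1 - q)   =>   q = 7/11 ≈ 0.64
    keeper indifference fixes the kicker's natural-side rate p:
      0.70 p + 0.90 (1 - p) = 0.95 p + 0.60 (1 - p)   =>   p = 6/11 ≈ 0.55

Note the testable implication, which the model's monitors are built around: in equilibrium each kicker scores at the same rate on either side.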

The theory of minimax play in penalty kicks analyzed here was put forth by Ignacio Palacios-Huerta (2003).

## HOW IT WORKS

Goalkeepers learn adaptively. Each keeper holds NUM-STRATEGIES candidate strategies, weight vectors that predict the opponent's next choice from the recent history, and plays the one that would have predicted best. The algorithm is fully described in the Info tab of the El Farol model in the NetLogo Models Library.

Penalty takers learn in a goal-directed way. A taker builds a cheat sheet of his opponent's past choices and results, and exploits that knowledge by shooting at the side on which that keeper has let in more goals.
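Here is a condensed sketch of that decision, mirroring the SCOUT and SHOT-DECISION procedures in the code below. The local names GK-CHOICES, GK-RESULTS, SHEET, and SIDE are illustrative only, and the real code breaks ties with a tiny random offset rather than defaulting to one side:

    ;;the keeper's last five dives (1 = natural side) and their results (1 = goal conceded)
    let gk-choices [ 1 -1 1 -1 1 ]
    let gk-results [ 1  1 0  0 1 ]
    let sheet (map [ ?1 * ?2 ] gk-choices gk-results)   ;;=> [1 -1 0 0 1]
    ;;a positive sum means more goals were conceded on the natural side, so shoot there
    let side ifelse-value (sum sheet > 0) [ 1 ] [ -1 ]   ;;=> 1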

The length of the cheat sheet, and thus the amount of data available to an agent for building it, is given by MEMORY-SIZE.

At any moment either the goalkeepers or the penalty takers are learning; the STUDENTS chooser selects which. The other breed acts as teachers: the TEACH-NAT-TENDENCY slider sets the probability that a teacher chooses his natural side over his non-natural side.
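In the code, a teacher's decision is just a biased coin flip; this line appears in both SAVE-DECISION and SHOT-DECISION below:

    ifelse random 100 < teach-nat-tendency [ set choice 1 ] [ set choice -1 ]   ;;1 = natural side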

That is how the learning works. Each penalty kick itself is calibrated to replicate real-life data, so physically the kicks behave as they do in real life. Ball goes in, he scores!

## HOW TO USE IT

To use the model, set NUM-TAKERS, MEMORY-SIZE, NUM-STRATEGIES, and TEACH-NAT-TENDENCY, choose which breed learns with the STUDENTS chooser, press SETUP, and then GO.

There are many plots and monitors available for analyzing the data. The monitor for scoring rates aggregates over the whole run, while the scoring rates on the plot are measured at each tick over the last NUM-TAKERS players. The plot and histograms show each breed's natural-side tendencies, also measured over the last NUM-TAKERS players. The four monitors that report success rates broken down by footing can be used to compare results against the prediction of equal success rates among pure strategies.

## THINGS TO NOTICE

The 3D visuals are awesome: keepers stretch and dive for the ball, players cycle through their queues, and the ball sails at varying heights and angles.

## THINGS TO TRY

The manual settings allow further analysis. Set the MANUAL switch to ON to use them. NATURAL-POWER and NON-NATURAL-POWER set the speed of the ball when kicked to either side, while GK-TENDENCY and TK-TENDENCY fix each breed's probability of choosing its natural side. For example, it might be useful to know the total success rate when all players choose their natural side 50% of the time.

Run the model with different settings for MEMORY-SIZE and NUM-STRATEGIES. What happens to the variability in the plots?

## EXTENDING THE MODEL

At the bottom of the Code tab there is a botched pair of procedures for calculating the Z-score of a sample list, meant to test the probability that a sequence of choices is non-random. Try to get this right! There is a bit of commented-out code in the GO procedure as well, to get you started on the implementation once the Z-score function works.
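For reference, assuming the intended statistic is the Wald-Wolfowitz runs test (an assumption based on the shape of the commented-out formula), a repaired pair of procedures might look like the following untested sketch. Note that the commented block in GO also increments a REJECTIONS counter, which you would need to declare as a global.

    ;;count maximal runs, including runs of length one
    ;;(the original only counted runs of length two or more)
    to-report count-runs [ s ]
      let runs 1
      let prev first s
      foreach but-first s [
        if ? != prev [ set runs runs + 1 ]
        set prev ?
      ]
      report runs
    end

    ;;Z-score of a sequence s of -1s and 1s with r runs, under the
    ;;null hypothesis of randomness (assumes both sides appear in s)
    to-report ind-sequence-Z-score [ s r ]
      let tot length s
      let lc length filter [ ? = -1 ] s
      let rc tot - lc
      let mu (2 * lc * rc / tot) + 1
      let sigma sqrt ((2 * lc * rc * ((2 * lc * rc) - tot)) / ((tot ^ 2) * (tot - 1)))
      report (r - mu) / sigma
    end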

It is not too hard to swap in different learning algorithms for the players. Try adapting the cognitive mechanisms from other models.

## NETLOGO FEATURES

This model does not use ticks as unit time steps. A tick here represents a single completed penalty kick; the flight of the ball is animated over many iterations of GO within each tick.

Lists are used to represent choices, results, and strategies.

n-values is useful for generating random lists.
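For example, SETUP-TURTLES builds each player's randomized starting history of choices this way:

    set choices n-values (memory-size * 2) [ one-of (list -1 1) ]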

Histograms provide an interesting visualization, but notice that the upper limit of each interval is open.

## RELATED MODELS

El Farol, Traffic Grid

## CREDITS AND REFERENCES

Palacios-Huerta, Ignacio (2003). "Professionals Play Minimax." Review of Economic Studies 70 (2): 395-415. http://www.palacios-huerta.com/docs/professionals.pdf


breed [ keepers keeper ]
breed [ takers taker ]

keepers-own [ 
  dive ;;how fast he dives (the taker doesn't move)
  choices 
  choice 
  results
  result
  strategies
  best-strategy 
  tendency
  cheat-sheet1
  cheat-sheet2
  ]

takers-own [ choices choice results best-strategy strategies tendency cheat-sheet1 cheat-sheet2 ] ;;same learning variables as the keepers (takers do not dive)

turtles-own [ power ] ;;how fast the ball's moving

globals [ 
  ;;for statistics
  goals 
  misses 
  saves 
  
  ;;for functions
  over? ;;true when we know the result of the kick 
  gk ;;the keeper whose turn it is 
  tk ;;the taker whose turn it is
  ball ;;the ball turtle
  
  ;;for plotting
  allresults
  gk-allchoices 
  tk-allchoices 
  gk-tendencies ;;list of each player's tendencies noted after his turn
  tk-tendencies 
  ]


;SETUP FUNCTIONS

to setup
  clear-all-plots
  reset-ticks
  setup-plots
  clr
  ask patches with [ pzcor = -4 and pycor = 1 ] [ 
    if pcolor != green [ line-field set-goal ] 
    ] ;;set up field only if it isn't already set
  setup-turtles
end 

to clr
  ask turtles with [ color != white ] [ die ] ;;clear keepers, takers, and the ball, but not the goal
  set over? false
  set goals 0
  set misses 0
  set saves 0
  set gk-allchoices []
  set tk-tendencies []
  set gk-tendencies []
  set tk-allchoices []
  set allresults []
end 

to line-field
  ask patches with [ pzcor = -4 ] [ set pcolor green ]
  ask patches with [ pycor = 36 or pycor = -18 or (pycor = 18 and abs pxcor <= 30) or (abs pxcor = 30 and 18 <= pycor and pycor <= 36) and pzcor = -4 ] [ set pcolor white ]
  ask patches with [ pycor = 0 and pxcor = 0 and pzcor = -4 ] [ set pcolor white ]
  ask patches with [ precision (distance patch 0 0 0) 0 = 30 and pycor <= -18 and pzcor = -4 ] [ set pcolor white ]
end 

to set-goal
  ask patches with [ (pycor = 38  and abs pxcor <= 12) or (abs pxcor = 12 and 38 >= pycor and pycor >= 36) and pzcor > -4 and pzcor <= 4 ] [ sprout 1 [ set shape "box" set color white set size .5 ] ]
  ask patches with [ pzcor = 4 and pycor < 38 and pycor >= 36 and abs pxcor <= 12 ] [ sprout 1 [ set shape "box" set color white set size .5 ] ]
end 

to setup-turtles
  crt 1 [ ;;create ball
    set size .75 set color orange set shape "circle" setxyz 0 0 -3.125 set ball self 
    ]
  foreach n-values num-takers [ ? * 2 ] [ ;;create num-takers penalty takers
    create-takers 1 [ set size 6 set color red set shape "person" setxyz 36 (35 - ?) -1 set heading 0 ]
    ]
  foreach n-values (num-takers + 1) [ ? * 2 ] [ ;;create num-takers + 1 goalkeepers
    create-keepers 1 [ set size 6 set color yellow set shape "person" setxyz -36 (35 - ?) -1 set heading 0 ]
    ]
  if not Manual [
    ask (turtle-set keepers takers) [ 
      set choices n-values (memory-size * 2) [ one-of (list -1 1) ] ;;randomize previous choices
      set results n-values (memory-size * 2) [ one-of (list 0 0 1 1 1 1 1 1 1 1) ] ;;randomize previous results to success rate of 80%
      learning-setup ;;setup strategies
    ]
  ]
  get-ready
end 

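;;give the agent NUM-STRATEGIES random strategies and provisionally adopt the first (as in El Farol)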
to learning-setup
  set strategies n-values num-strategies [random-strategy]
  set best-strategy first strategies
end 

to get-ready
  ask (turtle-set keepers takers) [ fd 2 ] ;;dequeue
  ask keepers with [ patch-here = patch -36 37 -1 ] [ ;;(head of the keepers queue)
    set gk self move-to patch 0 36 -1 
    ]
  ask takers with [ patch-here = patch 36 37 -1 ] [ ;;(head of the takers queue)
    set tk self move-to patch -6 -10 -1 
    ]
  if not Manual [
    ask (turtle-set gk tk) [ scout ] ;;obtain scouting data
  ]
  ask gk [ save-decision ] ;;both players decide before shot
  ask tk [ shot-decision ] 
end 


;STRATEGY FUNCTIONS
; (Adapted from El Farol Bar)

to-report random-strategy
  report n-values (memory-size + 1) [ 1.0 - random-float 2.0 ]
end 

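;;a strategy is a constant term followed by MEMORY-SIZE weights;
;;its prediction is the constant plus the weighted sum of the subhistory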
to-report predict [strategy subhistory]
  report first strategy + sum (map [?1 * ?2] butfirst strategy subhistory)
end 

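;;score each strategy by how well it would have predicted the opponent's
;;recent choices (sheet2) from the history (sheet1), and adopt the best scorer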
to update-strategies [ sheet1 sheet2 ]
  let best-score (memory-size ^ 3) + 1
  foreach strategies [
    let score 0
    let kick 1
    repeat memory-size [
      let pred predict ? sublist sheet1 kick (kick + memory-size)
      set choice pred / (abs pred)
      set score score + abs (item (kick - 1) sheet2 - choice)
      set kick kick + 1
    ]
    if (score <= best-score) [
      set best-score score
      set best-strategy ?
    ]
  ]
end 


;DECISION FUNCTIONS

to scout
  ifelse breed = keepers 
  [ set cheat-sheet1 translate [ choices ] of tk [ results ] of tk set cheat-sheet2 [ choices ] of tk ] 
  [ set cheat-sheet1 translate [ choices ] of gk [ results ] of gk set cheat-sheet2 [ choices ] of gk ]
end 

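;;element-wise product of choices and results: each entry is the choice
;;made on a goal (1 or -1), or 0 where no goal was scored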
to-report translate [ l1 l2 ]
  report ( map [ ?1 * ?2 ] l1 l2 )
end 

to save-decision
  ifelse not Manual 
  [
    update-strategies cheat-sheet1 cheat-sheet2
    let raw-choice (predict best-strategy sublist cheat-sheet1 0 memory-size) + .000001 ;;to avoid division by zero
    ifelse students = "Goalkeepers"
      [ set choice (raw-choice / abs raw-choice) ] 
      [ ifelse random 100 < teach-nat-tendency [ set choice 1 ] [ set choice -1 ] ] ;;positive = 1 = natural, negative = -1 = non-natural
    set choices fput choice butlast choices ;;choices are limited to memory-size * 2
    set gk-allchoices fput choice gk-allchoices ;;allchoices are unlimited
    set dive (random 20) * .0005 ;;how far he stretches to get the ball is random, since direction of ball is random except left or right
    ifelse choice = 1 ;;1 is natural
      [ set heading -90 ] 
      [ set heading 90 ]
  ]
  [
    set dive (random 20) * .0005
    ifelse random 100 < gk-tendency [ set heading -90 set choice 1 ] [ set heading 90 set choice -1 ]
  ]
end 

to shot-decision
  ifelse not Manual 
  [
    let raw-choice (sum cheat-sheet1) + one-of (list .000001 (-1 * .000001))
    ifelse students = "Penalty Takers"
      [ set choice (raw-choice / abs raw-choice) ] 
      [ ifelse random 100 < teach-nat-tendency [ set choice 1 ] [ set choice -1 ] ]
    set choices fput choice butlast choices
    set tk-allchoices fput choice tk-allchoices
    ask ball [
      ifelse [ choice ] of myself = 1
        [ set heading random-float -1 * (atan 12.2 36) set pitch random-float (atan 8.1 36) set power .04 ]
        [ set heading random-float (atan 12.55 36) set pitch random-float (atan 8.3 36) set power .03 ]
    ]
  ]
  [
    ask ball [
      ifelse random 100 < tk-tendency
        [ set heading random-float -1 * (atan 12.2 36) set pitch random-float (atan 8.1 36) set power natural-power ask myself [ set choice 1 ] ]
        [ set heading random-float (atan 12.55 36) set pitch random-float (atan 8.3 36) set power non-natural-power ask myself [ set choice -1 ] ]
    ]
  ]
end 


;GO FUNCTIONS

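;;one iteration of ball flight; when the kick is resolved, update the
;;learners, rotate the queues, and set up the next kick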
to go
  if over? [ 
;     if ticks mod (num-takers + 1) = 0 [ 
;        ask (turtle-set keepers takers) [
;          let Z ind-sequence-Z-score choices count-runs choices 
;          if (abs Z > 2) [ 
;            set rejections rejections + 1 
;          ]
;        ]
;      ]
    if not Manual [
      ask (turtle-set gk tk) [ ;;update tendencies
        set tendency length filter [ ? = 1 ] sublist choices 0 memory-size / memory-size
        ifelse breed = keepers [ set gk-tendencies fput tendency gk-tendencies ] [ set tk-tendencies fput tendency tk-tendencies ] ]
    ]
    move-queue 
    get-ready 
    ask ball [ setxyz 0 0 -3.125 ] 
    set over? false 
    if not Manual [ tick ]
    ]
  ;;if not over, proceed:
  ask ball [ fd power ]
  ask gk [ 
    if [ distance patch 0 0 -10 ] of ball >= 26 and choice * [ choice ] of tk = 1 [ 
      face ball set dive .01 
      ] 
    ] ;;the keeper stretches for the ball if he is near enough
  ask gk [ fd dive ]
  save?
  score?
end 

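;;the players who just took their turn rejoin the back of their queues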
to move-queue
  ask min-one-of keepers [ distance patch 12 36 -4 ] [ move-to patch -36 (35 - num-takers * 2) -1 set heading 0 set pitch 0 ]
  ask min-one-of takers [ distance patch 0 0 0 ] [ move-to patch 36 (37 - num-takers * 2) -1 set heading 0 ]
end 

to save?
  ask gk [ 
    if distance ball <= 2 and (abs ([ heading ] of ball - 180)) > ((atan -12 36) - 180) and [ pitch ] of ball < (atan 8 36) [ 
      ;;if the ball is between the posts and the distance between it and the keeper is less than or equal to 2
      set misses misses + 1 
      set saves saves + 1 
      set over? true 
      set allresults fput 0 allresults
      if not Manual [
        set results fput 0 butlast results
        ask tk [ set results fput 0 butlast results ]
      ]
      ask ball [
        set heading 270 - heading fd 1 ;;make sure the ball is parried away
      ]
    ]
  ] 
end 

to score?
  ask ball [ if [ pycor ] of patch-here > 36 and (abs (heading - 180) > (atan -12 36) - 180) and pitch < (atan 8 36) [ 
      ;;if the ball passes the goal-line between the posts
      set goals goals + 1 
      set over? true 
      if not Manual [
        ask (turtle-set gk tk) [ set results fput 1 butlast results ]
      ]
      set allresults fput 1 allresults 
      ] 
      ] 
  ask ball [ if [ pycor ] of patch-here > 36 and ((abs (heading - 180) <= (atan -12 36) - 180) or pitch >= (atan 8 36)) [ 
      ;;air-ball
      set misses misses + 1 
      set over? true
      if not Manual [
        ask (turtle-set gk tk) [ set results fput 0 butlast results ]
      ]
      set allresults fput 0 allresults
      ] 
  ]
end 


;;Non-functioning probability functions

;to-report ind-sequence-Z-score [ s r ] ;;reports Z-score of a sequence s with r runs
;  let tot length s
;  let lc 0
;  foreach s [ if ? = -1 [ set lc lc + 1 ] ]
;  let rc tot - lc
;  report (r - (2 * lc * rc / tot) - 1) / sqrt ( (2 * lc * rc * (2 * lc * rc - tot) ) / ((tot ^ 2) * (tot - 1)))
;end

;to-report count-runs [ s ]
;  let runs 0
;  let run? false
;  let prev first s
;  let cnt (length s) - 1
;  foreach (but-first s) [ set cnt cnt - 1
;                          if (run? = false and ? = prev) [ set run? true ]
;                          if (run? = true and (? != prev or cnt = 0)) [ set run? false set runs runs + 1 ]
;                          set prev ? ]
;  report runs
;end
