Jeremy Needle
EECS 472 Final Project Progress Update
2013-6-3

## New changes marked with '+'.

1. Agent behavior: How do the agents behave/work?
- At each step, child agents sample adult agents' 'language'; then, the children fix their own language statistically. (A sketch of this step appears in the first code block after this list.)
- 'Language' is represented as two variables: f1, f2. These correspond to the first and second formants, commonly used to describe a vowel in phonology.
- In the base form, the children use a placeholder learning procedure (they all learn the mean f1,f2 from the whole adult population; this means they instantly converge to a set result).
- There are many alternative learning procedures, so the simplest options will be implemented first: children may learn the statistical mode, the median, or the first thing they happen to 'hear' (or a successively weighted version of this).
- Children may also listen to only a subset of the adult population. This is implemented as a random subset at the time of listening, but may also be implemented as a persistent locality effect (children listen to nearby adults, then become adults while maintaining their location). The locality version will be added after the random subset version.
+ The locality-based option has been enabled; a radius of 5 was chosen to include ~10% of the adult population, but this could be made into a parameter as well. Because turtles do not move, local norms (that is, language values tied to a particular patch-region) may arise.
+ Noise has been added to all learning procedures. After learning is applied, the values are modified by a random-normal percentage, which can therefore be positive or negative. (See the second sketch after this list.)
+ Learning is bounded by the typical human limits for the formants (as noted below), such that values which go too high or too low are pushed back to the edges.
- Adults die, children become adults, and new children are created.
+ Lifespan has been implemented, such that adults die only once their age exceeds the random lifespan assigned to them upon becoming adults. (See the third sketch after this list.)
+ The original generational mechanism is still accessible via lifespan = 0, so the lifespans? switch should be superfluous.
+ The lifespan limit is assignable via a slider, and individual spans are assigned as uniformly random integers up to that limit.
+ Due to lifespan, the graduation/birth mechanic had to be altered: now, old adults die, enough current children are graduated to replace the deaths, and enough new children are created to replace the graduations.
+ Population levels are maintained in this way. This population dynamic is not required, but it is simple and makes summary statistics on the state of the language simpler.
- Because f1,f2 are often used to represent a 2D 'vowel space', the turtles are moved in the view to represent their position in such a space.
- This may be changed if locality effects are used for the learning procedure.
+ This has indeed been changed: with locality enabled, turtles keep fixed positions in the view.
- Note: the typical English vowel space is often represented with X = F2 (from 3500 down to 500) and Y = F1 (from 1200 down to 200); this is a doubly-inverted 2D space. I need to decide the best way to set up the view and plot them, if I'm using the vowel space visualization.
- Implemented the initial setup using a random-normal distribution with realistic English vowel space values for the mean and standard deviation. (See the last sketch after this list.)
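The sampling/fixing step might look roughly like the following NetLogo sketch, combining the random-subset and locality options. The breeds and the interface names (locality?, listen-radius, sample-size, learning-method) are illustrative assumptions, not necessarily the model's actual identifiers:

```netlogo
;; Sketch only: breed and widget names are assumed for illustration.
breed [ adults adult ]
breed [ children child ]
turtles-own [ f1 f2 ]

to learn-language  ;; child procedure, run once per tick
  ;; locality: listen to adults within a fixed radius (5 in the model);
  ;; otherwise: a random subset of the whole adult population
  let models ifelse-value locality?
    [ adults in-radius listen-radius ]
    [ n-of (min (list sample-size count adults)) adults ]
  if any? models [
    ;; fix f1,f2 statistically from the sampled 'language'
    ;; (the mode method is omitted here)
    if learning-method = "mean"
      [ set f1 mean [f1] of models
        set f2 mean [f2] of models ]
    if learning-method = "median"
      [ set f1 median [f1] of models
        set f2 median [f2] of models ]
    if learning-method = "pick-one"  ;; 'first thing they happen to hear'
      [ let model one-of models
        set f1 [f1] of model
        set f2 [f2] of model ]
  ]
end
```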
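The noise and bounding steps could then be applied after learning; noise-percent is an assumed slider name, and the bounds follow the formant ranges in the note above:

```netlogo
;; Sketch only: noise-percent is an assumed slider name.
to apply-noise-and-bounds  ;; child procedure, run after learning
  ;; random-normal percentage modification: can be positive or negative
  set f1 f1 * (1 + random-normal 0 (noise-percent / 100))
  set f2 f2 * (1 + random-normal 0 (noise-percent / 100))
  ;; values that go too high or too low are pushed back to the edges;
  ;; the median of three values acts as a clamp
  set f1 median (list 200 f1 1200)   ;; F1 roughly 200-1200 Hz
  set f2 median (list 500 f2 3500)   ;; F2 roughly 500-3500 Hz
end
```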
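The lifespan-based turnover could be organized as below. The age and lifespan variables and the lifespan-limit slider are assumed names, and the sketch assumes there are always at least as many children as deaths:

```netlogo
;; Sketch only: assumes turtles-own [ age lifespan ] and a
;; lifespan-limit slider, and that count children >= deaths.
to turn-over-population  ;; observer procedure
  let deaths count adults with [ age > lifespan ]
  ask adults with [ age > lifespan ] [ die ]
  ;; graduate enough current children to replace the deaths ...
  ask n-of deaths children [
    set breed adults
    set age 0
    ;; uniformly random span up to the limit; lifespan-limit = 0
    ;; recovers the original one-generation turnover
    set lifespan random (lifespan-limit + 1)
  ]
  ;; ... and create enough new children to replace the graduations,
  ;; so population levels stay constant
  create-children deaths [
    set age 0
    setxy random-xcor random-ycor  ;; location is kept into adulthood
    setup-language                 ;; initial f1,f2 (see the next sketch)
  ]
  ask adults [ set age age + 1 ]
end
```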
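And the random-normal initial setup might look like this; the particular means and standard deviations here are placeholders, not the calibrated English vowel values:

```netlogo
;; Sketch only: the means/SDs are stand-ins for the realistic
;; English vowel space values used in the model.
to setup-language  ;; turtle procedure
  set f1 random-normal 700 100
  set f2 random-normal 1500 200
  ;; keep the initial draw inside the same formant bounds as learning
  set f1 median (list 200 f1 1200)
  set f2 median (list 500 f2 3500)
end
```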
2. System behavior: How does the overall system behave/work?
- Due to randomness and selection in the 'fixing' process, the language as a whole evolves over time.
- In the base form, this is not the case: the pure mean (placeholder) learning procedure means instant convergence and stability.
- When alternative learning rules and random sample subsets are used, the predicted convergence behavior remains; convergence is merely slowed by the median method.
+ With even very low noise levels, shifts routinely appear. In the long term and across many trials, these shifts reflect the randomness of the system, not an emergent bias/attractor. However, this is exactly the intended pattern for human language shift, which has a strong arbitrary component and is evaluated on a 'short term' horizon. That is, a 'shift' happens across decades, not millennia, so the temporal horizon of interest in this model is tens/hundreds of ticks, not the entire run timeline.
+ This shifting behavior is modulated in different ways by the choices of learning procedure, locality, subset, and noise level. E.g., the mean-learning method is more stable than the pick-one method; more noise creates bigger/faster shifts; etc.

3. Rationale for agent rules: Why did you give the agents these rules?
- Listening and learning mimic a very simple model of child language acquisition, and some traditional models of language change.
- Different theories exist for child (statistical) learning strategies, so multiple such methods will be implemented.
- Demonstrating that certain learning procedures can produce shift/stability patterns speaks to the plausibility of these different theories.
+ Because shifts are derivable from very low noise alone, the goal is now to characterize how this baseline interacts with the various learning procedures, locality settings, etc.

4. Model output: Do you think your model currently provides a good description of the system's behavior? Why or why not?
- The model is not functional yet, but the basic version is likely too simplified to describe the process adequately.
- With the base form implemented, we do see that the pure mean method contains no randomness or bias, so there is no successive change.
- With fully random subsetting and random/statistical learning procedures, the overall system should still be stable/convergent. However, it may be sensitive to initial conditions (stochastic behavior). This is fine, because language changes are observed to move in different directions, or to remain stable.
- A sustained shift still needs to be demonstrated as possible, however.
+ The model now adequately reproduces short-scale shift patterns across a variety of positions in parameter space.

5. Questions: What questions do you have about your model?
- The biggest question is which and how many of the proposed elaborations are necessary (or reasonable).
- Specifically, additional learning procedures need to be selected.
- In addition, it might be appropriate at this point to already add locality effects (global listening at minimum exacerbates the pure mean stability effect).
[+] I'm not sure if it's worthwhile to add very complex factors yet (e.g., social networks/families).
[+] Other influences on the learning procedure: influence (an adult variable), 'tie strength' (a link variable).
- I need to decide the best way to set up the view and plot the vowel space visualization.
[+] I think the learning methods can be better implemented with a 'task' structure, which I remember seeing in an example model from the HW; I need to find and consider this. (A sketch of the idea appears after this list.)
+ I am having lots of trouble with the R extension and the RNetLogo package.
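For reference, the 'task' idea could collapse the per-method branching into a single stored reporter task (NetLogo 5 task syntax). This is only a sketch of the idea, not the example model's code:

```netlogo
;; Sketch only: a reporter task picks the statistic once, then is
;; applied uniformly to both formants.
to-report learner-task
  if learning-method = "median"   [ report task [ median ? ] ]
  if learning-method = "pick-one" [ report task [ one-of ? ] ]
  report task [ mean ? ]  ;; default: the mean method
end

to learn-with-task [ models ]  ;; child procedure
  let fix learner-task
  set f1 (runresult fix [f1] of models)  ;; ? is bound to the value list
  set f2 (runresult fix [f2] of models)
end
```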
6. Next steps: Briefly list your next steps for improving the model.
- The first step is getting the base form running, then implementing random lifespan.
- Apart from random lifespan, alternative learning procedures are the crucial next step.
- The current step is to get interesting patterns from the generational model, before adding random lifespan (the mixed-generation method).
[+] Checking on the 'task' structure should also happen soon, though it is low priority.
+ I need help with the R extension, or must quickly choose another extension to use.
+ It is time to run data analysis/BehaviorSpace searches across the different parameters:
+ learning methods; noise levels; locality; non-local subset.
+ The goals are to (1) determine which variables matter, and (2) characterize the influence of those variables. (Candidate summary reporters for these runs are sketched below.)
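A BehaviorSpace experiment for these searches could sweep learning-method, noise-percent, locality?, and sample-size (the assumed names from the sketches above) while recording per-tick summary metrics such as the following; the f2 versions are analogous:

```netlogo
;; Sketch only: per-tick metrics for BehaviorSpace runs.
to-report mean-f1  ;; where the adult population sits in the vowel space
  report mean [f1] of adults
end

to-report sd-f1  ;; how tightly the adult population has converged
  report standard-deviation [f1] of adults
end
```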