Jeremy Needle
EECS 472 Final Project
Progress Update 2013-5-27

## New changes marked with '+'.

1. Agent behavior: How do the agents behave/work?
- At each step, child agents sample adult agents’ ‘language’; then, the children fix their own language statistically.
+ 'Language' is represented as two variables: f1, f2. These correspond to the first and second formants, commonly used to describe a vowel in phonology.
- In the base form, the children use a placeholder learning procedure (they all learn the mean f1,f2 from the whole adult population; this means they instantly converge to a set result).
+ There are many alternative learning procedures, so the simplest options will be implemented first: children may learn the statistical mode, the median, or the first thing they happen to 'hear' (or a successively weighted version of this).
+ Children may also listen to only a subset of the adult population. This is implemented as a random subset at the time of listening, but may also be implemented in terms of a persistent locality effect (children listen to nearby adults, then become adults while maintaining their location). The locality version will be added after the random-subset version.
- Adults die, children become adults, and new children are created.
- Because f1,f2 are often used to represent a 2D 'vowel space', the turtles are moved in the view to represent their position in such a space.
+ This may be changed if locality effects are used for the learning procedure.
+ Note: the typical English vowel space is often represented with X = F2 (from 3500 to 500) and Y = F1 (from 1200 to 200), i.e., a doubly-inverted 2D space. I need to decide the best way to set up the view and plot the agents if I'm using the vowel-space visualization.
+ Implemented the initial setup using a random-normal distribution with realistic English vowel-space values for mean and standard deviation.

2. System behavior: How does the overall system behave/work?
- Due to randomness and selection in the ‘fixing’ process, the language as a whole evolves over time.
- In the base form, this is not the case: the pure-mean (placeholder) learning procedure means instant convergence and stability.
+ When alternative learning rules and random sample subsets are used, the predicted convergence behavior remains; convergence is slowed by the median method.

3. Rationale for agent rules: Why did you give the agents these rules?
- Listening and learning mimics a very simple model of child language acquisition, and some traditional models of language change.
- Different theories exist for child (statistical) learning strategies, so multiple such methods will be implemented.
+ Demonstrating that certain learning procedures can produce shift/stability patterns speaks to the plausibility of these different theories.

4. Model output: Do you think your model currently provides a good description of the system’s behavior? Why or why not?
- The model is not functional yet, but the basic version is likely too simplified to describe the process adequately.
- With the base form implemented, we do see that the pure-mean method contains no randomness or bias, so there is no successive change.
+ With fully random subsetting and random/statistical learning procedures, the overall system should still be stable/convergent. However, it may be sensitive to initial conditions (stochastic behavior). This is fine, because language changes are observed to move in different directions, or to remain stable.
+ A sustained shift still needs to be demonstrated as possible, however.

5. Questions: What questions do you have about your model?
- The biggest question is which and how many of the proposed elaborations are necessary (or reasonable).
- Specifically, additional learning procedures need to be selected.
- In addition, it might be appropriate at this point to already add locality effects (global listening at minimum exacerbates the pure-mean stability effect).
+ I'm not sure if it's worthwhile to add very complex factors yet (e.g., social networks/families).
+ Other influences on the learning procedure: influence (an adult variable), 'tie strength' (a link variable).
+ I need to decide the best way to set up the view and plot the vowel-space visualization.
+ I think the learning methods can be better implemented with a 'task' structure, which I remember seeing in an example model from the HW; I need to find and consider this.

6. Next steps: Briefly list your next steps for improving the model.
- The first step is getting the base form running, then implementing random lifespan.
- Apart from random lifespan, alternative learning procedures are the crucial next step.
+ The current step is to get interesting patterns from the generational model before adding random lifespan (the mixed-generation method).
+ Checking on the 'task' structure should also happen soon.
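As a rough sketch of the generational learning step described in section 1: the actual model is in NetLogo, but the logic can be illustrated in Python. All function names, parameters, and the vowel-target values below are hypothetical stand-ins, not the model's real code.

```python
import random
import statistics

def learn(adults, rule="mean", sample_size=None):
    """One child samples (a random subset of) the adults' (f1, f2)
    values and fixes its own language with a statistical rule."""
    heard = adults if sample_size is None else random.sample(adults, sample_size)
    f1s = [a[0] for a in heard]
    f2s = [a[1] for a in heard]
    if rule == "mean":    # placeholder base form: instant convergence
        return (statistics.mean(f1s), statistics.mean(f2s))
    if rule == "median":  # alternative rule: slower convergence
        return (statistics.median(f1s), statistics.median(f2s))
    if rule == "first":   # learn the first token 'heard'
        return heard[0]
    raise ValueError(rule)

def step(adults, n_children, rule="mean", sample_size=None):
    """Adults die; children, having learned, become the new adults."""
    return [learn(adults, rule, sample_size) for _ in range(n_children)]

# Initial setup: random-normal distribution around a rough English vowel
# target (illustrative values only, e.g. an /i/-like F1≈300 Hz, F2≈2300 Hz).
adults = [(random.gauss(300, 50), random.gauss(2300, 150)) for _ in range(100)]
for _ in range(10):
    adults = step(adults, 100, rule="median", sample_size=10)
```

With `rule="mean"` and no subsetting, every child returns the identical population mean, reproducing the instant-convergence behavior of the base form; the median and random-subset options introduce the slower, stochastic convergence discussed in sections 2 and 4.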