…and a word starting with … may, when concatenated, be pronounced either as … or as … . Let's say that the relevant constraints for our example are *GESTURE (tongue tip: close & open) and *REPLACE (place: coronal, labial / nasal _ C), i.e., the choice between … and … is the outcome of a struggle between the importance of the gesture on the tongue-tip tier and the importance of faithfulness on the place tier, as conditioned by a nasal environment before a consonant (Boersma 1997). The faithful candidate would win if the ranking were *REPLACE (cor) >> *GESTURE (tip).

A short explanation of the notation may be appropriate. According to the ideas of Functional Phonology, the gestural constraint evaluates the articulatory candidate, and the faithfulness constraint evaluates the difference between the … and the output as perceived by the listener; the similarities between these …

(3)  *REPLACE (cor / plosive)  >>  *GESTURE (tip)  >>  *REPLACE (cor / nasal)

One possibility would be to rank *REPLACE (cor) and *GESTURE (tip) equally. … constraints are capable of cancelling each other. Rather, we should interpret equal ranking as … . If place assimilation occurs more often than not, we say that *GESTURE (tip) is ranked higher than *REPLACE (cor) along a continuous scale (whose physiological correlate …).

[Figure: continuous ranking scale from 45 to 55, with *REPLACE (cor) at 49 and *GESTURE (tip) at 52.]

In this example, the ranking value of *REPLACE (cor) is 49, and the ranking value of *GESTURE (tip) is 52. In the absence of stochastic evaluation these values would …, and the ranking would be equivalent to grammar (4). However, with stochastic evaluation (whose physiological correlate could be the noise in the amount of locally available …), the disharmonies of the constraints are determined at evaluation time from the ranking values by adding a noise term:

    disharmony_1 = ranking_1 + rankingSpreading · z_1
    disharmony_2 = ranking_2 + rankingSpreading · z_2

where each z_i is a Gaussian random variable with mean 0 and standard deviation 1. For … with a … , ten replications of the evaluation might give the following disharmonies:

    Replication                     1     2     3     4     5     6     7     8     9     10
    disharmony of *GESTURE (tip)    50.5  51.2  50.2  49.1  52.9  52.9  52.7  53.8  55.4  54.3
    disharmony of *REPLACE (cor)    50.8  48.3  50.7  51.2  48.9  48.8  48.2  50.3  48.1  48.7

We see that in most replications, *GESTURE (tip) was evaluated as higher than *REPLACE (cor), but that *REPLACE (cor) was higher in three of the ten cases. Thus, … seven times, and … three times.

Since z_1 and z_2 are Gaussian distributions with standard deviations of 1, the difference between the two disharmonies is also Gaussian, with a standard deviation of √2 · rankingSpreading, so that the probability that the lower-ranked constraint is evaluated above the higher-ranked one can be written with erf as a function of the distance d between the ranking values:

    P(reversal) = 1/2 − 1/2 · erf(d / (2 · rankingSpreading))

    ranking distance        0     1     2     3     4     5     6     7     8
    reversal probability    50%   36%   24%   14%   7.9%  3.9%  1.7%  0.7%  0.2%

… (for distances below, say, 7) and …

At four years of age, Dutch children tend to pronounce … faithfully as … most of the time. This is a natural stage in …

[Figure, "Sandhi: initial state (after motor learning)": ranking scale from 35 to 55, with *REPLACE (cor) at 49 and *GESTURE (tip) at 40.]

The next step in phonological development is to learn that faithfulness constraints can be violated: the separation between perceived and underlying forms can begin. The learner will notice that she says /…/, but that adults sometimes say /…/. The discrepancy within this …

[Tableau: the adult production and the child's perception of it, evaluated in the learner's grammar.]

In this tableau, the top left shows the adult production and the child's perception of it: … should have been the winner, and she has … (in the row with the check mark). This offending constraint is *REPLACE (cor). A simple strategy … , i.e., to lower the ranking of *REPLACE (cor) by a small amount … *REPLACE (cor) is below *GESTURE (tip).

[Figure: ranking scale from 35 to 45, with *REPLACE (cor) at 38 and *GESTURE (tip) at 40.]

… and a learner … such a case will lead to a further demotion of *REPLACE …, and the learner … such a case would lead to a demotion of *GESTURE (tip). Thus, even now that …, there will be more demotions of *GESTURE than of *REPLACE … .

[Figure: ranking scale from 35 to 45, with *REPLACE (cor) at 35 and *GESTURE (tip) at 40.]

In this case, a demotion of *GESTURE will occur in only 86% × 3.9% = 3.3% of the cases, and a demotion of *REPLACE in 14% × 96.1% ≈ 14% of the cases.
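The stochastic evaluation described above can be made concrete with a short simulation. The Python sketch below is only an illustration (the paper's own simulations are Praat scripts, which are not reproduced here); the ranking values 49 and 52 are those of the adult grammar in the text, while the noise multiplier of 2 is an assumption chosen so that the erf formula reproduces the probability table above.

```python
import math
import random

RANKING_SPREADING = 2.0   # noise multiplier; an assumption chosen to match the table above

def disharmony(ranking, spreading=RANKING_SPREADING):
    """Ranking value plus evaluation noise: ranking + spreading * z, with z ~ N(0, 1)."""
    return ranking + spreading * random.gauss(0.0, 1.0)

def reversal_probability(distance, spreading=RANKING_SPREADING):
    """P(lower-ranked constraint ends up with the higher disharmony)."""
    return 0.5 - 0.5 * math.erf(distance / (2.0 * spreading))

# Adult grammar of the text: *REPLACE (cor) at 49, *GESTURE (tip) at 52.
r_replace, r_gesture = 49.0, 52.0

trials = 100_000
reversals = sum(disharmony(r_replace) > disharmony(r_gesture) for _ in range(trials))
print(f"simulated reversal rate : {reversals / trials:.3f}")
print(f"erf formula, distance 3 : {reversal_probability(r_gesture - r_replace):.3f}")  # about 0.14

# Reversal probabilities for ranking distances 0..8 (compare the table above):
print([f"{100 * reversal_probability(d):.1f}%" for d in range(9)])
```

For the ranking distance of 3 this gives about 14%, which is why a sample of only ten evaluations can easily contain three reversals.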
The net … and *REPLACE … are equal:

[Figure: ranking scale from 35 to 45, with *REPLACE (cor) at 37 and *GESTURE (tip) at 40.]

… also the adult ranking differences and, therefore, the adult degree of optionality in … . *REPLACE (cor / plosive) will be dragged along at a distance of 6 above *GESTURE … . … a demotion of one constraint is accompanied by a promotion of the other: when *REPLACE falls by 0.01, *GESTURE will rise by 0.01. … the correct (but losing) candidate. If our constraint set is correct, we know that such … optimal candidate in the target (adult) grammar. In (12), this constraint is *GESTURE … .

[Figure: ranking scale from 40 to 50, with *REPLACE (cor) at 43 and *GESTURE (tip) at 46.]

We see that the centre of the two constraint rankings is still at 44.5, as in the initial … because the rankings will … and local: to implement it, we will … much smaller than the difference between the rankings of relevant constraints. To … a … (safe ranking difference) must be maintained. In an error-driven learning algorithm, … refresh a safety margin that has been shrunk by an error. Thus, optionality follows … .

… /30/, /50/, and /70/ along this dimension.

3.1. An OT grammar for perceptual categorization

In the listener's perception grammar, the relative fitness of the various categories … constraints for … . An acoustic value on a perceptual tier is not categorized into the … its dependence on …, so that *WARP is now analogous to the *REPLACE family of … distorted recognition, so the *WARP … are on the same side of the category centre … .

[Figure (19): the rankings of *WARP (x, /30/), *WARP (x, /50/), and *WARP (x, /70/) as functions of the acoustic input x, which runs from 0 to 100; the category centres lie at 30, 50, and 70, the ranking axis runs from −7 to 5, a dotted line marks the input x = 44, and the values 40 and 60 are also marked.]

… the perception grammar determines the winner:

[Tableau: acoustic input [44] evaluated by *WARP ([44], /70/), *WARP ([44], /30/), and *WARP ([44], /50/); the candidate /50/ wins.]

The ranking of the three relevant *WARP constraints can be read from the dotted line in figure (19): in going from the bottom up, it first cuts the *WARP (x, /50/) curve, then the *WARP (x, /30/) curve, then the *WARP (x, /70/) curve. If x = 44, the listener categorizes the acoustic input into the /50/ class. Given the three equally …, every input above [60] is classified as /70/, every input below [40] as /30/, and every other input as /50/.

3.2. Production distributions and the optimal listener

Variations within and between speakers will lead to random distributions of the acoustic input to the listener … midpoints at [30], [50], and [70] along a perceptual dimension, and a problematic … lexical occurrences). The speaker's productions, which are the inputs to the listener, …

[Figure: three production distributions along the acoustic dimension (0 to 100), centred at 30, 50, and 70, with the optimal criteria at 45.5 and 54.5.]

… she chooses the category y that maximizes P(prod y | ac x) = P(ac x | prod y) · P(prod y) / P(ac x). For instance, if the acoustic input is [44], an optimal listener will choose the … category, even though [44] lies closer to the midpoint of the /50/ category than to the midpoint of class /30/. She will classify all values below the first criterion [45.5] into the class /30/, all the values between [45.5] and the second criterion [54.5] into the class /50/, and all higher values into the class /70/. I will now show how an OT … the /30/ class than into the … class, though she will prefer the … class herself.

Suppose that the speaker had intended the category … . Tableau (20) …

[Tableau (20): acoustic input [44] evaluated by *WARP ([44], /70/), *WARP ([44], /30/), and *WARP ([44], /50/).]

… that she has made a categorization error. The offending constraint … (in the row with the check mark) is *WARP ([44], …).
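Before turning to learning, the perception grammar of section 3.1 can also be sketched in code. The Python snippet below is a simplified stand-in, not the grammar of figure (19): it assumes that the ranking of each *WARP (x, category) constraint grows linearly with the distance between x and the category centre, adds the same kind of evaluation noise as in section 2, and lets the category with the lowest disharmony win, so that a noiseless listener splits the scale at the midpoints 40 and 60 and an input of [44] goes to /50/.

```python
import random

CENTRES = {"/30/": 30.0, "/50/": 50.0, "/70/": 70.0}
SPREADING = 2.0   # evaluation noise, as in the sketch for section 2 (an assumption)

def warp_ranking(x, centre):
    """Assumed ranking of *WARP (x, /centre/): rises with the perceptual distance."""
    return abs(x - centre)

def categorize(x):
    """Stochastic OT perception: the category whose *WARP constraint is lowest wins."""
    disharmonies = {
        cat: warp_ranking(x, centre) + SPREADING * random.gauss(0.0, 1.0)
        for cat, centre in CENTRES.items()
    }
    return min(disharmonies, key=disharmonies.get)

for x in (44.0, 41.0):
    counts = {cat: 0 for cat in CENTRES}
    for _ in range(10_000):
        counts[categorize(x)] += 1
    print(x, counts)
# [44] is classified as /50/ almost every time; [41], close to the noisy
# boundary at 40, goes to /30/ in roughly a quarter of the evaluations.
```

The noise thus turns the sharp criteria of the noiseless grammar into the graded response curves that the learning simulations below operate on.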
A simple learning strategy (demote the offender), i.e., to lower the ranking of *WARP ([44], …) … /30/, /50/, or even the three *WARP ([44], …) … . Because the constraint family is continuous, I used a Gaussian in the simulations, … *WARP ([44], …) will be demoted below *WARP ([44], …) …

[Figure: the rankings of *WARP (x, /30/), *WARP (x, /50/), and *WARP (x, /70/) as functions of the acoustic input x (0 to 100), with category centres at 30, 50, and 70 and the ranking axis running from −7 to 5.]

In (24), the perceptual range was divided into 200 steps of 0.5, the error-driven … was 0.01 (also stochastic, with a relative spreading of 0.1), … shift in the direction of the middle category, until they fall together with the optimal … intended /30/ production, there is still a large probability that it is initially perceived … what she hears and force her to choose from the categories … uniform distribution of the stimuli (only computerized listeners can be frozen in this …). The Praat script that performs these simulations and produces the …

[Figure: percentage of /30/, /50/, and /70/ responses as a function of the acoustic stimulus (0 to 100).]

… continuum) in Thai (Lisker & Abramson 1967); vowel height (the … continuum) in English (Fry, Abramson, Eimas & Liberman 1962); and … (the perceptual … continuum) in English (Liberman, …). … intended /30/ category in 74% of all cases, and the /50/ category in 25% of all cases.

Equilibrium has been achieved (for a demotion/promotion learner, who shows no "downdrift") when the probability of the error of classifying an intended /30/, realized as [40], into the /50/ category, is equal to the probability of the error of classifying an intended /50/, realized as [40], into the /30/ category:

    P(prod [40], /30/) · P(perc /50/ | [40]) = P(prod [40], /50/) · P(perc /30/ | [40])

… going to equal the production bias: she will categorize the input [40] into the /30/ class in 74% of all cases, and into the /50/ class in 24% of the cases. We may note …

[Figure: a demotion/promotion learner matches production probabilities; percentage of responses as a function of the acoustic stimulus (0 to 100).]

… no matter how weak the random part of it is, as long as it is … pair, and for combined demotion … constraint families that are … . … *WARP constraints are demoted equally often, i.e., when the listener makes an equal … in classifying an intended … production.

[Equations (30)–(32): the equilibrium conditions for a demotion-only learner, expressing the perception probabilities P(perc /30/), P(perc /50/), and P(perc /70/) in terms of the production probabilities.]

[Figure: the resulting "percentages" as a function of the acoustic stimulus (0 to 100); the vertical axis runs from −100 to 100.]

The situation is clearly pathological: we see negative probabilities except in a small … . … no concerted downdrift of the three constraints: at [60], for instance, *WARP (x, …) … /30/ to zero. In the limit, therefore, the listener will … follow a two-constraint probability-matching strategy outside the small acoustic … .

[Figure: the predicted response percentages as a function of the acoustic stimulus (0 to 100).]

A simulated demotion-only learner confirmed this when asked to classify the whole acoustic range after a million learning data:

[Figure: percentage of /30/, /50/, and /70/ responses of the demotion-only learner as a function of the acoustic stimulus (0 to 100).]

… have not been found in categorization experiments.
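The contrast between the two learners discussed in this section can be illustrated with a small simulation. This is not the Praat script mentioned in the text; it is a hedged Python sketch that makes several simplifying assumptions: equally frequent categories with Gaussian production distributions of equal width (so the matched bias will not be the 74%/25% reported above), one *WARP ranking per category per 0.5-step acoustic value instead of the Gaussian smearing used in the paper, and the symmetric update in which a categorization error demotes the *WARP constraint of the intended category and promotes that of the wrongly perceived one, each by the plasticity 0.01.

```python
import random

CENTRES = {"/30/": 30.0, "/50/": 50.0, "/70/": 70.0}
SPREADING = 2.0        # evaluation noise (assumption)
PLASTICITY = 0.01      # step size, as in the text
PROD_SD = 8.0          # width of the production distributions (assumption)

# One *WARP ranking per category for each of the 201 acoustic values 0, 0.5, ..., 100.
grid = [i * 0.5 for i in range(201)]
ranking = {cat: [abs(x - c) for x in grid] for cat, c in CENTRES.items()}

def index_of(x):
    """Nearest grid index for an acoustic value."""
    return max(0, min(200, int(round(x * 2))))

def perceive(x):
    """Stochastic categorization: the category with the lowest *WARP disharmony wins."""
    i = index_of(x)
    dis = {cat: ranking[cat][i] + SPREADING * random.gauss(0.0, 1.0) for cat in CENTRES}
    return min(dis, key=dis.get)

# Error-driven, symmetric demotion/promotion learning on a million virtual
# (intended category, acoustic value) pairs.
for _ in range(1_000_000):
    intended = random.choice(list(CENTRES))
    x = min(100.0, max(0.0, random.gauss(CENTRES[intended], PROD_SD)))
    perceived = perceive(x)
    if perceived != intended:                      # categorization error
        i = index_of(x)
        ranking[intended][i] -= PLASTICITY         # demote the offender
        ranking[perceived][i] += PLASTICITY        # promote the wrongly chosen category

# At equilibrium the listener probability-matches: her response proportions at [44]
# approach the proportions with which the categories produce that value
# (roughly 22% /30/ versus 78% /50/ under the distributions assumed here).
responses = [perceive(44.0) for _ in range(5_000)]
print({cat: responses.count(cat) / len(responses) for cat in CENTRES})
```

Dropping the promotion line gives the demotion-only learner, whose rankings only ever move downward, the "downdrift" that the text mentions.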
To the extent that the response distributions (25) and (29) are more realistic, we must conclude that a symmetric … .

… gradual … : … violated constraints in the adult … , … violated constraints in the learner … . … be no difference between this algorithm and the minimal algorithm, but in a … of being chosen by the adult. … constraints with rankings r_i, i = 1 … n, will increase upon the next learning pair by a negative amount:

(37) …

where ε is the plasticity constant, and … is 1 if candidate … violates constraint i. Likewise, the promotion of all the learner's violated constraints will lead to an …

(38) …
(39) …

… , i.e. if … equals … for all i … 's grammar. An important part of the proof involves showing …, i.e. that E(Δr_i) is zero for every constraint i:

(40) E(Δr_i) = 0

(This section did not occur in the ROA version of this chapter. The maximal algorithm evolved after a … .)

… , the learner can end up in any grammar that satisfies … must be zero if … behaves well. We can distinguish the following cases of ill-behaved … :

1. There are two candidates … and … who violate the same set of constraints. … so that … as well as in the adult … .
2. There is a candidate that violates all constraints violated by candidate … as well …, for which there is an … so that … violates a proper superset of the constraints violated by another … 0, so that … .
3. Candidate … violates B and C. Equation (40) is then valid for any … for which there is … so that … will be adjusted so that … ends up near … . … 0.1, 0.2, 0.27, and 0.43, … 0.2, 0.3, 0.17, and 0.33, and she … violations, any grammar must satisfy the condition that if … (i.e., C …).
4. Any more complicated dependencies between the violations of the candidates.

… , i.e. that the algorithm converges upon the adult … . Optionality follows directly from the robustness requirement of learnability: …, your perception grammar will make you classify this acoustic input into … winning candidate. There is some evidence that this combined demotion … place assimilation will turn into an 18%–82% preference … . … properties of surface rerankability are compatible with, and may well follow from, a … where optionality is very common. Our account may well explain how the … interacting and possibly competing principles and preferences of Functional Phonology … relevant functional principles (like …), and another part …
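Returning to the update rules (37)–(40) discussed above, the symmetric "maximal" step can be summarized in a short sketch. This is my paraphrase under the reading given in the text, with hypothetical names rather than the paper's own notation: on a learning pair where the learner's winner differs from the adult form, every constraint violated by the adult form is lowered by the plasticity ε and every constraint violated by the learner's winner is raised by ε (violations taken as 0 or 1); condition (40), that the expected ranking change is zero for every constraint, is then the statement that learning has converged.

```python
from typing import Dict, Set

def maximal_gla_step(
    rankings: Dict[str, float],
    adult_violations: Set[str],      # constraints violated by the adult's form
    learner_violations: Set[str],    # constraints violated by the learner's own winner
    plasticity: float = 0.01,        # the plasticity constant epsilon
) -> None:
    """One step of the maximal gradual learning algorithm (sketch).

    All constraints violated by the adult form are demoted and all constraints
    violated by the learner's winner are promoted; a constraint violated by
    both receives both adjustments, which cancel out.
    """
    for c in adult_violations:
        rankings[c] -= plasticity    # demotion: stop the adult form from losing because of c
    for c in learner_violations:
        rankings[c] += plasticity    # promotion: make the learner's wrong winner lose because of c

# Example with the two constraints of section 2. The adult said the unassimilated form
# (which violates *GESTURE (tip)); the learner's grammar produced the assimilated form
# (which violates *REPLACE (cor)).
rankings = {"*REPLACE (cor)": 49.0, "*GESTURE (tip)": 52.0}
maximal_gla_step(rankings, adult_violations={"*GESTURE (tip)"}, learner_violations={"*REPLACE (cor)"})
print(rankings)   # {'*REPLACE (cor)': 49.01, '*GESTURE (tip)': 51.99}
```

Because every constraint that distinguishes the two forms moves on every error, repeated application of this step drives the expected change of each ranking toward zero, which is exactly the convergence condition (40).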