{ "paper_id": "R11-1016", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:03:59.088570Z" }, "title": "MDL-based Models for Alignment of Etymological Data", "authors": [ { "first": "Hannes", "middle": [], "last": "Wettig", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Helsinki", "location": { "country": "Finland" } }, "email": "" }, { "first": "Suvi", "middle": [], "last": "Hiltunen", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Helsinki", "location": { "country": "Finland" } }, "email": "" }, { "first": "Roman", "middle": [], "last": "Yangarber", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Helsinki", "location": { "country": "Finland" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We introduce several models for alignment of etymological data, that is, for finding the best alignment, given a set of etymological data, at the sound or symbol level. This is intended to obtain a means of measuring the quality of the etymological data sets, in terms of their internal consistency. One of our main goals is to devise automatic methods for aligning the data that are as objective as possible, the models make no a priori assumptions-e.g., no preference for vowel-vowel or consonant-consonant alignments. We present a baseline model and several successive improvements, using data from the Uralic language family.", "pdf_parse": { "paper_id": "R11-1016", "_pdf_hash": "", "abstract": [ { "text": "We introduce several models for alignment of etymological data, that is, for finding the best alignment, given a set of etymological data, at the sound or symbol level. This is intended to obtain a means of measuring the quality of the etymological data sets, in terms of their internal consistency. One of our main goals is to devise automatic methods for aligning the data that are as objective as possible, the models make no a priori assumptions-e.g., no preference for vowel-vowel or consonant-consonant alignments. We present a baseline model and several successive improvements, using data from the Uralic language family.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "We present work on induction of alignment rules for etymological data, in a project that studies genetic relationships among the Uralic language family. This is a continuation of previous work, reported in (Wettig and Yangarber, 2011) , where the methods were introduced. In this paper, we extend the models reported earlier and give a more comprehensive evaluation of results. In addition to the attempt to induce alignment rules, we aim to derive measures of quality of data sets in terms of their internal consistency. More consistent dataset should receive a higher score in the evaluations. Currently our goal is to analyze given, existing etymological datasets, rather than to construct cognate sets from raw linguistic data. The question to be answered is whether a complete description of the correspondence rules can be discovered automatically. Can they be found directly from raw etymological datasets of cognate words from languages within the language family? Are the alignment rules are \"inherently encoded\" in a dataset (the corpus) itself? 
We aim to develop methods that are as objective as possible, methods that rely only on the data rather than on any prior assumptions about the data, the possible rules, or the alignments.", "cite_spans": [ { "start": 206, "end": 234, "text": "(Wettig and Yangarber, 2011)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Computational etymology encompasses several problem areas, including: discovery of sets of genetically related words-cognates; determination of genetic relations among groups of languages, from raw or organized linguistic data; discovering regular sound correspondences across languages in a given language family; and reconstruction, either diachronic-i.e., reconstruction of proto-forms for a hypothetical parent language, from which the word-forms found in the daughter languages derive-or synchronic-i.e., of word forms that are missing from existing languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Several approaches to etymological alignment have emerged over the last decade. The problem of discovering cognates is addressed in, e.g., (Bouchard-Côté et al., 2007; Kondrak, 2004; Kessler, 2001) . In our work, we do not attempt to find cognate sets, but begin with given sets of etymological data for a language family, possibly differing or even conflicting. We use the principle of recurrent sound correspondence, as in much of the literature, including the work mentioned above, (Kondrak, 2002; Kondrak, 2003) , and others. Modeling relationships within the language family arises in the process of evaluating our alignment models. Phylogenetic reconstruction is studied extensively by, e.g., (Nakhleh et al., 2005; Ringe et al., 2002; Barbancon et al., 2009) ; these works differ from ours in that they operate on pre-compiled sets of \"characters\", capturing divergent features of entire languages within the family, whereas we operate at the level of words or cognate sets. Other related work is mentioned further in the body of the paper.", "cite_spans": [ { "start": 146, "end": 174, "text": "(Bouchard-Côté et al., 2007;", "ref_id": "BIBREF2" }, { "start": 175, "end": 189, "text": "Kondrak, 2004;", "ref_id": "BIBREF10" }, { "start": 190, "end": 204, "text": "Kessler, 2001)", "ref_id": "BIBREF7" }, { "start": 486, "end": 501, "text": "(Kondrak, 2002;", "ref_id": "BIBREF8" }, { "start": 502, "end": 516, "text": "Kondrak, 2003)", "ref_id": "BIBREF9" }, { "start": 701, "end": 723, "text": "(Nakhleh et al., 2005;", "ref_id": "BIBREF14" }, { "start": 724, "end": 743, "text": "Ringe et al., 2002;", "ref_id": "BIBREF16" }, { "start": 744, "end": 767, "text": "Barbancon et al., 2009)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We describe our datasets in the next section, present a statement of the etymology alignment problem in Section 3, cover our models in detail in Sections 4-6, and discuss results and next steps in Section 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We use two digital Uralic etymological resources, SSA-Suomen Sanojen Alkuperä, \"The Origin of Finnish Words\", (Itkonen and Kulonen, 2000) , and the StarLing database, (Starostin, 2005) . StarLing, originally based on (Rédei, 1988; Rédei, 1991), differs from SSA in several respects. 
StarLing has about 2000 Uralic cognate sets, compared with over 5000 in SSA, and does not explicitly indicate dubious etymologies. However, the Uralic data in StarLing is more evenly distributed, because it is not Finnish-centric as SSA is-cognate sets in StarLing are not required to contain a member from Finnish.", "cite_spans": [ { "start": 110, "end": 137, "text": "(Itkonen and Kulonen, 2000)", "ref_id": "BIBREF6" }, { "start": 167, "end": 184, "text": "(Starostin, 2005)", "ref_id": "BIBREF19" }, { "start": 217, "end": 229, "text": "(Rédei, 1988;", "ref_id": null }, { "start": 230, "end": 242, "text": "Rédei, 1991)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "The Uralic language family has not previously been studied by computational means.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "We begin with pairwise alignment: aligning a set of pairs of words from two related languages in our data set. The task of alignment means, for each word pair, finding which symbols correspond. We expect that some symbols will align with themselves, while others will have undergone changes during the time the two related languages have been evolving separately. The simplest form of such alignment at the symbol level is a pair (σ : τ) ∈ Σ × T : a single symbol σ from the source alphabet Σ aligned with a symbol τ from the target alphabet T. We denote the sizes of the alphabets by |Σ| and |T|, respectively. 1 Clearly, with this type of 1x1 alignment alone we cannot align a source word σ of length |σ| with a target word τ of length |τ| ≠ |σ|. 2 To also model insertions and deletions, we augment both alphabets with the empty symbol, denoted by a dot, and use Σ. and T. as the augmented alphabets. We can then align word pairs such as ien-ige, meaning \"gum\" in Finnish and Estonian, for example, as:", "cite_spans": [ { "start": 605, "end": 606, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Aligning Pairs of Words", "sec_num": "3" }, { "text": "i e n    i . e n\n| | |    | | | |\ni g e    i g e .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Aligning Pairs of Words", "sec_num": "3" }, { "text": "etc. The (historically correct) alignment on the right consists of the symbol pairs (i:i), (.:g), (e:e), (n:.).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Aligning Pairs of Words", "sec_num": "3" }, { "text": "We wish to encode these aligned pairs as compactly as possible, following the Minimum Description Length Principle (MDL), see e.g. (Grünwald, 2007; Rissanen, 1978) . Given a data corpus D = (σ1, τ1), . . . , (σN, τN) of N word pairs, we first choose an alignment of each word pair (σi, τi), which we then use to \"transmit\" the data, by simply listing the sequence of the atomic pairwise symbol alignments. 3 In order for the code to be uniquely decodable, we also need to encode the word boundaries. 
This can be done by transmitting a special symbol # that we use only at the end of a word.", "cite_spans": [ { "start": 131, "end": 147, "text": "(Grünwald, 2007;", "ref_id": "BIBREF5" }, { "start": 148, "end": 163, "text": "Rissanen, 1978)", "ref_id": "BIBREF17" }, { "start": 193, "end": 194, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Baseline Model", "sec_num": "4" }, { "text": "1 We refer to \"source\" and \"target\" language for convenience only-our models are symmetric, as will become apparent. 2 We use boldface to denote words, as vectors of symbols.", "cite_spans": [ { "start": 117, "end": 118, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Baseline Model", "sec_num": "4" }, { "text": "3 By atomic we mean that the symbols are not analyzed in terms of their phonetic features, and are treated by the baseline algorithm as atoms. In particular, the model has no notion of identity of symbols across the languages! Thus, we transmit objects, or events, e, in the event space E, which is in this case: E = (Σ. × T.) ∪ {(# : #)}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Baseline Model", "sec_num": "4" }, { "text": "We do this by means of Bayesian marginal likelihood, or prequential coding, see e.g., (Kontkanen et al., 1996) , giving the total code length as: L_base(D) = − Σ_e∈E log Γ( c(e) + α(e) ) + Σ_e∈E log Γ( α(e) ) + log Γ( Σ_e∈E ( c(e) + α(e) ) ) − log Γ( Σ_e∈E α(e) )    (1)", "cite_spans": [ { "start": 86, "end": 110, "text": "(Kontkanen et al., 1996)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "The Baseline Model", "sec_num": "4" }, { "text": "The count c(e) is the number of times event e occurs in a complete alignment of the corpus; in particular, c(# : #) = N , since the end-of-word event occurs once per word pair. The alignment counts are maintained in a corpus-global count matrix M , where M(i, j) = c(i : j).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Baseline Model", "sec_num": "4" }, { "text": "The α(e) are the (Dirichlet) priors on the events. In the baseline algorithm, we set α(e) = 1 for all e, the so-called uniform prior, which does not favor any distribution over E a priori. Note that this choice nulls the second summation in equation 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Baseline Model", "sec_num": "4" }, { "text": "Our baseline algorithm is simple: we first randomly align the entire corpus, then re-align one word pair at a time, greedily minimizing the total cost in Eq. 1, using dynamic programming.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Baseline Model", "sec_num": "4" }, { "text": "In the matrix in Fig. 1 , each cell corresponds to a partial alignment: reaching cell (i, j) means having read off i symbols of the source and j symbols of the target word. We iterate this process, re-aligning the word pairs: for a given word pair, we subtract the contribution of its current alignment from the global count matrix, re-align the word pair, and then add the newly aligned events back to the global count matrix. Realignment continues until convergence; a sketch of the procedure follows below.", "cite_spans": [], "ref_spans": [ { "start": 17, "end": 23, "text": "Fig. 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "The Baseline Model", "sec_num": "4" },
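{ "text": "As a concrete illustration, the following is a minimal Python sketch (ours, not from the original implementation) of the two computations just described: the prequential cost of one more event, cf. eqs. (3)-(4) below, and the dynamic-programming re-alignment of a single word pair, cf. eq. (2) below. The traceback needed to recover the chosen events, and the outer greedy loop over the corpus, are omitted for brevity; counts is a dictionary mapping events to their corpus-wide counts, and num_events is |E|.\n\n```python\nimport math\n\ndef event_cost(counts, total, num_events, event):\n    # Cost in bits of coding one more event under the uniform-prior\n    # prequential code: -log2( (c(e) + 1) / (sum_e' c(e') + |E|) ).\n    return -math.log2((counts.get(event, 0) + 1) / (total + num_events))\n\ndef align_pair(source, target, counts, total, num_events):\n    # Dynamic-programming re-alignment of one word pair:\n    # V[i][j] holds the cheapest way to consume i source and j target symbols.\n    n, m = len(source), len(target)\n    cost = lambda e: event_cost(counts, total, num_events, e)\n    V = [[math.inf] * (m + 1) for _ in range(n + 1)]\n    V[0][0] = 0.0\n    for i in range(n + 1):\n        for j in range(m + 1):\n            if i > 0:            # deletion (sigma : .)\n                V[i][j] = min(V[i][j], V[i - 1][j] + cost((source[i - 1], \".\")))\n            if j > 0:            # insertion (. : tau)\n                V[i][j] = min(V[i][j], V[i][j - 1] + cost((\".\", target[j - 1])))\n            if i > 0 and j > 0:  # 1x1 alignment (sigma : tau)\n                V[i][j] = min(V[i][j], V[i - 1][j - 1] + cost((source[i - 1], target[j - 1])))\n    # the end-of-word event (# : #) closes the alignment\n    return V[n][m] + cost((\"#\", \"#\"))\n```\n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Baseline Model", "sec_num": "4" },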
{ "text": "Re-alignment Step: align source word σ = [σ1...σn] ∈ Σ* with target word τ = [τ1...τm] ∈ T*. We use dynamic programming to fill in the matrix, e.g., top-to-bottom, left-to-right. 4 Alignments of σ and τ correspond in a 1-1 fashion to paths through the matrix, starting with cost 0 in the top-left cell and terminating in the bottom-right cell, moving only downward or rightward.", "cite_spans": [ { "start": 88, "end": 89, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Baseline Model", "sec_num": "4" }, { "text": "Each cell stores the cost of the most probable path so far: the most probable way to have scanned σ up to symbol σi and τ up to τj , marked X in the Figure: V(σi, τj) = min { V(σi, τj−1) + L(. : τj) ;  V(σi−1, τj) + L(σi : .) ;  V(σi−1, τj−1) + L(σi : τj) }    (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Baseline Model", "sec_num": "4" }, { "text": "Each term V(., .) has been computed earlier by the dynamic programming; the term L(.)-the cost of aligning the two symbols-is a parameter of the model, computed in equation 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Baseline Model", "sec_num": "4" }, { "text": "The parameters L(e), or P(e), for every observed event e, are computed from the change in the total code-length-the change that corresponds to the cost of adjoining the new event e to the set of previously observed events E:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Baseline Model", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L(e) = Δe L = L(E ∪ {e}) − L(E) ;  P(e) = 2^{−Δe L} = 2^{−L(E ∪ {e})} / 2^{−L(E)}", "eq_num": "(3)" } ], "section": "The Baseline Model", "sec_num": "4" }, { "text": "Combining eqs. 1 and 3 gives the probability: P(e) = ( c(e) + 1 ) / ( Σe′ c(e′) + |E| )    (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Baseline Model", "sec_num": "4" }, { "text": "In particular, the cost of the most probable complete alignment of the two words will be stored in the bottom-right cell, V(σn, τm), marked in the Figure. An example alignment count matrix is shown in Fig. 2 .", "cite_spans": [], "ref_spans": [ { "start": 193, "end": 199, "text": "Fig. 2", "ref_id": null } ], "eq_spans": [], "section": "The Baseline Model", "sec_num": "4" },
{ "text": "The baseline model revealed two problems. First, it seems to get stuck in local optima, and second, it produces many events with very low counts (occurring only once or twice).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Two-Part Code", "sec_num": "4.1" }, { "text": "To address the first problem we use simulated annealing with a sufficiently slow cooling schedule. This yields a reduction in the cost, and a better-more sparse-alignment count matrix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Two-Part Code", "sec_num": "4.1" }, { "text": "The second problem is more substantial. Starting from a common ancestor language, the number of changes that occurred in either language should be small. We expect sparse data-that only a small proportion of all possible events in E will actually ever occur.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Two-Part Code", "sec_num": "4.1" }, { "text": "We incorporate this notion into the model by means of a two-part code. First we encode which events have occurred/have been observed: we send (a) the number of events with non-zero counts-this costs log(|E| + 1) bits-and (b) specifically which subset E+ ⊂ E of the events has non-zero counts-this costs log (|E| choose |E+|) bits. This first part of the code is called the codebook. (The resulting global count matrix is shown in Figure 2 .) Given the codebook, we transmit the complete data, using Bayesian marginal likelihood over the observed events. The code length becomes: L_tpc(D) = log(|E| + 1) + log (|E| choose |E+|) − Σe∈E+ log Γ( c(e) + 1 ) + log Γ( Σe∈E+ ( c(e) + 1 ) ) − log Γ( |E+| )    (5)", "cite_spans": [], "ref_spans": [ { "start": 263, "end": 271, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "The Two-Part Code", "sec_num": "4.1" }, { "text": "where E+ denotes the set of events with non-zero counts, and we have set all α(e)'s to one. Optimizing the above function with simulated annealing yields much better alignments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Two-Part Code", "sec_num": "4.1" },
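{ "text": "A minimal sketch (ours, for illustration) of the two-part code length of eq. (5), under the same assumptions as the earlier sketch: counts maps events to counts, and event_space_size is |E|. The codebook terms are computed directly; the marginal-likelihood terms use the log-gamma function, converted from nats to bits.\n\n```python\nimport math\nfrom math import lgamma, comb\n\nLOG2E = math.log2(math.e)  # nats-to-bits conversion for lgamma\n\ndef two_part_code_length(counts, event_space_size):\n    # Eq. (5): codebook (how many and which events occur) plus the\n    # prequential code of the aligned corpus, with uniform priors alpha(e) = 1.\n    observed = [c for c in counts.values() if c > 0]\n    E_plus = len(observed)\n    L = math.log2(event_space_size + 1)             # number of occurring events\n    L += math.log2(comb(event_space_size, E_plus))  # which subset E+ of E\n    L -= LOG2E * sum(lgamma(c + 1) for c in observed)\n    L += LOG2E * lgamma(sum(c + 1 for c in observed))\n    L -= LOG2E * lgamma(E_plus)\n    return L\n```\n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Two-Part Code", "sec_num": "4.1" },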
: \u03c4 \u03c4 ), (\u03c3 : \u03c4 \u03c4 ), (\u03c3\u03c3 : \u03c4 \u03c4 ) \uf8fc \uf8fd \uf8fe (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Aligning Multiple Symbols", "sec_num": "4.2" }, { "text": "We expect correspondences of the different types to behave differently, so we encode the occurrences of different event kinds separately in the codebook:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Aligning Multiple Symbols", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L mult = L(CB) + L(Data|CB)", "eq_num": "(7)" } ], "section": "Aligning Multiple Symbols", "sec_num": "4.2" }, { "text": "L(CB) = k\u2208K log(N k + 1) + log N k M k (8) L(D|CB) = \u2212 e\u2208E log \u0393 c(e) + 1 (9) + log \u0393 e\u2208E c(e) + 1 \u2212 log \u0393(|E|)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Aligning Multiple Symbols", "sec_num": "4.2" }, { "text": "where N k is the number of possible events of kind k and M k the corresponding number of such events actually observed in the alignment; k M k \u2261 |E|.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Aligning Multiple Symbols", "sec_num": "4.2" }, { "text": "The baseline models align languages pairwise. The alignment models allow us to learn 1-1 patterns of correspondence in the language family. This model is easily extended to any number of languages. The model in (Bouchard-C\u00f4t\u00e9 et al., 2007) also aligns more than two languages at a time. We extend the 2-D model to three dimensions as follows. We seek an alignment where symbols correspond to each other in a 1-1 fashion, as in the 2-D baseline. A three-dimensional alignment is a triplet of symbols (\u03c3 :", "cite_spans": [ { "start": 211, "end": 239, "text": "(Bouchard-C\u00f4t\u00e9 et al., 2007)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Three-Dimensional Alignment", "sec_num": "5" }, { "text": "\u03c4 : \u03be) \u2208 \u03a3 . \u00d7 T . \u00d7 \u039e . .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Three-Dimensional Alignment", "sec_num": "5" }, { "text": "For example, the words meaning \"9\" in Finnish, Estonian and Mordva, can be aligned simultaneously as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Three-Dimensional Alignment", "sec_num": "5" }, { "text": "y . h d e k s\u00e4 n | | | | | | | | | u . h . e k s a . | | | | | | | | | v e \u03c7 . . k s a .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Three-Dimensional Alignment", "sec_num": "5" }, { "text": "In 3-D alignment, the input data contains all examples where words in at least two languages are present 5i.e., a word may be missing from one of the languages, (which allows us to utilize more of the data). Thus we have two types of examples: complete-where all three words present (as \"9\" above), and incompletecontaining words in only two languages. For example, for (haamu:-:\u010dama)-\"ghost\" in Finnish and Mordva-the cognate Estonian word is missing. We next extend the 2-D count matrix and the 2-D re-alignment algorithm to three dimensions. The 3-D re-alignment matrix is directly analogous to the 2-D version. 
{ "text": "The baseline models align languages pairwise. The alignment models allow us to learn 1-1 patterns of correspondence in the language family. This model is easily extended to any number of languages. The model in (Bouchard-Côté et al., 2007) also aligns more than two languages at a time. We extend the 2-D model to three dimensions as follows. We seek an alignment where symbols correspond to each other in a 1-1 fashion, as in the 2-D baseline. A three-dimensional alignment is a triplet of symbols (σ : τ : ξ) ∈ Σ. × T. × Ξ.", "cite_spans": [ { "start": 211, "end": 239, "text": "(Bouchard-Côté et al., 2007)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Three-Dimensional Alignment", "sec_num": "5" }, { "text": "For example, the words meaning \"9\" in Finnish, Estonian and Mordva can be aligned simultaneously as:\ny . h d e k s ä n\n| | | | | | | | |\nu . h . e k s a .\n| | | | | | | | |\nv e χ . . k s a .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Three-Dimensional Alignment", "sec_num": "5" }, { "text": "In 3-D alignment, the input data contains all examples where words in at least two languages are present 5 -i.e., a word may be missing from one of the languages (which allows us to utilize more of the data). Thus we have two types of examples: complete-where all three words are present (as \"9\" above)-and incomplete-containing words in only two languages. For example, for (haamu:-:čama)-\"ghost\" in Finnish and Mordva-the cognate Estonian word is missing. We next extend the 2-D count matrix and the 2-D re-alignment algorithm to three dimensions. The 3-D re-alignment matrix is directly analogous to the 2-D version. For the alignment counts in 3-D, we handle complete and incomplete examples separately.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Three-Dimensional Alignment", "sec_num": "5" }, { "text": "Our \"marginal\" 3-D alignment model aligns three languages simultaneously, using three marginal 2-D matrices, each storing a pairwise 2-D alignment. The marginal matrices for three languages are denoted M_ΣT, M_ΣΞ and M_TΞ. The algorithm optimizes the total cost of the complete data, which is defined as the sum of the three 2-D costs obtained from applying prequential coding to the marginal alignment matrices.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Three-Dimensional Alignment", "sec_num": "5" }, { "text": "When computing the cost for event e = (σ, τ, ξ), we consider complete and incomplete examples separately. In \"incomplete\" examples, we use the counts from the corresponding marginal matrix directly. E.g., for the event e = (σ, −, ξ), where \"−\" denotes the missing word, the event count is given by M_ΣΞ(σ, ξ), and the cost of each alignment is computed as in the baseline model, directly in two dimensions. When the data triplet is complete-fully observed-the alignment cost is computed as the sum of the pairwise 2-D costs, given by the three marginal alignment count matrices: 6", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Three-Dimensional Alignment", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L(σ : τ : ξ) = L_ΣT(σ : τ) + L_ΣΞ(σ : ξ) + L_TΞ(τ : ξ)", "eq_num": "(10)" } ], "section": "Three-Dimensional Alignment", "sec_num": "5" }, { "text": "The cost of each pairwise alignment is computed using prequential two-part coding, as in sec. 4.1. Note that when we register a complete alignment (σ, τ, ξ), we register it in each of the base matrices-we increment each of the marginal counts: M_ΣT(σ, τ), M_ΣΞ(σ, ξ), and M_TΞ(τ, ξ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Three-Dimensional Alignment", "sec_num": "5" }, { "text": "To calculate the transition costs in the Viterbi algorithm, we also have two cases, complete and incomplete. For incomplete examples, we perform Viterbi in 2-D, using the costs directly from the corresponding marginal matrix (equation 5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Three-Dimensional Alignment", "sec_num": "5" }, { "text": "The 3-D re-alignment phase for complete examples is a direct analogue of the 2-D re-alignment-in the (i, j) plane-in eq. (2), extended to the third dimension, k. The cell V(σi, τj, ξk)-the cost of the most probable path leading to the cell (i, j, k)-is calculated by dynamic programming, using the symbol-alignment costs L(σ : τ : ξ). In addition to the three source cells as in eq. (2), in plane k, there are four additional source cells from the previous plane, k − 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Three-Dimensional Alignment", "sec_num": "5" },
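{ "text": "As an illustration of eq. (10), the following sketch (ours) computes the cost of one 3-D event from the three marginal count matrices, here simplified to the plain prequential cost of section 4 (the paper uses the two-part variant of sec. 4.1); a missing word is represented as None, and totals/sizes hold the count totals and event-space sizes of each marginal.\n\n```python\nimport math\n\ndef marginal_cost(M, total, size, pair):\n    # Prequential cost (bits) of one 2-D event read off a marginal count matrix.\n    return -math.log2((M.get(pair, 0) + 1) / (total + size))\n\ndef triple_cost(s, t, x, M_st, M_sx, M_tx, totals, sizes):\n    # Complete event: sum of the three pairwise 2-D costs, eq. (10).\n    # Incomplete event (one word missing): only the remaining marginal is used.\n    if s is None:\n        return marginal_cost(M_tx, totals[\"tx\"], sizes[\"tx\"], (t, x))\n    if t is None:\n        return marginal_cost(M_sx, totals[\"sx\"], sizes[\"sx\"], (s, x))\n    if x is None:\n        return marginal_cost(M_st, totals[\"st\"], sizes[\"st\"], (s, t))\n    return (marginal_cost(M_st, totals[\"st\"], sizes[\"st\"], (s, t))\n            + marginal_cost(M_sx, totals[\"sx\"], sizes[\"sx\"], (s, x))\n            + marginal_cost(M_tx, totals[\"tx\"], sizes[\"tx\"], (t, x)))\n```\n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Three-Dimensional Alignment", "sec_num": "5" },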
{ "text": "Visualization: We wish to visualize the distribution of counts in the final 3-D alignment, except that now we must deal with expected counts, rather than observed counts, because some of the examples are incomplete. We can form a 3-D visualization matrix M* as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Three-Dimensional Alignment", "sec_num": "5" }, { "text": "• Compute |D|, the total number of alignments in the complete data (including the end-of-word alignments).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Three-Dimensional Alignment", "sec_num": "5" }, { "text": "• For each cell (i, j, k) in M*, the weight in that cell is given by P(i : j : k) · |D|, where P(i : j : k) is the probability of the alignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Three-Dimensional Alignment", "sec_num": "5" }, { "text": "• The matrix of expected counts will have no zero-weight cells, since there are no zero-probability events-except (. : . : .). To suppress visualizing events with very low expected counts, we don't show cells with counts below a threshold, say, 0.5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Three-Dimensional Alignment", "sec_num": "5" }, { "text": "A distribution of the expected counts in 3-D alignment is shown in figure 3. The three languages are Finnish, Estonian and Mordva. The area of each point in this figure is proportional to the expected count of the corresponding 3-way alignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Three-Dimensional Alignment", "sec_num": "5" },
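{ "text": "A sketch (ours) of the expected-count visualization matrix M* just described; prob is assumed to be a function returning the model probability P(i : j : k) of a 3-way symbol alignment.\n\n```python\ndef expected_count_matrix(prob, alphabets, total_alignments, threshold=0.5):\n    # Weight of cell (i, j, k) is P(i : j : k) * |D|; cells below the threshold\n    # are suppressed, as is the all-empty alignment (. : . : .).\n    A, B, C = alphabets\n    M_star = {}\n    for i in A:\n        for j in B:\n            for k in C:\n                if (i, j, k) == (\".\", \".\", \".\"):\n                    continue\n                w = prob((i, j, k)) * total_alignments\n                if w >= threshold:\n                    M_star[(i, j, k)] = w\n    return M_star\n```\n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Three-Dimensional Alignment", "sec_num": "5" },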
1", "ref_id": "FIGREF0" }, { "start": 1281, "end": 1288, "text": "figure)", "ref_id": null } ], "eq_spans": [], "section": "Nuisance Suffixes", "sec_num": "6" }, { "text": "While previously, we could only reach the terminal cell from cell via event (# : #), we now also permit a hyper-jump from any cell in the matrix to the terminal cell, which is equivalent to treating the remainder of source and/or target word as a nuisance suffix. Thus, hyper-jump from cell marked X means that we code the remaining symbols [\u03c3 i+1 ...\u03c3 n ] in \u03c3 and [\u03c4 j+1 ...\u03c4 m ] in \u03c4 separately, not using the global count matrix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nuisance Suffixes", "sec_num": "6" }, { "text": "That is, to align \u03c3 and \u03c4 , we first code the symbols up to X jointly, prequentially, using the global count matrix. After X, we code a special event (\u2212 : \u2212), meaning an aligned morpheme boundary, similar to (# : #) which says we have aligned the word boundaries. Then we code the rest of [\u03c3 i+1 ...\u03c3 n ], and the rest of [\u03c4 j+1 ...\u03c4 m ], both followed by #.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nuisance Suffixes", "sec_num": "6" }, { "text": "If we hyper-jump from cell (i, m), rather than from X, then we code the event (\u2212 : #)-empty suffix on (j, m) . The cost of each symbol in the suffix can be coded, for example, according to: a uniform language model: each source symbol costs \u2212 log 1/(|\u03a3| + 1); a unigram model: for each source symbol \u03c3 (including #), compute its frequency p(\u03c3) from the raw source data, and let cost(\u03c3) = \u2212 log p(\u03c3); a bigram model; etc. Table 1 compares the code length between the original 1x1 two-part code model and a nuisance suffix model (for two language pairs). The code length is always lower in the nuisance suffix model.", "cite_spans": [], "ref_spans": [ { "start": 102, "end": 108, "text": "(j, m)", "ref_id": null }, { "start": 421, "end": 428, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Nuisance Suffixes", "sec_num": "6" }, { "text": "Although it finds instances of true nuisance suffixes, the model may be fooled by certain phenomena. For example, when aligning Finnish and Estonian, the model decides that final vowels in Finnish which have disappeared in Estonian are suffixes, whereas that is historically not the case. To avoid such misinterpretation, the suffix detection feature should be used in conjunction with other model variants, including alignment of more than a pair of languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nuisance Suffixes", "sec_num": "6" }, { "text": "One way to evaluate the presented models thoroughly would require a gold-standard aligned corpus; the models produce alignments, which would be compared to expected alignments. Given a gold-standard, we could measure performance quantitatively, e.g., in terms of accuracy. However, no gold-standard alignment for the Uralic data currently exists, and building one is very costly and slow.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "7" }, { "text": "Alignment: We can perform a qualitative evaluation, by checking how many correct sound correspondences a model finds-by inspecting the final alignment of the corpus and the alignment matrix. A matrix for a 2-D, 1x1 two-part model alignment of Finnish-Estonian is shown in figure 2. 
{ "text": "Table 1 compares the code length under the original 1x1 two-part code model and under a nuisance suffix model, for two language pairs. The code length is always lower in the nuisance suffix model.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Nuisance Suffixes", "sec_num": "6" }, { "text": "Although it finds instances of true nuisance suffixes, the model may be fooled by certain phenomena. For example, when aligning Finnish and Estonian, the model decides that final vowels in Finnish which have disappeared in Estonian are suffixes, whereas that is historically not the case. To avoid such misinterpretation, the suffix detection feature should be used in conjunction with other model variants, including alignment of more than a pair of languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nuisance Suffixes", "sec_num": "6" }, { "text": "One way to evaluate the presented models thoroughly would require a gold-standard aligned corpus: the models produce alignments, which would then be compared to the expected alignments. Given a gold standard, we could measure performance quantitatively, e.g., in terms of accuracy. However, no gold-standard alignment for the Uralic data currently exists, and building one is very costly and slow.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "7" }, { "text": "Alignment: We can perform a qualitative evaluation, by checking how many correct sound correspondences a model finds-by inspecting the final alignment of the corpus and the alignment matrix. A matrix for a 2-D, 1x1 two-part model alignment of Finnish-Estonian is shown in figure 2. The size of each ball is proportional to the number of alignments of the corresponding symbols in the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "7" }, { "text": "Finnish and Estonian are closely related, and the alignment shows a close correspondence-the model finds the \"diagonal,\" i.e., most sounds correspond to \"themselves.\" We must note that this model has no a priori knowledge about the nature of the symbols, e.g., that Finnish a is identical to or has any relation to Estonian a. The languages are coded separately, and they may have different alphabets-as they do in general (we use transcribed data).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "7" }, { "text": "Rules of correspondence: One of our main goals is to model complex rules of correspondence among languages. We can evaluate the models based on how well they discover rules, and on how complex the rules are. In Fig. 2 , the baseline model finds that Fin. u ∼ Est. u, but sometimes Fin. u ∼ Est. o-this entropy is left unexplained by this model. However, the more complex 2x2 model identifies the cause exactly, by discovering that the Finnish diphthongs uo, yö, ie correspond to the Estonian long vowels oo, öö, ee, which covers (i.e., explains!) all instances of (u:o).", "cite_spans": [], "ref_spans": [ { "start": 205, "end": 214, "text": "In Fig. 2", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "7" }, { "text": "The plot shows many Finnish-Estonian correspondences, which can be found in handbooks, e.g., (Lytkin, 1973; Sinor, 1997) . For example, ä∼ä vs. ä∼a occur about evenly-reflecting the rule that original front vowels (ä) became back (a) in non-first syllables in Estonian; word-final vowels a, i, ä, preserved in Finnish, are often deleted in Estonian; etc. These can be observed directly in the alignment matrix, and in the aligned corpus.", "cite_spans": [ { "start": 93, "end": 107, "text": "(Lytkin, 1973;", "ref_id": "BIBREF12" }, { "start": 108, "end": 120, "text": "Sinor, 1997)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "7" }, { "text": "Compression: In figure 4, we compare the models against standard compressors, gzip and bzip, tested on over 3200 Finnish-Estonian word pairs from SSA. The same data given to our models is processed by the compressors, one word per line. Of course, our models know that they should align pairs of consecutive lines. This shows that learning the \"vertical\" correspondences-extracting regularity from the data-achieves much better compression rates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "7" }, { "text": "Language distance: We can use alignment to measure inter-language distances. We align all languages in StarLing pairwise, e.g., using a two-part 1x1 model. We can then measure the Normalized Compression Distance (Cilibrasi and Vitanyi, 2005) :", "cite_spans": [ { "start": 212, "end": 241, "text": "(Cilibrasi and Vitanyi, 2005)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "7" }, { "text": "NCD(a, b) = ( C(a, b) − min( C(a, a), C(b, b) ) ) / max( C(a, a), C(b, b) ), where 0 < NCD < 1, and C(a, b) is the compression cost-i.e., the cost of the complete aligned data for languages a and b. The pairwise compression distances are shown in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 247, "end": 254, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "7" },
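{ "text": "A sketch (ours) of the NCD computation and of UPGMA clustering over the resulting distance matrix; UPGMA is average-linkage hierarchical clustering, available in scipy. C is assumed to be a dictionary of pairwise compression costs, including the self-costs C(a, a).\n\n```python\nimport numpy as np\nfrom scipy.cluster.hierarchy import linkage\nfrom scipy.spatial.distance import squareform\n\ndef ncd(C, a, b):\n    # Normalized Compression Distance from the compression costs C[(a, b)]\n    # of the complete aligned data for languages a and b.\n    return (C[(a, b)] - min(C[(a, a)], C[(b, b)])) / max(C[(a, a)], C[(b, b)])\n\ndef upgma_tree(languages, C):\n    # Build the pairwise NCD matrix, then cluster with average linkage (UPGMA).\n    n = len(languages)\n    D = np.zeros((n, n))\n    for i in range(n):\n        for j in range(i + 1, n):\n            D[i, j] = D[j, i] = ncd(C, languages[i], languages[j])\n    return linkage(squareform(D), method=\"average\")\n```\n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "7" },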
{ "text": "We can then use these distances to draw phylogenetic trees, using hierarchical clustering methods. We used the UPGMA algorithm, (Murtagh, 1984) ; the resulting tree is shown in Fig. 6 . More sophisticated methods, such as the Fast Quartet method in CompLearn, (Cilibrasi and Vitanyi, 2011) , may produce even more accurate trees. Even such a simple model as the 1x1 baseline shows emerging patterns that mirror the relationships in the Uralic family tree, shown in Fig. 5 , adapted from (Anttila, 1989) . For example, scanning the entries in the table corresponding to Finnish, the compression distances grow as the corresponding distance within the family tree grows. Sister languages (in bold) should be closest among all their relations. This confirms that the model is able to compress better-to find more regularity in the data-between languages that are more closely related.", "cite_spans": [ { "start": 288, "end": 303, "text": "(Murtagh, 1984)", "ref_id": "BIBREF13" }, { "start": 415, "end": 444, "text": "(Cilibrasi and Vitanyi, 2011)", "ref_id": "BIBREF4" }, { "start": 640, "end": 655, "text": "(Anttila, 1989)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 334, "end": 340, "text": "Fig. 6", "ref_id": null }, { "start": 618, "end": 624, "text": "Fig. 5", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Results", "sec_num": "7" }, { "text": "We have presented a baseline model for alignment, and several extensions. We have evaluated the models qualitatively, by examining the alignments and the rules of correspondence that they discover, and quantitatively, by measuring compression cost and language distances. We trust that the methods presented here provide a good basis for further research. We are developing methods that take context, or environment, into account in modeling. The idea is to code sounds and environments as vectors of phonetic features and, instead of aligning symbols, to align the individual features of the symbols. Introducing context should enable us to discover more complex rules of correspondence. We also plan to extend our models to diachronic reconstruction, which allows reconstruction of proto-forms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "8" }, { "text": "NB: in Fig. 1 , the left column and the top row store the costs for symbol deletions at the beginning of the source and the target word, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This was true by definition in the baseline 2-D algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that this results in an incomplete code, since every symbol is coded twice, but that does not affect the learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was supported by the Uralink Project of the Academy of Finland, grant 129185. We thank Teemu Roos for his suggestions, and Arto Vihavainen for his work on the implementation of the algorithms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Historical and comparative linguistics", "authors": [ { "first": "R", "middle": [], "last": "Anttila", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Anttila. 1989. Historical and comparative linguistics. John Benjamins.", "links": null },
"BIBREF1": { "ref_id": "b1", "title": "An experimental study comparing linguistic phylogenetic reconstruction methods", "authors": [ { "first": "F", "middle": [], "last": "Barbancon", "suffix": "" }, { "first": "T", "middle": [], "last": "Warnow", "suffix": "" }, { "first": "D", "middle": [], "last": "Ringe", "suffix": "" }, { "first": "S", "middle": [], "last": "Evans", "suffix": "" }, { "first": "L", "middle": [], "last": "Nakhleh", "suffix": "" } ], "year": 2009, "venue": "Proc. Conf. on Languages and Genes, UC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Barbancon, T. Warnow, D. Ringe, S. Evans, and L. Nakhleh. 2009. An experimental study comparing linguistic phylogenetic reconstruction methods. In Proc. Conf. on Languages and Genes, UC Santa Barbara. Cambridge University Press.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A probabilistic approach to diachronic phonology", "authors": [ { "first": "A", "middle": [], "last": "Bouchard-Côté", "suffix": "" }, { "first": "P", "middle": [], "last": "Liang", "suffix": "" }, { "first": "T", "middle": [], "last": "Griffiths", "suffix": "" }, { "first": "D", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2007, "venue": "Proc. EMNLP-CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Bouchard-Côté, P. Liang, T. Griffiths, and D. Klein. 2007. A probabilistic approach to diachronic phonology. In Proc. EMNLP-CoNLL, Prague.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Clustering by compression", "authors": [ { "first": "R", "middle": [], "last": "Cilibrasi", "suffix": "" }, { "first": "P", "middle": [ "M B" ], "last": "Vitanyi", "suffix": "" } ], "year": 2005, "venue": "IEEE Transactions on Information Theory", "volume": "51", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Cilibrasi and P.M.B. Vitanyi. 2005. Clustering by compression. IEEE Transactions on Information Theory, 51(4).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A fast quartet tree heuristic for hierarchical clustering", "authors": [ { "first": "R", "middle": [ "L" ], "last": "Cilibrasi", "suffix": "" }, { "first": "P", "middle": [ "M B" ], "last": "Vitanyi", "suffix": "" } ], "year": 2011, "venue": "Pattern Recognition", "volume": "44", "issue": "3", "pages": "662--677", "other_ids": {}, "num": null, "urls": [], "raw_text": "R.L. Cilibrasi and P.M.B. Vitanyi. 2011. A fast quartet tree heuristic for hierarchical clustering. Pattern Recognition, 44(3):662-677.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The Minimum Description Length Principle", "authors": [ { "first": "P", "middle": [], "last": "Grünwald", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Grünwald. 2007. The Minimum Description Length Principle. MIT Press.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Suomen Sanojen Alkuperä (The Origin of Finnish Words). Suomalaisen Kirjallisuuden Seura", "authors": [ { "first": "E", "middle": [], "last": "Itkonen", "suffix": "" }, { "first": "U.-M", "middle": [], "last": "Kulonen", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Itkonen and U.-M. Kulonen. 2000. Suomen Sanojen Alkuperä (The Origin of Finnish Words). Suomalaisen Kirjallisuuden Seura, Helsinki, Finland.", "links": null },
"BIBREF7": { "ref_id": "b7", "title": "The Significance of Word Lists: Statistical Tests for Investigating Historical Connections Between Languages", "authors": [ { "first": "B", "middle": [], "last": "Kessler", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Kessler. 2001. The Significance of Word Lists: Statistical Tests for Investigating Historical Connections Between Languages. The University of Chicago Press, Stanford, CA.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Determining recurrent sound correspondences by inducing translation models", "authors": [ { "first": "G", "middle": [], "last": "Kondrak", "suffix": "" } ], "year": 2002, "venue": "Proceedings of COLING 2002", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Kondrak. 2002. Determining recurrent sound correspondences by inducing translation models. In Proceedings of COLING 2002, Taipei.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Identifying complex sound correspondences in bilingual wordlists", "authors": [ { "first": "G", "middle": [], "last": "Kondrak", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Kondrak. 2003. Identifying complex sound correspondences in bilingual wordlists. In A. Gelbukh (Ed.) CICLing, Mexico. Springer LNCS, No. 2588.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Combining evidence in cognate identification", "authors": [ { "first": "G", "middle": [], "last": "Kondrak", "suffix": "" } ], "year": 2004, "venue": "Proceedings of Canadian-AI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Kondrak. 2004. Combining evidence in cognate identification. In Proceedings of Canadian-AI 2004, London, ON. Springer-Verlag LNCS, No. 3060.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Constructing Bayesian finite mixture models by the EM algorithm", "authors": [ { "first": "P", "middle": [], "last": "Kontkanen", "suffix": "" }, { "first": "P", "middle": [], "last": "Myllymäki", "suffix": "" }, { "first": "H", "middle": [], "last": "Tirri", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Kontkanen, P. Myllymäki, and H. Tirri. 1996. Constructing Bayesian finite mixture models by the EM algorithm. Technical Report NC-TR-97-003, ESPRIT Working Group on NeuroCOLT.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Voprosy Finno-Ugorskogo Jazykoznanija (Issues in Finno-Ugric Linguistics)", "authors": [ { "first": "V", "middle": [ "I" ], "last": "Lytkin", "suffix": "" } ], "year": 1973, "venue": "", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "V. I. Lytkin. 1973. Voprosy Finno-Ugorskogo Jazykoznanija (Issues in Finno-Ugric Linguistics), volume 1-3. Nauka, Moscow.", "links": null },
Nauka, Moscow.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Complexities of hierarchic clustering algorithms: the state of the art", "authors": [ { "first": "F", "middle": [], "last": "Murtagh", "suffix": "" } ], "year": 1984, "venue": "Computational Statistics Quarterly", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Murtagh. 1984. Complexities of hierarchic cluster- ing algorithms: the state of the art. Computational Statistics Quarterly, 1.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Perfect phylogenetic networks: A new methodology for reconstructing the evolutionary history of natural languages", "authors": [ { "first": "L", "middle": [], "last": "Nakhleh", "suffix": "" }, { "first": "D", "middle": [], "last": "Ringe", "suffix": "" }, { "first": "T", "middle": [], "last": "Warnow", "suffix": "" } ], "year": 2005, "venue": "Language", "volume": "", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Nakhleh, D. Ringe, and T. Warnow. 2005. Perfect phylogenetic networks: A new methodology for re- constructing the evolutionary history of natural lan- guages. Language, 81(2).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Indoeuropean and computational cladistics", "authors": [ { "first": "D", "middle": [], "last": "Ringe", "suffix": "" }, { "first": "T", "middle": [], "last": "Warnow", "suffix": "" }, { "first": "A", "middle": [], "last": "Taylor", "suffix": "" } ], "year": 2002, "venue": "Transact. Philological Society", "volume": "", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Ringe, T. Warnow, and A. Taylor. 2002. Indo- european and computational cladistics. Transact. Philological Society, 100(1).", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Modeling by shortest data description", "authors": [ { "first": "J", "middle": [], "last": "Rissanen", "suffix": "" } ], "year": 1978, "venue": "Automatica", "volume": "", "issue": "5", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Rissanen. 1978. Modeling by shortest data descrip- tion. Automatica, 14(5).", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The Uralic Languages: Description, History and Foreign Influences (Handbook of Uralic Studies)", "authors": [], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Denis Sinor, editor. 1997. The Uralic Languages: De- scription, History and Foreign Influences (Handbook of Uralic Studies). Brill Academic Publishers.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Tower of babel: Etymological databases", "authors": [ { "first": "S", "middle": [ "A" ], "last": "Starostin", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. A. Starostin. 2005. Tower of babel: Etymological databases. http://newstar.rinet.ru/.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Probabilistic models for alignment of etymological data", "authors": [ { "first": "H", "middle": [], "last": "Wettig", "suffix": "" }, { "first": "R", "middle": [], "last": "Yangarber", "suffix": "" } ], "year": 2011, "venue": "Proc. NODALIDA", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Wettig and R. Yangarber. 2011. Probabilistic mod- els for alignment of etymological data. In Proc. 
NODALIDA, Riga, Latvia.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "Re-alignment matrix: computes Dynamic Programming search for the most probable alignment." }, "FIGREF1": { "uris": null, "type_str": "figure", "num": null, "text": "3-dimensional alignment matrix. and the cost of each alignment is computed as in the baseline model, directly in two dimensions." }, "FIGREF3": { "uris": null, "type_str": "figure", "num": null, "text": "Comparison of compression power. Two-part code model refers to the 1x1 model that is described in section 4.1 and 2x2-boundaries model multiple symbol alignment model that is discussed in section 4.2." }, "FIGREF4": { "uris": null, "type_str": "figure", "num": null, "text": "Finno-Ugric branch of the Uralic familyFigure 6: Finno-Ugric tree induced by NCD are shown in table 2" }, "TABREF0": { "type_str": "table", "text": "Nuisance suffix models. target side, and then code the rest of [\u03c3 i+1 ...\u03c3 n ] in \u03c3 and #. Symmetrically for the hyper-jump from", "html": null, "content": "
Language pair | Two-part model | Suffix model\nFin-Est | 21748.29 | 21445.01\nFin-Ugr | 10987.98 | 10794.87
", "num": null }, "TABREF2": { "type_str": "table", "text": "Pairwise normalized compression costs for Finno-Ugric sub-family of Uralic, in StarLing data.", "html": null, "content": "", "num": null } } } }