{ "paper_id": "R11-1009", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:05:00.026642Z" }, "title": "Constructing Linguistically Motivated Structures from Statistical Grammars", "authors": [ { "first": "Ali", "middle": [], "last": "Basirat", "suffix": "", "affiliation": { "laboratory": "NLP Lab", "institution": "University of Tehran", "location": { "settlement": "Tehran", "country": "Iran" } }, "email": "a.basirat@srbiau.ac.ir" }, { "first": "Heshaam", "middle": [], "last": "Faili", "suffix": "", "affiliation": { "laboratory": "NLP Lab", "institution": "University of Tehran", "location": { "settlement": "Tehran", "country": "Iran" } }, "email": "hfaili@ut.ac.ir" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper discusses two Hidden Markov Models (HMM) for linking linguistically motivated XTAG grammar and the automatically extracted LTAG used by MICA parser. The former grammar is a detailed LTAG enriched with feature structures. And the latter one is a huge size LTAG that due to its statistical nature is well suited to be used in statistical approaches. Lack of an efficient parser and sparseness in the supertags set are the main obstacles in using XTAG and MICA grammars respectively. The models were trained by the standard HMM training algorithm, Baum-Welch. To converge the training algorithm to a better local optimum, the initial state of the models also were estimated using two semi-supervised EM-based algorithms. The resulting accuracy of the model (about 91%) shows that the models can provide a satisfactory way for linking these grammars to share their capabilities together.", "pdf_parse": { "paper_id": "R11-1009", "_pdf_hash": "", "abstract": [ { "text": "This paper discusses two Hidden Markov Models (HMM) for linking linguistically motivated XTAG grammar and the automatically extracted LTAG used by MICA parser. The former grammar is a detailed LTAG enriched with feature structures. And the latter one is a huge size LTAG that due to its statistical nature is well suited to be used in statistical approaches. Lack of an efficient parser and sparseness in the supertags set are the main obstacles in using XTAG and MICA grammars respectively. The models were trained by the standard HMM training algorithm, Baum-Welch. To converge the training algorithm to a better local optimum, the initial state of the models also were estimated using two semi-supervised EM-based algorithms. The resulting accuracy of the model (about 91%) shows that the models can provide a satisfactory way for linking these grammars to share their capabilities together.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Tree Adjoining-Grammar (TAG) is a tree generating system that forms the object language by the set of derived trees (Joshi and Schabez, 1991) . This formalism as a Mildly Context Sensitive Grammar is supposed to be powerful enough to model the natural languages (Joshi, 1985) .", "cite_spans": [ { "start": 116, "end": 141, "text": "(Joshi and Schabez, 1991)", "ref_id": "BIBREF9" }, { "start": 262, "end": 275, "text": "(Joshi, 1985)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the lexicalized case (LTAG), each lexical item of the object language is associated with at least one elementary structure of the grammar called elementary tree. 
Each elementary tree in an LTAG can be considered a complex description of its anchor that provides a domain of locality over which the anchor can specify syntactic and semantic constraints (Bangalore and Joshi, 1999). An extended domain of locality and the factoring of recursion away from the domain of dependencies are the key properties of these grammars (Bangalore and Joshi, 1999).", "cite_spans": [ { "start": 356, "end": 383, "text": "(Bangalore and Joshi, 1999)", "ref_id": "BIBREF0" }, { "start": 523, "end": 550, "text": "(Bangalore and Joshi, 1999)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There are two ways of creating the set of elementary trees (Faili and Basirat, 2010). The first is the manual crafting of the elementary trees, as was done in the XTAG project (XTAG-Group, 2001). The alternative is to extract them automatically from annotated treebanks, as was done in (Xia, 2001; Chen, 2001). The result of the former method is a detailed LTAG that is enriched with semantic representations but suffers from a lack of statistical information. The output of the latter, on the other hand, is a very large LTAG that suffers from sparseness in its elementary tree set but contains enough statistical information to make it suitable for statistical approaches. The relatively large size of the automatically extracted elementary tree set is an obstacle to annotating these structures with semantic representations (Chen, 2001).", "cite_spans": [ { "start": 60, "end": 85, "text": "(Faili and Basirat, 2010)", "ref_id": "BIBREF6" }, { "start": 187, "end": 205, "text": "(XTAG-Group, 2001)", "ref_id": "BIBREF18" }, { "start": 318, "end": 329, "text": "(Xia, 2001;", "ref_id": "BIBREF17" }, { "start": 330, "end": 341, "text": "Chen, 2001)", "ref_id": "BIBREF3" }, { "start": 888, "end": 900, "text": "(Chen, 2001)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One of the negative aspects of using LTAGs is the high computational complexity of their parsing algorithms, O(n^6) (Kallmeyer, 2010). According to the work presented in (Sarkar, 2007), the factors that affect the parsing complexity of such lexicalized grammars are the number of trees selected by the words in the input sentence and the clausal complexity of the sentence to be parsed. The first factor, named syntactic lexical ambiguity, is directly addressed by supertagging, proposed by Bangalore and Joshi (1999).", "cite_spans": [ { "start": 114, "end": 131, "text": "(Kallmeyer, 2010)", "ref_id": "BIBREF11" }, { "start": 166, "end": 180, "text": "(Sarkar, 2007)", "ref_id": "BIBREF15" }, { "start": 482, "end": 508, "text": "Bangalore and Joshi (1999)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Supertagging is a robust partial parsing approach that can be applied to speed up LTAG parsing (Bangalore and Joshi, 1999). In supertagging, the flexibility of linguistically motivated lexical descriptions is integrated with the robustness of statistical approaches. The idea is based on extending the notion of 'tag' from a standard part of speech to a tag that represents a rich and complex syntactic structure, called a supertag. In lexicalized grammars such as LTAGs, each elementary structure of the grammar can be considered a supertag. 
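As an illustration, the lexicon of such a grammar can be pictured as a table from words to the supertags they anchor. The following toy fragment is ours; the tree names are hypothetical placeholders rather than actual XTAG or MICA labels:

```python
# Toy LTAG lexicon: each word anchors one or more elementary trees
# (supertags). The tree names are invented for illustration only.
lexicon = {
    "John": ["alpha_NP"],                     # noun phrase initial tree
    "runs": ["alpha_nx0V"],                   # intransitive verb tree
    "eats": ["alpha_nx0V", "alpha_nx0Vnx1"],  # two readings, so two supertags
    "fast": ["beta_vxARB", "beta_ARBnx"],     # auxiliary (adjoining) trees
}

# A word's syntactic lexical ambiguity is the number of trees it anchors.
ambiguity = {word: len(trees) for word, trees in lexicon.items()}
```

The more trees a word selects, the larger the space of attachments a parser must explore; this is precisely the ambiguity that supertagging reduces. 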
Supertagging itself is the task of assigning a supertag to each word of the sentence being processed. After supertagging, the only remaining task for the LTAG parser is to attach the selected supertags to each other, creating a forest of derived/derivation trees.", "cite_spans": [ { "start": 124, "end": 151, "text": "(Bangalore and Joshi, 1999)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Supertagging as a search problem can be modeled by two major methods: generative models and classification approaches (Bangalore et al., 2005). In the former, the problem is modeled by a Hidden Markov Model; in the latter, it is modeled by discriminative approaches such as SVMs and maximum entropy estimation. Applying either of these methods to supertagging depends on the availability of sufficient statistical information about the problem. Hence, due to their statistical nature, automatically extracted LTAGs are more suitable for supertagging algorithms than manually crafted LTAGs. This characteristic of automatically extracted LTAGs has led to the emergence of powerful statistical parsers such as MICA (Bangalore et al., 2009), which work based on the supertagging approach.", "cite_spans": [ { "start": 115, "end": 139, "text": "(Bangalore et al., 2005)", "ref_id": "BIBREF1" }, { "start": 735, "end": 759, "text": "(Bangalore et al., 2009)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The lack of an efficient parser for manually crafted LTAGs, together with the weakness of automatically extracted LTAGs in providing semantic representations, encouraged us to rectify these deficiencies by building an interface between these grammars. The interface was established between individual elementary trees of each grammar, such that any elementary tree of the source LTAG can be mapped onto an elementary tree of the target LTAG. The idea is similar to the Hidden TAG Model (Chiang and Rambow, 2006), which links spoken dialects of a language so that they benefit from sharing rich resources. Here, by relating two different perspectives on a natural language, presented in the form of two LTAGs, we aim to share their capabilities.", "cite_spans": [ { "start": 489, "end": 513, "text": "(Chiang and Rambow, 2006", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The interface was modeled as a sequence tagger that deals with the problem of mapping each supertag sequence of the source LTAG onto a supertag sequence of the target LTAG, given the local and non-local information of the source sequence. An unsupervised sequence tagger based on a Hidden Markov Model (HMM) was proposed that produces a target supertag sequence given a source supertag sequence. The sequence tagger was trained using the standard HMM training algorithm, Baum-Welch. Because the convergence of this algorithm depends strongly on the initial state of the HMM, the initial state was also estimated in an informed way, using an EM-based semi-supervised bootstrapping algorithm. 
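Once such a model is trained, decoding reduces to standard Viterbi search over the hidden (target) supertags, as described in section 3. The following minimal sketch is our illustration rather than the authors' implementation; it assumes the HMM parameters are dense NumPy arrays and that the supertags of both grammars have been mapped to integer indices:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most probable hidden-state path for an observation sequence.

    obs: list of source (observation) supertag indices
    pi:  (N,) initial probabilities over target supertags (hidden states)
    A:   (N, N) transitions, A[i, j] = P(state j after state i)
    B:   (N, M) observations, B[i, k] = P(source tag k | state i)
    """
    N, T = len(pi), len(obs)
    delta = np.full((T, N), -np.inf)         # best log-score ending in each state
    psi = np.zeros((T, N), dtype=int)        # back-pointers
    with np.errstate(divide="ignore"):       # log(0) -> -inf is intended
        log_pi, log_A, log_B = np.log(pi), np.log(A), np.log(B)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # scores[i, j]: i -> j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):            # follow back-pointers
        path.append(int(psi[t, path[-1]]))
    return path[::-1]
```

Log probabilities avoid underflow on long sentences; in the M-2 model introduced below, the UNKNOWN state is simply one more row and column of A and one more row of B. 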
The solution was applied to the manually crafted English XTAG grammar (XTAG-Group, 2001) as the target LTAG and to the automatically extracted LTAG used by the MICA parser (Bangalore et al., 2009) as the source LTAG.", "cite_spans": [ { "start": 774, "end": 791, "text": "(XTAG-Group, 2001", "ref_id": "BIBREF18" }, { "start": 866, "end": 890, "text": "(Bangalore et al., 2009)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The significance of this work is twofold. First, it offers a solution for enhancing the parsing efficiency of the XTAG grammar, as was done by Faili (2009). Second, it provides a fully automated method for bridging grammars so that their capabilities can be shared.", "cite_spans": [ { "start": 141, "end": 153, "text": "Faili (2009)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Bridging grammars in order to share their capabilities has been considered by several researchers. Improving parsing quality in resource-poor languages (Chiang and Rambow, 2006), enriching automatically extracted LTAGs with semantic representations (Chen, 2001; Faili and Basirat, 2010; Faili and Basirat, 2011), increasing the syntactic coverage of lexicalized resources, and finding the overlap between two grammars (Xia and Palmer, 2000) are considered the most important reasons for performing this task.", "cite_spans": [ { "start": 159, "end": 184, "text": "(Chiang and Rambow, 2006)", "ref_id": "BIBREF4" }, { "start": 256, "end": 268, "text": "(Chen, 2001;", "ref_id": "BIBREF3" }, { "start": 269, "end": 293, "text": "Faili and Basirat, 2010;", "ref_id": "BIBREF6" }, { "start": 294, "end": 318, "text": "Faili and Basirat, 2011)", "ref_id": "BIBREF7" }, { "start": 427, "end": 449, "text": "(Xia and Palmer, 2000)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In general, the proposed methods for performing such a task can be classified into two major categories. The first category consists of methods that try to link the grammars using the structural similarities of the grammars' elements, regardless of the syntactic environments in which the elements may be placed. The approaches proposed in (Chen, 2001), (Xia and Palmer, 2000), and (Ryant and Kipper, 2004) fall into this category.", "cite_spans": [ { "start": 341, "end": 353, "text": "(Chen, 2001)", "ref_id": "BIBREF3" }, { "start": 356, "end": 378, "text": "(Xia and Palmer, 2000)", "ref_id": "BIBREF16" }, { "start": 385, "end": 409, "text": "(Ryant and Kipper, 2004)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The second category consists of methods that try to make the connection using statistical information about the syntactic environments in which the grammars' elements appear. Chiang and Rambow (2006), by introducing a novel concept, namely the hidden TAG model, proposed a model analogous to an HMM for linking a resource-rich language to a resource-poor language. In (Faili and Basirat, 2010; Faili and Basirat, 2011), a statistical approach based on HMMs was also proposed for linking the LTAG automatically extracted from the Penn Treebank (Chen, 2001) and the English XTAG grammar (XTAG-Group, 2001). 
Here, by introducing two statistical models, we closely follow the approach presented in (Faili and Basirat, 2011).", "cite_spans": [ { "start": 365, "end": 390, "text": "(Faili and Basirat, 2010;", "ref_id": "BIBREF6" }, { "start": 391, "end": 415, "text": "Faili and Basirat, 2011)", "ref_id": "BIBREF7" }, { "start": 521, "end": 533, "text": "(Chen, 2001)", "ref_id": "BIBREF3" }, { "start": 559, "end": 576, "text": "(XTAG-Group, 2001", "ref_id": "BIBREF18" }, { "start": 688, "end": 713, "text": "(Faili and Basirat, 2011)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The task of mapping a MICA elementary tree sequence onto an appropriate XTAG elementary tree sequence can be formulated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HMM-based LTAG mapping", "sec_num": "3" }, { "text": "Given a sequence of MICA elementary trees T = (t_1, . . . , t_n) assigned to sentence S = (w_1, . . . , w_n) by MICA, tag each element of T with an elementary tree t′_i of the XTAG grammar such that the likelihood of T′ = (t′_1, . . . , t′_n) given T and S is maximized.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HMM-based LTAG mapping", "sec_num": "3" }, { "text": "This problem directly suggests a Hidden Markov Model (HMM) that relates a MICA elementary tree sequence, as an observation sequence, to the most probable XTAG elementary tree sequence, as a hidden state path. Given such a model, the Viterbi algorithm can be used to find the most probable hidden state path that generates the observation sequence. The rest of this section deals with modeling the problem using an HMM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HMM-based LTAG mapping", "sec_num": "3" }, { "text": "Regarding the existing gap between the XTAG and MICA grammars (Chen, 2001), two possible mapping models were proposed. The M-1 model simply ignores this gap. It assumes that every syntactic structure in the MICA grammar has at least one corresponding element in the XTAG grammar. In this case, each hidden state corresponds exactly to an XTAG elementary tree. The MICA supertags are considered the observation symbols. Given any XTAG elementary trees t′_i and t′_j, the state transition matrix (A = [a_{i,j}]) contains the probability of seeing t′_j after t′_i in a sequence of XTAG elementary trees. For each MICA elementary tree t_j and XTAG elementary tree t′_i, the observation probability matrix (B = [b_{i,j}]) contains the probability P(t_j | t′_i).", "cite_spans": [ { "start": 59, "end": 71, "text": "(Chen, 2001)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Problem Modeling Using HMM", "sec_num": "3.1" }, { "text": "On the other hand, the alternative model, M-2, tries to model the relation between the grammars while taking this gap into account. In this model, it is assumed that there are some syntactic structures in the MICA grammar that are not supported by the XTAG grammar. The main difference between M-1 and M-2 lies in their hidden states. In addition to the hidden states used in M-1, a new symbolic state, namely UNKNOWN, is added to the M-2 hidden state set. 
This new state represents all syntactic structures that are modeled by the MICA grammar but not by the XTAG grammar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Modeling Using HMM", "sec_num": "3.1" }, { "text": "Both the M-1 and M-2 models were trained with the Baum-Welch algorithm. Like other HMM training algorithms, Baum-Welch cannot find the global optimum of the search space. This weakness is inherited from the HMM itself, which provides no clear way to exploit extra information about the problem. In this situation, the initial state of the training algorithm provides a way to inject part of the environment's knowledge, which can largely compensate for this weakness (Rabiner, 1989).", "cite_spans": [ { "start": 478, "end": 493, "text": "(Rabiner, 1989)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.2" }, { "text": "To lead the training algorithm to a better solution, two methods were proposed for estimating the initial state of the models. The next part introduces these algorithms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.2" }, { "text": "The initial state of the models was trained using two novel semi-supervised EM-based training algorithms. The algorithms work on the available set of MICA and XTAG elementary tree sequences obtained by parsing a set of English sentences called the Initialization Data Base (IDB).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initialization", "sec_num": "3.3" }, { "text": "In the M-1 model, the IDB must be selected so that all of its sentences can be modeled in both the XTAG and MICA grammars. This constraint is due to the M-1 assumption about the problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initialization", "sec_num": "3.3" }, { "text": "In M-2, the only constraint on the IDB sentences is that they must be modeled in the MICA grammar. In this case, the IDB can be partitioned into two parts: the sentences that can be modeled by the XTAG grammar, the Parsable Initialization dataset (PI), and the sentences that cannot be modeled by the XTAG grammar, the NotParsable Initialization dataset (NPI). The partitioning enables the model to take the gap between the grammars into account.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initialization", "sec_num": "3.3" }, { "text": "Let C and C′ be two sets of elementary tree sequences obtained by parsing the IDB with the MICA and XTAG parsers, respectively. Due to the statistical nature of the MICA parser, for any sentence S_i ∈ IDB, C contains a set of scored elementary tree sequences. In contrast, C′ contains an ambiguous set of elementary tree sequences with no clear way to disambiguate it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initializing M-1", "sec_num": "3.3.1" }, { "text": "Given C and C′, the simplest and most intuitive way to estimate the initial values of the HMM is Maximum Likelihood Estimation (MLE). However, applying it requires disambiguating the output of the XTAG parser stored in C′. This calls for a function that assigns a real value to each member of C′, as shown in eq. 
1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initializing M-1", "sec_num": "3.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03c9: C \u2192 R", "eq_num": "(1)" } ], "section": "Initializing M-1", "sec_num": "3.3.1" }, { "text": "Given such a weighting function \u03c9, the probability of transition (S i \u2192 S j ) in hidden states can be estimated by taking weighted count from all bigrams (S i , S j ) in C' and normalizing by the sum of all bigrams (S i , S k ) that share the same first elements. A similar method also can be used for computing the probabilities presented in the observation matrix (B) and \u03a0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initializing M-1", "sec_num": "3.3.1" }, { "text": "Given C\" = \u03c9(C ) and C, we define function \u039b for generating the HMM \u03bb using the aforementioned MLE (eq. 2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initializing M-1", "sec_num": "3.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u039b: C\" \u00d7 C \u2192 \u03bb", "eq_num": "(2)" } ], "section": "Initializing M-1", "sec_num": "3.3.1" }, { "text": "The main problem here is to find an appropriate function \u03c9. Function \u03c9 was estimated using a semi-supervised EM-based method. The algorithm takes the C and C' as input and attempts to estimate some values for function \u03c9 such that the objective function presented in eq. 3 is being maximized. Function shows the likelihood of observing C given the HMM \u03bb achieved by \u039b.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initializing M-1", "sec_num": "3.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "= P(C|\u03bb = \u039b(C\", C))", "eq_num": "(3)" } ], "section": "Initializing M-1", "sec_num": "3.3.1" }, { "text": "In the EM formulation, the E-step was defined as the computing the value of \u03bb using \u039b. In Mstep the algorithm attempts to update the \u03c9 regarding the earlier model resulted from E-step. Eq. 4 shows how to estimate the value of \u03c9 for a XTAG elementary tree sequences T \u2208 C . In this equation, \u03be shows the set of XTAG elementary tree sequences in C' that are generated from the sentence S, the generator of T . T i \u2208 C also represents the ith MICA elementary tree sequence in \u03be. The index n shows the total number of sequences in C generated from S (| \u03be |). In this model, in addition to computing \u03c9, applying MLE is subject to generating the set of elementary tree sequences for the sentences in dataset NPI. We name this set of elementary tree sequences C XNP . Each sequence in C XNP consists of XTAG elementary trees and have to contain at least one UNKNOWN symbol regarding this fact that NPI contains the sentences that couldn't be modeled in XTAG grammars. Given the paired sets (C MNP , C XNP ) and (C MP , C XP ) and an appropriate weighting function \u03c9 as shown in eq. 
5, the initial values of the HMM can be estimated using the MLE method mentioned above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initializing M-2", "sec_num": "3.3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "ω(T′) = [Σ_{i=1}^{n} P(T_i) P(T_i, T′ | λ)] / |ξ|", "eq_num": "(4)" } ], "section": "Initializing M-1", "sec_num": "3.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "ω: C_XNP ∪ C_XP → R", "eq_num": "(5)" } ], "section": "Initializing M-2", "sec_num": "3.3.2" }, { "text": "ω was estimated using a semi-supervised bootstrapping EM-based algorithm. Like the initialization algorithm proposed in sec. 3.3.1, this algorithm has an iterative nature and tries to estimate values for ω (and hence for the HMM parameters) in a greedy manner. The objective in this phase is to maximize the likelihood of observing the MICA supertag sequences in C_MNP ∪ C_MP (eq. 6). At the heart of the algorithm, C_XNP is bootstrapped by applying a customized version of the Viterbi algorithm to C_MNP using the current value of the HMM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initializing M-2", "sec_num": "3.3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "= P(C_MP ∪ C_MNP | λ)", "eq_num": "(6)" } ], "section": "Initializing M-2", "sec_num": "3.3.2" }, { "text": "The algorithm consists of four main stages, as below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initializing M-2", "sec_num": "3.3.2" }, { "text": "1. Pre-Initializing: Initializing the HMM parameters without considering the UNKNOWN hidden state.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initializing M-2", "sec_num": "3.3.2" }, { "text": "2. Bootstrapping: Bootstrapping C_XNP by annotating C_MNP with hidden-state labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initializing M-2", "sec_num": "3.3.2" }, { "text": "3. Updating: Estimating the new value of the HMM using Maximum Likelihood Estimation (MLE) on the paired sequences (C_MNP, C_XNP) and (C_MP, C_XP). 4. Termination: If the termination criterion is not satisfied, go to step 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initializing M-2", "sec_num": "3.3.2" }, { "text": "In the rest of this part, we describe each phase in detail. Pre-Initializing: In this step, the algorithm tries to estimate the HMM parameters from the related sequences in (C_MP, C_XP) using MLE. Applying MLE over these sets gives an approximation of the probabilities in the HMM parameters, except those related to the UNKNOWN hidden state. The weighting function used in this phase assigns a uniform probability to the members of C_MP that are generated from the same sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initializing M-2", "sec_num": "3.3.2" }, { "text": "The probabilities related to the UNKNOWN hidden state could also be estimated using some heuristics over the gap between the grammars. 
For instance, the amount of uncertainty involved in the HMM parameters resulting from the MLE is a criterion for estimating the probabilities related to UNKNOWN.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initializing M-2", "sec_num": "3.3.2" }, { "text": "Bootstrapping: In this phase, the algorithm tries to annotate each MICA supertag sequence in C_MNP with a set of hidden state paths, given the current value of the HMM. To do this, a modified version of the Viterbi algorithm, namely Forced Viterbi, was used. The algorithm looks for the hidden state paths that have the highest consistency with the current HMM and pass through the UNKNOWN hidden state.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initializing M-2", "sec_num": "3.3.2" }, { "text": "Before applying Forced Viterbi to C_MNP, we need some assumptions about the source elementary trees that are more likely to correspond to UNKNOWN. A simple way of making such assumptions is to take the difference between C_MNP and C_MP and look for the n-grams in the former that are not present in the latter. The result of this process is a set of n-grams of MICA elementary trees, namely the Gap-Set, whose related n-grams in the original sentences could not be modeled by the XTAG grammar. For any n-gram in the Gap-Set that is observed in a MICA elementary tree sequence in C_MNP, the Forced Viterbi algorithm will generate 2^n XTAG elementary tree sequences by considering all the ways in which UNKNOWN can be assigned to the elementary trees of the observed n-gram. Updating: In this step, the HMM parameters are updated with respect to the paired sets (C_MNP, C_XNP) and (C_MP, C_XP). Having these paired sets and a scoring function ω, the HMM parameters can be updated using the MLE method mentioned above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initializing M-2", "sec_num": "3.3.2" }, { "text": "For each XTAG elementary tree sequence T′ ∈ C_XP ∪ C_XNP and its related MICA elementary tree sequence T_i ∈ C_MP ∪ C_MNP, the scoring function ω can be defined as shown in eq. 7. ξ in this equation refers to the set of XTAG elementary tree sequences that are generated from the same sentence, with T′ ∈ ξ. Fig. 1 gives an outline of the HMM initialization algorithm. Observing unchanged values of the probability presented in eq. 6, or exceeding a predefined maximum number of iterations, are two candidate termination criteria.", "cite_spans": [], "ref_spans": [ { "start": 294, "end": 300, "text": "Fig. 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Initializing M-2", "sec_num": "3.3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "ω(T′) = P(T_i, T′ | λ) / |ξ|", "eq_num": "(7)" } ], "section": "Initializing M-2", "sec_num": "3.3.2" }, { "text": "To evaluate the accuracy of the proposed models, they were initialized and trained on three real-world datasets: the ATIS, IBM Manual, and Wall Street Journal (WSJ) corpora. Parts of these datasets were randomly selected and divided into three distinct sections: an initialization dataset (IDB), a training dataset (TRDB), and a testing dataset (TSDB). 
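As a sketch of this protocol (our reconstruction; the paper does not report the exact sampling details), the split can be expressed as:

```python
import random

def split_corpus(sentences, n_idb, n_trdb, n_tsdb, seed=0):
    """Randomly partition a corpus into IDB, TRDB and TSDB sections.

    The section sizes are hypothetical parameters, not values taken
    from the paper.
    """
    rng = random.Random(seed)  # fixed seed for repeatable splits
    sample = rng.sample(sentences, n_idb + n_trdb + n_tsdb)
    idb = sample[:n_idb]
    trdb = sample[n_idb:n_idb + n_trdb]
    tsdb = sample[n_idb + n_trdb:]
    return idb, trdb, tsdb
```
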
Table 1 shows some statistics about the datasets used in initializing, training, and testing the models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments Description", "sec_num": "4.1" }, { "text": "The results of applying the initialization methods of M-1 and M-2 to the IDBs are presented in figures 2 and 3, respectively. These figures show the value of Θ presented in eq. 8. O in this equation refers to the set of all MICA elementary tree sequences used in the algorithms. The observed progress in the likelihood of the MICA elementary tree sequences is evidence of the success of the algorithms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initializing", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Θ = [Σ_{T_i ∈ O} log P(T_i | λ)] / |O|", "eq_num": "(8)" } ], "section": "Initializing", "sec_num": "4.2" }, { "text": "As these figures show, while the values resulting from M-2 are strictly ascending in a logarithmic manner, the increase in the values resulting from M-1 follows no specific, predictable pattern. This is due to the objective function shown in eq. 3, which does not consider the score values of the MICA elementary tree sequences in C. In fact, for any sentence in each IDB, C contains many scored MICA elementary tree sequences that are used in the initialization algorithm but are not reflected in the value of the objective function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initializing", "sec_num": "4.2" }, { "text": "Table 2: The result of the tagging accuracy on the test sets. ATIS ... 78.30% | IBM 79.55% 88.30% 88.70% | WSJ 87.75% 91.50% 88.96%", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Initializing", "sec_num": "4.2" }, { "text": "The models were evaluated in two ways: tagging accuracy and parsed sentences. The first criterion, originally introduced in (Faili and Basirat, 2011), enables us to evaluate the models as XTAG supertaggers. The second provides a way to evaluate them when combined with an LTAG parser. In the parsed-sentences criterion, the main focus is on the number of resulting XTAG sequences whose constituent elementary trees can be attached to each other using the standard operations defined in the TAG formalism, substitution and adjunction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models Evaluation", "sec_num": "4.3" }, { "text": "Due to the lack of a gold annotated corpus, the tagging accuracy evaluation was done manually. Table 2 shows the results of the tagging accuracy on the mentioned test sets (TSDBs). The baseline here is the tagging accuracy reported in (Faili and Basirat, 2011). As can be seen, M-2 gives the best accuracy in comparison with M-1 and the baseline.", "cite_spans": [ { "start": 245, "end": 270, "text": "(Faili and Basirat, 2011)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 89, "end": 97, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Models Evaluation", "sec_num": "4.3" }, { "text": "The results for the alternative criterion, parsed sentences, are given in Table 3. As it shows, here too M-2 gives a better response compared to M-1. 
An important point to note is that not all of the sentences in the test sets are covered by the XTAG grammar. In fact, our experiments showed that all but 6%, 13%, and 24% of the sentences in the ATIS-TSDB, IBM-TSDB, and WSJ-TSDB, respectively, could be parsed by the XTAG parser. Table 3: Number of the parsed sentences. ATIS: 5% (M-1), 33% (M-2); IBM: 12.74% (M-1), 43.10% (M-2); WSJ: 50.25% (M-1), 57% (M-2).", "cite_spans": [], "ref_spans": [ { "start": 516, "end": 523, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Models Evaluation", "sec_num": "4.3" }, { "text": "Two Hidden Markov Models (HMMs) were proposed to build a bridge between the linguistic view of the English XTAG grammar and the statistical nature of the LTAG used by the MICA parser (Bangalore et al., 2009). The models were trained with the standard HMM training algorithm, Baum-Welch. The initial states of the models were also estimated using two semi-supervised EM-based algorithms. The models can be used to combine statistical approaches with grammar engineering.", "cite_spans": [ { "start": 177, "end": 201, "text": "(Bangalore et al., 2009)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Supertagging: An approach to almost parsing", "authors": [ { "first": "Srinivas", "middle": [], "last": "Bangalore", "suffix": "" }, { "first": "Aravind", "middle": [ "K" ], "last": "Joshi", "suffix": "" } ], "year": 1999, "venue": "Computational Linguistics", "volume": "25", "issue": "2", "pages": "237--266", "other_ids": {}, "num": null, "urls": [], "raw_text": "Srinivas Bangalore and Aravind K. Joshi. 1999. Supertagging: An approach to almost parsing. Computational Linguistics, 25(2):237-266.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Factoring global inference by enriching local representations", "authors": [ { "first": "S", "middle": [], "last": "Bangalore", "suffix": "" }, { "first": "P", "middle": [], "last": "Haffner", "suffix": "" }, { "first": "G", "middle": [], "last": "Emami", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Bangalore, P. Haffner, and G. Emami. 2005. Factoring global inference by enriching local representations. Technical report, AT&T Labs - Research.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Mica: A probabilistic dependency parser based on tree insertion grammar", "authors": [ { "first": "Srinivas", "middle": [], "last": "Bangalore", "suffix": "" }, { "first": "P", "middle": [], "last": "Boullier", "suffix": "" }, { "first": "A", "middle": [], "last": "Nasr", "suffix": "" }, { "first": "O", "middle": [], "last": "Rambow", "suffix": "" }, { "first": "B", "middle": [], "last": "Sagot", "suffix": "" } ], "year": 2009, "venue": "North American Chapter of the Association for Computational Linguistics (NAACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Srinivas Bangalore, P. Boullier, A. Nasr, O. Rambow, and B. Sagot. 2009. Mica: A probabilistic dependency parser based on tree insertion grammar. 
North American Chapter of the Association for Computational Linguistics (NAACL).", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Toward Efficient Statistical Parsing Using Lexicalized Grammatical Information", "authors": [ { "first": "John", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Chen. 2001. Toward Efficient Statistical Parsing Using Lexicalized Grammatical Information. Ph.D. thesis, University of Delaware.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The hidden tag model: Synchronous grammars for parsing resource-poor languages", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" }, { "first": "Owen", "middle": [], "last": "Rambow", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 8th International Workshop on Tree Adjoining Grammar and Related Formalisms", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang and Owen Rambow. 2006. The hidden tag model: Synchronous grammars for parsing resource-poor languages. Proceedings of the 8th International Workshop on Tree Adjoining Grammar and Related Formalisms, July.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Integrating compositional semantics into a verb lexicon", "authors": [ { "first": "H", "middle": [], "last": "Dang", "suffix": "" }, { "first": "K", "middle": [], "last": "Kipper", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the Eighteenth International Conference on Computational Linguistics (COLING-2000)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Dang, K. Kipper, and Martha Palmer. 2000. Integrating compositional semantics into a verb lexicon. In Proceedings of the Eighteenth International Conference on Computational Linguistics (COLING-2000).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Augmenting the automated extracted tree adjoining grammars by semantic representation", "authors": [ { "first": "Heshaam", "middle": [], "last": "Faili", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Basirat", "suffix": "" } ], "year": 2010, "venue": "6th IEEE International Conference on Natural Language Processing and Knowledge Engineering", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heshaam Faili and Ali Basirat. 2010. Augmenting the automated extracted tree adjoining grammars by semantic representation. In 6th IEEE International Conference on Natural Language Processing and Knowledge Engineering (IEEE NLP-KE'10), Beijing.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "An unsupervised approach for linking automatically extracted and manually crafted ltags", "authors": [ { "first": "Heshaam", "middle": [], "last": "Faili", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Basirat", "suffix": "" } ], "year": 2011, "venue": "12th International conference on Intelligent Text Processing and Computational Linguistics (CICLing-2011)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heshaam Faili and Ali Basirat. 2011. An unsupervised approach for linking automatically extracted and manually crafted ltags. 
In 12th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing-2011), Tokyo, February.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "From partial toward full parsing", "authors": [ { "first": "Heshaam", "middle": [], "last": "Faili", "suffix": "" } ], "year": 2009, "venue": "Recent Advances In Natural Language Processing (RANLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heshaam Faili. 2009. From partial toward full parsing. In Recent Advances In Natural Language Processing (RANLP).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Tree adjoining grammars and lexicalized grammars", "authors": [ { "first": "Aravind", "middle": [ "K" ], "last": "Joshi", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Schabes", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aravind K. Joshi and Yves Schabes. 1991. Tree adjoining grammars and lexicalized grammars. Technical Report MS-CIS 91-22, Department of Computer & Information Science, University of Pennsylvania.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "How much context-sensitivity is necessary for characterizing structural descriptions?", "authors": [ { "first": "Aravind", "middle": [ "K" ], "last": "Joshi", "suffix": "" } ], "year": 1985, "venue": "Natural Language Processing: Theoretical, Computational, and Psychological Perspectives", "volume": "", "issue": "", "pages": "206--250", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aravind K. Joshi. 1985. How much context-sensitivity is necessary for characterizing structural descriptions? Natural Language Processing: Theoretical, Computational, and Psychological Perspectives, pages 206-250. New York, NY: Cambridge University Press.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Parsing Beyond Context-Free Grammars", "authors": [ { "first": "Laura", "middle": [], "last": "Kallmeyer", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laura Kallmeyer. 2010. Parsing Beyond Context-Free Grammars. Cognitive Technologies. Springer.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Class-based construction of a verb lexicon", "authors": [ { "first": "K", "middle": [], "last": "Kipper", "suffix": "" }, { "first": "H", "middle": [], "last": "Dang", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the Seventeenth National Conference on Artificial Intelligence (AAAI-2000)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Kipper, H. Dang, and Martha Palmer. 2000. Class-based construction of a verb lexicon. 
In Proceedings of the Seventeenth National Conference on Artificial Intelligence (AAAI-2000).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A tutorial on hidden markov models and selected applications in speech recognition", "authors": [ { "first": "Lawrence", "middle": [ "R" ], "last": "Rabiner", "suffix": "" } ], "year": 1989, "venue": "Proceedings of the IEEE", "volume": "77", "issue": "2", "pages": "257--286", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lawrence R. Rabiner. 1989. A tutorial on hidden markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-286.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Assigning xtag trees to verbnet", "authors": [ { "first": "Neville", "middle": [], "last": "Ryant", "suffix": "" }, { "first": "Karin", "middle": [], "last": "Kipper", "suffix": "" } ], "year": 2004, "venue": "Seventh International Workshop on Tree Adjoining Grammar and Related Formalisms", "volume": "7", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Neville Ryant and Karin Kipper. 2004. Assigning xtag trees to verbnet. TAG+7: Seventh International Workshop on Tree Adjoining Grammar and Related Formalisms, May 20-22.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Combining supertagging and lexicalized tree-adjoining grammar parsing", "authors": [ { "first": "Anoop", "middle": [], "last": "Sarkar", "suffix": "" } ], "year": 2007, "venue": "Complexity of Lexical Descriptions and its Relevance to Natural Language Processing: A Supertagging Approach", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anoop Sarkar. 2007. Combining supertagging and lexicalized tree-adjoining grammar parsing. In Srinivas Bangalore and Aravind Joshi, editors, Complexity of Lexical Descriptions and its Relevance to Natural Language Processing: A Supertagging Approach.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Evaluating the coverage of ltags on annotated corpora", "authors": [ { "first": "Fei", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fei Xia and Martha Palmer. 2000. Evaluating the coverage of ltags on annotated corpora. The Workshop on Using Evaluation within HLT Programs: Results and Trends, May 30.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Automatic grammar generation from two different perspectives", "authors": [ { "first": "Fei", "middle": [], "last": "Xia", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fei Xia. 2001. Automatic grammar generation from two different perspectives. Ph.D. thesis, University of Pennsylvania.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A lexicalized tree adjoining grammar for English", "authors": [ { "first": "", "middle": [], "last": "Xtag-Group", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "XTAG-Group. 2001. A lexicalized tree adjoining grammar for English. 
Technical Report IRCS 01-03, Institute for Research in Cognitive Science, University of Pennsylvania.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "The HMM Initialization algorithm used in M-2", "uris": null }, "FIGREF1": { "num": null, "type_str": "figure", "text": "The values of the objective function presented in eq. 8 while initializing the M-1", "uris": null }, "FIGREF2": { "num": null, "type_str": "figure", "text": "The values of the objective function presented in eq. 8 while initializing the M-2", "uris": null }, 
" }, "TABREF1": { "type_str": "table", "text": "", "num": null, "html": null, "content": "
" } } } }