{
"paper_id": "R11-1047",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:04:54.569185Z"
},
"title": "Enlarging Monolingual Dictionaries for Machine Translation with Active Learning and Non-Expert Users",
"authors": [
{
"first": "Miquel",
"middle": [],
"last": "Espl\u00e0-Gomis",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "V\u00edctor",
"middle": [
"M"
],
"last": "S\u00e1nchez-Cartagena",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Juan",
"middle": [],
"last": "Antonio P\u00e9rez-Ortiz",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper explores a new approach to help non-expert users with no background in linguistics to add new words to a monolingual dictionary in a rule-based machine translation system. Our method aims at choosing the correct paradigm which explains not only the particular surface form introduced by the user, but also the rest of inflected forms of the word. A large monolingual corpus is used to extract an initial set of potential paradigms, which are then interactively refined by the user through active machine learning. We show the results of experiments performed on a Spanish monolingual dictionary.",
"pdf_parse": {
"paper_id": "R11-1047",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper explores a new approach to help non-expert users with no background in linguistics to add new words to a monolingual dictionary in a rule-based machine translation system. Our method aims at choosing the correct paradigm which explains not only the particular surface form introduced by the user, but also the rest of inflected forms of the word. A large monolingual corpus is used to extract an initial set of potential paradigms, which are then interactively refined by the user through active machine learning. We show the results of experiments performed on a Spanish monolingual dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Rule-based machine translation (MT) systems heavily depend on explicit linguistic data such as morphological dictionaries, bilingual dictionaries, grammars, and structural transfer rules (Hutchins and Somers, 1992) . Although some automatic acquisition is possible, collecting these data usually requires in the end the intervention of domain experts (mainly, linguists) who master all the encoding and format details of the particular MT system. We should, however, open the door to a broader group of non-expert users who could collaboratively enrich MT systems through the web.",
"cite_spans": [
{
"start": 187,
"end": 214,
"text": "(Hutchins and Somers, 1992)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we present a novel method for enlarging the monolingual dictionaries in rule-based MT systems by non-expert users. An automatic process is first run to collect as much linguistic information as possible about the new word to be added to the dictionary and, after that, the resulting set of potential hypothesis is filtered by eliciting additional knowledge from non-experts with no linguistic background through active learning (Olsson, 2009; Settles, 2010) , that is, by interactively querying the user in order to efficiently reduce the search space. As these users do not possess the technical skills which are usually required to fill in the dictionaries, this elicitation is performed via a series of simple and easy yes/no questions which only require speaker-level understanding of the language. Our method does not only incorporate to the dictionary the particular surface form introduced by the user (for example, wants), but it also discovers a suitable paradigm for the new word so that all the word forms of the corresponding lexeme and their morphological information (such as wanted, verb, past or wanting, verb, gerund) are also inserted.",
"cite_spans": [
{
"start": 442,
"end": 456,
"text": "(Olsson, 2009;",
"ref_id": "BIBREF12"
},
{
"start": 457,
"end": 471,
"text": "Settles, 2010)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This work focuses on monolingual dictionaries. These dictionaries have basically two types of data: paradigms, that group regularities in inflection, and word entries. The paradigm assigned to many common English verbs, for instance, indicates that by adding the ending -ing, the gerund is obtained. Paradigms make easier the management of dictionaries in two ways:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. by reducing the quantity of information that needs to be stored, thereby creating more compact data structures, and 2. by simplifying revision and validation by describing the regularities in the dictionary; for example, describing the inflection of a verb by giving its stem and inflection model (\"it is conjugated as\") is safer than writing all the possible conjugated forms one by one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Once the most frequent paradigms in a dictionary are defined, entering a new inflected word is generally limited to writing the stem and choosing an inflection paradigm. We show a semi-automatic method for the assignment of new words to the existing paradigms in a monolingual dictionary, which interrogates the user when it cannot automatically find enough evidence for unambiguously determining the correct paradigm. Note that as paradigms in MT usually contain morphological information (gender, noun, tense, etc.) on every inflected word form, our method also avoids the user from identifying all these linguistic data. In our experiments we will use the free/opensource rule-based MT system Apertium (Forcada et al., 2011) . Apertium 1 is being currently used to build MT systems for a variety of language pairs. Every word is assigned to a paradigm in Apertium's monolingual dictionaries, and specific paradigms are defined for words with irregular forms. In addition, all the lexical information is included in the paradigms; as a result, there exist paradigms which only contain lexical information and do not add any suffix to the corresponding stem; the paradigm for the proper nouns is a good example of this.",
"cite_spans": [
{
"start": 705,
"end": 727,
"text": "(Forcada et al., 2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Once a word and its corresponding translation have been added to the monolingual dictionaries of the source and target languages, respectively, of a MT system, the next step is to link both of them by adding the corresponding entry in the bilingual dictionary. How to adapt this task to non-experts is out of the scope of this paper and will be tackled in future works.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Social Translation. In spite of the vast amount of contents and collaboratively-created knowledge uploaded to the web during the last years, linguistic barriers still pose a significant obstacle to universal collaboration as they lead to the creation of \"islands\" of content, only meaningful to speakers of a particular language. Until fully-automatic high-quality MT becomes a reality, massive online collaboration in translation may well be the only force capable of tearing down these barriers (Garcia, 2009) and produce large-scale availability of multilingual information. Actually, this collaborative translation movement is happening nowadays, although still timidly, in applications such as Cucumis.org, OneHourTranslation.com or the Google Translator Toolkit 2 .",
"cite_spans": [
{
"start": 497,
"end": 511,
"text": "(Garcia, 2009)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The resulting scenario, which may be called social translation, will need efficient computer translation tools, such as reliable MT systems, friendly postediting interfaces, or shared translation memories. Remarkably, collaboration around MT should not only concern the postediting of raw machine translations, but also the creation and management of the linguistic resources needed by the MT systems; if properly done, this can lead to a significant improvement in the translation engines. Since as many hands as possible are necessary for the task, speakers that, in principle, do not have the level of technical know-how required to improve MT systems or manage linguistic resources must be involved, and, consequently, software that can make those tasks easier and elicit the knowledge of both experts and non-experts must be developed (Font-Llitj\u00f3s, 2007; S\u00e1nchez-Cartagena and P\u00e9rez-Ortiz, 2010) . This largescale collaboration implies a change of paradigm in the way linguistic resources are managed and a series of conditions should hold in order to fully accomplish the goals of this social translation scenario (P\u00e9rez-Ortiz, 2010).",
"cite_spans": [
{
"start": 840,
"end": 860,
"text": "(Font-Llitj\u00f3s, 2007;",
"ref_id": null
},
{
"start": 861,
"end": 901,
"text": "S\u00e1nchez-Cartagena and P\u00e9rez-Ortiz, 2010)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Two of the more prominent works related to the elicitation of knowledge for building or improving MT systems are those by Font-Llitj\u00f3s (2007) and McShane et al. (2002) . The former proposes a strategy for improving both transfer rules and dictionaries by analysing the postediting process performed by a non-expert through a special interface. McShane et al. (2002) design a complex framework to elicit linguistic knowledge from informants who are not trained linguists and use this information to build MT systems into English; their system provides users with a lot of information about different linguistic phenomena to ease the elicitation task. Ambati et al. (2010) show how to apply an active learning (Olsson, 2009) strategy to the configuration of a statistical machine translation.",
"cite_spans": [
{
"start": 122,
"end": 141,
"text": "Font-Llitj\u00f3s (2007)",
"ref_id": null
},
{
"start": 146,
"end": 167,
"text": "McShane et al. (2002)",
"ref_id": "BIBREF10"
},
{
"start": 344,
"end": 365,
"text": "McShane et al. (2002)",
"ref_id": "BIBREF10"
},
{
"start": 650,
"end": 670,
"text": "Ambati et al. (2010)",
"ref_id": "BIBREF0"
},
{
"start": 708,
"end": 722,
"text": "(Olsson, 2009)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Elicitation and Active Learning.",
"sec_num": null
},
{
"text": "Automatic Extraction of Resources. Many approaches have been proposed to deal with the automatic acquisition of linguistic resources for MT, mainly, transfer rules and bilingual dictionaries, even for the specific case of the Apertium platform (Caseli et al., 2006; S\u00e1nchez-Mart\u00ednez and Forcada, 2009) . The automatic identification of morphological rules (a problem for which paradigm identification is a potential resolution strategy) has also been subject of many recent studies (Monson, 2009; Creutz and Lagus, 2007; Goldsmith, 2010; Walther and Nicolas, 2011) .",
"cite_spans": [
{
"start": 244,
"end": 265,
"text": "(Caseli et al., 2006;",
"ref_id": "BIBREF2"
},
{
"start": 266,
"end": 301,
"text": "S\u00e1nchez-Mart\u00ednez and Forcada, 2009)",
"ref_id": "BIBREF15"
},
{
"start": 482,
"end": 496,
"text": "(Monson, 2009;",
"ref_id": "BIBREF11"
},
{
"start": 497,
"end": 520,
"text": "Creutz and Lagus, 2007;",
"ref_id": "BIBREF3"
},
{
"start": 521,
"end": 537,
"text": "Goldsmith, 2010;",
"ref_id": "BIBREF7"
},
{
"start": 538,
"end": 564,
"text": "Walther and Nicolas, 2011)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Elicitation and Active Learning.",
"sec_num": null
},
{
"text": "Novelty. Our work introduces some novel elements compared to previous approaches:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Elicitation and Active Learning.",
"sec_num": null
},
{
"text": "1. Unlike the Avenue formalism used in the work by Font-Llitj\u00f3s (2007), our MT system is a pure transfer-based one in the sense that a single translation is generated and no language model is used to score a set of possible candidate translations. Therefore, we are interested in the unique right answer and assume that an incorrect paradigm cannot be assigned to a new word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Elicitation and Active Learning.",
"sec_num": null
},
{
"text": "2. Bartuskov\u00e1 and Sedl\u00e1cek (2002) also present a tool for semi-automatic assignment of words to declination patterns; their system is based on a decision tree with a question in every node. Their proposal, however, focuses on nouns and is aimed at experts because of the technical nature of the questions.",
"cite_spans": [
{
"start": 3,
"end": 33,
"text": "Bartuskov\u00e1 and Sedl\u00e1cek (2002)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Elicitation and Active Learning.",
"sec_num": null
},
{
"text": "3. Our approach is addressed to non-experts, including those who probably cannot define even vaguely what, for instance, an adverb is, but who can intuitively identify whether a particular word is correct under the rules for forming words in their language; therefore, the answer to as few as possible simple questions is our main source of information in addition to what an automated extraction method may deliver in a first step. Font-Llitj\u00f3s (2007) already anticipated the advisability of incorporating an active learning mechanism in her transfer rule refinement system, asking the user to validate different translations deduced from the initial hypothesis. However, this active learning approach has not yet been undertaken. Unlike the work by McShane et al. 2002, we want to relieve users of acquiring linguistic skills.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Elicitation and Active Learning.",
"sec_num": null
},
{
"text": "4. Our work focuses on identifying the paradigm which could be assigned to a word, a task more restrictive than decompounding a word into a set of morphemes. In the work by Monson (2009) some errors are tolerated in the final output of the system. 5. Our mid-term intention is to develop a system in line with the social translation principles which may be used to collaboratively build MT systems from scratch. This will also include the semi-automatic learning of the paradigms or the transfer rules which better serve the translation task, and which do not need necessarily correspond to the linguistically motivated ones. 3",
"cite_spans": [
{
"start": 173,
"end": 186,
"text": "Monson (2009)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Elicitation and Active Learning.",
"sec_num": null
},
{
"text": "Outline of the Paper. The rest of the paper is organised as follows. Section 2 introduces our method for semi-automatic assignment of words to paradigms. A brief outline of the format used by the dictionaries of the Apertium MT system is given in section 3. Section 4 presents our experimental set-up and Section 5 discusses the results attained. The experiments performed pose some limitations in our approach or in the way in which data is currently represented in Apertium's dictionaries, which are discussed in section 6, together with some ideas on how to cope with them in future work. Finally, the paper ends with some conclusions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Elicitation and Active Learning.",
"sec_num": null
},
{
"text": "In this work we focus on languages which generate inflections by adding suffixes to the stems of words, as happens, for example, with Romance languages; our approach, however, could be easily adapted to inflectional languages based on different ways of adding morphemes. Let P = {p i } be the set of paradigms in a monolingual dictionary. Each paradigm p i defines a set of suffixes F i = {f ij } which are appended to stems to build new inflected word forms, along with some additional morphological information. The dictionary also includes a list of stems, each labelled with the index of a particular paradigm; the stem is the part of a word that is common to all its inflected variants. Given a stem/paradigm pair composed of a stem t and a paradigm p i , the expansion I(t, p i ) is the set of possible word forms resulting from appending all the suffixes in p i to t. For instance, an English dictionary may contain a paradigm p i with suffixes F i = { ,-s, -ed, -ing} ( denotes the empty string), and the stem want assigned to p i ; the expansion I(want, p i ) consists of the set of word forms want, wants, wanted and wanting. We also define a candidate stem t as an element of Pr(w), the set of possible prefixes of a particular word form w. Given a new word form w to be added to a monolingual dictionary, our objective is to find both the candidate stem t \u2208 Pr(w) and the paradigm p i which expand to the largest possible set of morphologically correct inflections. To that end, our method performs three tasks: obtaining the set of all compatible stem/paradigm candidates which generate, among others, the word form w when expanded; giving a confidence score to each of the stem/paradigm candidates so that the next step is as short as possible; and, finally, asking the user about some of the inflections derived from each of the stem/paradigm candidates obtained in the first step. Next we describe the methods used for each of these three tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "It is worth noting that in this work we assume that all the paradigms for the words in the dictionary are already included in it. The situation in which for a given word no suitable paradigm is available in the dictionary will be tackled in the future, possibly by following the ideas in related works (Monson, 2009) .",
"cite_spans": [
{
"start": 302,
"end": 316,
"text": "(Monson, 2009)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "The first step for adding a word form w to the dictionary is to detect the set of compatible paradigms. To do so, we use a generalised suffix tree (GST) (McCreight, 1976) containing all the possible suffixes included in the paradigms in P . Each of these suffixes is labelled with the index of the corresponding paradigms. The GST data structure allows to retrieve the paradigms compatible with w by efficiently searching for all the possible suffixes of w; when a suffix is found, the prefix and the paradigm are considered as a candidate stem/paradigm pair. In this way, a list L of candidate stem/paradigm pairs is built; we will denote each of these candidates with c n .",
"cite_spans": [
{
"start": 153,
"end": 170,
"text": "(McCreight, 1976)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Paradigm Detection",
"sec_num": "2.1"
},
{
"text": "The following example illustrates this stage of our method. Consider a simple dictionary with only three paradigms:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paradigm Detection",
"sec_num": "2.1"
},
{
"text": "p 1 : f 11 = , f 12 =-s p 2 : f 21 =-y, f 22 =-ies p 3 : f 31 =-y, f 32 =-ies, f 33 =-ied, f 34 =-ying",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paradigm Detection",
"sec_num": "2.1"
},
{
"text": "Assume that a user wants to add the new word w=policies to the dictionary. The candidate stem/paradigm pairs which will be obtained after this stage are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paradigm Detection",
"sec_num": "2.1"
},
{
"text": "c 1 =policies/p 1 , c 2 =policie/p 1 , c 3 =polic/p 2 , c 4 =polic/p 3 2.2 Paradigm Scoring",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paradigm Detection",
"sec_num": "2.1"
},
{
"text": "Once L is obtained, a confidence score is computed for each stem/paradigm candidate c n \u2208 L using a large monolingual corpus C. One possible way to compute the score is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paradigm Detection",
"sec_num": "2.1"
},
{
"text": "Score(c n ) = \u2200w \u2208I(cn) Appear C (w ) |I(c n )| ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paradigm Detection",
"sec_num": "2.1"
},
{
"text": "where Appear C (w ) is a function that returns 1 when the inflected form w appears in the corpus C and 0 otherwise, and I is the expansion function as defined before. The square root term is used to avoid very low scores for large paradigms which include lot of suffixes. One potential problem with the previous formula is that all the inflections in I(c n ) are taken into account, including those that, although morphologically correct, are not very usual in the lan-guage and, consequently, in the corpus. To overcome this, Score(c n ) is redefined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paradigm Detection",
"sec_num": "2.1"
},
{
"text": "Score(c n ) = \u2200w \u2208I C (cn) Appear C (w ) |I C (c n )| ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paradigm Detection",
"sec_num": "2.1"
},
{
"text": "where I C (c n ) is the difference set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paradigm Detection",
"sec_num": "2.1"
},
{
"text": "I C (c n ) = I(c n ) \\ Unusual C (c n ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paradigm Detection",
"sec_num": "2.1"
},
{
"text": "The function Unusual C (c n ) uses the words in the dictionary already assigned to p i as a reference to obtain which of the inflections generated by p i are not usual in the corpus C. Let T (p i ) be a function retrieving the set of stems in the dictionary assigned to the paradigm p i . For each of the suffixes f ij in F i our system computes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paradigm Detection",
"sec_num": "2.1"
},
{
"text": "Ratio(f ij , p i ) = \u2200t\u2208T (p i ) Appear C (tf ij ) |T (p i )| ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paradigm Detection",
"sec_num": "2.1"
},
{
"text": "and builds the set Unusual C (c n ) by concatenating the stem t to all the suffixes f ij with Ratio(f ij , p i ) under a given threshold \u0398. Following our example, the following inflections for the different candidates will be obtained:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paradigm Detection",
"sec_num": "2.1"
},
{
"text": "I(c 1 )={policies, policiess} I(c 2 )={policie, policies} I(c 3 )={policy, policies} I(c 4 )={policy, policies, policied, policying}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paradigm Detection",
"sec_num": "2.1"
},
{
"text": "Using a large monolingual English corpus C, word forms policies and policy will be easily found; the other inflections (policie, policiess, policied and policying) will not be found. To simplify the example, assume that Unusual C (c n ) = \u2205 for all the candidates; the resulting scores will be: Score(c 1 )=0.71, Score(c 2 )=0.71, Score(c 3 )=1.41, Score(c 4 )=1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paradigm Detection",
"sec_num": "2.1"
},
{
"text": "Finally, the best candidate is chosen from L by querying the user about a reduced set of the inflections for some of the candidate paradigms c n \u2208 L. To do so, our system firstly sorts L in descending order by Score(c n ). Then, users are asked to confirm whether some of the inflections in each expansion are morphologically correct (more precisely, whether they exist in the language); the only possible answer for these questions is yes or no. In this way, when an inflected word form w is presented to the user",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Active Learning Through User Interaction",
"sec_num": "2.3"
},
{
"text": "\u2022 if it is accepted, all c n \u2208 L for which w / \u2208 I(c n ) are removed from L;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Active Learning Through User Interaction",
"sec_num": "2.3"
},
{
"text": "\u2022 if it is rejected, all c n \u2208 L for which w \u2208 I(c n ) are removed from L.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Active Learning Through User Interaction",
"sec_num": "2.3"
},
{
"text": "Note that c 1 , the best stem/paradigm pair according to Score, may change after updating L. Questions are asked to the user until only one single candidate remains in L. In order to ask as few questions as possible, the word forms shown to the user are carefully selected. Let G(w , L) be a function giving the number of c n \u2208 L for which w \u2208 I(c n ). We use the value of G(w , L) in two different phases: confirmation and discarding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Active Learning Through User Interaction",
"sec_num": "2.3"
},
{
"text": "Confirmation. In this stage our system tries to find a suitable candidate c n , that is, one for which all the inflections in I(c n ) are morphologically correct. In principle, we may consider that the inflections generated by the best candidate c 1 in the current L (the one with the highest score) are correct. Because of this, the user is asked about the inflection w \u2208 I(c n ) with the lowest value for G(w , L), so that, in case it is accepted, a significant part of the paradigms in L are removed from the list. This process is repeated until",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Active Learning Through User Interaction",
"sec_num": "2.3"
},
{
"text": "\u2022 only one single candidate remains in L, which is used as the final output of the system; or",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Active Learning Through User Interaction",
"sec_num": "2.3"
},
{
"text": "\u2022 all w \u2208 I(c 1 ) are generated by all the candidates remaining in L, meaning that c 1 is a suitable candidate, although there still could be more suitable ones in L.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Active Learning Through User Interaction",
"sec_num": "2.3"
},
{
"text": "If the second situation holds, the system moves on to the discarding stage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Active Learning Through User Interaction",
"sec_num": "2.3"
},
{
"text": "Discarding. In this stage, the system has accepted c 1 as a possible solution, but it needs to check whether any of the remaining candidates in L is more suitable. Therefore, the new strategy is to ask the user about those inflections w / \u2208 I(c 1 ) with the highest possible value for G(w , L). This process is repeated until",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Active Learning Through User Interaction",
"sec_num": "2.3"
},
{
"text": "\u2022 only c 1 remains in L, and it will be used as the final output of the system; or",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Active Learning Through User Interaction",
"sec_num": "2.3"
},
{
"text": "\u2022 an inflection w / \u2208 I(c 1 ) is accepted, meaning that some of the other candidates is better than c n .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Active Learning Through User Interaction",
"sec_num": "2.3"
},
{
"text": "If the second situation holds, the system removes c 1 from L and goes back to the confirmation stage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Active Learning Through User Interaction",
"sec_num": "2.3"
},
{
"text": "For both confirmation and discarding stages, if there are many inflections with the same value for G(w , L), the system chooses the one with higher Ratio(f ij , p i ), that is, the most usual in C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Active Learning Through User Interaction",
"sec_num": "2.3"
},
{
"text": "It is important to remark that this method cannot distinguish between candidates which generate the same set I(c n ). In the experiments, they have considered as a single candidate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Active Learning Through User Interaction",
"sec_num": "2.3"
},
{
"text": "In our example, the ordered list of candidates will be L = (c 3 , c 4 , c 1 , c 2 ) . Choosing the inflection in I(c 3 ) with the smaller value for G(w , L) the inflection policy, which is only generated by two candidates, wins. Hopefully, the user will accept it and this will make that c 1 and c 2 be removed from L. At this point, I(c 3 ) \u2282 I(c 4 ), c 3 is suitable and, consequently, the system will try to discard c 4 . Querying the user about any of the inflections in I(c 4 ) which is not present in I(c 3 ) (policied and policying) and getting user rejection will make the system to remove c 4 from L, confirming c 3 as the most suitable candidate.",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 83,
"text": "L = (c 3 , c 4 , c 1 , c 2 )",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Active Learning Through User Interaction",
"sec_num": "2.3"
},
{
"text": "A small example follows to show how a simple entry is encoded in the English Apertium's monolingual dictionary. A paradigm named par123 to be used in English nouns with singular ending in -um which change it to -a to form the plural form will be defined in XML as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Dictionaries in Apertium",
"sec_num": "3"
},
{
"text": "<pardef n=\"par1\"> <e><p> <l>um</l> <r>um<s n=\"n\"/><s n=\"sg\"/></r> </p></e> <e><p> <l>a</l> <r>um<s n=\"n\"/><s n=\"pl\"/></r> </p></e> </pardef> Now, the words bacterium/bacteria and datum/data will be defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Dictionaries in Apertium",
"sec_num": "3"
},
{
"text": "<e lm=\"bacterium\"> <i>bacteri</i> <par n=\"par123\"/> </e> <e lm=\"datum\"> <i>dat</i> <par n=\"par123\"/> </e>",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Dictionaries in Apertium",
"sec_num": "3"
},
{
"text": "The part inside the i element contains the stem of the lexeme, which is common to all inflected forms, and the element par refers to the assigned paradigm. In this case, bacterium will be analysed into bacterium<n><sg> and bacteria into bacterium<n><pl>.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Dictionaries in Apertium",
"sec_num": "3"
},
{
"text": "It is also possible to create entries in the dictionaries consisting of two or more words if these words are considered to build a single translation unit. Dictionaries may also contain nested paradigms used in other paradigms (for instance, paradigms for enclitic pronoun combinations are included in all Spanish verb paradigms).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Dictionaries in Apertium",
"sec_num": "3"
},
{
"text": "It is clear that it may be hard for non-experts to incorporate new entries to the dictionaries unless methods, like the one proposed in this paper, exist to conveniently elicit their language knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Dictionaries in Apertium",
"sec_num": "3"
},
{
"text": "The aim of the experiments is to assess, in a realistic scenario, whether our semi-automatic methodology can find the most suitable paradigm for a given word. To this end, a group of people was asked to add a set of words to a monolingual dictionary using our methodology. For this task, we chose the Apertium Spanish monolingual dictionary from the Spanish-Catalan language pair. First, the dictionary was filtered to remove",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "\u2022 word entries belonging to a closed part-of-speech category: when building a monolingual dictionary from scratch, words from closed categories are usually included first, since they are very frequent in source texts;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "\u2022 word entries assigned to a paradigm which only contains an empty suffix: these paradigms usually define proper nouns, which may be identified using other methods;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "\u2022 multi-word units, which are out of the scope of this paper;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "\u2022 prefix inflection entries: as our methodology is designed to deal with suffix inflection, the only entry found in the dictionary with prefix inflection was discarded;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "\u2022 redundant paradigms, which generate the same inflections with the same lexical information and are, therefore, equivalent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "A test set was created with words extracted from the filtered dictionary. First, one stem assigned to each of the paradigms p i with 1 < |T (p i )| < 10 was added. To build a more realistic test set, we also chose one stem from each paradigm p i with 10 \u2264 |T (p i )| so that more words were assigned to very common paradigms. Then, for each stem/paradigm pair, we obtained all the possible word forms and included the most common ones in the test set according to the Ratio(f ij , p i ) value. In this way, we obtained 226 words: 106 extracted from the first group of paradigms and 120 from the second one. Obviously, the stems from which the test words were obtained were removed from the dictionary. The test set was then split into 10 subsets, each assigned to a different human evaluator. Each evaluator, in a heterogeneous group of non-experts, was then asked to introduce each of the words in their subset using our system. Experiments were run using the filtered dictionary and, as the monolingual corpus C, a word list obtained from the Spanish Wikipedia dump 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
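The stem-selection step above can be sketched as follows. This is a minimal illustration under assumed data shapes; `pick_test_stems` and `paradigm_stems` are hypothetical names, and which stem is drawn from each paradigm is left to a seeded random choice.

```python
import random

# Sketch of the test-set selection: one stem per paradigm with
# 1 < |T(p)| < 10, plus one stem per paradigm with |T(p)| >= 10 so that
# very common paradigms are also represented.
def pick_test_stems(paradigm_stems, rng):
    """paradigm_stems maps a paradigm id to the list of stems T(p)."""
    first_group, second_group = [], []
    for pid, stems in paradigm_stems.items():
        if 1 < len(stems) < 10:
            first_group.append((pid, rng.choice(stems)))
        elif len(stems) >= 10:
            second_group.append((pid, rng.choice(stems)))
    return first_group, second_group
```

Paradigms with a single stem are skipped, matching the 1 < |T(p_i)| lower bound.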
{
"text": "The different evaluation metrics obtained from the human evaluation process are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "\u2022 success rate: the fraction of words from the test set that were tagged with the paradigm assigned to them in the original Apertium dictionary. This is the most straightforward metric to evaluate our methodology;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "\u2022 average precision and recall: precision (P) and recall (R) were computed as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "P (c, c\u2032) = |I(c) \u2229 I(c\u2032)| \u2022 |I(c)| \u22121 , R(c, c\u2032) = |I(c) \u2229 I(c\u2032)| \u2022 |I(c\u2032)| \u22121 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "where c is the stem/paradigm pair chosen by our system and c\u2032 is the pair originally in the dictionary. Confidence intervals were estimated with 99% statistical confidence using a t-test;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "\u2022 average number of questions: the average number of questions asked by our system for each word in the test set;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "\u2022 average number of initial paradigms: the average number of compatible paradigms initially found as possible solutions in the first stage of our method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
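The precision and recall defined above reduce to set overlaps once I(c) and I(c') are modelled as sets of generated surface forms. A minimal sketch (function names and the example word forms are illustrative assumptions):

```python
# Precision and recall between the chosen pair c and the reference pair c',
# with I(c) and I(c') represented as Python sets of surface forms.
def precision(chosen_forms, reference_forms):
    return len(chosen_forms & reference_forms) / len(chosen_forms)

def recall(chosen_forms, reference_forms):
    return len(chosen_forms & reference_forms) / len(reference_forms)

# Hypothetical example: the chosen paradigm overgenerates one form.
chosen = {"bacterium", "bacteria", "bacteriums"}   # I(c)
reference = {"bacterium", "bacteria"}              # I(c')
print(precision(chosen, reference))  # 2/3
print(recall(chosen, reference))     # 1.0
```

High recall with lower precision thus signals a chosen paradigm that generates a superset of the reference inflections.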
{
"text": "The value of the threshold \u0398 used to compute the set Unusual C (c n ) defined in Section 2 was 0.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Finally, an alternative approach without user interaction was designed as a baseline so that the impact of active learning could be better evaluated. The baseline consists of directly choosing the first element in the list L as the most suitable candidate. The average position of the right candidate in L has also been computed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We evaluated our approach and computed the results following the metrics described in Section 4. The average number of initial candidates detected by our approach was 56.4; this metric was especially high for verbs, whereas it was much lower for nouns and adjectives. The average number of questions asked to the users by the active learning approach was 5.2, which is reasonably small considering the 56.4 initial paradigms on average and the fact that the average position of the right candidate in L was 9.1. Figure 1 shows a histogram representing the position of the right candidate in the initial list L for each word in the test set. We also observed that users needed around 30 seconds on average to find the paradigm of each word in the test set. We obtained a success rate of 72.9% for the active learning approach, with a precision of P = 87% \u00b1 5 and a recall of R = 87% \u00b1 5. These results suggest that the words which were assigned to incorrect paradigms were assigned to paradigms generating similar inflections. These results are clearly better than those obtained by the baseline approach, which achieved a success rate of 28.9%, a precision of P = 70.3% \u00b1 6 and a recall of R = 62.77% \u00b1 7.",
"cite_spans": [],
"ref_spans": [
{
"start": 523,
"end": 531,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": null
},
{
"text": "Taking a closer look at the results, we observed some relevant causes of error. On the one hand, we detected human errors: words which should have been accepted but were rejected, or vice versa. These mistakes, caused by a lack of knowledge on the part of the users (for example, about accentuation rules), should be taken into account in the future; they could be addressed, for instance, by using reinforcement questions or by combining the answers of different users for the same or similar words. Moreover, it would be possible to assign a confidence score to the paradigms in the dictionary based on how frequently words are incorrectly assigned to them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": null
},
{
"text": "We also observed that most of the words which were not assigned to the expected paradigm were verbs. Spanish morphological rules allow multiple concatenations of enclitic pronouns at the end of verbs. On many occasions, users rejected verb forms with too many enclitic pronouns or in which some particular enclitics made no semantic sense. This happens because, in order to reduce the number of paradigms, Apertium's dictionaries may assign some words to existing paradigms which are a superset of the correct one; since the semantically incorrect word forms included this way will never occur in a text to be translated, this may, in principle, be safely done.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": null
},
{
"text": "In this paper we have described a system for interactively enlarging dictionaries and selecting the most suitable paradigm for new words. Our preliminary experiments have brought to light several limitations of our method which will be tackled in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations and Work Ahead",
"sec_num": "6"
},
{
"text": "Detection of lexical information. One of the most important limitations of our approach is that, as already noted in Section 2, candidate paradigms generating the same I(c n ) set cannot be distinguished. This situation usually arises when the expansions of two different stem/paradigm pairs are equal but the lexical information in each paradigm differs. For example, in Spanish two different paradigms may contain the same suffixes F = {\u2205, -s} although one of them generates nouns and the other generates adjectives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations and Work Ahead",
"sec_num": "6"
},
{
"text": "We have started to explore a method to semi-automatically obtain this lexical information. A statistical part-of-speech tagger may be used to obtain initial hypotheses about the lexical properties of a word w; this information could then be refined by querying users with complete sentences in which w plays different lexical roles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations and Work Ahead",
"sec_num": "6"
},
{
"text": "Lack of suitable paradigms. Our approach assumes that all the paradigms for a particular language are already included in the dictionary, but it could be interesting to have a method to also add new paradigms. The work by Monson (2009) could be a good starting point for such a method.",
"cite_spans": [
{
"start": 222,
"end": 235,
"text": "Monson (2009)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations and Work Ahead",
"sec_num": "6"
},
{
"text": "Other improvements. We plan to improve our approach by using simple statistical letter models of bigrams or trigrams to discard candidates generating morphologically unlikely word forms, or by using additional information in the scoring stage, such as word context or number of occurrences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations and Work Ahead",
"sec_num": "6"
},
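The letter-model idea can be sketched with a character-bigram filter. This is an illustrative sketch of the direction proposed, not the planned implementation; function names and the boundary markers are assumptions.

```python
from collections import Counter

# Train relative bigram frequencies on a word list; words are padded with
# boundary markers so word-initial and word-final bigrams count too.
def train_bigrams(words):
    counts = Counter()
    for w in words:
        padded = "^" + w + "$"
        counts.update(zip(padded, padded[1:]))
    total = sum(counts.values())
    return {bg: c / total for bg, c in counts.items()}

def min_bigram_prob(word, model):
    """Probability of the rarest bigram in the word (0.0 if unseen)."""
    padded = "^" + word + "$"
    return min(model.get(bg, 0.0) for bg in zip(padded, padded[1:]))

model = train_bigrams(["casa", "casas", "cosa", "cosas"])
# A candidate form containing an unseen bigram scores 0 and could be discarded.
assert min_bigram_prob("casa", model) > 0
assert min_bigram_prob("caqa", model) == 0.0
```

In practice the model would be trained on the same monolingual corpus C, and a threshold on this score would prune implausible candidate expansions before querying the user.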
{
"text": "We have shown an active learning method for adding new entries to monolingual dictionaries. Our system allows non-expert users with no linguistic background to contribute to the improvement of RBMT systems. The Java source code for the tool described in this paper is published 5 under an open-source license.",
"cite_spans": [
{
"start": 278,
"end": 279,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "http://www.apertium.org 2 http://translate.google.com/toolkit",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For example, a single inferred paradigm could group inflections for verbs like wait (\u2205, -s, -ed, -ing) and nouns like waiter (\u2205, -s), whereas an expert would probably write two different paradigms in this case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://dumps.wikimedia.org/eswiki/20110114/eswiki-20110114-pages-articles.xml.bz2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been partially funded by the Spanish Ministerio de Ciencia e Innovaci\u00f3n through project TIN2009-14009-C02-01 and by the Generalitat Valenciana through grant ACIF/2010/174 from the VALi+d programme.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Active learning and crowd-sourcing for machine translation",
"authors": [
{
"first": "Vamshi",
"middle": [],
"last": "Ambati",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Seventh conference on International Language Resources and Evaluation, LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vamshi Ambati, Stephan Vogel, and Jaime Carbonell. 2010. Active learning and crowd-sourcing for ma- chine translation. In Proceedings of the Seventh con- ference on International Language Resources and Evaluation, LREC 2010.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Tools for semi-automatic assignment of Czech nouns to declination patterns",
"authors": [
{
"first": "Dita",
"middle": [],
"last": "Bartuskov\u00e1",
"suffix": ""
},
{
"first": "Radek",
"middle": [],
"last": "Sedl\u00e1cek",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 5th International Conference on Text, Speech and Dialogue",
"volume": "",
"issue": "",
"pages": "159--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dita Bartuskov\u00e1 and Radek Sedl\u00e1cek. 2002. Tools for semi-automatic assignment of czech nouns to dec- lination patterns. In Proceedings of the 5th Inter- national Conference on Text, Speech and Dialogue, pages 159-164, London, UK. Springer-Verlag.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic induction of bilingual resources from aligned parallel corpora: application to shallowtransfer machine translation",
"authors": [
{
"first": "Helena",
"middle": [],
"last": "Caseli",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Nunes",
"suffix": ""
},
{
"first": "Mikel",
"middle": [],
"last": "Forcada",
"suffix": ""
}
],
"year": 2006,
"venue": "Machine Translation",
"volume": "20",
"issue": "",
"pages": "227--245",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helena Caseli, Maria Nunes, and Mikel Forcada. 2006. Automatic induction of bilingual resources from aligned parallel corpora: application to shallow- transfer machine translation. Machine Translation, 20:227-245.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Unsupervised models for morpheme segmentation and morphology learning",
"authors": [
{
"first": "Mathias",
"middle": [],
"last": "Creutz",
"suffix": ""
},
{
"first": "Krista",
"middle": [],
"last": "Lagus",
"suffix": ""
}
],
"year": 2007,
"venue": "ACM Trans. Speech Lang. Process",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mathias Creutz and Krista Lagus. 2007. Unsupervised models for morpheme segmentation and morphol- ogy learning. ACM Trans. Speech Lang. Process, 4.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automatic improvement of machine translation systems",
"authors": [],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ariadna Font-Llitj\u00f3s. 2007. Automatic improvement of machine translation systems. Ph.D. thesis, Carnegie Mellon University.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Apertium: a free/open-source platform for rule-based machine translation",
"authors": [
{
"first": "Mikel",
"middle": [
"L"
],
"last": "Forcada",
"suffix": ""
},
{
"first": "Mireia",
"middle": [],
"last": "Ginest\u00ed-Rosell",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Nordfalk",
"suffix": ""
},
{
"first": "Jim",
"middle": [
"O"
],
"last": "'regan",
"suffix": ""
},
{
"first": "Sergio",
"middle": [],
"last": "Ortiz-Rojas",
"suffix": ""
},
{
"first": "Juan",
"middle": [
"Antonio"
],
"last": "P\u00e9rez-Ortiz",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "S\u00e1nchez-Martnez",
"suffix": ""
},
{
"first": "Gema",
"middle": [],
"last": "Ram\u00edrez-S\u00e1nchez",
"suffix": ""
},
{
"first": "Francis",
"middle": [
"M"
],
"last": "Tyers",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1007/s10590-011-9090-0"
]
},
"num": null,
"urls": [],
"raw_text": "Mikel L. Forcada, Mireia Ginest\u00ed-Rosell, Jacob Nordfalk, Jim O'Regan, Sergio Ortiz-Rojas, Juan Antonio P\u00e9rez-Ortiz, Felipe S\u00e1nchez-Mart\u00ednez, Gema Ram\u00edrez-S\u00e1nchez, and Francis M. Tyers. 2011. Apertium: a free/open-source platform for rule-based machine translation. Machine Translation. doi: 10.1007/s10590-011-9090-0. (Footnote 5: https://apertium.svn.sourceforge.net/svnroot/apertium/branches/apertium-dixtools-paradigmlearning)",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Beyond translation memory: Computers and the professional",
"authors": [
{
"first": "Ignacio",
"middle": [],
"last": "Garcia",
"suffix": ""
}
],
"year": 2009,
"venue": "The Journal of Specialised Translation",
"volume": "12",
"issue": "",
"pages": "199--214",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ignacio Garcia. 2009. Beyond translation memory: Computers and the professional. The Journal of Specialised Translation, 12:199-214.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The Handbook of Computational Linguistics and Natural Language Processing, chapter Segmentation and morphology",
"authors": [
{
"first": "John",
"middle": [
"A"
],
"last": "Goldsmith",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John A. Goldsmith, 2010. The Handbook of Compu- tational Linguistics and Natural Language Process- ing, chapter Segmentation and morphology. Wiley- Blackwell.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An introduction to machine translation",
"authors": [
{
"first": "W",
"middle": [
"J"
],
"last": "Hutchins",
"suffix": ""
},
{
"first": "H",
"middle": [
"L"
],
"last": "Somers",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. J. Hutchins and H. L. Somers. 1992. An introduc- tion to machine translation. Academic Press, Lon- don.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A space-economical suffix tree construction algorithm",
"authors": [
{
"first": "Edward",
"middle": [
"M"
],
"last": "Mccreight",
"suffix": ""
}
],
"year": 1976,
"venue": "Journal of the Association for Computing Machinery",
"volume": "23",
"issue": "",
"pages": "262--272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward M. McCreight. 1976. A space-economical suffix tree construction algorithm. Journal of the Association for Computing Machinery, 23:262-272, April.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Embedding knowledge elicitation and MT systems within a single architecture",
"authors": [
{
"first": "Marjorie",
"middle": [],
"last": "Mcshane",
"suffix": ""
},
{
"first": "Sergei",
"middle": [],
"last": "Nirenburg",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Cowie",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Zacharski",
"suffix": ""
}
],
"year": 2002,
"venue": "Machine Translation",
"volume": "17",
"issue": "",
"pages": "271--305",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marjorie McShane, Sergei Nirenburg, James Cowie, and Ron Zacharski. 2002. Embedding knowledge elicitation and MT systems within a single architec- ture. Machine Translation, 17:271-305.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "ParaMor: From Paradigm Structure to Natural Language Morphology Induction",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Monson",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Monson. 2009. ParaMor: From Paradigm Structure to Natural Language Morphology Induc- tion. Ph.D. thesis, Carnegie Mellon University.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A literature survey of active machine learning in the context of natural language processing",
"authors": [
{
"first": "Fredrik",
"middle": [],
"last": "Olsson",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fredrik Olsson. 2009. A literature survey of active machine learning in the context of natural language processing. Technical report, School of Electronics and Computer Science, University of Southampton.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Social Translation: How Massive Online Collaboration Could Take Machine Translation to the Next Level",
"authors": [
{
"first": "Juan",
"middle": [],
"last": "Antonio P\u00e9rez-Ortiz",
"suffix": ""
}
],
"year": 2010,
"venue": "Second European Language Resources and Technologies Forum: Language Resources of the Future, FlarenetNet Forum 2010",
"volume": "",
"issue": "",
"pages": "64--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juan Antonio P\u00e9rez-Ortiz. 2010. Social Translation: How Massive Online Collaboration Could Take Ma- chine Translation to the Next Level. In Second Euro- pean Language Resources and Technologies Forum: Language Resources of the Future, FlarenetNet Fo- rum 2010, pages 64-65, Barcelona, Spain.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Tradubi: open-source social translation for the Apertium machine translation platform",
"authors": [
{
"first": "M",
"middle": [],
"last": "V\u00edctor",
"suffix": ""
},
{
"first": "Juan Antonio P\u00e9rez-Ortiz",
"middle": [],
"last": "S\u00e1nchez-Cartagena",
"suffix": ""
}
],
"year": 2010,
"venue": "Open Source Tools for Machine Translation",
"volume": "",
"issue": "",
"pages": "47--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V\u00edctor M. S\u00e1nchez-Cartagena and Juan Antonio P\u00e9rez- Ortiz. 2010. Tradubi: open-source social transla- tion for the Apertium machine translation platform. In Open Source Tools for Machine Translation, MT Marathon 2010, pages 47-56.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Inferring shallow-transfer machine translation rules from small parallel corpora",
"authors": [
{
"first": "Felipe",
"middle": [],
"last": "S\u00e1nchez",
"suffix": ""
},
{
"first": "-",
"middle": [],
"last": "Mart\u00ednez",
"suffix": ""
},
{
"first": "Mikel",
"middle": [
"L"
],
"last": "Forcada",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of Artificial Intelligence Research",
"volume": "34",
"issue": "",
"pages": "605--635",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felipe S\u00e1nchez-Mart\u00ednez and Mikel L. Forcada. 2009. Inferring shallow-transfer machine translation rules from small parallel corpora. Journal of Artificial In- telligence Research, 34:605-635.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Active learning literature survey",
"authors": [
{
"first": "Burr",
"middle": [],
"last": "Settles",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burr Settles. 2010. Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin\u2013Madison.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Enriching morphological lexica through unsupervised derivational rule acquisition",
"authors": [
{
"first": "G\u00e9raldine",
"middle": [],
"last": "Walther",
"suffix": ""
},
{
"first": "Lionel",
"middle": [],
"last": "Nicolas",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the International Workshop on Lexical Resources",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G\u00e9raldine Walther and Lionel Nicolas. 2011. En- riching morphological lexica through unsupervised derivational rule acquisition. In Proceedings of the International Workshop on Lexical Resources.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Histogram representing the distribution of the position of the right candidate in the initial list of candidates L for each word in the test set."
}
}
}
}