|
{ |
|
"paper_id": "R11-1048", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:05:09.869865Z" |
|
}, |
|
"title": "Morphological Analysis of Biomedical Terminology with Analogy-Based Alignment", |
|
"authors": [ |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Claveau", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IRISA-Univ. Rennes", |
|
"location": {} |
|
}, |
|
"email": "vincent.claveau@irisa.fr" |
|
}, |
|
{ |
|
"first": "Ewa", |
|
"middle": [], |
|
"last": "Kijak", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IRISA-Univ. Rennes", |
|
"location": {} |
|
}, |
|
"email": "ewa.kijak@irisa.fr" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In the biomedical domain, many terms are neoclassical compounds (composed of several Greek or Latin roots). The study of their morphology is important for numerous applications since it makes it possible to structure, translate, retrieve them efficiently... In this paper, we propose an original yet fruitful approach to carry out this morphological analysis by relying on Japanese, more precisely on terms written in kanjis, as a pivot language. In order to do so, we have developed a specially crafted alignment algorithm relying on analogy learning. Aligning terms with their kanji-based counterparts provides at the same time a decomposition of the term into morphs, and a kanji label for each morph. Evaluated on a dataset of French terms, our approach yields a precision greater than 70% and shows its relevance compared with existing techniques. We also illustrate the interest of this approach through two direct applications of the produced alignments: translating unknown terms and discovering relationships between morphs for terminological structuring.", |
|
"pdf_parse": { |
|
"paper_id": "R11-1048", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In the biomedical domain, many terms are neoclassical compounds (composed of several Greek or Latin roots). The study of their morphology is important for numerous applications since it makes it possible to structure, translate, retrieve them efficiently... In this paper, we propose an original yet fruitful approach to carry out this morphological analysis by relying on Japanese, more precisely on terms written in kanjis, as a pivot language. In order to do so, we have developed a specially crafted alignment algorithm relying on analogy learning. Aligning terms with their kanji-based counterparts provides at the same time a decomposition of the term into morphs, and a kanji label for each morph. Evaluated on a dataset of French terms, our approach yields a precision greater than 70% and shows its relevance compared with existing techniques. We also illustrate the interest of this approach through two direct applications of the produced alignments: translating unknown terms and discovering relationships between morphs for terminological structuring.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In many domains, accessing the information in documents or collections of documents is guided by the use of well-defined terms, which form a terminology of the domain. This is particularly true in the biomedical domain where there is a long tradition of terminologies development for structuring the knowledge as well as accessing it. An example is the MeSH (Medical Subject Headings) www.nlm.nih.gov/mesh terminology which is used to index the very popular PubMED database (www.pubmed.gov). Knowing how to handle these terms, understanding them, translating them or building semantic relationships between them are thus essential operations for applications like enrichment of bilingual lexicons, or more generally machine translation, information retrieval...", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this framework, the work presented here is interested in the morphology of simple terms from the biomedical domain as a basis for the terminological analysis. More precisely, we present a technique aiming at breaking up a term into its morphological components, namely morphs, and associating in the same time semantic knowledge to these morphs. Note that in this paper, we distinguish morphs, elementary linguistic signs (segments), from morphemes, equivalence classes with identical signified and close significants (Mel'\u010duk, 2006) . We therefore tackle the same issue already raised in some studies (Del\u00e9ger et al., 2008; Mark\u00f3 et al., 2005 , for example), but we try here to suppress the costly human operations required by these studies.", |
|
"cite_spans": [ |
|
{ |
|
"start": 521, |
|
"end": 536, |
|
"text": "(Mel'\u010duk, 2006)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 605, |
|
"end": 627, |
|
"text": "(Del\u00e9ger et al., 2008;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 628, |
|
"end": 646, |
|
"text": "Mark\u00f3 et al., 2005", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The original idea at the heart of our approach is to use the multilingualism of existing terminological databases. We exploit Japanese as a pivot language, or more precisely terms written in kanjis, to help decomposing the terms of other languages into morphs and associate them with the corresponding kanjis, in a fully automatic way. Thus, kanjis play the role of a semantic representation for morphs. The main advantage of kanjis in this respect is that Japanese terms can be seen as a concatenation of elementary words which are easier to find in general language dictionaries. For example, the term photochimiotherapy can be translated in Japanese by IfB\u00d5; splitting and aligning these two terms gives: photo \u2194 I ('light'), chimio \u2194 f ('chemistry'), th\u00e9rapie \u2194 B\u00d5 ('therapy'). Our approach chiefly relies on the hypothesis that the composition of terms in kanjis is the same than those of English or French simple terms. This hypothesis can be seen as peremptory, but the results presented below in this paper show that it is a reasonable hypothesis. Finally, our approach provides, at the same time 1) an effective way to split terms into morphs, 2) the semantic meaning of each morph as they are actually used.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This morphological analysis thus relies on an essential step which consists in aligning English or French terms with Japanese ones taken from a multilingual terminology. To do so, we propose a new alignment technique, particularly suited to this kind of data, which mixes Forward-Backward algorithm and analogy-based machine learning. After a presentation of related work in Section 2, either in terms of applications or methods, we describe this alignment technique in Section 3. Results of the morphological analysis are detailed in Section 4. In Section 5, we illustrate the interest of such analysis through two applications. The first one shows that our technique can be used to translate and analyse never-seen-before terms. The second application illustrates how the morphs and their obtained semantic labels can be used from a terminological point of view.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Many studies have used morphology for terminological analysis. This is more particularly the case in the biomedical domain where terminologies are central to many applications and where terms are constructed by operations like neo-classical composition (e.g. chemotherapy, built from the Greek pseudo-word chemo, and therapy), which are very regular, and very productive. Unfortunately, no comprehensive database of morphs with semantic information is available, and splitting a term into morphs is still an issue. One can distinguish two views of the use of morphology as a tool for term (or word) analysis. In the lexematic view, relations between terms rely on the word form, but without the need to split them into morphs (Grabar and Zweigenbaum, 2002; Claveau and L'Homme, 2005, for example) . Beside this implicit use of morphology, the morphemic view chiefly relies on splitting the term into morphs as a first step. Many studies have been made in this framework. They either rely on partially manual approaches, as the already mentioned ones (Del\u00e9ger et al., 2008; Mark\u00f3 et al., 2005) in which morphs and combination rules are provided by an expert, or on more automatic approaches. The latter usually try to find recurrent letter patterns as morphcandidate. But such techniques cannot associate a semantic meaning with these morphs. To our knowledge, no existing work makes the most of a pivot language to perform an automatic morphological analysis, as we propose in this study.", |
|
"cite_spans": [ |
|
{ |
|
"start": 726, |
|
"end": 756, |
|
"text": "(Grabar and Zweigenbaum, 2002;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 757, |
|
"end": 796, |
|
"text": "Claveau and L'Homme, 2005, for example)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1050, |
|
"end": 1072, |
|
"text": "(Del\u00e9ger et al., 2008;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1073, |
|
"end": 1092, |
|
"text": "Mark\u00f3 et al., 2005)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "From a more technical point of view, the use of a bilingual terminology also evokes studies in transliteration, particularly Katakana or Arabic (Tsuji et al., 2002; Knight and Graehl, 1998 , for example), or in translation. In this framework, let us cite the work of Morin and Daille (2010) . They propose to map complex terms written in kanjis with French ones, by using morphological rules. Yet, here again, these rules are to be given by an expert, and this study only concerns a special case of derivation. Moreover such an approach cannot handle neo-classical compounds. In other studies, translation methods for biomedical terms which considers terms as simple sequences of letters have been proposed (Claveau, 2009, inter alia) . Even if the goal is different here, such approaches share some similarities with the one presented here. Indeed, they all require aligning the words at the letter level. In most cases, this is performed with 1-1 alignment algorithm, that is, algorithm only capable to align one character, which can be empty, of the source language word with one another character of the target language word. Yet, in recent work about phonetization (Jiampojamarn et al., 2007) , authors have shown that manyto-many alignment could yield interesting results.", |
|
"cite_spans": [ |
|
{ |
|
"start": 144, |
|
"end": 164, |
|
"text": "(Tsuji et al., 2002;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 165, |
|
"end": 188, |
|
"text": "Knight and Graehl, 1998", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 267, |
|
"end": 290, |
|
"text": "Morin and Daille (2010)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 707, |
|
"end": 734, |
|
"text": "(Claveau, 2009, inter alia)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1170, |
|
"end": 1197, |
|
"text": "(Jiampojamarn et al., 2007)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our alignment technique is mainly based on an Expectation-Maximization (EM) algorithm that we briefly present in the next sub-section (Jiampojamarn et al., 2007, for more details and examples of its use). The second sub-section explains the modification made to this standard algorithm so that it can naturally and automatically handle morphological variation, which is a phenomenon inherent to our morph splitting problem.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analogy for alignment", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The alignment algorithm at the heart of our approach is standard: it is a Baum-Welch algorithm, extended to map symbol sub-sequences and not only 1-1 alignments. In our case, it takes as input French terms with their kanji translations, taken from a multilingual terminology for instance. The maximum length of the sub-sequences of letters and kanjis considered for alignment are parametrized by maxX and maxY .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "EM Alignment", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For each term pair (x T , y V ) to be aligned (T and V being the lengths of the terms in letters or kanjis), the EM algorithm (see Algorithm 1) proceeds as follows. It first computes the partial counts of every possible mapping between subsequences of kanjis and letters (Expectation step). These counts are stored in table \u03b3, and are then used to estimate the alignment probabilities in table \u03b4 (Maximization step).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "EM Alignment", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The Expectation step relies on a forwardbackward approach (Algorithm 2): it computes the forward probabilities \u03b1 and backward probabilities \u03b2. For each position t, v in the terms, \u03b1 t,v is the sum of the probabilities of all the possible alignments of (x t 1 , y v 1 ), that is, from the beginning of the terms to the current position, according to the current alignment probabilities in \u03b4 (cf. Algorithm 4). \u03b2 t,v is computed in a similar way by considering (", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "EM Alignment", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "x T t , y V v )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "EM Alignment", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": ". These probabilities are then used to re-estimate the counts in \u03b3. In this version of the EM algorithm, the Maximization (Algorithm 3) simply consists in computing the \u03b4 alignment probabilities by normalizing the counts in \u03b3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "EM Alignment", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Input: list of pairs (x T , y V ) , maxX, maxY while changes in \u03b4 do initialization of \u03b3 to 0 for all pair (x T , y V ) do \u03b3 = Expectation(x T , y V , maxX , maxY , \u03b3) \u03b4 = Maximization(\u03b3) return \u03b4 Algorithm 2 Expectation Input: (x T , y V ) , maxX, maxY , \u03b3 \u03b1 := Forward-many2many( x T , y V , maxX, maxY ) \u03b2 := Backward-many2many( x T , y V , maxX, maxY ) if \u03b1 T,V > 0 then for t = 1...T do for v = 1...V do for i = 1...maxX s.t. t \u2212 i \u2265 0 do for j = 1...maxY s.t. v \u2212 j \u2265 0 do \u03b3(x t t\u2212i+1 , y v v\u2212j+1 ) += \u03b1 t\u2212i,v\u2212j \u03b4(x t t\u2212i+1 ,y v v\u2212j+1 )\u03b2 t,v \u03b1 T ,V return \u03b3 Algorithm 3 Maximization Input: \u03b3 for all sub-sequence a s.t. \u03b3(a, \u2022) > 0 do for all sub-sequence b s.t. \u03b3(a, b) > 0 do \u03b4(a, b) = \u03b3(a,b) P x \u03b3(a,x) return \u03b4", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 1 EM Algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Algorithm 4 Forward-many2many", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 1 EM Algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Input: (x T , y V ) , maxX, maxY \u03b1 0,0 := 1 for t = 0...T do for v = 0...V do if (t > 0 \u2228 v > 0) then \u03b1t,v = 0 if (v > 0 \u2227 t > 0) then for i = 1...maxX s.t. t \u2212 i \u2265 0 do for j = 1...maxY s.t. v \u2212 j \u2265 0 do \u03b1 t,v += \u03b4(x t t\u2212i+1 , y v v\u2212j+1 )\u03b1 t\u2212i,v\u2212j return \u03b1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 1 EM Algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The EM process is repeated until the probabilities \u03b4 are stable. When the convergence is reached, the alignment simply consists in finding the mapping that maximizes \u03b1(T, V ). In addition to this resulting alignment, we also store the final alignment probabilities \u03b4, which are used to split unseen terms (cf. Section 5.1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 1 EM Algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This technique is not very different from the one used in statistical translation. Yet, some particularities are worth noting: this approach allows us to handle fertility, that is the capacity to align from or to empty substrings (for lack of space, it does not appear in the above simplified version); conversely, distortion, that is reordering of morphs, cannot be handled easily without major changes in this algorithm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 1 EM Algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The maximization step simply compute the translation probabilities of a kanji sequence into a letter sequence. For example, for the kanji \u00cc ('bacteria'), there may exist one entry in \u03b4 associating it with bact\u00e9rie, one with bact\u00e9rio (as in bact\u00e9rio/lyse) and another one with bact\u00e9ri (in myco/bact\u00e9ri/ose), each with a certain probability. This dispersion of probabilities, which is of course harmful for the algorithm, is caused by morphemic variation: bact\u00e9rio, bact\u00e9rie, and bact\u00e9ri are 3 morphs of the same morpheme, and we would like their probabilities to reinforce each other. The adaptation we propose aims at making the maximization phase able to automatically group the different morphs belonging to a same morpheme. To achieve this goal, we use a simple but well suited technique relying on formal analogical calculus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Automatic morphological normalisation", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "An analogy is a relation between 4 elements that we note: a : b :: c : d which can be read a is for b what c is for d (Lepage, 2000 , for more details about analogies). Analogies have been used in many NLP studies, especially for translation of sentences (Lepage, 2000) or terms (Langlais and Patry, 2007; Langlais et al., 2008) . Analogies are also a key component in the previously mentioned work on terminology structuring (Claveau and L'Homme, 2005) . We rely on this latter work to formalize our normalization problem. In our framework, one possible analogy may be: dermato : dermo :: h\u00e9mato : h\u00e9mo. Knowing that dermato and dermo belong to a same morpheme, one can infer that this is the case for h\u00e9mato and h\u00e9mo. Such an analogy, build on the graphemic representation of words, is said a formal analogy. After Stroppa and Yvon (2005) , formal analogies can be defined in terms of factorizations. Let a be a string (a term in our case) over an alphabet \u03a3, a factorization of a, noted f a , is a sequence of n factors", |
|
"cite_spans": [ |
|
{ |
|
"start": 118, |
|
"end": 131, |
|
"text": "(Lepage, 2000", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 255, |
|
"end": 269, |
|
"text": "(Lepage, 2000)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 279, |
|
"end": 305, |
|
"text": "(Langlais and Patry, 2007;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 306, |
|
"end": 328, |
|
"text": "Langlais et al., 2008)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 426, |
|
"end": 453, |
|
"text": "(Claveau and L'Homme, 2005)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 817, |
|
"end": 840, |
|
"text": "Stroppa and Yvon (2005)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analogy", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "f a = (f 1 a , ..., f n a ), such that a = f 1 a \u2295 f 2 a \u2295 ... \u2295 f n a ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analogy", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "where \u2295 denotes the concatenation operator. A formal analogy can be defined by as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analogy", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "Definition 1 \u2200(a, b, c, d) \u2208 \u03a3, [a : b :: c : d] iff there exist factorizations (f a , f b , f c , f d ) \u2208 (\u03a3 * n ) 4 of (a, b, c, d) such that, \u2200i \u2208 [1, n], (f i b , f i c ) \u2208 (f i a , f i d ), (f i d , f i a )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analogy", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": ". The smallest n for which this definition holds is called the degree of the analogy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analogy", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "As for most European languages, French morphology is mostly concerned with prefixation and suffixation. Thus, we are looking for formal analogies of degree at most 3 (ie, 3 factors: prefix \u2295 base \u2295 suffix). In our approach, such analogies are searched by trying to build a rule rewriting the prefixes and the suffixes to move from dermato to dermo and to check that this rule also applies to h\u00e9mato-h\u00e9mo. The base is considered as the longest common sub-string (lcss) between the 2 words. In the previous example, the rewriting rule r would be: r = lcss(morph 1 ,morph 2 ) ato \u2295 o. This rule makes it possible to rewrite dermato into dermo and h\u00e9mato into h\u00e9mo; thus, h\u00e9mato,h\u00e9mo is in analogy with dermato,dermo.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analogy", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "The main problem is that we do not have examples of morphs that are known a priori to be related (like dermato and dermo in the previous example). Thus, we use a simple bootstrapping technique: if two morphs are stored in \u03b3 as possible translations of the same kanji sequence, and if these two morphs share a sub-string longer than a certain threshold, then we assume that they both belong to the same morpheme. From these bootstrap pairs, we build the prefixation and suffixation rewriting rules allowing us to detect analogies, and thus to group pairs of morphs (which can be very short, unlike the bootstrapping pairs). The more a rule is found, the more certain it will be. Therefore, we keep all the analogical rules generated at each iteration along with their number of occurrence, and we only apply the most frequently found ones. The whole process is thus completely automatic.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Using analogy for normalization", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "This new Maximization step is summarized in Algorithm 5. It ensures that all the morphs supposed to belong to the same morpheme have equal and reinforced alignment probabilities. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Using analogy for normalization", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "The data used for our experiments are extracted from the UMLS MetaThesaurus (Tuttle et al., 1990) , which group several terminologies for several languages. In the MetaThesaurus, each term is associated with a concept identifier (CUI) which facilitates the Japanese/French pairs extraction. We only consider Japanese terms composed of kanjis, and only simple (one-word) French terms. About 8,000 pairs are formed this way. An ending mark (';') is added to each term.", |
|
"cite_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 97, |
|
"text": "(Tuttle et al., 1990)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Data", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We randomly selected 1,600 pairs among these 8,000 pairs in order to evaluate the performance of our alignment technique. These 1,600 pairs have been aligned manually to serve as gold standard.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Data", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We evaluate our approach in terms of precision: an alignment is considered as correct only if all the components of the pair are correctly aligned (thus, it is equivalent to the sentence error rate in standard machine translation).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alignment results", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "For each pair, the EM algorithm indicates the probability of the proposed alignment. Therefore, it is possible to only consider alignments having a probability greater than a given threshold. By varying this threshold, we can compute a precision according to the number of terms aligned. Figure 1 presents the results obtained on the 1,600 test pairs. We indicate the curves produced by the EM algorithm with and without our morphemic normalization. For comparison purpose, we also report the results of GIZA++ (Och and Ney, 2003) , a reference tool in machine translation. The different IBM models and sets of parameters available in GIZA++ were tested; the results reported are the best ones (obtained with IBM model 4). As expected, the interest of the morphemic normalization appears clearly in this figure; it yields a 70% precision in the worst case (that is, when all the terms are kept for alignment). Indeed, the normalization brings a 10% improvement whatever the number of aligned pairs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 511, |
|
"end": 530, |
|
"text": "(Och and Ney, 2003)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 288, |
|
"end": 294, |
|
"text": "Figure", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Alignment results", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "A manual examination of the results shows that most of the errors are caused by the falsification of our hypothesis: some French-Japanese pairs cannot be decomposed in a similar way. For ex-ample, the French term anxiolytiques (anxiolytics) is translated by a sequence of kanjis meaning literally 'drugs for depression'. Among these errors, some pairs imply terms that are not neoclassical compounds in French, Japanese or both (eg. m\u00e9ninges (meninges) is translated by 3 'brain membrane'). Other errors are caused by a lack of training data: some morphs or sequences only appear once, or only combined with another morph, which mislead the segmentation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alignment results", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In this section, we present two ways of exploiting the results produced by our morphological analysis technique. The first one aims at translating unseen terms and the second one aims at structuring terminologies by finding related terms or morphs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Using the morph/kanji alignments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The alignment technique that we propose can be used as a first step to translate an unknown term (i.e a term absent from the training data of our alignment algorithm). Translating terms has already been tackled in several studies, mostly to reduce the out-of-vocabulary errors in machine translation tasks. Most of these studies look for translations in textual resources: parallel or comparable corpora (Chiao and Zweigenbaum, 2002; Fung and Yee, 1998) , Web (Lu et al., 2005) . Others have considered this problem without external resources; in this case, the approach rely on the similarities between the terms in the two languages (cognates) (Schulz et al., 2004 , for example), or on the similarities of rewriting operations to go from one term to its equivalent in the other language (Langlais and Patry, 2007; Claveau, 2009) . Our work falls into this category. In the experiment reported here, we translate French terms into Japanese. In practice, we use the probabilities from \u03b4 to generate the most probable translation. The approach is straightforward: the morph translation probabilities in \u03b4 are used in a Viterbi-like algorithm; thus, we do not use a language model in addition to the translation model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 404, |
|
"end": 433, |
|
"text": "(Chiao and Zweigenbaum, 2002;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 434, |
|
"end": 453, |
|
"text": "Fung and Yee, 1998)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 460, |
|
"end": 477, |
|
"text": "(Lu et al., 2005)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 646, |
|
"end": 666, |
|
"text": "(Schulz et al., 2004", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 790, |
|
"end": 816, |
|
"text": "(Langlais and Patry, 2007;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 817, |
|
"end": 831, |
|
"text": "Claveau, 2009)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translating and analysing unknown terms", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "It is important to note that this translation process also produce the alignment of the source term into its translation. As a result, it also segments the initial term and label them with the corresponding kanjis. Therefore, it corresponds to the morphosemantic analysis of the unknown term.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translating and analysing unknown terms", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "For the need of this experiment, 128 terms and their kanji translations have been selected at random to form the test set (of course, they have been removed from the alignment training set). These French terms are translated as explained above with the help of the delta table, and the generated translations are compared with the expected ones.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translating and analysing unknown terms", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Correctly translated (and segmented) 58 82", |
|
"cite_spans": [ |
|
{ |
|
"start": 21, |
|
"end": 36, |
|
"text": "(and segmented)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference UMLS Web", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Incorrectly translated (or segmented)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference UMLS Web", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Not translated 36 36 Table 1 : Unknown terms translation results", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 21, |
|
"end": 28, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "10", |
|
"sec_num": "34" |
|
}, |
|
{ |
|
"text": "The results of this small experiment are presented in Table 1 . 58 of 128 terms, that is 45%, have been correctly translated and segmented. There are two types of errors: either a wrong translation has been proposed (it concerns 34 terms), or no translation was found (36 terms). When examining these untranslated terms, we find without any surprise that they are either words which are not neo-classical compounds, or compounds having one or several components that do not appear in the training data of the alignment algorithm. The precision on the terms for which a translation is proposed is thus 63%; this result is very promising given the simplicity of our implementation of the translation. It is also worth noting that, among the errors, most of the proposed translations are correct paraphrase, absent from the UMLS but attested on the Web in bio-medical Japanese websites; with this wider reference, the precision on translated terms reaches 89 %.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 61, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "10", |
|
"sec_num": "34" |
|
}, |
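The reported percentages follow directly from the counts in Table 1; a quick sanity check of the arithmetic (the variable names are ours):

```python
# Counts from Table 1: correct / incorrect / untranslated per reference.
correct_umls, incorrect_umls, untranslated = 58, 34, 36
correct_web, incorrect_web = 82, 10

total = correct_umls + incorrect_umls + untranslated  # 128 test terms

# Accuracy over all test terms, and precision over translated terms only.
accuracy_umls = correct_umls / total                             # 58/128 ~ 0.45
precision_umls = correct_umls / (correct_umls + incorrect_umls)  # 58/92 ~ 0.63
precision_web = correct_web / (correct_web + incorrect_web)      # 82/92 ~ 0.89

print(round(accuracy_umls, 2), round(precision_umls, 2), round(precision_web, 2))
```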
|
{ |
|
"text": "Once all the terms are aligned, one can study the recurrent correspondences between French morphs and kanjis. These correspondences can be shed into light through different techniques: Galois lattices (kanjis would be the intention and morph the extension), in a distributional analysis manner, or by analysing the kanji-morph graph with small-world, connected components... In this paper we propose to use such a graph representation: the vertices represent kanjis and morphemes (i.e a set of morphs grouped during the analogical step of the alignment), and the edges are weighted according to the number of times that a particular morpheme is aligned with a kanji sequence among the 8,000 training pairs from the UMLS. Figure 2 shows a small excerpt of the resulting graph. The size of the edge lines is proportional to the associated weight. This representation allows us to easily explore the different kinds of neighbourhood of a morpheme: each vertex receives an amount of energy which is propagated to the connected vertices proportionally to the edge's weight. Figures 3 and 4 respectively present the kanjis (manually translated in English in this figure) and the morphemes reached, in the form of tag clouds, for the French morpheme ome (oma in English, a suffix for cancer-related terms). The size and color represent the energy that reach the neighbouring kanji (respectively the morpheme) vertices. The reached vertices are expected to be conceptually related and to exhibit translation relations or synonymy, as one can see in these examples. Thus, Figure 3 represents a sort of semantic profile of the morpheme ome, in which the kanjis are used as semantic tags, while Figure 4 proposes synonyms and quasi-synonyms morphemes of the suffix ome. It is interesting to see that other related suffixes are found, but also prefixes like onco.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 721, |
|
"end": 729, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 1069, |
|
"end": 1085, |
|
"text": "Figures 3 and 4", |
|
"ref_id": "FIGREF4" |
|
}, |
|
{ |
|
"start": 1564, |
|
"end": 1572, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF5" |
|
}, |
|
{ |
|
"start": 1685, |
|
"end": 1693, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Morph analysis", |
|
"sec_num": "5.2" |
|
}, |
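A minimal sketch of this one-step energy propagation over the morpheme-kanji graph. The graph fragment, its weights, and the normalization by the total edge weight are illustrative assumptions; the paper does not give the exact propagation formula.

```python
from collections import defaultdict

# Hypothetical weighted morpheme-kanji graph: each weight counts how often
# the morpheme was aligned with the kanji over the training pairs.
edges = {
    ("ome", "腫"): 40,   # "tumour" kanji
    ("ome", "癌"): 12,   # "cancer" kanji
    ("onco", "腫"): 8,
}

def propagate(graph, source, energy=1.0):
    """Spread `energy` from `source` to its neighbours, proportionally
    to the edge weights (a single propagation step)."""
    neigh = defaultdict(float)
    for (u, v), w in graph.items():
        if u == source:
            neigh[v] += w
        elif v == source:
            neigh[u] += w
    total = sum(neigh.values())
    return {n: energy * w / total for n, w in neigh.items()}

# The resulting distribution acts as a semantic profile of the morpheme.
print(propagate(edges, "ome"))
```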
|
{ |
|
"text": "The alignment and the segmentation produced by our algorithm also make it possible to study One can study first-order affinities (which morphemes are frequently associated with other morphemes) and, more interesting, second order affinities (morphemes sharing the same cooccurring morphemes). The second-order affinity allows us to group morpheme according to their paradigm. For instance, the tag cloud in Figure 5 illustrates the morphemes associated with gastro (morpheme for stomach) according to this second order affinity. Most of the morphemes identify organs, and the closest ones are for biologically close organs. This information of different nature (other benefits from these alignments can be derived) makes it possible to identify relationships between terms, or build synonyms, or explore the termbase using these morphological elements. Yet, to our knowledge, such specialized morpho-semantic resources do not exist. It makes a direct evaluation of these three different uses of the alignment results impossible.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 407, |
|
"end": 415, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Morph analysis", |
|
"sec_num": "5.2" |
|
}, |
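Second-order affinities of this kind are typically computed by comparing co-occurrence profiles. The sketch below uses cosine similarity over invented morpheme co-occurrence counts; both the counts and the choice of cosine are our assumptions, since the paper does not specify the similarity measure.

```python
from math import sqrt

# Hypothetical co-occurrence counts of morphemes within French terms.
cooc = {
    "gastro": {"entero": 12, "ite": 9, "logie": 5},
    "hepato": {"entero": 7, "ite": 6, "logie": 4},
    "ome":    {"carcin": 10, "fibr": 3},
}

def second_order(m1, m2, table):
    """Cosine similarity between the co-occurrence profiles of two
    morphemes: high when they share the same co-occurring morphemes."""
    v1, v2 = table[m1], table[m2]
    dot = sum(v1[k] * v2.get(k, 0) for k in v1)
    n1 = sqrt(sum(x * x for x in v1.values()))
    n2 = sqrt(sum(x * x for x in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

print(second_order("gastro", "hepato", cooc))  # high: shared contexts
print(second_order("gastro", "ome", cooc))     # 0.0: disjoint contexts
```

Morphemes with a high second-order affinity, such as gastro and hepato here, end up grouped into the same paradigm even if they never co-occur with each other directly.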
|
{ |
|
"text": "The original idea of making the most of another language like Japanese in order to help the morphologically decomposition and analysis of compounds offers many new opportunities to automatically handle biomedical terms. The new alignment approach based on analogy that we propose takes the particularities of the data into account in order to yield high quality results. Since this whole process is entirely automatic, it makes it possible to overcome the limits of terminological systems, like the one of Del\u00e9ger et al. (2008) , which heavily rely on manually populating a morphological database.", |
|
"cite_spans": [ |
|
{ |
|
"start": 506, |
|
"end": 527, |
|
"text": "Del\u00e9ger et al. (2008)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Many perspectives are foreseen for this work. First, from a technical point of view, we plan to consider more complex segmentation than the linear one we implemented. Indeed, the syntactic properties of the kanjis (some of them expect an agent or object), could help to better structure the different morphemes. One could also exploit the semantic relations between kanjis that can be easily found in general Japanese dictionaries.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Concerning the analysis aspects illustrated in the last section, many possibilities are also under consideration. As the links between morphs that we produce are not typed, the use of heuristics (such as string inclusion used by Grabar and Zweigenbaum (2002) ) or techniques from distributional analysis could provide useful additional information to better characterize the relationships. Yet, the problem of evaluating this type of work arises, especially the ground truth construction, since such resources do not exist.", |
|
"cite_spans": [ |
|
{ |
|
"start": 229, |
|
"end": 258, |
|
"text": "Grabar and Zweigenbaum (2002)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Finally, an adaptation of these principles for complex terms is under study. The main difficulty in this case is to manage the reordering of the words composing these terms, and thus manage the distortion in the alignment algorithm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Looking for French-English translations in comparable medical corpora", |
|
"authors": [ |
|
{ |
|
"first": "Yun-Chuang", |
|
"middle": [], |
|
"last": "Chiao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre", |
|
"middle": [], |
|
"last": "Zweigenbaum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Journal of the American Medical Informatics Association", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yun-Chuang Chiao and Pierre Zweigenbaum. 2002. Looking for French-English translations in compa- rable medical corpora. Journal of the American Medical Informatics Association, 8(suppl).", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Structuring terminology by analogy-based machine learning", |
|
"authors": [ |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Claveau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie-Claude L'", |
|
"middle": [], |
|
"last": "Homme", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proc. of the 7th International Conference on Terminology and Knowledge Engineering, TKE'05", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vincent Claveau and Marie-Claude L'Homme. 2005. Structuring terminology by analogy-based ma- chine learning. In Proc. of the 7th International Conference on Terminology and Knowledge Engineering, TKE'05, Copenhaguen, Denmark.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Translation of biomedical terms by inferring rewriting rules", |
|
"authors": [ |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Claveau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Information Retrieval in Biomedicine: Natural Language Processing for Knowledge Integration", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vincent Claveau. 2009. Translation of biomedi- cal terms by inferring rewriting rules. In Violaine Prince and Mathieu Roche, editors, Information Retrieval in Biomedicine: Natural Language Processing for Knowledge Integration. IGI -Global.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Morphosemantic parsing of medical compound words: Transferring a french analyzer to english", |
|
"authors": [ |
|
{ |
|
"first": "Louise", |
|
"middle": [], |
|
"last": "Del\u00e9ger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fiammetta", |
|
"middle": [], |
|
"last": "Namer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre", |
|
"middle": [], |
|
"last": "Zweigenbaum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "International Journal of Medical Informatics", |
|
"volume": "78", |
|
"issue": "", |
|
"pages": "48--55", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Louise Del\u00e9ger, Fiammetta Namer, and Pierre Zweigenbaum. 2008. Morphosemantic parsing of medical compound words: Transferring a french an- alyzer to english. International Journal of Medical Informatics, 78(Supplement 1):48-55.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "An IR approach for translating new words from non-parallel, comparable texts", |
|
"authors": [ |
|
{ |
|
"first": "Pascale", |
|
"middle": [], |
|
"last": "Fung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yee", |
|
"middle": [], |
|
"last": "Lo Yuen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proc. of 36th Annual Meeting of the Association for Computational Linguistics ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pascale Fung and Lo Yuen Yee. 1998. An IR approach for translating new words from non-parallel, com- parable texts. In Proc. of 36th Annual Meeting of the Association for Computational Linguistics ACL, Montr\u00e9al, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Lexically-based terminology structuring: Some inherent limits", |
|
"authors": [], |
|
"year": null, |
|
"venue": "Proc. of International Workshop on Computational Terminology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lexically-based terminology structuring: Some in- herent limits. In Proc. of International Workshop on Computational Terminology, COMPUTERM, Taipei, Taiwan.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Applying many-to-many alignments and hidden markov models to letter-to-phoneme conversion", |
|
"authors": [ |
|
{ |
|
"first": "Grzegorz", |
|
"middle": [], |
|
"last": "Sittichai Jiampojamarn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tarek", |
|
"middle": [], |
|
"last": "Kondrak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sherif", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proc. of the conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sittichai Jiampojamarn, Grzegorz Kondrak, , and Tarek Sherif. 2007. Applying many-to-many alignments and hidden markov models to letter-to-phoneme conversion. In Proc. of the conference of the North American Chapter of the Association for Computational Linguistics, Rochester, New York, USA.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Machine transliteration", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Graehl", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Computational Linguistics", |
|
"volume": "24", |
|
"issue": "4", |
|
"pages": "599--612", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Knight and Jonathan Graehl. 1998. Ma- chine transliteration. Computational Linguistics, 24(4):599-612.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Translating unknown words by analogical learning", |
|
"authors": [ |
|
{ |
|
"first": "Philippe", |
|
"middle": [], |
|
"last": "Langlais", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Patry", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proc. of Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "877--886", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philippe Langlais and Alexandre Patry. 2007. Trans- lating unknown words by analogical learning. In Proc. of Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 877-886, Prague, Czech Republic, June.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Translating medical words by analogy", |
|
"authors": [ |
|
{ |
|
"first": "Philippe", |
|
"middle": [], |
|
"last": "Langlais", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fran\u00e7ois", |
|
"middle": [], |
|
"last": "Yvon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre", |
|
"middle": [], |
|
"last": "Zweigenbaum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proc. of the workshop on Intelligent Data Analysis in bioMedicine and Pharmacology (IDAMAP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philippe Langlais, Fran\u00e7ois Yvon, and Pierre Zweigen- baum. 2008. Translating medical words by anal- ogy. In Proc. of the workshop on Intelligent Data Analysis in bioMedicine and Pharmacology (IDAMAP) 2008, Washington, DC.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Languages of analogical strings", |
|
"authors": [ |
|
{ |
|
"first": "Yves", |
|
"middle": [], |
|
"last": "Lepage", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proc. of the 18th conference on Computational linguistics, COLING'00", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yves Lepage. 2000. Languages of analogical strings. In Proc. of the 18th conference on Computational linguistics, COLING'00, Universit\u00e4t des Saarlan- des, Saarbr\u00fccken, Germany.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Semi-automatic construction of the Chinese-English MeSH using web-based term translation method", |
|
"authors": [ |
|
{ |
|
"first": "Wen-Hsiang", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shih-Jui", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yi-Che", |
|
"middle": [], |
|
"last": "Chan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kuan-Hsi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proc. of AMIA annual symposium", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wen-Hsiang Lu, Shih-Jui Lin, Yi-Che Chan, and Kuan- Hsi Chen. 2005. Semi-automatic construction of the Chinese-English MeSH using web-based term translation method. In Proc. of AMIA annual symposium.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Morphosaurus -design and evaluation of an interlingua-based, cross-language document retrieval engine for the medical domain", |
|
"authors": [ |
|
{ |
|
"first": "Korn\u00e9l", |
|
"middle": [], |
|
"last": "Mark\u00f3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Schulz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Udo", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Methods of Information in Medicine", |
|
"volume": "44", |
|
"issue": "4", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Korn\u00e9l Mark\u00f3, Stefan Schulz, and Udo Han. 2005. Morphosaurus -design and evaluation of an interlingua-based, cross-language document re- trieval engine for the medical domain. Methods of Information in Medicine, 44(4).", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Aspects of the Theory of Morphology", |
|
"authors": [ |
|
{ |

"first": "Igor", |

"middle": [], |

"last": "Mel'\u010duk", |

"suffix": "" |

} |
|
], |
|
"year": 2006, |
|
"venue": "Trends in Linguistics. Studies and Monographs. Mouton de Gruyter", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Igor Mel'\u010duk. 2006. Aspects of the Theory of Morphology. Trends in Linguistics. Studies and Monographs. Mouton de Gruyter, Berlin, March.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Compositionality and lexical alignment of multi-word terms. Language Resources and Evaluation (LRE)", |
|
"authors": [ |
|
{ |
|
"first": "Emmanuel", |
|
"middle": [], |
|
"last": "Morin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B\u00e9atrice", |
|
"middle": [], |
|
"last": "Daille", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emmanuel Morin and B\u00e9atrice Daille. 2010. Com- positionality and lexical alignment of multi-word terms. Language Resources and Evaluation (LRE), 44.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "A systematic comparison of various statistical alignment models", |
|
"authors": [ |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Franz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Computational Linguistics", |
|
"volume": "29", |
|
"issue": "1", |
|
"pages": "19--51", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz Josef Och and Hermann Ney. 2003. A sys- tematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Cognate Mapping -A Heuristic Strategy for the Semi-Supervised Acquisition of a Spanish Lexicon from a Portuguese Seed Lexicon", |
|
"authors": [ |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Schulz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kornel", |
|
"middle": [], |
|
"last": "Mark\u00f3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduardo", |
|
"middle": [], |
|
"last": "Sbrissia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Nohama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Udo", |
|
"middle": [], |
|
"last": "Hahn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proc. of the 20 th International Conference on Computational Linguistics, COLING'04", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stefan Schulz, Kornel Mark\u00f3, Eduardo Sbrissia, Percy Nohama, and Udo Hahn. 2004. Cognate Mapping -A Heuristic Strategy for the Semi- Supervised Acquisition of a Spanish Lexicon from a Portuguese Seed Lexicon. In Proc. of the 20 th International Conference on Computational Linguistics, COLING'04, Geneva, Switzerland.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "An analogical learner for morphological analysis", |
|
"authors": [ |
|
{ |
|
"first": "Nicolas", |
|
"middle": [], |
|
"last": "Stroppa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fran\u00e7ois", |
|
"middle": [], |
|
"last": "Yvon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceeedings of the 9th CoNLL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "120--127", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nicolas Stroppa and Fran\u00e7ois Yvon. 2005. An analogical learner for morphological analysis. In Proceeedings of the 9th CoNLL, pages 120-127, Ann Arbor, MI, USA.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Extracting French-Japanese word pairs from bilingual corpora based on transliteration rules", |
|
"authors": [ |
|
{ |
|
"first": "Keita", |
|
"middle": [], |
|
"last": "Tsuji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B\u00e9atrice", |
|
"middle": [], |
|
"last": "Daille", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyo", |
|
"middle": [], |
|
"last": "Kageura", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proc. of the 3 rd International Conference on Language Resources and Evaluation, LREC'02", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Keita Tsuji, B\u00e9atrice Daille, and Kyo Kageura. 2002. Extracting French-Japanese word pairs from bilin- gual corpora based on transliteration rules. In Proc. of the 3 rd International Conference on Language Resources and Evaluation, LREC'02, Las Palmas de Gran Canaria, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Using meta-1 -the 1 st version of the UMLS metathesaurus", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Tuttle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Sherertz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nels", |
|
"middle": [], |
|
"last": "Olson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Erlbaum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Sperzel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lloyd", |
|
"middle": [], |
|
"last": "Fuller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stuart", |
|
"middle": [], |
|
"last": "Neslon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Proc. of the 14 th annual Symposium on Computer Applications in Medical Care (SCAMC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "131--135", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark Tuttle, David Sherertz, Nels Olson, Mark Erl- baum, David Sperzel, Lloyd Fuller, and Stuart Nes- lon. 1990. Using meta-1 -the 1 st version of the UMLS metathesaurus. In Proc. of the 14 th annual Symposium on Computer Applications in Medical Care (SCAMC), pages 131-135, Washington, USA.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "Input: \u03b3 for all sub-sequence a s.t. \u03b3(a, \u2022) > 0 do for all m1, m2 s.t. \u03b3(a, m1) > 0 \u2227 \u03b3(a, m2) > 0\u2227 lcss(m1, m2) > threshold do build the prefixation and suffixation rule r for m1, m2 increment the score of r for all sub-sequence b s.t. \u03b3(a, b) > 0 do build the set M of all morphs associated to b with the help of the n most frequent analogical rules from the previous iteration", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"text": "Precision of alignment according to the number of test pairs aligned", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"text": "Morpheme-kanji graph", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF4": { |
|
"uris": null, |
|
"text": "Morpheme cloud for morpheme ome", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF5": { |
|
"uris": null, |
|
"text": "Kanji cloud for ome", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF6": { |
|
"uris": null, |
|
"text": "Morpheme cloud for gastro secondorder affinities the co-occurrences of morphemes in French terms.", |
|
"num": null, |
|
"type_str": "figure" |
|
} |
|
} |
|
} |
|
} |