{ "paper_id": "R11-1025", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:03:48.101988Z" }, "title": "An Incremental Entity-Mention Model for Coreference Resolution with Restrictive Antecedent Accessibility", "authors": [ { "first": "Manfred", "middle": [], "last": "Klenner", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Zurich Institute of Computational Linguistics", "location": {} }, "email": "klenner@cl.uzh.ch" }, { "first": "Don", "middle": [], "last": "Tuggener", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Zurich", "location": {} }, "email": "tuggener@cl.uzh.ch" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We introduce an incremental entity-mention model for coreference resolution. Our experiments show that it is superior to a non-incremental version in the same environment. The benefits of an incremental architecture are: a reduction of the number of candidate pairs, a means to overcome the problem of underspecified items in pairwise classification, and the natural integration of global constraints such as transitivity. Additionally, we have defined a simple salience measure that, coupled with the incremental model, proved to establish a challenging baseline which seems to be on par with machine learning based systems of the 2010 SemEval shared task.", "pdf_parse": { "paper_id": "R11-1025", "_pdf_hash": "", "abstract": [ { "text": "We introduce an incremental entity-mention model for coreference resolution. Our experiments show that it is superior to a non-incremental version in the same environment. The benefits of an incremental architecture are: a reduction of the number of candidate pairs, a means to overcome the problem of underspecified items in pairwise classification, and the natural integration of global constraints such as transitivity. 
Additionally, we have defined a simple salience measure that, coupled with the incremental model, proved to establish a challenging baseline which seems to be on par with machine learning based systems of the 2010 SemEval shared task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "With notable exceptions (Luo et al., 2004; Yang et al., 2004; Daume III and Marcu, 2005; Culotta et al., 2007; Rahman and Ng, 2009; Cai and Strube, 2010; Raghunathan et al., 2010) supervised approaches to coreference resolution are often realised by pairwise classification of anaphor-antecedent candidates. A popular and often reimplemented approach is presented in (Soon et al., 2001) . As recently discussed in (Ng, 2010) , the so-called mention-pair model suffers from several design flaws which originate from the locally confined perspective of the model:", "cite_spans": [ { "start": 24, "end": 42, "text": "(Luo et al., 2004;", "ref_id": "BIBREF12" }, { "start": 43, "end": 61, "text": "Yang et al., 2004;", "ref_id": "BIBREF24" }, { "start": 62, "end": 88, "text": "Daume III and Marcu, 2005;", "ref_id": "BIBREF4" }, { "start": 89, "end": 110, "text": "Culotta et al., 2007;", "ref_id": "BIBREF2" }, { "start": 111, "end": 131, "text": "Rahman and Ng, 2009;", "ref_id": "BIBREF17" }, { "start": 132, "end": 153, "text": "Cai and Strube, 2010;", "ref_id": "BIBREF1" }, { "start": 154, "end": 179, "text": "Raghunathan et al., 2010)", "ref_id": "BIBREF16" }, { "start": 367, "end": 386, "text": "(Soon et al., 2001)", "ref_id": "BIBREF22" }, { "start": 414, "end": 424, "text": "(Ng, 2010)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Generation of (transitively) redundant pairs, as the formation of coreference sets (coreference clustering) is done after pairwise classification", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": 
"Introduction", "sec_num": "1" }, { "text": "\u2022 Skewed training sets based on pair generation mechanics which lead to classifiers biased towards negative classification", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 No means to enforce global constraints such as transitivity", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Underspecification of antecedent candidates Mention-pair systems operate in a non-incremental mode, i.e. all pairs are classified prior to the construction of the coreference sets. A clustering step is needed where, additionally, inconsistencies (e.g. transitively incompatible pairs) can be removed. This is often realised as an optimisation step, where scores derived from pairwise classification are used as weights in a decision-taking process that incorporates linguistic constraints, e.g. (Finkel and Manning, 2008) . Although this overcomes the limitations of the strictly local perspective of pairwise classifiers, it still suffers from the problem of unbalanced data (many more negative than positive examples are generated). The large number of candidate pairs, in general, is a problem, e.g. (Wunsch et al., 2009) . These problems can be remedied by an incremental entity-mention model, where candidate pairs are evaluated on the basis of emerging coreference sets. The number of candidate pairs is reduced, since only one (virtual prototype) example of each coreference set needs to be compared to a new anaphor candidate 1 . Moreover, the problem of inconsistent decisions vanishes, since the virtual prototype of a coreference set bears all the known morphological and semantic information of the elements of the set. If an anaphor candidate is compatible with the prototype, then it is compatible with each member of the coreference set. 
A clustering phase on top of the pairwise classifier is no longer needed.", "cite_spans": [ { "start": 497, "end": 523, "text": "(Finkel and Manning, 2008)", "ref_id": "BIBREF7" }, { "start": 805, "end": 826, "text": "(Wunsch et al., 2009)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We have compared our incremental entity-mention model to a non-incremental mention-pair version. The memory-based learner TiMBL (Daelemans et al., 2007) was used for pairwise classification. To define a simple baseline, we adopted previous work on salience-based models for coreference resolution. It turns out that our salience measure, coupled with the incremental model, performs quite well, e.g. it outperforms the systems from the 2010 SemEval shared task on 'coreference resolution in multiple languages' in our own post-task evaluation.", "cite_spans": [ { "start": 128, "end": 152, "text": "(Daelemans et al., 2007)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our system uses real preprocessing (i.e. the use of a parser (Schneider, 2008; Sennrich et al., 2009) ) and extracts markables (nouns, named entities and pronouns) from the chunks based on POS tags delivered by the preprocessing pipeline.", "cite_spans": [ { "start": 61, "end": 78, "text": "(Schneider, 2008;", "ref_id": "BIBREF20" }, { "start": 79, "end": 101, "text": "Sennrich et al., 2009)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We first introduce the incremental model, present constraints on buffer list access, discuss our filtering system and our approximation of the binding theory. We then turn to our simple salience measure initially used as a baseline. 
In the empirical section, the impact of the incremental entity-mention model on the number of candidate pairs is quantified and a comparison of the variants (incremental, non-incremental etc.) of our German system on the T\u00fcBa-D/Z (Naumann, 2006) is given. We also describe our post-task evaluation with the 2010 SemEval data, the results from the BioNLP shared task on coreference resolution in the biomedical domain and our results on the CoNLL 2011 shared task development set. Fig. 1 shows the base algorithm. Let I be the chronologically ordered list of markables, C the set of coreference sets (i.e. the coreference partition) and B a buffer where markables are stored if they are not anaphoric (but might be valid antecedents). Furthermore, m_i is the current markable and \u2295 means concatenation of a list and a single item. The algorithm proceeds as follows: a set of antecedent candidates is determined for each markable m_i (steps 1 to 7) from the coreference sets (r_j) and the buffer (b_k). A valid candidate r_j or b_k must be compatible with m_i. The definition of compatibility depends on the POS tags of the anaphor-antecedent pair (in order to be coreferent, e.g. two pronouns must agree in person, number and gender, while two nouns, at least in German, need not necessarily agree in gender).", "cite_spans": [ { "start": 463, "end": 478, "text": "(Naumann, 2006)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 713, "end": 719, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "If an antecedent candidate is already in a coreference set (r_j), m_i is compared to the virtual prototype of the set in order to reduce underspecification. The virtual prototype bears information accumulated from all elements of the coreference set. For instance, assume a candidate pair 'Clinton ... she'. Since the gender of 'Clinton' is unspecified, the pair might or might not be a good candidate. 
But if 'Clinton' is part of a coreference set, let's say {'Hillary Clinton', 'she', 'her', 'Clinton'}, then we can derive the gender from the other members and are safer in our decision. The virtual prototype here would be: singular, feminine, human.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Incremental Entity-mention Model", "sec_num": "2" }, { "text": "In languages such as German, where morphological information is much more discriminatory than in English and where at the same time underspecification appears quite often (e.g. the reflexive pronoun 'sich' might refer to any third person noun phrase, be it singular or plural, masculine, feminine or neuter), this is particularly helpful.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Incremental Entity-mention Model", "sec_num": "2" }, { "text": "If no compatible antecedent candidates are found, m_i is added to the buffer (step 8). If there are compatible candidates in the candidate list Cand, the most salient ante_i \u2208 Cand (or, in the machine learning setting, the most probable) is selected (step 10) and the coreference partition is augmented (step 11). If ante_i comes from a coreference set, m_i is added to that set. Otherwise (ante_i is from the buffer), a new set is formed, {ante_i, m_i}, and added to the set of coreference sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Incremental Entity-mention Model", "sec_num": "2" }, { "text": "As already discussed, access to coreference sets is restricted to the virtual prototype; the concrete members are invisible. This reduces the number of considered pairs (from the cardinality of a set to 1). 
Moreover, we restrict access to buffer elements: if an antecedent candidate, r_j, from a coreference set exists, then elements from the buffer, b_k, are only licensed if they are more recent than r_j.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Restricted Accessibility of Antecedent Candidates", "sec_num": "2.1" }, { "text": "Figure 1: Incremental model: base algorithm. 1 for i=1 to length(I) 2 for j=1 to length(C) 3 r_j := virtual prototype of coreference set C_j 4 Cand := Cand \u2295 r_j if compatible(r_j, m_i) 5 for k=length(B) to 1 6 b_k := the k-th licensed buffer element 7 Cand := Cand \u2295 b_k if compatible(b_k, m_i) 8 if Cand = {} then B := B \u2295 m_i 9 if Cand \u2260 {} then 10 ante_i := most salient element of Cand 11 C := augment(C, ante_i, m_i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Restricted Accessibility of Antecedent Candidates", "sec_num": "2.1" }, { "text": "Although this rule is heuristic and no evaluation of the impact of different versions of such a 'discourse model' has been carried out yet, we believe that 'accessibility' of antecedent candidates along these lines is a fruitful notion. It might lead to cognitively adequate models for coreference resolution, where cognitive burden determines which antecedent candidates are valid at all. 
Clearly, future work must start with an evaluation of our current setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Restricted Accessibility of Antecedent Candidates", "sec_num": "2.1" }, { "text": "There is a number of conditions not shown in the basic algorithm in Fig. 1 that define compatibility of antecedent and anaphor candidates based on POS tags: Reflexive pronouns must be bound to the subject governed by the same verb. Relative pronouns are bound to the next NP in the left context. Personal and possessive pronouns are licensed to bind to morphologically compatible antecedent candidates (named entities, nouns 2 and pronouns) within a window of three sentences. Named entities must either match completely or the antecedent must be longer than one token and all tokens of the anaphor must be contained in the antecedent (e.g. 'Hillary Clinton' ... 'Clinton'). Demonstrative NPs are mapped to nominal NPs by matching their heads (e.g. 'The recent findings' ... 'these findings'). Definite NPs match with noun chunks that are longer than one token 3 and must be contained completely without the determiner (e.g. 'Recent events' ... 'the events'). To license non-matching (bridging) nominal anaphora, we apply hyponymy and synonymy searches in WordNet (Fellbaum, 1998) and GermaNet (Hamp and Feldweg, 1997) respectively. For the machine learning approaches we used the standard features of mention-pair models (e.g. (Soon et al., 2001) ). We trained individual classifiers per anaphora type, i.e. for nominal anaphora, reflexive, possessive, relative and personal pronouns. We manually tuned the feature selection of each classifier. 
Both the mention-pair and the entity-mention model share these features and filters.", "cite_spans": [ { "start": 1064, "end": 1080, "text": "(Fellbaum, 1998)", "ref_id": "BIBREF6" }, { "start": 1094, "end": 1118, "text": "(Hamp and Feldweg, 1997)", "ref_id": "BIBREF9" }, { "start": 1228, "end": 1247, "text": "(Soon et al., 2001)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 68, "end": 74, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Filtering and Training Based on Anaphora Type", "sec_num": "2.2" }, { "text": "There is another principle that nicely combines with our incremental model and helps reduce the number of candidates even further: binding theory (e.g. (B\u00fcring, 2005) ). We know that 'Clinton' and 'her' cannot be coreferent in the sentence 'Clinton met her'. Thus, the pair 'Clinton'-'her' need not be considered at all. Furthermore, all mentions of the 'Clinton' coreference set, say {'Hillary Clinton', 'she', 'her', 'Clinton'}, are transitively exclusive and can be discarded as antecedent candidates.", "cite_spans": [ { "start": 152, "end": 166, "text": "(B\u00fcring, 2005)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Binding Theory as a Filter", "sec_num": "2.3" }, { "text": "Actually, there are subtle restrictions to be captured here. We have not implemented a full-blown binding theory on top of our dependency parsers. Instead, we approximated binding restrictions by subclause detection. 'Clinton' and 'her' are in the same subclause (the main clause) and are, thus, exclusive. This is true only for nouns and personal pronouns. Possessive and reflexive pronouns are allowed to be bound in the same subclause.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binding Theory as a Filter", "sec_num": "2.3" }, { "text": "In the pioneering work of (Lappin and Leass, 1994) , salience calculation included manually specified weights for grammatical functions (e.g. subject got the highest score). 
The distance between the candidates and other properties are also taken into account in order to determine salience. Such approaches lacked proper empirical justification 4 . Consequently, machine-learning approaches have replaced manually designed salience measures. Now it is the classifier that determines 'salience'.", "cite_spans": [ { "start": 26, "end": 50, "text": "(Lappin and Leass, 1994)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "An Empirically-based Salience Measure", "sec_num": "2.4" }, { "text": "Our salience measure is a variant of the one in (Lappin and Leass, 1994) . Instead of manually specifying the weights, we derived them empirically on the basis of the coreference gold standard (for German, this is the coreference-annotated treebank T\u00fcBa-D/Z; for English, OntoNotes 5 was used). The salience of a dependency label, D, is estimated by the number of true mentions in the gold standard that bear D (i.e. are connected to their heads with D), divided by the total number of true mentions. The salience of the label subject is thus calculated by:", "cite_spans": [ { "start": 48, "end": 72, "text": "(Lappin and Leass, 1994)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "An Empirically-based Salience Measure", "sec_num": "2.4" }, { "text": "Number of true mentions bearing subject / Total number of true mentions", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Empirically-based Salience Measure", "sec_num": "2.4" }, { "text": "For a given dependency label, this fraction indicates how strong a clue the label is for bearing a true mention. We get a hierarchical ordering of the dependency labels (subject > object > pobject ...) according to which antecedent candidates are ranked. Clearly, future work will have to establish a more elaborate calculation of salience to be used for classification without machine learning. 
To our surprise, however, this salience measure performed quite well together with our incremental architecture.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Empirically-based Salience Measure", "sec_num": "2.4" }, { "text": "We evaluate our system in two languages (German and English) and in two domains (newswire text and abstracts from the biomedical domain). We directly compare our incremental entity-mention model to the generative mention-pair model on the basis of the German T\u00fcBa-D/Z corpus in a 5-fold cross-validation. We also investigate the competitiveness of the incremental model compared to other systems in two tasks and languages: SemEval 6 (English and German) and BioNLP 7 (English). Results of the CoNLL 2011 8 shared task development data (English) are also provided. Fig. 2 shows the number of training instances of the first fold (about 5'000 sentences) from the T\u00fcBa-D/Z, both for the incremental and the non-incremental algorithm. Overall, a huge reduction by a factor of 4 (-131297 instances, -76.55 %) can be observed when moving from the non-incremental mention-pair to the incremental entity-mention model. As we use the same filter set in all runs, no true mentions are deleted in the incremental approach. The reduction in positives results from pairing an anaphor candidate with only one virtual prototype of the coreference set it belongs to as opposed to redundantly pairing it with all members of its set. As during testing only pairs consisting of the set's virtual prototype and the anaphor candidate are considered, this is sufficient and the additional pairs are not needed. The reduction in negatives results from the same mechanism. Instead of pairing the anaphor with all mentions of a set it does not belong to, only one negative pair with the prototype is generated. 
Additionally, some pairs are created with compatible members from the buffer list.", "cite_spans": [], "ref_spans": [ { "start": 565, "end": 571, "text": "Fig. 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Evaluation", "sec_num": "3" }, { "text": "The reason for the relatively minor reduction in reflexive and relative pronouns is that the search for antecedents is limited to the same sentence or even a specific (sub-) clause. On the other hand, we allow a window of three sentences for possessive and personal pronouns, wherein antecedent candidates may be found. In the latter two cases, the incremental approach to pair generation has a more drastic impact on the number of training instances (-64.44%, -84.05% resp.).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reducing the Number of Candidate Pairs", "sec_num": "3.1" }, { "text": "We can see from the results (Fig. 3) that the incremental entity-mention model outperforms the mention-pair model. The entity-mention model with the TiMBL classifier performed best by improving recall (+ 7.01%) and losing some precision (-0.79%) compared to the mention-pair model. To our surprise, the simple salience approach performed quite well, losing only 0.85% precision and 1.88% recall compared to its machine-learning variant.", "cite_spans": [], "ref_spans": [ { "start": 28, "end": 36, "text": "(Fig. 3)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "T\u00fcBa-D/Z Model Comparison", "sec_num": "3.2" }, { "text": "Given that bridging anaphora is not resolved in the salience mode, a reduction in recall was to be expected. It still outperforms the mention-pair model that implements machine learning. Overall the results of the T\u00fcBa-D/Z evaluation are low, indicating that end-to-end coreference resolution with real preprocessing is still a difficult problem. It is important to note that we implemented a version of the CEAF metric which does not account for singletons (i.e. 
coreference sets with only one mention) because we believe that finding singletons is not a crucial part of the coreference resolution task and that it improves results artificially. We can see the difference of evaluating with or without singletons if we compare these results with the ones from SemEval (Fig. 5) , where singletons are considered in the evaluation process. The SemEval German task also uses data from the T\u00fcBa-D/Z , allowing an approximate comparison of the results to illustrate the effects of considering singletons in evaluation. The CEAF F1-measure of our incremental model reaches 76.8% on the SemEval data ( Fig. 5) , while without singletons, we reach 52.79% in the T\u00fcBa-D/Z evaluation (Fig. 3) .", "cite_spans": [], "ref_spans": [ { "start": 769, "end": 778, "text": "(Fig. 5)", "ref_id": "FIGREF4" }, { "start": 1097, "end": 1104, "text": "Fig. 5)", "ref_id": "FIGREF4" }, { "start": 1176, "end": 1184, "text": "(Fig. 3)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "T\u00fcBa-D/Z Model Comparison", "sec_num": "3.2" }, { "text": "We simulated perfect resolution of the individual classifiers of the best-performing system (Entity-mention(TiMBL)) from the model comparison (Fig. 4) . We ran the system on the first fold (ca. 5000 sentences) of the T\u00fcBa-D/Z , resolving one type of anaphora (e.g. nominal anaphora) using gold standard information per run, while the other anaphora types were resolved by the system. This gives us an indication of the upper bounds of the system: How good would our system be if it resolved, e.g., nominal anaphora perfectly?", "cite_spans": [], "ref_spans": [ { "start": 142, "end": 150, "text": "(Fig. 4)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "3.3" }, { "text": "'with filters' means that only pairs that pass the filters are resolved. In the 'without filters' mode, all pairs of the corresponding anaphora type are resolved correctly, disregarding filter decisions. 
The other anaphora types are resolved by the system in both modes. The difference in performance between the 'with filters' and 'without filters' modes indicates how good our filters are: the smaller the difference, the better the filters (compare values horizontally). The performance difference of the individual classifiers with perfect resolution compared to the overall system performance (right column, compare vertically) indicates the difficulty of resolving that anaphora type.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "3.3" }, { "text": "For example, in the first row that indicates resolution performances of nominal anaphora, we can see that we roughly lose 10% in F1 measure due to our nominal filters (72.70% - 62.61%). Compared to the actual system performance in the last row in the right column (53.86%) we see that we lose an additional 9% in F1 measure because of imperfect resolution of nominal anaphora (62.61% - 53.86%). This sums up to a total loss of 19% in F1 measure compared to system performance with perfect resolution of nominal anaphora. Compared to the minor difference of 1.8% F1 measure between perfect and imperfect resolution of reflexive pronouns (-1.5% through filtering and -0.3% through imperfect classification) the difficulty of resolving nominal anaphora becomes obvious.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "3.3" }, { "text": "To get an indication of the competitiveness of our incremental approach we carried out evaluations over recent shared task data sets. The SemEval coreference task (Recasens et al., 2010) focused on coreference resolution in multiple languages and comparing different evaluation metrics. The test data for German was composed of the T\u00fcBa-D/Z, whereas the English data was gathered from the OntoNotes corpus. 
The main goal of the BioNLP protein/gene coreference task was to resolve non-name-containing mentions in protein/gene-interactions to their appropriate name-containing antecedents and thereby improve overall recall of interaction extraction (i.e. the main task). The test data consists of abstracts gathered from PubMed. As the SemEval training data for English and German were not available at the time of our post-task experiments, we were only able to evaluate the salience-based classification.", "cite_spans": [ { "start": 163, "end": 186, "text": "(Recasens et al., 2010)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "SemEval 2010, BioNLP 2011 and CoNLL 2011", "sec_num": "3.4" }, { "text": "The SemEval coreference task offers many different settings. Since we are interested in real end-to-end coreference resolution, we evaluated the open/regular setting, meaning that real preprocessing components are used as opposed to perfect gold standard preprocessing data. Results of the SemEval task are given in Figure 5 .", "cite_spans": [], "ref_spans": [ { "start": 316, "end": 324, "text": "Figure 5", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "SemEval 2010, BioNLP 2011 and CoNLL 2011", "sec_num": "3.4" }, { "text": "Except for the (recently questioned, e.g. (Luo, 2005; Cai and Strube, 2010) ) MUC metric in the English evaluation, the incremental model (incr) achieved the best results throughout the SemEval experiments in both languages. All other systems that competed in the task implemented a mention-pair model. 
Overall, an improvement can be observed compared to the other systems, mainly in precision.", "cite_spans": [ { "start": 42, "end": 53, "text": "(Luo, 2005;", "ref_id": "BIBREF13" }, { "start": 54, "end": 75, "text": "Cai and Strube, 2010)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "SemEval 2010, BioNLP 2011 and CoNLL 2011", "sec_num": "3.4" }, { "text": "The simple salience-based measure is not suited for resolving bridging anaphora. Therefore, bridging anaphora was not resolved by the system in these experiments (but still included in the evaluation), which might be a reason for the relatively low recall.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SemEval 2010, BioNLP 2011 and CoNLL 2011", "sec_num": "3.4" }, { "text": "More recently, we have adapted our salience-based incremental architecture to the biomedical domain. Our results in the recent BioNLP 2011 shared task are competitive as well (see Fig. 6 ).", "cite_spans": [], "ref_spans": [ { "start": 180, "end": 186, "text": "Fig. 6", "ref_id": null } ], "eq_spans": [], "section": "SemEval 2010, BioNLP 2011 and CoNLL 2011", "sec_num": "3.4" }, { "text": "The results of our evaluation over the CoNLL 2011 shared task development set are given in Fig. 7 . CEAF and BCUB scores are considerably lower compared to the SemEval results. We believe these differences originate from the updated scoring algorithms for CEAF and BCUB. They were modified for the CoNLL scorer according to suggestions by (Cai and Strube, 2010) . 
Our non-incremental mention-pair model can be seen as an adaption of this system and its features. Coreference clustering is discussed e.g. in (Denis and Baldridge, 2009; Finkel and Manning, 2008) . Our mention-pair model uses the Balas algorithm for clustering as discussed in (Klenner, 2007) . Direct empirical comparison of supervised mention-pair and entity-mention models can be found in e.g. (Luo et al., 2004; Yang et al., 2004; Rahman and Ng, 2009) . Only in (Rahman and Ng, 2009) a clear improvement by the entity-mention model is observed. Other supervised entity-mention models such as (Daume III and Marcu, 2005; Culotta et al., 2007; Raghunathan et al., 2010) Our work differs from the research mentioned above as it focuses on using an incremental entitymention architecture to impose constraints on candidate pair generation as opposed to generating cluster-level features for (machine learning-based) classification. Our hypothesis, also for future work, is that progress is possible by not only improving classifier performance but by improving other steps of the coreference resolution pipeline that lead up to the classifier, namely pair generation and antecedent candidate accessibility.", "cite_spans": [ { "start": 12, "end": 30, "text": "(Soon et al., 2001", "ref_id": "BIBREF22" }, { "start": 299, "end": 326, "text": "(Denis and Baldridge, 2009;", "ref_id": "BIBREF5" }, { "start": 327, "end": 352, "text": "Finkel and Manning, 2008)", "ref_id": "BIBREF7" }, { "start": 434, "end": 449, "text": "(Klenner, 2007)", "ref_id": "BIBREF10" }, { "start": 554, "end": 572, "text": "(Luo et al., 2004;", "ref_id": "BIBREF12" }, { "start": 573, "end": 591, "text": "Yang et al., 2004;", "ref_id": "BIBREF24" }, { "start": 592, "end": 612, "text": "Rahman and Ng, 2009)", "ref_id": "BIBREF17" }, { "start": 635, "end": 644, "text": "Ng, 2009)", "ref_id": "BIBREF17" }, { "start": 753, "end": 780, "text": "(Daume III and Marcu, 2005;", "ref_id": "BIBREF4" }, { "start": 781, "end": 
802, "text": "Culotta et al., 2007;", "ref_id": "BIBREF2" }, { "start": 803, "end": 828, "text": "Raghunathan et al., 2010)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "We have introduced an incremental entity-mention algorithm for coreference resolution and evaluated its impact on pair generation and the performance of architectural variants. A performance comparison of our model to systems from different shared tasks produced good results. We also discussed a simple and very fast salience-based approach that performed quite well, i.e. it outperformed all systems of the 2010's SemEval shared task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "The benefits of an incremental model are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "\u2022 due to the restricted access to potential antecedent candidates, the number of generated candidate pairs can be reduced drastically", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "\u2022 no additional coreference clustering is necessary", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "\u2022 global constraints (e.g. transitivity) are easily integrated", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "\u2022 underspecification of antecedent candidates can often be compensated by other members of the emerging coreference sets Our theory on how to restrict the accessibility of antecedent candidates has proven to be (empirically) successful, as it outperformed other systems. However, we are aware of the fact that we need to explore in a more principled and empirically grounded way, what the parameters of such an evolving discourse model are. 
We strive for a theory whose decisions, ideally, reflect the restrictions of human cognitive capacity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "Finally, our implementation of a binding theory is incomplete. Since binding theory provides hard restrictions, it is a crucial component of any theory on antecedent accessibility.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "Web demos of the salience-based system for English and German are available 9 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "We are aware of the fact that, linguistically speaking, anaphoric expressions depend on previously mentioned entities (e.g. 'she' \u2192 'Clinton'), whereas coreferent expressions do not always (e.g. 'Hillary Clinton' ... 'United States Secretary of State'). We use the terms 'anaphoric' and 'anaphora' to subsume both relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "To identify animacy and gender of NEs, we use a list of known first names annotated with gender information and look up Wikipedia categories to map NEs to WordNet/GermaNet synsets. 
To obtain animacy information for common nouns, we conduct a WordNet search. If we do not apply this restriction, too many false positives are produced; simple head matching appears to be very noisy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "There are notable exceptions, e.g. (Ge et al., 1998), where salience calculation is combined with statistics. 5 http://www.bbn.com/ontonotes/ 6 http://stel.ub.edu/semeval2010-coref/ 7 https://sites.google.com/site/bionlpst/home/proteingene-coreference-task/ 8 http://conll.bbn.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Acknowledgements. Our project is funded by the Swiss National Science Foundation (grant 105211-118108). We are grateful to OntoGene 10 for their help and advice regarding the BioNLP shared task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Binding Theory. Cambridge Textbooks in Linguistics", "authors": [ { "first": "Daniel", "middle": [], "last": "B\u00fcring", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel B\u00fcring. 2005. Binding Theory. Cambridge Textbooks in Linguistics. Cambridge University Press, Cambridge.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Evaluation metrics for end-to-end coreference resolution systems", "authors": [ { "first": "Jie", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Strube", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGDIAL '10", "volume": "", "issue": "", "pages": "28--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jie Cai and Michael Strube. 2010. 
Evaluation metrics for end-to-end coreference resolution systems. In Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGDIAL '10, pages 28-36, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "First-order probabilistic models for coreference resolution", "authors": [ { "first": "Aron", "middle": [], "last": "Culotta", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Wick", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2007, "venue": "HLT '07: The Conference of the North American Chapter of ACL; Proceedings of the Main Conference", "volume": "", "issue": "", "pages": "81--88", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aron Culotta, Michael Wick, and Andrew McCallum. 2007. First-order probabilistic models for coreference resolution. In HLT '07: The Conference of the North American Chapter of ACL; Proceedings of the Main Conference, pages 81-88, Rochester, New York, April. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Timbl: Tilburg memory-based learner", "authors": [ { "first": "Walter", "middle": [], "last": "Daelemans", "suffix": "" }, { "first": "Jakub", "middle": [], "last": "Zavrel", "suffix": "" }, { "first": "Ko", "middle": [], "last": "Van Der Sloot", "suffix": "" }, { "first": "Antal", "middle": [], "last": "Van Den Bosch", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Walter Daelemans, Jakub Zavrel, Ko van der Sloot, and Antal van den Bosch. 2007. Timbl: Tilburg memory-based learner. 
Technical report, Induction of Linguistic Knowledge, Tilburg University and CNTS Research Group, University of Antwerp.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A large-scale exploration of effective global features for a joint entity detection and tracking model", "authors": [ { "first": "Hal", "middle": [], "last": "Daume", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2005, "venue": "HLT '05: Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "97--104", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hal Daume III and Daniel Marcu. 2005. A large-scale exploration of effective global features for a joint entity detection and tracking model. In HLT '05: Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 97-104, Morristown, NJ, USA. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Global joint models for coreference resolution and named entity classification", "authors": [ { "first": "Pascal", "middle": [], "last": "Denis", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "" } ], "year": 2009, "venue": "Procesamiento del Lenguaje Natural 42", "volume": "", "issue": "", "pages": "87--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pascal Denis and Jason Baldridge. 2009. Global joint models for coreference resolution and named entity classification. 
In Procesamiento del Lenguaje Natural 42, pages 87-96, Barcelona: SEPLN.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "WordNet: An Electronic Lexical Database (Language, Speech, and Communication)", "authors": [ { "first": "Christiane", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database (Language, Speech, and Communication). The MIT Press, May.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Enforcing transitivity in coreference resolution", "authors": [ { "first": "Jenny", "middle": [ "Rose" ], "last": "Finkel", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2008, "venue": "HLT '08: Proceedings of the 46th Annual Meeting of the ACL on Human Language Technologies", "volume": "", "issue": "", "pages": "45--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jenny Rose Finkel and Christopher D. Manning. 2008. Enforcing transitivity in coreference resolution. In HLT '08: Proceedings of the 46th Annual Meeting of the ACL on Human Language Technologies, pages 45-48, Morristown, NJ, USA. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A statistical approach to anaphora resolution", "authors": [ { "first": "Niye", "middle": [], "last": "Ge", "suffix": "" }, { "first": "John", "middle": [], "last": "Hale", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the Sixth Workshop on Very Large Corpora", "volume": "", "issue": "", "pages": "161--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niye Ge, John Hale, and Eugene Charniak. 1998. A statistical approach to anaphora resolution. 
In Proceedings of the Sixth Workshop on Very Large Corpora, pages 161-171, Montreal, Canada.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "GermaNet - a lexical-semantic net for German", "authors": [ { "first": "Birgit", "middle": [], "last": "Hamp", "suffix": "" }, { "first": "Helmut", "middle": [], "last": "Feldweg", "suffix": "" } ], "year": 1997, "venue": "Proceedings of ACL workshop Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications", "volume": "", "issue": "", "pages": "9--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Birgit Hamp and Helmut Feldweg. 1997. GermaNet - a lexical-semantic net for German. In Proceedings of ACL workshop Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications, pages 9-15, Somerset, NJ, USA. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Enforcing consistency on coreference sets", "authors": [ { "first": "Manfred", "middle": [], "last": "Klenner", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "323--328", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manfred Klenner. 2007. Enforcing consistency on coreference sets. 
In Proceedings of the International Conference on Recent Advances in Natural Language Processing, pages 323-328, Borovets, Bulgaria.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "An algorithm for pronominal anaphora resolution", "authors": [ { "first": "Shalom", "middle": [], "last": "Lappin", "suffix": "" }, { "first": "J", "middle": [], "last": "Herbert", "suffix": "" }, { "first": "", "middle": [], "last": "Leass", "suffix": "" } ], "year": 1994, "venue": "Computational Linguistics", "volume": "20", "issue": "", "pages": "535--561", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shalom Lappin and Herbert J Leass. 1994. An algorithm for pronominal anaphora resolution. Computational Linguistics, 20:535-561.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A mention-synchronous coreference resolution algorithm based on the bell tree", "authors": [ { "first": "Xiaoqiang", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Abe", "middle": [], "last": "Ittycheriah", "suffix": "" }, { "first": "Hongyan", "middle": [], "last": "Jing", "suffix": "" }, { "first": "Nanda", "middle": [], "last": "Kambhatla", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Annual Meeting of ACL, ACL '04", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoqiang Luo, Abe Ittycheriah, Hongyan Jing, Nanda Kambhatla, and Salim Roukos. 2004. A mention-synchronous coreference resolution algorithm based on the bell tree. In Proceedings of the 42nd Annual Meeting of ACL, ACL '04, Stroudsburg, PA, USA. 
Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "On coreference resolution performance metrics", "authors": [ { "first": "Xiaoqiang", "middle": [], "last": "Luo", "suffix": "" } ], "year": 2005, "venue": "HLT '05: Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "25--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoqiang Luo. 2005. On coreference resolution performance metrics. In HLT '05: Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 25-32, Morristown, NJ, USA. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Manual for the Annotation of", "authors": [ { "first": "Karin", "middle": [], "last": "Naumann", "suffix": "" } ], "year": 2006, "venue": "In-document Referential Relations. SFS (Seminar f\u00fcr Sprachwissenschaft", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karin Naumann, 2006. Manual for the Annotation of In-document Referential Relations. SFS (Seminar f\u00fcr Sprachwissenschaft), http://www.sfs.uni-tuebingen.de/tuebadz.shtml.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Supervised noun phrase coreference research: the first fifteen years", "authors": [ { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of ACL, ACL '10", "volume": "", "issue": "", "pages": "1396--1411", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vincent Ng. 2010. Supervised noun phrase coreference research: the first fifteen years. In Proceedings of the 48th Annual Meeting of ACL, ACL '10, pages 1396-1411, Stroudsburg, PA, USA. 
Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A multi-pass sieve for coreference resolution", "authors": [ { "first": "Karthik", "middle": [], "last": "Raghunathan", "suffix": "" }, { "first": "Heeyoung", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sudarshan", "middle": [], "last": "Rangarajan", "suffix": "" }, { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10", "volume": "", "issue": "", "pages": "492--501", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karthik Raghunathan, Heeyoung Lee, Sudarshan Rangarajan, Nathanael Chambers, Mihai Surdeanu, Dan Jurafsky, and Christopher Manning. 2010. A multi-pass sieve for coreference resolution. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10, pages 492-501, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Supervised models for coreference resolution", "authors": [ { "first": "Altaf", "middle": [], "last": "Rahman", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", "volume": "2", "issue": "", "pages": "968--977", "other_ids": {}, "num": null, "urls": [], "raw_text": "Altaf Rahman and Vincent Ng. 2009. Supervised models for coreference resolution. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2 - Volume 2, EMNLP '09, pages 968-977, Stroudsburg, PA, USA. 
Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Semeval-2010 task 1: Coreference resolution in multiple languages", "authors": [], "year": null, "venue": "Proceedings of the 5th International Workshop on Semantic Evaluation, SemEval '10", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Semeval-2010 task 1: Coreference resolution in multiple languages. In Proceedings of the 5th International Workshop on Semantic Evaluation, SemEval '10, pages 1-8, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Hybrid Long-Distance Functional Dependency Parsing", "authors": [ { "first": "Gerold", "middle": [], "last": "Schneider", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gerold Schneider. 2008. Hybrid Long-Distance Functional Dependency Parsing. Doctoral Thesis, Institute of Computational Linguistics, Univ. of Zurich.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A New Hybrid Dependency Parser for German", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Gerold", "middle": [], "last": "Schneider", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Volk", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Warin", "suffix": "" } ], "year": 2009, "venue": "Proc. of the German Society for Computational Linguistics and Language Technology 2009", "volume": "", "issue": "", "pages": "115--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich, Gerold Schneider, Martin Volk, and Martin Warin. 2009. A New Hybrid Dependency Parser for German. In Proc. 
of the German Society for Computational Linguistics and Language Tech- nology 2009 (GSCL 2009), pages 115-124, Pots- dam, Germany.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A machine learning approach to coreference resolution of noun phrases", "authors": [ { "first": "M", "middle": [], "last": "Wee", "suffix": "" }, { "first": "", "middle": [], "last": "Soon", "suffix": "" }, { "first": "T", "middle": [], "last": "Hwee", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2001, "venue": "Computational Linguistics", "volume": "27", "issue": "4", "pages": "521--544", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wee M. Soon, Hwee T. Ng, and Daniel. 2001. A machine learning approach to coreference resolu- tion of noun phrases. Computational Linguistics, 27(4):521-544, December.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Instance sampling methods for pronoun resolution", "authors": [ { "first": "Holger", "middle": [], "last": "Wunsch", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "K\u00fcbler", "suffix": "" }, { "first": "Rachael", "middle": [], "last": "Cantrell", "suffix": "" } ], "year": 2009, "venue": "Proceedings of Proceedings of the International Conference on Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Holger Wunsch, Sandra K\u00fcbler, and Rachael Cantrell. 2009. Instance sampling methods for pronoun reso- lution. 
In Proceedings of the International Conference on Recent Advances in Natural Language Processing, Borovets, Bulgaria.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "An np-cluster based approach to coreference resolution", "authors": [ { "first": "Xiaofeng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Su", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Chew Lim", "middle": [], "last": "Tan", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 20th International Conference on Computational Linguistics, COLING '04", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaofeng Yang, Jian Su, Guodong Zhou, and Chew Lim Tan. 2004. An np-cluster based approach to coreference resolution. In Proceedings of the 20th International Conference on Computational Linguistics, COLING '04, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "text": "Number of training instances per anaphora type of Fold 1 of the T\u00fcBa-D/Z", "num": null }, "FIGREF1": { "type_str": "figure", "uris": null, "text": "CEAF scores of the 5-fold T\u00fcBa-D/Z cross-validation", "num": null }, "FIGREF2": { "type_str": "figure", "uris": null, "text": "CEAF scores for the simulation of perfect classification (upper bounds) of the individual classifiers for the first 5000 sentences of the T\u00fcBa-D/Z.", "num": null }, "FIGREF3": { "type_str": "figure", "uris": null, "text": "CoNLL 2011 Development Set Results", "num": null }, "FIGREF4": { "type_str": "figure", "uris": null, "text": "Our SemEval 2010 post-task evaluation results", "num": null }, "TABREF4": { "content": "
CEAF MUC BCUB BLANC
System R P F1 R P F1 R P F1 R P F1
German, open regular
bart 61.4 61.2 61.3 61.4 36.1 45.5 75.3 58.3 65.7 55.9 60.3 57.3
incr 76.8 70.4 73.4 50.4 47.1 48.7 81.7 75.6 78.5 55 72.6 57.8
English, open regular
bart 70.1 64.3 67.1 62.8 52.4 57.1 74.9 67.7 71.1 55.3 73.2 57.7
corry-b 70.4 67.4 68.9 55.0 54.2 54.6 73.7 74.1 73.9 57.1 75.7 60.6
corry-c 70.9 67.9 69.4 54.7 55.5 55.1 73.8 73.1 73.5 57.4 63.8 59.4
corry-m 66.3 63.5 64.8 61.5 53.4 57.2 76.8 66.5 71.3 58.5 56.2 57.1
incr 67.6 73 70.2 34 62.5 44.1 66.7 86 75.1 57.1 78.4 61.1
", "num": null, "html": null, "text": "", "type_str": "table" } } } }