|
{ |
|
"paper_id": "R11-1022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:04:44.005126Z" |
|
}, |
|
"title": "Multilabel Tagging of Discourse Relations in Ambiguous Temporal Connectives", |
|
"authors": [ |
|
{ |
|
"first": "Yannick", |
|
"middle": [], |
|
"last": "Versley", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of T\u00fcbingen", |
|
"location": {} |
|
}, |
|
"email": "versley@sfs.uni-tuebingen.de" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Many annotation schemes for discourse relations allow combinations such as tem-poral+cause (for events that are temporally and causally related to each other) and temporal+contrast (for contrasts between subsequent time spans, or between events that are temporally coextensive). However, current approaches for the automatic classification of discourse relations are limited to producing only one relation and disregard the others. We argue that the information contained in these 'additional' relations is indeed useful and present an approach to tag multiple fine-grained discourse relations in ambiguous connectives from the German T\u00fcBa-D/Z corpus. Using a rich feature set, we show that good accuracy is possible even for inferred relations that are not part of the connective's 'core' meaning.", |
|
"pdf_parse": { |
|
"paper_id": "R11-1022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Many annotation schemes for discourse relations allow combinations such as tem-poral+cause (for events that are temporally and causally related to each other) and temporal+contrast (for contrasts between subsequent time spans, or between events that are temporally coextensive). However, current approaches for the automatic classification of discourse relations are limited to producing only one relation and disregard the others. We argue that the information contained in these 'additional' relations is indeed useful and present an approach to tag multiple fine-grained discourse relations in ambiguous connectives from the German T\u00fcBa-D/Z corpus. Using a rich feature set, we show that good accuracy is possible even for inferred relations that are not part of the connective's 'core' meaning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In order to account for the structure of text beyond the level of single clauses, it is common to postulate discourse relations holding between clauses or groups of clauses. Discourse relations are frequently marked by connectives such as because, as or while, which give an indication both of (syntactic or anaphoric) linking possibilities for the spans and of the possible relations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Many connectives (such as because or for instance) always signal one specific discourse relation. This fact has, after initial successes in purely structural discourse parsing (Soricut and Marcu, 2003) , led to decreased attention from researchers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 176, |
|
"end": 201, |
|
"text": "(Soricut and Marcu, 2003)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Other connectives, however, are ambiguous between multiple readings and their disambiguation necessitates similar semantic information as implicit (connective-less) discourse relations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Ambiguous temporal markers such as after, as or while usually occur with a purely temporal reading, but also with additional non-temporal discourse relations, such as causal and contrastive readings. When these non-temporal relations occur instead of, or in addition to, the temporal reading, they require similar similar inferences from the reader as in connective-less discourse relations, but may be easier to detect automatically. For our goal of accurate classification, multilabel classification becomes necessary when the nontemporal discourse relations co-occur with the temporal ones:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "( In the examples from (1), the sentence in (b) is clearly temporal (and non-causal), and the one in (c) is clearly causal (and non-temporal), whereas in (a) the connective contributes both a causal and a temporal aspect to the coherence of the text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the Penn Discourse Treebank (Prasad et al., 2008) , which uses multiple labels as a last resort when annotators cannot reach an agreement or feel that an instance is inherently ambiguous, 5.5% of discourse connectives are assigned multiple discourse relations. The proportion of multiple vs. single discourse relation varies from connective to connective, with a higher proportion in ambiguous temporal connectives, where it ranges from after's 9% and while's 12.7% over as (23.6%) and when (21%) to meanwhile with 70% of the instances that have multiple labels.", |
|
"cite_spans": [ |
|
{ |
|
"start": 31, |
|
"end": 52, |
|
"text": "(Prasad et al., 2008)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The annotation of discourse connectives in the T\u00fcBa-D/Z (Telljohann et al., 2009) , which we used in our experiments, uses combinations of temporal and other relations to signal causation between successive events or a contrast between co-temporal events, yielding 64.6% of multilabel instances for nachdem (after/since), and 53.8% of multilabel instances for w\u00e4hrend (while).", |
|
"cite_spans": [ |
|
{ |
|
"start": 56, |
|
"end": 81, |
|
"text": "(Telljohann et al., 2009)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Hence, it is necessary for accurate classification to identify both of the discourse relations holding in such a case, whereas most recent research, such as or Wellner (2009) has focused on single-relation classification. 1 A notable exception is Bethard and Martin's (2008) work on instances of and, where the presence of a temporal or causal relation is classified independently of the other.", |
|
"cite_spans": [ |
|
{ |
|
"start": 160, |
|
"end": 174, |
|
"text": "Wellner (2009)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 222, |
|
"end": 223, |
|
"text": "1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 247, |
|
"end": 274, |
|
"text": "Bethard and Martin's (2008)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In terms of the features used in classification, the perception that most connectives are unambiguous has created a disparity in terms of features between approaches that target discourse relations signaled by a connective (so-called explicit relations) and those that are inferred between adjacent discourse segments in the absence of connectives (implicit relations).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Work on explicit (i.e., connective-bearing) relations has emphasized simpler features, such as the syntactic neighbourhood of the connective or features based on tense and mood of the argument clauses (Miltsakaki et al., 2005) . In contrast, work targeting implicit discourse relations harnesses a larger variety of features, including word pairs (Marcu and Echihabi, 2002; Sporleder and Lascarides, 2008) , structural properties of the argument clauses (Lin et al., 2009) , semantic parallelism between arguments'", |
|
"cite_spans": [ |
|
{ |
|
"start": 201, |
|
"end": 226, |
|
"text": "(Miltsakaki et al., 2005)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 347, |
|
"end": 373, |
|
"text": "(Marcu and Echihabi, 2002;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 374, |
|
"end": 405, |
|
"text": "Sporleder and Lascarides, 2008)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 454, |
|
"end": 472, |
|
"text": "(Lin et al., 2009)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "1 Both Pitler and Nenkova, and Wellner classify only the first relation. Pitler and Nenkova count the system response as correct when it includes any of the discourse relations in the gold standard, while Wellner counts a system-generated relation as correct if it reproduces the first of the two relations of a multi-relation instance. main verbs' classes, emotive polarity, and other special word categories .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the remainder of this paper, we formulate the disambiguation of ambiguous temporal connectives as a multilabel classification task (where the system can, and should, assign more than one discourse relation). The results (sections 5, 6) show that a rich feature set -partly inspired by the state of the art for implicit relations -is instrumental in detecting the 'non-obvious' discourse relations in temporal connectives.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Connectives in the T\u00fcBa-D/Z", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotating Ambiguous Temporal", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "For our study on automatic classification, we use instances of two German temporal connectives that can also carry a non-temporal discourse relation, namely w\u00e4hrend and nachdem:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotating Ambiguous Temporal", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The default reading of nachdem (corresponding to English after/as/since) signals a temporal relation between subsequent events, which is also compatible with a causal discourse relation, or a contrast between two events or states. Nachdem is also used in contexts where it confers an argumentative relation between propositions (evidence), or between a licensing proposition and a question or imperative (speech-act). As seen in example (1), rhetorical relations such as 'evidence' and 'speech-act' can occur with arguments that would be incompatible with the temporal reading of nachdem:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotating Ambiguous Temporal", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "(2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotating Ambiguous Temporal", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Und nachdem ja die vertraglichen Bindungen noch weiterlaufen, und zwar bis zum Jahre 2006, werden heuer und in den kommenden Jahren noch weitere 250 Millionen Euro zur Auszahlung gelangen.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotating Ambiguous Temporal", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "And as the contractual obligations are still in force, and run up to 2006, this year and in the coming years a further EUR 250 million will be paid out.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotating Ambiguous Temporal", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Similar to its English counterpart while, German w\u00e4hrend has a temporal reading that locates the sub-clause in the phase of the matrix clause, but also allows a contrast reading where two propositions are contrasted with respect to a common integrator.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotating Ambiguous Temporal", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In a prototypical example such as (3), we find a parallel structure with one pair of entities being compared (Mary and Peter) and an attribute in which they differ (liking bananas versus prefering Such a structure, which we can describe using a common integrator such as \"People like fruits\", receives the contrast relation. In cases where a contrast coincides with cotemporal states, or a temporal relation coincides with an inferred contrast, a secondary temporal or contrast relation is annotated to reflect the ambiguity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotating Ambiguous Temporal", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our data set -the connective occurrences from the current extent of the T\u00fcBa-D/Z plus additional texts that are scheduled for the inclusion in one of the next releases, totaling about 60 000 senteces -contains 294 instances of nachdem and 527 instances of w\u00e4hrend. Where available, we used the syntactic annotation from the treebank; in the remaining cases, we used a syntactic parser (Versley and Rehbein, 2009) to provide syntax trees for the feature extraction. Table 1 shows the full taxonomy of relations for the ambiguous connectives considered in the experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 465, |
|
"end": 472, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotating Ambiguous Temporal", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Reproducing the connective annotation in the T\u00fcBa-D/Z presents a hierarchical multi-label classifcation task: more than one label may apply to a given instance, and labels are arranged in taxonomical categories.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilabel classification", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "As in classical multi-label tagging, the classifier should take into account the suitability of individual classification labels for a given example; however, the context of discourse relation classification shows stronger interdependence of labels (e.g., a non-temporal example is bound to have an evidence or contrast relation).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilabel classification", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "As multilabel classification goes beyond assigning exactly one atomic label, scoring whether the proposed label combination is identical to the gold standard (equal in the results table) fails to give partial credit to a system response that reproduces some, but not all of the correct discourse relations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating multilabel classification", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The dice evaluation measure accounts for the overlap between the gold standard label combination and the label combination in the system response, calculated as 2|A\u2229B|", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating multilabel classification", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "|A|+|B| . Both equal and dice measure can be calculated at each level of the taxonomy, yielding values for d = 1 (the topmost level) up to d = 3 (the finest taxonomic level).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating multilabel classification", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In addition, the assignment of any particular relation can be evaluated using the standard Fmeasure and precision/recall.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating multilabel classification", |
|
"sec_num": "3.1" |
|
}, |
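As an illustration of the evaluation measures in this section, the following minimal sketch (function and label names are ours, not from the paper) computes equal and dice over gold and predicted label sets, with a depth-truncation helper for the taxonomy levels d = 1 to d = 3:

```python
def dice(gold, pred):
    """Dice overlap 2|A intersect B| / (|A| + |B|) between label sets."""
    gold, pred = set(gold), set(pred)
    if not gold and not pred:
        return 1.0
    return 2 * len(gold & pred) / (len(gold) + len(pred))

def equal(gold, pred):
    """Strict match: credit only for reproducing the exact label combination."""
    return 1.0 if set(gold) == set(pred) else 0.0

def truncate(labels, d):
    """Project fine-grained labels such as 'Temporal.simultaneous' onto
    taxonomy depth d, so that both measures can be computed per level."""
    return {".".join(lab.split(".")[:d]) for lab in labels}
```

For gold = {Temporal.simultaneous, Comparison.contrast} and pred = {Temporal.simultaneous}, dice gives partial credit (2/3) where equal gives none (0.0).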
|
{ |
|
"text": "One of the classical approaches to multilabel classification is to decompose the labeling decision into binary decisions for each possible label (onevs-all reduction) and using confidence values to choose one or several labels among those that are most confidently classified as positive examples.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Greedy classification", |
|
"sec_num": "3.2" |
|
}, |
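A minimal sketch of such a one-vs-all decoding step (the threshold and the confidence scores below are hypothetical, not values from the paper):

```python
def greedy_multilabel(scores, threshold=0.0):
    """One-vs-all decoding: include every label whose binary classifier
    score exceeds the threshold; always keep the top-scoring label so
    that each instance receives at least one relation."""
    best = max(scores, key=scores.get)
    chosen = {lab for lab, s in scores.items() if s > threshold}
    chosen.add(best)
    return chosen

# hypothetical confidence values from per-label classifiers
scores = {"Temporal": 1.3, "Result": 0.4, "Comparison": -0.8}
greedy_multilabel(scores)  # {'Temporal', 'Result'}
```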
|
{ |
|
"text": "To yield the finer-grained distinctions from the taxonomy (such as Comparison.contrast vs. Comparison.parallel), the classifier makes an additional decision on the fine-grained class corresponding to the coarse-grained one, which is again realized through training separate classifiers for each fine-grained relation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Greedy classification", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In our experiments, we use SVMperf, an SVM implementation that is able to train classifiers optimized for performance on positive instances (Joachims, 2005) . To improve the separability of the data (SVMperf, like the AMIS package used for CRF training, uses linear classifiers), we use feature combinations up to degree 2.", |
|
"cite_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 156, |
|
"text": "(Joachims, 2005)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Greedy classification", |
|
"sec_num": "3.2" |
|
}, |
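The degree-2 feature combinations mentioned above can be illustrated as follows (a sketch; the feature names are invented):

```python
from itertools import combinations

def degree2(features):
    """Expand a set of binary features with all pairwise conjunctions
    (degree 2), which improves separability for a linear classifier."""
    feats = sorted(features)
    return set(feats) | {f"{a}&{b}" for a, b in combinations(feats, 2)}

degree2({"main-present", "sub-perfect"})
# the two original features plus the conjunction 'main-present&sub-perfect'
```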
|
{ |
|
"text": "One disadvantage of the greedy decomposition into a sequence of binary decisions outlined above is that this variant is unable to model dependencies between the labels assigned by the system; similarly, the greedy decomposition is unable to use evidence for or against individual fine-grained relations in the decision regarding the coarsegrained relations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A CRF-based approach", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "As an alternative approach, we consider a classifier that directly ranks possible label combinations, considering all (fine-grained) labels at once. The model ranks all label combinations Y \u2208 Y using a feature function \u03a6 and the learned weight vector w:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A CRF-based approach", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Y = arg max Y \u2208Y w, \u03a6(x, Y )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A CRF-based approach", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "where Y contains all allowable label combinations and \u03a6 extracts a feature vector containing the information about the problem instance (x) and the label combination under consideration (Y ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A CRF-based approach", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In order to describe each instance, we factor \u03a6 as \u03a6(x, Y ) := \u03a6 lab (Y ) \u00d7 \u03a6 data (x) (i.e., assuming a label feature Temporal and a data feature main-present, \u03a6 would contain the combined feature (Temporal, main-present)).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A CRF-based approach", |
|
"sec_num": "3.3" |
|
}, |
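The factored scoring of label combinations can be sketched as follows (the weights, feature names, and candidate labelings are hypothetical; this only illustrates the pairing of label features with data features):

```python
def score(w, label_feats, data_feats):
    """<w, Phi(x, Y)> with Phi(x, Y) = Phi_lab(Y) x Phi_data(x): every
    (label feature, data feature) pair is one combined feature."""
    return sum(w.get((lf, df), 0.0) for lf in label_feats for df in data_feats)

def rank(w, candidates, data_feats):
    """Return the label combination Y maximizing <w, Phi(x, Y)>."""
    return max(candidates, key=lambda y: score(w, candidates[y], data_feats))

# hypothetical learned weights and candidate labelings
w = {("Temporal", "main-present"): 1.0, ("Temporal+Result", "assoc-weil"): 2.0}
candidates = {
    "Temporal": ["Temporal"],
    "Temporal+Result": ["Temporal", "Result", "Temporal+Result"],
}
rank(w, candidates, ["main-present", "assoc-weil"])  # 'Temporal+Result'
```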
|
{ |
|
"text": "In our case, the label information from \u03a6 lab contains the set of coarse-grained relations assigned (e.g.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A CRF-based approach", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Temporal+Result), as well as the fine-grained relations, individually (in the example, both Temporal and Result.situational.enable). It is easy to see that the problem size increases superlinearly with the number of possible relations, because the set Y of possible labelings can grow quadratically. Keeping the problem size in check provides a gain in efficiency that is already helpful at the current data size, and becomes crucial as the label set and amount of data grow with the addition of more connectives.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A CRF-based approach", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "To mitigate this problem, we factor the actual feature vector into a feature forest (Miyao and Tsujii, 2002) that contains shared nodes for each element, which means that the necessary computations become linear in (number of fine-grained re-lations+ number of coarse-grained relation combinations).", |
|
"cite_spans": [ |
|
{ |
|
"start": 84, |
|
"end": 108, |
|
"text": "(Miyao and Tsujii, 2002)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A CRF-based approach", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Since the CRF approach optimizes for likelihood of the correct (fine-grained) solution, the results of the CRF classifier may not always give optimal results with respect to a given evaluation metric. To compensate for this, we introduce a bias parameter that is added to the score of candidate labelings with more than one label, which forces the classifier towards including (more) labels even when it is not completely certain about them.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A CRF-based approach", |
|
"sec_num": "3.3" |
|
}, |
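The bias towards multi-label candidates can be sketched as below (the bias value and scorer are illustrative; the paper does not specify a concrete value):

```python
def decode_with_bias(candidates, crf_score, bias=0.5):
    """Pick the best label combination after adding a bias to candidates
    with more than one label, trading likelihood for multilabel recall."""
    def total(labels):
        return crf_score(labels) + (bias if len(labels) > 1 else 0.0)
    return max(candidates, key=total)

# toy scorer: the single label is slightly preferred before the bias
scores = {("Temporal",): 1.0, ("Temporal", "Result"): 0.8}
decode_with_bias(list(scores), scores.__getitem__)  # ('Temporal', 'Result')
```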
|
{ |
|
"text": "In contrast to newer work in this area, earlier approaches for explicit discourse relations, such as Miltsakaki et al. (2005) , have mainly relied on linguistic features indicating the clause or event type, which allows to separate temporal from atemporal uses of a connective in some cases. For our classification experiments, we include a set of baseline features reflecting these linguistic properties as well as more specific features aiming at the differences between different types of argument clauses, but also features that target broader lexical information -in this case, those aimed at the semantics of each argument clause (by taking the head itself, or a characterization), but also co-taxonomic relations between the argument clauses as well as pairs of lemmas and (syntactic) productions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 125, |
|
"text": "Miltsakaki et al. (2005)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classification features", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "A first set of baseline features include basic linguistic features, such as clause order (i.e., topicalization/fronting), as the non-temporal discourse relations are more likely to occur with fronted subclauses than with postposed ones; tense features include indicators for perfect, passives, and modal verbs as well as the tense of the finite verb in each clause; a binary negation feature indicates the presence of negating adverb (e.g., English not), determiners (no) or pronouns (none).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classification features", |
|
"sec_num": "4" |
|
}, |
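The binary negation feature can be sketched as a simple word-list lookup (the German word lists below are illustrative, not exhaustive, and not taken from the paper):

```python
# illustrative (not exhaustive) German word lists for the negation feature
NEG_ADVERBS = {"nicht"}               # 'not'
NEG_DETERMINERS = {"kein", "keine"}   # 'no'
NEG_PRONOUNS = {"keiner", "niemand"}  # 'none', 'nobody'

def has_negation(tokens):
    """Binary negation feature: the clause contains a negating adverb,
    determiner, or pronoun."""
    neg = NEG_ADVERBS | NEG_DETERMINERS | NEG_PRONOUNS
    return any(t.lower() in neg for t in tokens)
```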
|
{ |
|
"text": "Beyond the information from clause order and tense, punctuation after the sentence helps identify different types of sentences (since questions and imperatives can be an indication of the discourse-internal speech act relation).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clause type and status", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "For each clause, a number of modifying adverbials such as temporal, causal or concessive adverbials (excluding the nachdemor w\u00e4hrendclause), conjunctive focus adverbs (also, as well), and commentary adverbs (doubtlessly, actually, probably. . . ). Additional temporal or causal adverbials, which fill the respective function for the main clause, make it less likely that the subordinate clause temporally locates or causally explains the main clause, whereas conjunctive focus adverbs often indicate a parallel relation. Finally commentary adverbs are indicative of discourseinternal relations since they indicate deviations from purely factual reporting.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clause type and status", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In order to capture event contingency between clauses (which is typical for temporal and causal relations, but not for contrastive relations), we included both referential and lexico-semantical indicators: the compatible subject pronoun feature indicates that the subject of one clause is a compatible antecedent for the subject of the other clause (which, due to parallelism and subject preference, is a relatively robust indicator for the subjects being coreferential). In this context, morphological compatibility is relatively simple to derive from the morphological tags in the treebank (which include number and grammatical gender), but it would be expected that the same information can be reliably derived from the output of a morphological analyzer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clause type and status", |
|
"sec_num": "4.1" |
|
}, |
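The morphological compatibility check behind the compatible subject pronoun feature can be sketched as follows (the dict-based tag format is our assumption, not the treebank's actual encoding):

```python
def compatible_subjects(morph_a, morph_b):
    """Compatible-subject check: number and grammatical gender of the two
    subjects must not clash; an unspecified value never clashes."""
    for feat in ("number", "gender"):
        a, b = morph_a.get(feat), morph_b.get(feat)
        if a is not None and b is not None and a != b:
            return False
    return True
```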
|
{ |
|
"text": "In general, targeting specific linguistic properties of the clauses linked by the connective will provide crucial information in some cases (as, for example, the co-temporal reading of w\u00e4hrend can be excluded when tenses disagree), but is not sufficient when the choice of discourse relation is influenced by the kind of event that is denoted by the argument clauses, or more general aspects of their meaning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shallow lexical-semantical features", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Some predicates occur often enough to be used as a generalization, and often provide either linguistic hints (in the case of verbs that are typically individual-level, rather than stage-level predicates and would not be located or be used to locate temporally, e.g. exist) or are typically thought of as causer, or causee, of an event (as, e.g., crash is more likely to be the result or explanation to another event than fly). The semantic head feature includes the semantic head (i.e., main verb) of each clause, which can provide this kind of information where the main verb is informative and occurs often enough in the training data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shallow lexical-semantical features", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Since most predicates are not frequent enough to occur in a significant number, we need informative statistics that can uncover relevant aspects of their meaning. One such distributional statistic considers the type of (sub-)clauses in which verbs typically appear: verbs such as require, suspect, or fear often occur as part of a because clause, while arrest, resign or conclude often occur as part of a after adverbial clause. Bethard and Martin (2008) , who use this strategy for the prediction In the case of German, morphological flexibility and the verb order in subclauses mean that it is necessary to consider a larger context. For the association feature in our experiments, we extracted counts from subclause occurrences in the DE-WaC corpus (Baroni and Kilgariff, 2006) using the subordinating conjunctions bevor (before), nachdem (after/as/since), weil (because) and obwohl (although). Using (local) pointwise mutual information (MI) scores, each pair of conjunction and verb lemma is assigned binary features indicating whether it has a negative score, or the quantile of lemmas for that connective, according to positive MI values. The lexical relation feature targets pairs of words across both clauses that are taxonomically related and thus could form a contrast pair. As an example, consider the current regulations occurring in one clause and the new law in the other, which would yield a pair of time-related adjectives current-new, and a pair regulation-law of concepts that are both hyponyms of prescription/rule (cf. figure 1) . To find these pairs of taxonomically related concepts, we use the hyperonymy hierarchy in GermaNet 5.0 (Kunze and Lemnitzer, 2002) to produce the least common subsumer of two terms plus two superordinate terms. For adjectives and verbs, requiring a least common subsumer always yields related pairs. 
In contrast, the upper levels of the noun hierarchy are very general, and we ensure that only related pairs are used by ignoring the upper three levels of the noun hierarchy for this feature.", |
|
"cite_spans": [ |
|
{ |
|
"start": 429, |
|
"end": 454, |
|
"text": "Bethard and Martin (2008)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 752, |
|
"end": 780, |
|
"text": "(Baroni and Kilgariff, 2006)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1655, |
|
"end": 1682, |
|
"text": "(Kunze and Lemnitzer, 2002)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1540, |
|
"end": 1549, |
|
"text": "figure 1)", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Shallow lexical-semantical features", |
|
"sec_num": "4.2" |
|
}, |
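The quantile-binned association feature can be sketched as below, assuming PMI scores per (conjunction, verb lemma) have already been computed; the score table and lemma values here are hypothetical:

```python
def assoc_features(pmi, conj, lemma, n_quantiles=4):
    """Association feature: for a (conjunction, verb-lemma) pair, emit a
    'neg' indicator for a negative PMI score, otherwise the lemma's
    quantile among the positive-PMI lemmas of that conjunction."""
    score = pmi[conj].get(lemma)
    if score is None:
        return set()
    if score <= 0:
        return {f"{conj}:neg"}
    positives = sorted(s for s in pmi[conj].values() if s > 0)
    q = positives.index(score) * n_quantiles // len(positives)
    return {f"{conj}:q{q}"}

# hypothetical PMI scores for verbs in weil ('because') subclauses
pmi = {"weil": {"fordern": 1.2, "festnehmen": -0.4, "vermuten": 2.1,
                "fliegen": 0.3, "abstuerzen": 1.8}}
assoc_features(pmi, "weil", "festnehmen")  # {'weil:neg'}
```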
|
{ |
|
"text": "Another, shallower way of representing the relation(s) between the words in each argument clause has proven to be effective in research on unlabeled relations: The pairs of lemmas feature extracts Table 2 : Results for w\u00e4hrend pairs of lemmas occurring across the two argument clauses. On one hand, this feature can detect co-taxonomic pairs such as current-new or risefall (as well as nontaxonomic relations such as accident-injured) whenever these occur very frequently. On the othe hand, such a feature can also uncover the presence of a personal pronouns, or two definite articles, in each of both clauses, or particular adjectives. Among all pairs of lemmas, we only select those that occur at least 5 times in the training data, and select the 500 most 'interesting' the by using overall entropy as a selection criterion. Using entropy in this way serves to exclude very frequent word pairs (which occur in -nearly -every pair of clauses that has been seen) as well as very infrequent ones.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 197, |
|
"end": 204, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Shallow lexical-semantical features", |
|
"sec_num": "4.2" |
|
}, |
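The frequency-plus-entropy selection of lemma pairs can be sketched as follows (a minimal illustration; the instance format of paired lemma lists is our assumption):

```python
from collections import Counter
from math import log

def select_pairs(instances, min_count=5, top_k=500):
    """Select cross-clause lemma pairs: keep pairs seen at least min_count
    times, then rank by the entropy of the indicator 'pair occurs in this
    clause pair', which filters out both near-ubiquitous and very
    infrequent pairs."""
    n = len(instances)
    counts = Counter(
        (a, b) for left, right in instances for a in set(left) for b in set(right)
    )

    def entropy(c):
        p = c / n
        if p in (0.0, 1.0):
            return 0.0
        return -(p * log(p) + (1 - p) * log(1 - p))

    frequent = [pair for pair, c in counts.items() if c >= min_count]
    frequent.sort(key=lambda pair: entropy(counts[pair]), reverse=True)
    return frequent[:top_k]
```

A pair occurring in every instance has entropy 0 and is ranked last, while a pair occurring in about half of them is ranked first.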
|
{ |
|
"text": "In order to account for structure, we include the productions feature, which is based on nonterminal and preterminal productions (e.g., NX \u2192 ART ADJX NN for an NP with a determiner, an adjective and a noun, or ART \u2192 der for der occurring as a determiner). Among those productions that occur in at least 500 of the clause pairs, the 500 with the highest entropy are used (filtering out those that are very rare, or frequent enough to appear in nearly each sentence).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structural information", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "An overview on the evaluation results for w\u00e4hrend and nachdem is provided in tables 2 and 3, whereas table 4 contains more detail on the impact of each feature. In general, all of the evaluation metrics (cf. section 3.1) are improved by the rich set of features. Fine-grained accuracy (dice [2] and dice [3] ) benefits more by the ranking-based CRF approach, and the best coarse-grained accuracy (eq[1] and dice [1] ) is achieved by the greedy SVM classification.", |
|
"cite_spans": [ |
|
{ |
|
"start": 291, |
|
"end": 294, |
|
"text": "[2]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 304, |
|
"end": 307, |
|
"text": "[3]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 412, |
|
"end": 415, |
|
"text": "[1]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Impact of Features", |
|
"sec_num": "5" |
|
}, |
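As a hedged sketch of these aggregate measures (their exact definitions are in the paper's section 3.1, outside this excerpt): dice[k] averages the Dice overlap between predicted and gold relation sets after truncating each label to taxonomy level k, and eq[k] is strict set equality at that level. The dot-separated label encoding below is our assumption:

```python
def dice_coefficient(pred, gold):
    """Dice overlap between predicted and gold label sets."""
    if not pred and not gold:
        return 1.0
    return 2.0 * len(pred & gold) / (len(pred) + len(gold))

def truncate(label, level):
    """Map e.g. 'temporal.contrast' to its level-1 prefix 'temporal'."""
    return ".".join(label.split(".")[:level])

def aggregate(instances, level, strict=False):
    """Average dice[level] (or eq[level] when strict=True) over a list of
    (predicted_labels, gold_labels) pairs."""
    scores = []
    for pred, gold in instances:
        p = {truncate(l, level) for l in pred}
        g = {truncate(l, level) for l in gold}
        scores.append(float(p == g) if strict else dice_coefficient(p, g))
    return sum(scores) / len(scores)
```

Under this scheme, a prediction of temporal.contrast against gold temporal.cause scores 1.0 at the coarse level (both truncate to temporal) but 0.0 at the fine level, which is how coarse- and fine-grained accuracy can diverge.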
|
{ |
|
"text": "Due to space reasons, we limited the feature analysis in table 4 to feature sets containing either (i) base features plus any single feature, or (ii) all but a single one of the features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Impact of Features", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "As can be seen in the table, the most difficult relations to identify are minority relations such as contrast, parallel, evidence, and speechact. Speech-act is rare enough that no better-thanbaseline feature set ever produces it. In contrast, the best feature set achieves F-measures of 0.41 (contrast), 0.39 (parallel) and 0.33 (evidence) on these relations, with precision values between 0.33 (evidence) and 0.36 (contrast), and recall values between 0.33 (evidence) and 0.47 (contrast). Considering that these relations are quite rare (the most frequent of them, contrast, occurs in 5.8% of the nachdem instances),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Impact of Features", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The feature that has most impact by itself is the presence of modifying adverbials (mod.adv.), especially for parallel and cause relations. The association feature (assoc) is the most effective in identifying cause and evidence relations, as it provides information on kinds of events that a verbs refers to. Co-occurrence of a verb in the subor main clause with the introducing or modifying connective can help to distinguish temporallylocating events (which can, e.g., occur in before subclauses), or states of affairs that can serve as a reason for something (which would occur in because or although subclauses).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Impact of Features", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Both of the shallow features, productions and lemma pairs (wordpairs) have a relatively broad effect and lead to successful identification of some of the minority relations (cause, contrast, evidence). However, they are noisy enough that overall performance drops below the baseline (in the case of word pairs, the dice measure for the finer taxonomy level and strict equality seem to improve, however).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Impact of Features", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In the reverse feature selection, however, we see that the noisy information brought in by the shallow lexical features (productions and wordpairs) is quite useful: performance drops very visibly without these features (0.844 to 0.835 for removing the productions feature, to 0.817 for wordpairs).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Impact of Features", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Looking at the learning curves (for the full feature set minus the assoc feature), in figure 2, we find that the identification of cause and enable relations seems to be relatively robust to sparse data problem, as the improvement from 20% of training data (i.e., randomly subsampling each train- ing fold to 20% of its size) to the complete data only yields limited improvement, whereas relations such as evidence, contrast and parallel seem to profit strongly from more data (which is understandable, however, since these relations are less frequent than the others).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Impact of Features", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Although the annotated instances stem from a relatively large corpus (slightly over one million words), it seems very plausible that larger training data would benefit the disambiguation results. For connective annotation on a fixed-size corpus (such as the T\u00fcBa-D/Z, or the Penn Treebank used for the Penn Discourse Treebank), combining the benefits of connective-specific and non-specific disambiguation would be especially relevant, as the former allows to model the specific connective meaning, whereas connective-independent models would be less sensitive to sparse data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Impact of Features", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We carried out multilabel tagging experiments on two datasets: one containing occurrences of nach-dem from the T\u00fcBa-D/Z corpus (shown in table 3), one containing occurrences of w\u00e4hrend, using 10-fold cross-validation on the training set. For both the CRF-based approach and the SVM-based one-versus-all reduction, the best-performing feature set we found contains all features minus the association feature.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summary", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "For both nachdem and w\u00e4hrend, the most frequent sense (Temporal+enable, or Tempo-ral+contrast) is by far predominant and yields a very strong baseline, which the CRF-based classifier only surpasses for nachdem with an appropriate setting for the bias parameter to prevent the classifier from under-labeling (i.e., assigning fewer relations than optimal). Both the biased CRF classifier and the greedy SVM-based approach outperform the most-frequent sense baseline for all aggregate measures, which is more difficult for the top level of the taxonomy where one single coarse-grained relation combination often accounts for over 50% of all instances.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summary", |
|
"sec_num": "6" |
|
}, |
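The bias parameter can be pictured as a constant added to every label's score before the emit/do-not-emit decision. The decision rule below is our illustration of the idea, not the paper's exact CRF decoding:

```python
def predict_labels(scores, bias=0.0):
    """Multilabel decision with an additive bias: a positive bias lowers the
    effective emission threshold and so counteracts under-labeling.
    scores maps each relation label to a classifier score; at least one
    label (the top-scoring one) is always emitted."""
    chosen = {label for label, s in scores.items() if s + bias > 0.0}
    if not chosen:
        chosen = {max(scores, key=scores.get)}
    return chosen
```

With bias = 0 only confidently positive labels are emitted; raising the bias lets borderline second relations (e.g. an inferred enable alongside temporal) through.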
|
{ |
|
"text": "To our knowledge, this study is the first successful study on disambiguating German connectives, after the results of (Bayerl, 2004) who studied the explicit connective wenn (if/when), which stay further below the most-frequent sense baseline. We take this to confirm the intuition that problems in large-scale discourse classification, including those thought to be unrewarding such as ambiguous explicit connectives, are best tackled with a combination of an annotation scheme that is appropriate to the task (i.e., focused on coherence relations rather than speaker intentions), informative features, and a machine learning approach that can make use of these features to reproduce all the distinctions that are present in the annotation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 118, |
|
"end": 132, |
|
"text": "(Bayerl, 2004)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summary", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We also hope that the general direction of (i) reproducing all of the information present in the gold annotation and (ii) using a rich set of features for the disambiguation of ambiguous explicit con-", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summary", |
|
"sec_num": "6" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": " (cl. order, tense, neg.) 0.829 0.789 0.678 0.541 0.000 0.000 0.000 0.752 0.054 0.485 0.000 0.000 0.968 base + assoc 0.809 0.768 0.676 0.507 0.075 0.000 0.067 0.728 0.338 0.477 0.276 0.000 0.968 base + csubj 0.829 0.789 0.678 0.541 0.000 0.000 0.000 0.751 0.073 0.488 0.000 0.000 0.968 base + sem.head 0.829 0.789 0.680 0.541 0.000 0.000 0.000 0.752 0.103 0.485 0.000 0.000 0.968 base + lexrel 0.829 0.789 0.678 0.541 0.000 0.000 0.000 0.752 0.133 0.485 0.000 0.000 0.968 base + mod.adv. Table 4 : Impact of features (for nachdem, SVMperf) nectives will be a fruitful direction for discourse relation disambiguation also in other languages than German.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1, |
|
"end": 25, |
|
"text": "(cl. order, tense, neg.)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 488, |
|
"end": 495, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Large linguistically-processed web corpora for multiple languages", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Kilgariff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Baroni, M. and Kilgariff, A. (2006). Large linguistically-processed web corpora for multi- ple languages. In EACL 2006.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Disambiguierung deutschsprachiger Diskursmarker: Eine Pilot-Studie. Linguistik Online", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Bayerl", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bayerl, P. S. (2004). Disambiguierung deutschsprachiger Diskursmarker: Eine Pilot-Studie. Linguistik Online, 18.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Learning semantic links from a corpus of parallel temporal and causal relations", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Bethard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Martin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "ACL/HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bethard, S. and Martin, J. (2008). Learning se- mantic links from a corpus of parallel temporal and causal relations. In ACL/HLT 2008.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A support vector method for multivariate performance measures", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Joachims", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the International Conference on Machine Learning (ICML)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joachims, T. (2005). A support vector method for multivariate performance measures. In Pro- ceedings of the International Conference on Machine Learning (ICML).", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "GermaNet -representation, visualization, application", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Kunze", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Lemnitzer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kunze, C. and Lemnitzer, L. (2002). GermaNet -representation, visualization, application. In Proceedings of LREC 2002.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Recognizing implicit discourse relations in the Penn Discourse Treebank", |
|
"authors": [ |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M.-Y", |
|
"middle": [], |
|
"last": "Kan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lin, Z., Kan, M.-Y., and Ng, H. T. (2009). Recog- nizing implicit discourse relations in the Penn Discourse Treebank. In EMNLP 2009.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "An unsupervised approach to recognizing discourse relations", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Echihabi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marcu, D. and Echihabi, A. (2002). An unsuper- vised approach to recognizing discourse rela- tions. In ACL 2002.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Experiments on sense annotations and sense disambiguation of discourse connectives", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Miltsakaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Dinesh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Prasad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Webber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "TLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Miltsakaki, E., Dinesh, N., Prasad, R., Joshi, A., and Webber, B. (2005). Experiments on sense annotations and sense disambiguation of dis- course connectives. In TLT 2005.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Maximum entropy estimation for feature forests", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Miyao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Miyao, Y. and Tsujii, J. (2002). Maximum entropy estimation for feature forests. In HLT 2002.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Automatic sense prediction for implicit discourse relations in text", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Pitler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Louis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Nenkova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pitler, E., Louis, A., and Nenkova, A. (2009). Au- tomatic sense prediction for implicit discourse relations in text. In ACL-IJCNLP 2009.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Using syntax to disambiguate explicit discourse connectives in text", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Pitler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Nenkova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "ACL 2009 short papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pitler, E. and Nenkova, A. (2009). Using syntax to disambiguate explicit discourse connectives in text. In ACL 2009 short papers.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "The Penn Discourse Treebank 2.0", |
|
"authors": [], |
|
"year": 2008, |
|
"venue": "Proceedings of LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "The Penn Discourse Treebank 2.0. In Proceed- ings of LREC 2008.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Sentence level discourse parsing using syntactic and lexical information", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Soricut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proc. HLT/NAACL-2003", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Soricut, R. and Marcu, D. (2003). Sentence level discourse parsing using syntactic and lexical in- formation. In Proc. HLT/NAACL-2003.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Using automatically labelled examples to classify rhetorical relations: An assessment", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Sporleder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Lascarides", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Natural Language Engineering", |
|
"volume": "14", |
|
"issue": "3", |
|
"pages": "369--416", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sporleder, C. and Lascarides, A. (2008). Using au- tomatically labelled examples to classify rhetor- ical relations: An assessment. Natural Lan- guage Engineering, 14(3):369-416.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Stylebook for the T\u00fcbingen Treebank of Written German (T\u00fcBa-D/Z)", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Telljohann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Hinrichs", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "K\u00fcbler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Zinsmeister", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Beck", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Seminar f\u00fcr Sprachwissenschaft", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Telljohann, H., Hinrichs, E. W., K\u00fcbler, S., Zins- meister, H., and Beck, K. (2009). Stylebook for the T\u00fcbingen Treebank of Written German (T\u00fcBa-D/Z). Technical report, Seminar f\u00fcr Sprachwissenschaft, Universit\u00e4t T\u00fcbingen.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Scalable discriminative parsing for German", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Versley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Rehbein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proc. IWPT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Versley, Y. and Rehbein, I. (2009). Scalable dis- criminative parsing for German. In Proc. IWPT 2009.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Sequence Models and Ranking Methods for Discourse Parsing", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Wellner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wellner, B. (2009). Sequence Models and Rank- ing Methods for Discourse Parsing. PhD thesis, Brandeis University.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "Lexical relation feature of causal and temporal readings of and, are able to use n-gram search for such frequency statistics.", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"text": "Learning curves for single relations (nachdem only)", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"content": "<table/>", |
|
"text": "1) a. As [ arg2 individual investors have turned away from the stock market over the years], [ arg1 securities firms have scrambled to find new products that brokers find easy to sell]. b. [ arg1 \"Forget it,\" he said] as [ arg2 he handed her a paper]. c. But as [ arg2 the French embody a Zenlike state of blase when it comes to athletics] (try finding a Nautilus machine in Paris), [ arg1 my fellow conventioners were having none of it].", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"content": "<table/>", |
|
"text": "", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"content": "<table><tr><td>0,5</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>0,4</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>0,3</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>0,2</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>0,1</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>0</td><td>20</td><td>30</td><td>40</td><td>50</td><td>60</td><td>70</td><td>80</td><td>90</td><td>100</td></tr><tr><td/><td/><td>cause</td><td/><td>enable</td><td/><td>evidence</td><td/><td>contrast</td><td/></tr><tr><td/><td/><td>parallel</td><td/><td/><td/><td/><td/><td/><td/></tr></table>", |
|
"text": "Results for nachdem", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null |
|
} |
|
} |
|
} |
|
} |