{
"paper_id": "R11-1030",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:04:48.191195Z"
},
"title": "Temporal Relation Extraction Using Expectation Maximization",
"authors": [
{
"first": "Seyed",
"middle": [
"Abolghasem"
],
"last": "Mirroshandel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sharif University or Technology",
"location": {
"country": "Iran"
}
},
"email": "mirroshandel@ce.sharif.edu"
},
{
"first": "Gholamreza",
"middle": [],
"last": "Ghassem-Sani",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sharif University or Technology",
"location": {
"country": "Iran"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The ability to accurately determine temporal relations between events is an important task for several natural language processing applications such as Question Answering, Summarization, and Information Extraction. Since current supervised methods require large corpora, which for many languages do not exist, we have focused our attention on approaches with less supervision as much as possible. This paper presents a fully generative model for temporal relation extraction based on the expectation maximization (EM) algorithm. Our experiments show that the performance of the proposed algorithm, regarding its little supervision, is considerable in temporal relation learning.",
"pdf_parse": {
"paper_id": "R11-1030",
"_pdf_hash": "",
"abstract": [
{
"text": "The ability to accurately determine temporal relations between events is an important task for several natural language processing applications such as Question Answering, Summarization, and Information Extraction. Since current supervised methods require large corpora, which for many languages do not exist, we have focused our attention on approaches with less supervision as much as possible. This paper presents a fully generative model for temporal relation extraction based on the expectation maximization (EM) algorithm. Our experiments show that the performance of the proposed algorithm, regarding its little supervision, is considerable in temporal relation learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Lately, the increasing attention to the practical NLP applications such as question answering, information extraction, and summarization have resulted in a growing demand of temporal information processing (Tatu and Srikanth, 2008) . In question answering, one may expect the system to answer questions such as \"when an event occurred\", or \"what is the chronological order of some desired events\". In text summarization, especially in the multi-document type, knowing the order of events is a useful source of correctly merging related information.",
"cite_spans": [
{
"start": 206,
"end": 231,
"text": "(Tatu and Srikanth, 2008)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unlike problems such as part-of-speech tagging, morphological analysis, parsing, and named entity recognition which have been recently addressed with satisfactory results by combining statistical and symbolic methods (Mani et al., 2006) , temporal relation extraction that requires deeper semantic analysis are yet to be worked on. One of recent efforts has disclosed that this task is a complicated task, even for human annotators (Mani et al., 2006) .",
"cite_spans": [
{
"start": 217,
"end": 236,
"text": "(Mani et al., 2006)",
"ref_id": "BIBREF15"
},
{
"start": 432,
"end": 451,
"text": "(Mani et al., 2006)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Based on the type of corpora that different temporal relation learning methods use, these methods are divided into three major categories: supervised, semi-supervised, and unsupervised. Supervised methods normally rely on the correct temporal relations of training sentences of a manually tagged corpus. Semi-supervised methods often rely on a partially tagged corpus and need less supervision. Finally, unsupervised methods rely only on raw sentences without any temporal relation annotation. It is obvious that producing the necessary training data (corpora) of supervised and to a less extent semisupervised methods is a time consuming, hard, and expensive work. Besides, it is very difficult to adapt such methods for new tasks, languages, and/or domains. Consequently, it is in fact the corpus availability that directs the research in this area. For mentioned reasons, we have focused on unsupervised and weakly supervised temporal relation learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper presents a novel usage of expectation maximization (EM) algorithm for temporal relation learning. The algorithm also employs Allen's interval algebra (Allen, 1984) . Our experiments show that the performance of the proposed algorithm is acceptable with respect to little usage of tagged corpora which is used.",
"cite_spans": [
{
"start": 161,
"end": 174,
"text": "(Allen, 1984)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of the paper is organized as follows: section 2 is about previous works on temporal relation extraction. Section 3 explains our proposed method. Section 4 briefly presents the characteristic of the corpora that we have used. Section 5 demonstrates the evaluation of the proposed algorithm. Finally, section 6 includes our conclusions and some possible future works.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For a given ordered pair of components (x 1 , x 2 ), where x 1 and x 2 are times and/or events, a temporal information processing system identifies the type of relation that temporally links x 1 to x 2 . The relation type can for instance be one of the 14 types proposed in TimeML (Pustejovsky et al., 2003) . For example, in \"If all the debt is converted (e 7 ) to common, Automatic Data will issue (e 8 ) about 3.6 million shares; last Monday (t 24 ), the company had (e 25 ) nearly 73 million shares outstanding.\", taken from document wsj_0541 of TimeBank (Pustejovsky et al., 2003) , there are two temporal relations between pairs (e 7 , e 8 ) and (t 24 , e 25 ). The task of temporal relation extraction is to automatically tag these pairs respectively with the BEFORE and INCLUDES relations.",
"cite_spans": [
{
"start": 281,
"end": 307,
"text": "(Pustejovsky et al., 2003)",
"ref_id": "BIBREF21"
},
{
"start": 559,
"end": 585,
"text": "(Pustejovsky et al., 2003)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Relation Extraction",
"sec_num": "2"
},
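The annotated pairs described above can be sketched as a small data structure; the `EventPair` class and the way relations are attached are illustrative assumptions, not part of the paper:

```python
from dataclasses import dataclass

# Hypothetical container for an ordered pair (x1, x2) of events/times and the
# TimeML TLink type that links x1 to x2.
@dataclass(frozen=True)
class EventPair:
    x1: str          # event or time expression id, e.g. "e7"
    x2: str          # e.g. "e8"
    relation: str    # one of the TimeML TLink types

# The two gold links from the wsj_0541 example quoted above.
gold = [
    EventPair("e7", "e8", "BEFORE"),
    EventPair("t24", "e25", "INCLUDES"),
]

for p in gold:
    print(f"{p.x1} -{p.relation}-> {p.x2}")
```

The extraction task is then to predict `relation` for each pair whose label is hidden.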
{
"text": "There are numerous ongoing researches focused on temporal relation extraction. Existing methods of temporal relation learning, which are mainly fully supervised, can be divided into three categories: 1) Pattern based; 2) Rule based, and 3) Anchor based. These categories are respectively discussed in the next three subsections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2.1"
},
{
"text": "Pattern based methods extract some generic lexico-syntactic patterns for events cooccurrence. Extracting such patterns can be done manually or automatically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pattern Based Methods",
"sec_num": null
},
{
"text": "Perhaps the simplest pattern based method is the one that was developed using a knowledge resource called VerbOcean (Chklovski and Pantel, 2005) . VerbOcean has a small number of manually selected generic patterns. The style of patterns is in the form of <Verb-X> and then <Verb-Y>. Similar to other manual methods, a major drawback of this method is its tendency to have a high recall but a low precision. Several heuristics have been proposed to resolve the low precision problem (Chklovski and Pantel, 2005; Torisawa, 2006) .",
"cite_spans": [
{
"start": 116,
"end": 144,
"text": "(Chklovski and Pantel, 2005)",
"ref_id": "BIBREF10"
},
{
"start": 482,
"end": 510,
"text": "(Chklovski and Pantel, 2005;",
"ref_id": "BIBREF10"
},
{
"start": 511,
"end": 526,
"text": "Torisawa, 2006)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pattern Based Methods",
"sec_num": null
},
{
"text": "On the other hand, automatic methods try to learn a classifier from an annotated corpus, and attempt to improve classification accuracy by feature engineering. MaxEnt classifier is an example of this group (Mani et al., 2006) . The state of the art of supervised methods in this group is very similar to the MaxEnt classifier (Chambers et al., 2007) . This classifier tries to learn event attributes and event-event features in two consecutive stages. It also uses WordNet to find words' synsets.",
"cite_spans": [
{
"start": 206,
"end": 225,
"text": "(Mani et al., 2006)",
"ref_id": "BIBREF15"
},
{
"start": 326,
"end": 349,
"text": "(Chambers et al., 2007)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pattern Based Methods",
"sec_num": null
},
{
"text": "Some of researches on pattern based temporal relation classification only work on corpora with specific characteristics, rather than general corpora such as TimeBank (Bethard and Martin, 2008; Bethard et al, 2007a; Lapata and Lascarides 2006; Bethard et al, 2007b; Bethard, 2007) . There are also algorithms that work on only limited types of relations (Lapata and Lascarides 2006; Bethard, 2007; Bethard and Martin, 2007; Chambers and Jurafsky, 2008) .",
"cite_spans": [
{
"start": 166,
"end": 192,
"text": "(Bethard and Martin, 2008;",
"ref_id": "BIBREF5"
},
{
"start": 193,
"end": 214,
"text": "Bethard et al, 2007a;",
"ref_id": "BIBREF1"
},
{
"start": 215,
"end": 242,
"text": "Lapata and Lascarides 2006;",
"ref_id": "BIBREF13"
},
{
"start": 243,
"end": 264,
"text": "Bethard et al, 2007b;",
"ref_id": "BIBREF2"
},
{
"start": 265,
"end": 279,
"text": "Bethard, 2007)",
"ref_id": "BIBREF4"
},
{
"start": 353,
"end": 381,
"text": "(Lapata and Lascarides 2006;",
"ref_id": "BIBREF13"
},
{
"start": 382,
"end": 396,
"text": "Bethard, 2007;",
"ref_id": "BIBREF4"
},
{
"start": 397,
"end": 422,
"text": "Bethard and Martin, 2007;",
"ref_id": "BIBREF3"
},
{
"start": 423,
"end": 451,
"text": "Chambers and Jurafsky, 2008)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pattern Based Methods",
"sec_num": null
},
{
"text": "In another work, a weakly-supervised algorithm was proposed to classify temporal relation between events (Mirroshandel and Ghassem-Sani, 2010). In that work, it was shown that by applying a bootstrapping technique to some unlabeled documents that were related to the test documents and without any additional annotated data, temporal relations can be classified with satisfactory results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pattern Based Methods",
"sec_num": null
},
{
"text": "The common idea behind rule based methods is to design a number of rules for classifying temporal relations. In most existing works, these rules, which are manually defined, are based on Allen's interval algebra (Allen, 1984) . One usage of these rules is enlarging the training set (Mani et al., 2006) . Reasoning about the certainty of predicted temporal relations is the other utilization of these rules.",
"cite_spans": [
{
"start": 212,
"end": 225,
"text": "(Allen, 1984)",
"ref_id": "BIBREF0"
},
{
"start": 283,
"end": 302,
"text": "(Mani et al., 2006)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Based Methods",
"sec_num": null
},
{
"text": "Anchor based methods use information of argument fillers (called anchors) of every event expression as a valuable clue for recognizing temporal relations. These methods rely on the distributional hypothesis (Harris, 1968) , and by looking at a set of event expressions whose argument fillers have a similar distribution, try to recognize synonymous event expressions. Algorithms such as DIRT (Lin and Pantel, 2001) , TE/ASE (Szpektor et al., 2004) , and that of Pekar's system (Pekar, 2006) are examples of anchor based methods.",
"cite_spans": [
{
"start": 207,
"end": 221,
"text": "(Harris, 1968)",
"ref_id": "BIBREF11"
},
{
"start": 392,
"end": 414,
"text": "(Lin and Pantel, 2001)",
"ref_id": "BIBREF14"
},
{
"start": 424,
"end": 447,
"text": "(Szpektor et al., 2004)",
"ref_id": "BIBREF22"
},
{
"start": 477,
"end": 490,
"text": "(Pekar, 2006)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Anchor Based Methods",
"sec_num": null
},
{
"text": "Due to appropriate results of the expectation maximization (EM) algorithm in some unsupervised tasks of natural language processing such as unsupervised grammar induction (Klein, 2005) , unsupervised anaphora resolution (Cherry and Bergsma, 2005; Charniak and Elsner, 2009) , and unsupervised coreference resolution (Ng, 2008) , we decided to apply EM to temporal relation extraction. Currently, there is no reported work in temporal relation extraction based on EM. Here, we explain how EM can be successfully applied to the task of temporal relation extraction and show that the results are notable in this task. Before that, we first introduce definitions and notations that will be later used in subsequent sections.",
"cite_spans": [
{
"start": 171,
"end": 184,
"text": "(Klein, 2005)",
"ref_id": "BIBREF12"
},
{
"start": 220,
"end": 246,
"text": "(Cherry and Bergsma, 2005;",
"ref_id": "BIBREF9"
},
{
"start": 247,
"end": 273,
"text": "Charniak and Elsner, 2009)",
"ref_id": "BIBREF8"
},
{
"start": 316,
"end": 326,
"text": "(Ng, 2008)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Using EM for Temporal Relation Learning",
"sec_num": "3"
},
{
"text": "In temporal relation learning, system must be able to determine temporal relation r between two events e 1 and e 2 . Here, we assume that events are annotated and the learner must find out the relation type r. In general, the relation type can be one of the 14 types proposed in TimeML (Pustejovsky et al., 2003) plus relation NONE (which indicates there is no temporal relation between respected pair of events). In this paper, context means the sentence (or sentences) containing pairs of examined events.",
"cite_spans": [
{
"start": 286,
"end": 312,
"text": "(Pustejovsky et al., 2003)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "3.1"
},
{
"text": "The proposed algorithm operates at the corpus level, inducing valid temporal clustering for all event pairs of a given corpus. More specifically, our algorithm, over a corpus, works in two steps: first, according to some temporal clustering distribution P(TC), a temporal clustering TC is applied to the event pairs of the corpus, and then given that temporal clustering, the corpus is generated by using equation 1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) ( ) ( ) TC corpus P TC P TC corpus P | , =",
"eq_num": "(1)"
}
],
"section": "The Model",
"sec_num": "3.2"
},
{
"text": "To easily incorporate linguistic constraints defined on event pairs, corpus is represented by its event pairs, EventPairs(corpus). Now we can assume event pairs are independent and generated by using the following equation: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "3.2"
},
{
"text": "For inducing temporal relations, algorithm runs the EM algorithm on this model. We used a uniform distribution over P(TC).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "3.2"
},
{
"text": "If we expand the equations, each e i e j can be represented by its features, which can potentially be used for determining temporal relation type between events e i and e j . Therefore, P(corpus | TC) is rewritten using equation 4. Where e i e j l is the value of the l th feature of e i e j . These features, which are similar to those mentioned in (Chambers and Jurafsky, 2008) ",
"cite_spans": [
{
"start": 350,
"end": 379,
"text": "(Chambers and Jurafsky, 2008)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "3.2"
},
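The factorized likelihood described above can be sketched as follows; the feature names, probability tables, and the naive-Bayes style independence of features given the relation cluster are illustrative assumptions:

```python
import math

# Sketch: P(corpus | TC) is a product over event pairs, and each pair factors
# over its feature values given the pair's relation cluster. We work in log
# space to avoid underflow.
def log_likelihood(pairs, clustering, feature_probs):
    """pairs: list of {feature: value} dicts; clustering: relation per pair;
    feature_probs[r][(feat, val)] = P(feat = val | relation r)."""
    total = 0.0
    for feats, rel in zip(pairs, clustering):
        for feat, val in feats.items():
            total += math.log(feature_probs[rel][(feat, val)])
    return total

# Toy example with two features per pair and two relation clusters.
feature_probs = {
    "BEFORE":  {("tense", "past"): 0.6, ("pos", "VB"): 0.5,
                ("tense", "pres"): 0.4, ("pos", "NN"): 0.5},
    "OVERLAP": {("tense", "past"): 0.3, ("pos", "VB"): 0.5,
                ("tense", "pres"): 0.7, ("pos", "NN"): 0.5},
}
pairs = [{"tense": "past", "pos": "VB"}, {"tense": "pres", "pos": "NN"}]
ll = log_likelihood(pairs, ["BEFORE", "OVERLAP"], feature_probs)
print(ll)
```

EM then searches over clusterings to raise this quantity.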
{
"text": "To induce a temporal clustering TC on a corpus, EM was applied to our proposed model. In the EM algorithm, corpus (its event pairs) and temporal clustering TC are respectively the observed and unobserved (the hidden) random variables. The EM algorithm includes the following two steps to iteratively estimate the parameters of the model, \u03b8: E-step: Fix current \u03b8 and obtain the conditional temporal clustering likelihoods P(TC.| corpus, \u03b8). As a result, for each event pair candidate, a temporal relation type will be selected based on current \u03b8. Due to inability to consider other relations in pairwise relation learning, some contradictions will be introduced in this step. For example, figure 1 shows an inconsistency in the relations between following events: Figure 1 : A contradiction in temporal relations between three events A, B, and C.",
"cite_spans": [],
"ref_spans": [
{
"start": 689,
"end": 697,
"text": "figure 1",
"ref_id": null
},
{
"start": 764,
"end": 772,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Induction Algorithm",
"sec_num": "3.3"
},
{
"text": "There are several ways for eliminating such inconsistencies (Mani et al., 2007; Tatu and Srikanth, 2008; 2008) . In this paper, we propose a best-first greedy search strategy for temporal reasoning and removing inconsistencies among predicted relations. First the contradictions in the connected graphs of the text will be discovered with applying a set of rules (e.g., Before(x, y) ^ Before(y, z) \u2192 Before(x, z)), which are based on Allen's interval algebra (1984) . Then the inconsistent relations of each connected graph will be sorted in a list named SL based on computed confidence score (P (TC | corpus, \u03b8) ).",
"cite_spans": [
{
"start": 60,
"end": 79,
"text": "(Mani et al., 2007;",
"ref_id": "BIBREF16"
},
{
"start": 80,
"end": 104,
"text": "Tatu and Srikanth, 2008;",
"ref_id": "BIBREF23"
},
{
"start": 105,
"end": 110,
"text": "2008)",
"ref_id": "BIBREF19"
},
{
"start": 459,
"end": 465,
"text": "(1984)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 596,
"end": 612,
"text": "(TC | corpus, \u03b8)",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Induction Algorithm",
"sec_num": "3.3"
},
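The contradiction-discovery step can be sketched with a single composition rule of the kind cited above (Before(x, y) ^ Before(y, z) → Before(x, z)); the graph encoding and relation names are illustrative, and a full system would use the complete Allen-style rule set:

```python
from itertools import product

# Sketch of contradiction detection over predicted pairwise relations: a
# triple (x, y, z) is contradictory when x < y and y < z are asserted but the
# graph also asserts z < x, i.e. the cycle of figure 1.
def find_contradictions(relations):
    """relations: dict mapping ordered pair (x, y) -> relation type."""
    before = {p for p, r in relations.items() if r == "BEFORE"}
    nodes = sorted({n for p in before for n in p})
    bad = []
    for x, y, z in product(nodes, repeat=3):
        if (x, y) in before and (y, z) in before \
                and relations.get((z, x)) == "BEFORE":
            bad.append((x, y, z))   # cycle A<B<C but C<A, as in figure 1
    return bad

rels = {("A", "B"): "BEFORE", ("B", "C"): "BEFORE", ("C", "A"): "BEFORE"}
print(find_contradictions(rels))
```

Each contradictory triple found this way feeds the confidence-sorted repair described next in the text.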
{
"text": "In SL, the first and the last elements are the most and the least confident relations, respectively. Now, the algorithm starts from the first relation of SL, and pops off this relation and adds it to another list named FL. In adding a new relation (r new ) to FL, the algorithm verifies the consistency between relations of FL. If r new is a relation between events e i and e j , which introduces an inconsistency into the graph, it will be replaced by the next confident relation between e i and e j . These replacements are repeated until FL relations will be consistent. When there are no more contradictions in FL, algorithm will try to move the next element of SL to FL. These operations are iterated until there will be no more relations in SL. Then the resultant consistent relations in FL can be used in the next stages of EM. M-step: Find \u03b8 new that maximizes the equation \u2211 TC P(TC | corpus, \u03b8 old ) log P(corpus, TC | \u03b8 new ) with fixed \u03b8 old . In order to predict \u03b8 new , different optimization algorithms such as conjugate gradient can be used. However, these methods are slow and costly. In addition, it is difficult to smooth these methods in a desired manner. Therefore, we used smoothed relative frequency estimates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Induction Algorithm",
"sec_num": "3.3"
},
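The SL-to-FL repair loop can be sketched as follows; this is a simplification under stated assumptions: the consistency check only enforces that a pair and its reversed pair do not both claim BEFORE, and the per-pair ranked alternatives are invented for illustration:

```python
# Best-first repair: pop relations from the confidence-sorted list SL into FL,
# and whenever a popped relation makes FL inconsistent, fall back to the next
# most confident alternative for that pair.
def inverse(rel):
    return {"BEFORE": "AFTER", "AFTER": "BEFORE"}.get(rel, rel)

def consistent(fl, pair, rel):
    # Inconsistent if FL already fixes the reversed pair to a conflicting type.
    x, y = pair
    return fl.get((y, x), inverse(rel)) == inverse(rel)

def repair(sl, alternatives):
    """sl: [(confidence, pair, relation)] sorted most confident first.
    alternatives[pair]: relation types ranked by confidence for that pair."""
    fl = {}
    for _, pair, rel in sl:
        for candidate in [rel] + [r for r in alternatives[pair] if r != rel]:
            if consistent(fl, pair, candidate):
                fl[pair] = candidate
                break
    return fl

# The less confident of two clashing BEFORE predictions gets demoted to AFTER.
sl = [(0.9, ("A", "B"), "BEFORE"), (0.8, ("B", "A"), "BEFORE")]
alts = {("A", "B"): ["BEFORE", "OVERLAP"], ("B", "A"): ["BEFORE", "AFTER"]}
print(repair(sl, alts))
```

The resulting consistent map plays the role of FL in the next EM iteration.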
{
"text": "Step or the M-step, which we start the induction algorithm at the M-step. It is clear that P(TC | corpus, \u03b8 old ) is not available in the first iteration of EM. Instead, an initial distribution over temporal clustering, P(TC | Corpus), can be used. Now, there is an important question: how should we initialize P(TC | Corpus)?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Now, the EM algorithm can either begin at the E-",
"sec_num": null
},
{
"text": "Initialization is an important task in EM, because EM only guarantees to find a local maximum of the likelihood. The quality of such a local maximum is highly dependent on the initial start point. We tested three different ways of initialization: first, we used a uniform distribution over all temporal clustering. Second, we used a small part of a labeled corpus for setting P(TC | Corpus). Third, we used some rules for initial estimation of temporal relation types and then used those types for the initial estimation to compute P(TC | Corpus). The detailed accounts of the second and the third methods are discussed in subsection 5.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Now, the EM algorithm can either begin at the E-",
"sec_num": null
},
{
"text": "Like many statistical NLP tasks in which smoothing is required to alleviate the problem of data sparseness, smoothing is vital here, too. In particular, in the first few iterations, much more smoothing is required than in later iterations. In our experiments, we used an additive smoothing technique.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Now, the EM algorithm can either begin at the E-",
"sec_num": null
},
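The smoothed relative-frequency M-step mentioned above can be sketched with additive (Laplace-style) smoothing; the smoothing constant `alpha` and the feature vocabulary are illustrative, and the text notes that heavier smoothing is needed in early iterations:

```python
from collections import Counter

# Additive smoothing of P(feature value | relation): every value in the
# vocabulary receives alpha pseudo-counts on top of its observed count.
def smoothed_estimates(observations, vocab, alpha=1.0):
    """observations: feature values seen with a given relation type."""
    counts = Counter(observations)
    total = len(observations) + alpha * len(vocab)
    return {v: (counts[v] + alpha) / total for v in vocab}

# Toy M-step update for one feature of one relation cluster: the unseen
# value "fut" still gets non-zero probability.
probs = smoothed_estimates(["past", "past", "pres"], vocab=["past", "pres", "fut"])
print(probs)
```

Raising `alpha` flattens the estimates, which matches the observation that early EM iterations benefit from heavier smoothing.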
{
"text": "In our experiments, we used two standard corpora which had been utilized in evaluation of most previous works: TimeBank (v. 1.2) and Opinion Corpus (Mani et al., 2006) . TimeBank includes 183 newswire documents and 64077 words, and Opinion Corpus comprises 73 documents with 38709 words. These two datasets have been annotated based on TimeML (Pustejovsky et al., 2003) . There are 14 temporal relations (Event-Event and Event-Time relations) in the TLink class of TimeML. Relation NONE, which indicates there is no temporal relation between respected event pairs, must also be considered. For the sake of alleviating the data sparseness problem, we used a converted version of these temporal relations, which contains only four following temporal relations:",
"cite_spans": [
{
"start": 148,
"end": 167,
"text": "(Mani et al., 2006)",
"ref_id": "BIBREF15"
},
{
"start": 343,
"end": 369,
"text": "(Pustejovsky et al., 2003)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Description",
"sec_num": "4"
},
{
"text": "As it was shown in (Bethard et al, 2007a) , it is easy to convert 14 TimeML relations into just BEFORE, AFTER, and OVERLAP relations. Here, we merged BEFORE and IBEFORE relations into only BEFORE relations. Similarly AFTER and IAFTER relations were also merged into AFTER relations. All the remaining 10 relation types were collapsed in OVERLAP relations. In our experiments, like several previous works, we merged Opinion and TimeBank to generate a single corpus, which is called OTC. Table 2 shows the converted TLink class distribution over TimeBank and OTC corpora for intrasentential and general (intra-and inter-sentential) event pairs which are situated in the same document.",
"cite_spans": [
{
"start": 19,
"end": 41,
"text": "(Bethard et al, 2007a)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 486,
"end": 493,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "BEFORE , AFTER , OVERLAP , NONE",
"sec_num": null
},
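The collapsing described above can be sketched as a simple mapping; the full TLink inventory listed here follows TimeML 1.2.1 and is an assumption, since the paper does not enumerate the 14 types:

```python
# BEFORE/IBEFORE -> BEFORE, AFTER/IAFTER -> AFTER, the remaining ten -> OVERLAP.
TIMEML_TLINKS = [
    "BEFORE", "AFTER", "IBEFORE", "IAFTER", "INCLUDES", "IS_INCLUDED",
    "DURING", "DURING_INV", "SIMULTANEOUS", "IDENTITY",
    "BEGINS", "BEGUN_BY", "ENDS", "ENDED_BY",
]

def collapse(tlink):
    if tlink in ("BEFORE", "IBEFORE"):
        return "BEFORE"
    if tlink in ("AFTER", "IAFTER"):
        return "AFTER"
    return "OVERLAP"

collapsed = {t: collapse(t) for t in TIMEML_TLINKS}
print(sorted(set(collapsed.values())))   # the three coarse classes
```

NONE is kept as a fourth class alongside these three in the experiments.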
{
"text": "Intra ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TimeBank Corpus OTC Corpus Relation Type",
"sec_num": null
},
{
"text": "We applied our algorithm to both TimeBank and OTC corpora, using the five-fold cross validation method. The results were evaluated by measuring accuracy. One important point that we should mention is the parameter initialization of EM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},
{
"text": "As it was mentioned in section 3.3, we used three different initializations: first, a uniform distribution over all temporal clustering was used; therefore, all temporal clustering in the first step had equal probability. Second, we used a small part of labeled corpora (10% of each relation type) for setting P(TC | Corpus).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},
{
"text": "Relations were selected randomly. Third, we used some rules for initial estimation of temporal relation types and used this initial estimation for computing P(TC | Corpus). The rules were the combination of GTag rules (Mani et al., 2006) , VerbOcean (Chklovski and Pantel, 2005) , and some rules derived from certain signal words (e.g., \"on\", \"during\", \"when\", and \"if\") of the text.",
"cite_spans": [
{
"start": 218,
"end": 237,
"text": "(Mani et al., 2006)",
"ref_id": "BIBREF15"
},
{
"start": 250,
"end": 278,
"text": "(Chklovski and Pantel, 2005)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},
{
"text": "As it is shown in table 2 (in General columns), NONE relations dwarf all other relations. As a result, temporal relation learning, because of heavy bias of learner to NONE relations, will be very hard (even useless). Regarding this problem, we set up two different types of experiments: 1) Algorithms were applied only for intrasentential event pairs, considering all relation types (including NONE). The results of these experiments are shown in table 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "5.2"
},
{
"text": "2) The NONE relations were removed, and algorithms were applied to both intra-sentential and general (intra-and inter-sentential) event pairs. Table 4 shows the results of experiments without considering NONE relations.",
"cite_spans": [],
"ref_spans": [
{
"start": 143,
"end": 150,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "5.2"
},
{
"text": "One important issue in the results of table 3 is that in our experiments, all four mentioned relation types (BEFORE, AFTER, OVERLAP, and NONE) have been considered, but in reporting the results, we have reported the aggregated accuracy of only BEFORE, AFTER, and OVERLAP relations, and excluded the accuracy results of NONE relations. That is because by considering NONE, one could design a simple system which tags all relations to NONE, and would get a very high accuracy. But, in that case the comparison would be inappropriate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "5.2"
},
{
"text": "In our evaluations, both table 3 and 4, the baselines have been the majority classes for event pair relations ignoring NONE relations of the evaluated corpora (i.e., BEFORE and OVERLAP relations as it is depicted in table 2). The Mani's method is in fact a supervised method which exclusively uses gold-standard features (Mani et al., 2007) . The Chambers' method is similar to Mani's, except that it uses some external resources such as WordNet (Chambers et al., 2007) . The Mani and Chambers results are different from (or even lower than) their reported results, because of two differences: first, we considered only three temporal relation types while in their experiments, there were six relation types. Second, the results of EM 1 , EM 2 , and EM 3 are the results of our proposed method with three different initializations. The initializations of EM 1 , EM 2 , and EM 3 were random, with little supervision (10%), and by using a number of rules, respectively. For EM 1 , one question is how this method can determine the label of different classes. In our experiments, EM 1 , depending on the type of experiment, only determines three or four different classes (Class 1 , Class 2 , Class 3 , and/or Class 4 ). To label these unlabeled classes, using annotated data, we assigned the labels in such a way that resulted in maximum similarity between predicted and annotated temporal relation types for each event pair.",
"cite_spans": [
{
"start": 321,
"end": 340,
"text": "(Mani et al., 2007)",
"ref_id": "BIBREF16"
},
{
"start": 446,
"end": 469,
"text": "(Chambers et al., 2007)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "5.2"
},
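The cluster-labeling step for EM 1 can be sketched as follows; the brute-force search over label permutations is one plausible realization of "maximum agreement" and is an assumption, as the paper does not specify the procedure:

```python
from itertools import permutations

# Assign each unlabeled EM cluster the relation label that maximizes agreement
# with the annotated data, by exhaustively trying label permutations.
def best_labeling(cluster_ids, gold_labels, labels):
    """cluster_ids[i] is the EM cluster of pair i; gold_labels[i] its gold
    relation. Returns the cluster -> label map with the most matches."""
    clusters = sorted(set(cluster_ids))
    best, best_hits = None, -1
    for perm in permutations(labels, len(clusters)):
        mapping = dict(zip(clusters, perm))
        hits = sum(mapping[c] == g for c, g in zip(cluster_ids, gold_labels))
        if hits > best_hits:
            best, best_hits = mapping, hits
    return best

# Toy run: three anonymous clusters against four annotated pairs.
gold = ["BEFORE", "BEFORE", "AFTER", "OVERLAP"]
clusters = [1, 1, 2, 3]
print(best_labeling(clusters, gold, ["BEFORE", "AFTER", "OVERLAP"]))
```

For the three-or-four-class setting of the experiments this exhaustive search is cheap (at most 4! permutations).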
{
"text": "In tables 3 and 4, the numbers inside parentheses show the results of our proposed algorithm without applying temporal reasoning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "5.2"
},
{
"text": "As it is shown in tables 3, all mentioned methods generally demonstrate a weak performance. That is due to the problem's nature. As distribution of different columns of table 2 shows, the number of NONE relations, even in the intra-sentential case, is about 7 to 10 times greater than other relations. Therefore, it is very hard for a learning algorithm to precisely determine the relation types. On the other hand, results of table 4, which ignores NONE relations, are satisfactory. Comparing proposed method with the baseline, shows that in the cases that supervised methods can beat the baseline method, our weakly supervised method can also work better than the baseline or close to it.",
"cite_spans": [],
"ref_spans": [
{
"start": 129,
"end": 176,
"text": "As distribution of different columns of table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "5.2"
},
{
"text": "It should be noted that the Chambers' method, which is the most successful method of tables 3 and 4, is in fact the state of the art supervised method, while our proposed method is, based on the initialization approaches, unsupervised or weakly supervised. Among different settings of the proposed method, EM 3 achieved the best results except for the general case of OTC in table 4, where EM 2 achieved better results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "5.2"
},
{
"text": "The results show that EM 1 is not very efficient in either first or second type of experiments. It seems that randomized initialization in this hard problem, may cause some divergence in the probability distribution. On the other hand, both EM 2 and EM 3 showed satisfactory results in these problems. Therefore, initialization is a critical factor in our EM method, and some little source of supervision seems crucial for achieving better results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "5.2"
},
{
"text": "Comparison of the results of proposed EM algorithm with and without utilization of temporal reasoning shows that using temporal reasoning can be effective on the accuracy of the algorithm. By using temporal reasoning, some inconsistencies are removed in step E of the algorithm and the predicted relations will be more reliable. Then in step M, the update of parameters will be performed more accurately and thus the accuracy of the algorithm iteratively will increase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "5.2"
},
{
"text": "Another important point in the comparison of accuracy results is the existence of NONE relations. As it is shown in tables 3 and 4, the accuracies in table 3 is much lower than that of in table 4. These differences are all due to the existence of NONE relations, which makes problem hard. Figure 2 demonstrates the effects of NONE relations on the accuracy of our proposed algorithm. All the experiments have been performed using OTC. We repeated our experiments for different percentage of NONE relations. As it is shown, NONE relations have had a great impact on the accuracy of the system.",
"cite_spans": [],
"ref_spans": [
{
"start": 289,
"end": 297,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "5.2"
},
{
"text": "The larger gap between the accuracy of ignoring and consideration of NONE relations on TimeBank (in contrast that of OTC) implies that NONE relations would have an even greater impact on the accuracy of the algorithm if applied to TimeBank. Figure 2 shows the impact of NONE relations on the accuracy (or recall) of the algorithm. Our experiments showed that this impact is even more substantial on the precision of the proposed algorithm. That is because although the algorithm can determine BEFORE, AFTER, and OVERLAP relations with an acceptable rate, but a lot of NONE relations will also be recognized. As a result, the precision will substantially decrease. Due to lack of space, we have not reported the precision of the algorithm.",
"cite_spans": [],
"ref_spans": [
{
"start": 241,
"end": 249,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "5.2"
},
{
"text": "In this paper, we have addressed the problem of learning temporal relations between event pairs, which is an interesting topic in natural language processing. Building a suitable corpus is a hard, expensive, and time consuming task. Therefore, we focused on unsupervised and weakly supervised types of learning. We proposed a novel generative model that uses the EM algorithm with some interval algebra reasoning for temporal relation learning. We compared our work with some of successful supervised methods. Our experiments showed that the result of the proposed algorithm, considering its little supervision, is satisfactory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We think but have not yet verified that using other source of information like narrative information, global relationship between events and times, time expressions, and/or some other useful features of related documents might even further improve the accuracy of the new algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Towards a General Theory of Action and Time",
"authors": [
{
"first": "James",
"middle": [],
"last": "Allen",
"suffix": ""
}
],
"year": 1984,
"venue": "Artificial Intelligence",
"volume": "23",
"issue": "",
"pages": "123--154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Allen. 1984. Towards a General Theory of Action and Time. Artificial Intelligence, 23, 2, 123-154.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Finding Temporal Structure in Text: Machine Learning of Syntactic Temporal Relations",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "James",
"middle": [
"H"
],
"last": "Martin",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Klingenstein",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal of Semantic Computing",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bethard, James H. Martin, and Sara Klingenstein. 2007a. Finding Temporal Structure in Text: Machine Learning of Syntactic Temporal Relations. Journal of Semantic Computing, 1, 4.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Timelines from Text: Identification of Syntactic Temporal Relations. Proceeding of ICSC",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "James",
"middle": [
"H"
],
"last": "Martin",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Klingenstein",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "11--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bethard, James H. Martin, and Sara Klingenstein. 2007b. Timelines from Text: Identification of Syntactic Temporal Relations. Proceeding of ICSC, 11-18.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "CU-TMP: Temporal Relation Classification Using Syntactic and Semantic Features. Proceeding of SemEval-2007",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "James",
"middle": [
"H"
],
"last": "Martin",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "129--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bethard and James H. Martin. 2007. CU- TMP: Temporal Relation Classification Using Syntactic and Semantic Features. Proceeding of SemEval-2007, 129-132.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Finding Event, Temporal and Causal Structure in Text: A Machine Learning Approach",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bethard. 2007. Finding Event, Temporal and Causal Structure in Text: A Machine Learning Approach. PhD thesis, University of Colorado at Boulder.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning Semantic Links from a Corpus of Parallel Temporal and Causal Relations",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "James",
"middle": [
"H"
],
"last": "Martin",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceeding of ACL-2008",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bethard and James H. Martin. 2008. Learning Semantic Links from a Corpus of Parallel Temporal and Causal Relations. Proceeding of ACL-2008, 177-180.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Classifying Temporal Relations between Events. Proceeding of ACL-2007",
"authors": [
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Shan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "173--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathanael Chambers, Shan Wang, and Dan Jurafsky. 2007. Classifying Temporal Relations between Events. Proceeding of ACL-2007, 173-176.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Jointly Combining Implicit Constraints Improves Temporal Ordering. Proceeding of EMNLP-2008",
"authors": [
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "698--706",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathanael Chambers and Dan Jurafsky. 2008. Jointly Combining Implicit Constraints Improves Temporal Ordering. Proceeding of EMNLP- 2008, 698-706.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "EM Works for Pronoun Anaphora Resolution",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Micha",
"middle": [],
"last": "Elsner",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of EACL-2009",
"volume": "",
"issue": "",
"pages": "148--154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak and Micha Elsner. 2009. EM Works for Pronoun Anaphora Resolution. Proceedings of EACL-2009, 148-154.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "An Expectation Maximization Approach to Pronoun Resolution. Proceeding of CoNLL",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Shane",
"middle": [],
"last": "Bergsma",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "88--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Cherry and Shane Bergsma. 2005. An Expectation Maximization Approach to Pronoun Resolution. Proceeding of CoNLL 2005. 88-95.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Global Path-based Refinement of Noisy Graphs Applied to Verb Semantics",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Chklovski",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceeding of IJCNLP-05",
"volume": "",
"issue": "",
"pages": "792--803",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Chklovski and Patrick Pantel. 2005. Global Path-based Refinement of Noisy Graphs Applied to Verb Semantics. Proceeding of IJCNLP-05, 792-803.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Mathematical Structure of Language",
"authors": [
{
"first": "Zellig",
"middle": [],
"last": "Harris",
"suffix": ""
}
],
"year": 1968,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zellig Harris. 1968. Mathematical Structure of Language. John Wiley Sons, New York, 1968.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The Unsupervised Learning of Natural Language Structure",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein. 2005. The Unsupervised Learning of Natural Language Structure. Ph.D. Thesis, Department of Computer Science, Stanford University.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Learning Sentence-Internal Temporal Relations",
"authors": [
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Lascarides",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Artificial Intelligence Research",
"volume": "27",
"issue": "",
"pages": "85--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mirella Lapata and Alex Lascarides. 2006. Learning Sentence-Internal Temporal Relations. Journal of Artificial Intelligence Research, 27, 85-117.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Dirt-Discovery of Inference Rules From Text",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceeding of ACM SIGKDD-2001",
"volume": "",
"issue": "",
"pages": "323--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin and Patrick Pantel. 2001. Dirt- Discovery of Inference Rules From Text. Proceeding of ACM SIGKDD-2001, 323-328.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Machine Learning of Temporal Relations",
"authors": [
{
"first": "Inderjeet",
"middle": [],
"last": "Mani",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Verhagen",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Wellner",
"suffix": ""
},
{
"first": "Chong",
"middle": [
"M"
],
"last": "Lee",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of ACL-2006",
"volume": "",
"issue": "",
"pages": "753--760",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Inderjeet Mani, Marc Verhagen, Ben Wellner, Chong M. Lee, and James Pustejovsky. 2006. Machine Learning of Temporal Relations. Proceedings of ACL-2006, 753-760.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Three Approaches to Learning Tlinks in TimeML",
"authors": [
{
"first": "Inderjeet",
"middle": [],
"last": "Mani",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Wellner",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Verhagen",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Inderjeet Mani, Ben Wellner, Marc Verhagen, and James Pustejovsky. 2007. Three Approaches to Learning Tlinks in TimeML. Technical Report CS-07-268. Brandeis University, Waltham, USA.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Using Tree Kernels for Classifying Temporal Relations between Events",
"authors": [
{
"first": "Seyed Abolghasem",
"middle": [],
"last": "Mirroshandel",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Ghassem-Sani",
"suffix": ""
},
{
"first": "Mahdy",
"middle": [],
"last": "Khayyamian",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of PACLIC-2009",
"volume": "",
"issue": "",
"pages": "355--364",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seyed Abolghasem Mirroshandel, Gholamreza Ghassem-Sani, and Mahdy Khayyamian. 2009. Using Tree Kernels for Classifying Temporal Relations between Events. Proceedings of PACLIC-2009, 355-364.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Temporal Relations Learning with a Bootstrapped Crossdocument Classifier",
"authors": [
{
"first": "Seyed Abolghasem",
"middle": [],
"last": "Mirroshandel",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Ghassem-Sani",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ECAI-2010",
"volume": "",
"issue": "",
"pages": "829--834",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seyed Abolghasem Mirroshandel and Gholamreza Ghassem-Sani. 2010. Temporal Relations Learning with a Bootstrapped Cross- document Classifier. Proceedings of ECAI-2010, 829-834.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Unsupervised Models for Coreference Resolution",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of EMNLP-2008",
"volume": "",
"issue": "",
"pages": "640--649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent Ng, 2008. Unsupervised Models for Coreference Resolution. Proceedings of EMNLP-2008, 640-649.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Acquisition of Verb Entailment from Text",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Pekar",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceeding of NAACL/HLT-2006",
"volume": "",
"issue": "",
"pages": "49--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Pekar. 2006. Acquisition of Verb Entailment from Text. Proceeding of NAACL/HLT-2006, 49- 56.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The TimeBank Corpus. Corpus Linguistics",
"authors": [
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Hanks",
"suffix": ""
},
{
"first": "Roser",
"middle": [],
"last": "Saur\u00ed",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Setzer",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
},
{
"first": "Beth",
"middle": [],
"last": "Sundheim",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Day",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Ferro",
"suffix": ""
},
{
"first": "Marcia",
"middle": [],
"last": "Lazo",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "647--656",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Pustejovsky, Patrick Hanks, Roser Saur\u00ed, Andrew See, Robert Gaizauskas, Andrea Setzer, Dragomir Radev, Beth Sundheim, David Day, Lisa Ferro and Marcia Lazo. 2003. The TimeBank Corpus. Corpus Linguistics, 647-656.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Scaling Web-based Acquisition of Entailment Relations",
"authors": [
{
"first": "Idan",
"middle": [],
"last": "Szpektor",
"suffix": ""
},
{
"first": "Hristo",
"middle": [],
"last": "Tanev",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Bonaventura",
"middle": [],
"last": "Coppola",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Idan Szpektor, Hristo Tanev, Ido Dagan, and Bonaventura Coppola. 2004. Scaling Web-based Acquisition of Entailment Relations.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Experiments with Reasoning for Temporal Relations between Events",
"authors": [
{
"first": "Marta",
"middle": [],
"last": "Tatu",
"suffix": ""
},
{
"first": "Munirathnam",
"middle": [],
"last": "Srikanth",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceeding of Coling-2008",
"volume": "",
"issue": "",
"pages": "857--864",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marta Tatu and Munirathnam Srikanth. 2008. Experiments with Reasoning for Temporal Relations between Events. Proceeding of Coling- 2008, 857-864.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Acquiring Inference Rules with Temporal Constraints by Using Japanese Coordinated Sentences and Noun-Verb Co-Occurrences",
"authors": [
{
"first": "Kentaro",
"middle": [],
"last": "Torisawa",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of NAACL-2006",
"volume": "",
"issue": "",
"pages": "57--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kentaro Torisawa. 2006. Acquiring Inference Rules with Temporal Constraints by Using Japanese Coordinated Sentences and Noun-Verb Co- Occurrences. Proceedings of NAACL-2006, 57- 64.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Temporal Relation Identification by Syntactico-Semantic",
"authors": [
{
"first": "Georgiana",
"middle": [],
"last": "Puscasu",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Puscasu G. Wvali. 2007. Temporal Relation Identification by Syntactico-Semantic",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "The effect of NONE relations on the accuracy"
},
"TABREF0": {
"content": "<table><tr><td>P</td><td colspan=\"3\">TC | corpus</td><td/><td/><td/><td/><td/><td>P</td><td>i e</td><td>e</td><td>j</td><td>|</td><td>ij TC</td><td>(2)</td></tr><tr><td/><td/><td/><td/><td>e i</td><td>e</td><td>j</td><td colspan=\"3\">EventPairs</td><td>corpus</td></tr><tr><td colspan=\"2\">where e i e j ( ) corpus P</td><td>=</td><td/><td/><td/><td colspan=\"2\">\u2211</td><td>P</td><td>( ) ( corpus P TC</td><td>|</td><td>TC</td><td>)</td></tr><tr><td/><td/><td>All</td><td>possile</td><td colspan=\"4\">temporal</td><td colspan=\"2\">clustering</td><td>TC</td></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": "are event pairs, and TC ij are the specified temporal relation type of e i e j . The marginal probability of corpus is computed as follows:"
},
"TABREF1": {
"content": "<table><tr><td>, are (4) The lemmatized first and second ) ) corpus shown in table 1. ( ( \u220f \u2208 EventPairs e e ij k j i j i j i j i TC e e e e e e P | ..., , , 2 1 Feature Description Word 1 &amp; Word 2 The text of first and second events Lemma 1 &amp; Lemma 2 events heads Synset 1 &amp; Synset 2 The WordNet synset for first and second events heads POS 1 &amp; POS 2 The POS of the first and second events Event Government Verb 1 &amp; Verb 2 The verbs that govern the first and second events Event Government Verb 1 &amp; Verb 2 POS The verbs' POS that govern the first and second events Auxiliary Any auxiliary adverbs and verbs that modifies the governing verbs Class 1 &amp; Class 2 The Class of the first and second events Tense 1 &amp; Tense 2 The tense of the first and second events Aspect 1 &amp; Aspect 2 The aspect of the first and second events Modality 1 &amp; Modality 2 The modality of the first and second events Polarity 1 &amp; Polarity 2 The polarity of the first and second events Tense Match If two events have the same tense Aspect Match If two events have the same aspect Class Match If two events have the same class Tense Pair Pair of two events' tense Aspect Pair Pair of two events' aspect Class Pair Pair of two events' class POS pair Pair of two events' POS Preposition 1 If first event is in a prepositional phrase or not Preposition 2 If second event is in a prepositional phrase or not Text order If the first event occurs first in the document or not Dominates If the first event syntactically dominates second event or not Entity Match If an entity as an argument is shared between two events Table 1define temporal location and event structure, and considering these features together is a powerful source of information in any temporal relation extraction system. 
By conditional independence assumption, the value of P(corpus | TC) can be rewritten as ( ) ( ) \u220f \u220f corpus EventPairs e e l features All j i \u2208 ij l j i TC e e P |</td></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": "The features of events which are used in our algorithm for temporal relation learning To reduce data sparseness and improve probability estimation, conditional independence assumption is made on these features' value generation. We only assume that tense and aspect are not independent (i.e., tense i and aspect i are dependent), because tense and aspect"
},
"TABREF3": {
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null,
"text": "The converted TLink class distribution in TimeBank and OTC for intra-sentential and general event pairs."
},
"TABREF5": {
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null,
"text": "The results of different methods for intra-sentential and general event pairs by ignoring NONE relations."
}
}
}
}