{
"paper_id": "R11-1012",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:03:38.942986Z"
},
"title": "Efficient Algorithm for Context Sensitive Aggregation in Natural Language Generation",
"authors": [
{
"first": "Hemanth",
"middle": [],
"last": "Sagar Bayyarapu",
"suffix": "",
"affiliation": {},
"email": "hemanth.sagar@research.iiit.ac.in"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Aggregation is a sub-task of Natural Language Generation (NLG) that improves the conciseness and readability of the text outputted by NLG systems. Till date, approaches towards the aggregation task have been predominantly manual (manual analysis of domain specific corpus and development of rules). In this paper, a new algorithm for aggregation in NLG is proposed, that learns context sensitive aggregation rules from a parallel corpus of multi-sentential texts and their underlying semantic representations. Additionally, the algorithm accepts external constraints and interacts with the surface realizer to generate the best output. Experiments show that the proposed context sensitive probablistic aggregation algorithm performs better than the deterministic hand crafted aggregation rules.",
"pdf_parse": {
"paper_id": "R11-1012",
"_pdf_hash": "",
"abstract": [
{
"text": "Aggregation is a sub-task of Natural Language Generation (NLG) that improves the conciseness and readability of the text outputted by NLG systems. Till date, approaches towards the aggregation task have been predominantly manual (manual analysis of domain specific corpus and development of rules). In this paper, a new algorithm for aggregation in NLG is proposed, that learns context sensitive aggregation rules from a parallel corpus of multi-sentential texts and their underlying semantic representations. Additionally, the algorithm accepts external constraints and interacts with the surface realizer to generate the best output. Experiments show that the proposed context sensitive probablistic aggregation algorithm performs better than the deterministic hand crafted aggregation rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Aggregation is the process in which two or more linguistic structures are merged to form a single sentence. It helps in generating concise and fluent text and hence is an essential component in any NLG system (Reiter and Dale 2000) . Figure 1(a) presents an example of de-aggregated text while Figure 1 (b) shows its aggregated counterpart. Clearly, the aggregated text is fluent while the de-aggregated text is artificial with lot of redundancy. Reiter (1994) proposed a consensus pipeline architecture for NLG systems with three stages:",
"cite_spans": [
{
"start": 209,
"end": 231,
"text": "(Reiter and Dale 2000)",
"ref_id": "BIBREF11"
},
{
"start": 234,
"end": 245,
"text": "Figure 1(a)",
"ref_id": null
},
{
"start": 447,
"end": 460,
"text": "Reiter (1994)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 294,
"end": 302,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Content-Determination: Selects the information (propositions) to be conveyed and organizes the information in a rhetorically coherent manner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Sentence-Planning: Generates referring expressions, combines multiple propositions, selects appropriate lexical items and syntactic structures for each (aggregated) proposition and adds cohesion devices (eg, discourse markers) to make the text flow smoothly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Surface-Realizer: Converts the lexicalized linguistic structure into a linearized string while ensuring grammaticality, proper punctuation, correct morphology. The input to the process of aggregation, a submodule of Sentence-Planning in the consensus architecture described above, is a set of propositions selected by Content-Determination module which are organized using rhetorical relations between the propositions. Typical NLG systems use a twostage aggregation process (Wilkinson, 1995) . In the first stage, i.e., semantic grouping, the input set of propositions are partitioned into multiple sets, each of which is realized as a sentence. In the second stage, decisions related to actual realization of each set partition are taken.",
"cite_spans": [
{
"start": 477,
"end": 494,
"text": "(Wilkinson, 1995)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The essential idea behind semantic grouping is that the propositions that form a set and get realized as a meaningful sentence are related somehow. For example in Figure 1 , the first two propositions (Bacteria are unicellular. Bacteria are prokaryotic.) are two assertive sentences about Bacteria and hence are aggregated. But it is not true that these two propositions will always be aggregated into a single sentence as shown in Figure 2 . This shows that semantic grouping depends not only on the similarity between propositions, but also on the context (communicative goal of the text). The issue of context in semantic grouping gains importance especially in systems that present the same information in different views (Example: QA systems). For example, the two propositions (Bacteria are unicellular. Bacteria are prokaryotic) occur in examples shown in Figures 1 & 2. In the example in Figure 1 , these propositions are aggregated while in the example in Figure 2 they are not. If we look at the context of these texts, the text in Figure 1 is a short description about Bacteria. On the other hand, the text in Figure 2 talks about the fundamental difference between Bacteria and Fungi.",
"cite_spans": [],
"ref_spans": [
{
"start": 163,
"end": 171,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 432,
"end": 441,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 864,
"end": 871,
"text": "Figures",
"ref_id": null
},
{
"start": 897,
"end": 905,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 966,
"end": 974,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 1043,
"end": 1051,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1122,
"end": 1130,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The problem that is considered in this paper is as follows: Given a parallel corpus of multisentential texts and their underlying semantic representations along with the communicative goal of the text, can we learn semantic grouping rules automatically? The semantic representation assumed in this paper is a conceptual graph (Figure 3 shows an example of a conceptual graph), but the applicability of the approach is generic and can be customised to accomodate any semantic representation. A context-dependent discriminative model is learned which, given a proposition set and the context, estimates the probability of aggregation of the propositions. The prob-lem of semantic grouping is modelled as a hypergraph partitioning problem that uses the probabilities outputted by the context-dependent discriminative model. To address the problem of hypergraph partioning, Multi-level Fiduccia-Mattheyses Framework (MLFM) is used (Karypis and Kumar, 1999) . The approach is evaluated in the biology domain against two alternatives, namely handcrafted rules (HC) and a greedy clustering approach (GC) using the probabilities outputted by the context-dependent discriminative model. Additionally, we also test the impact of context by ignoring context while learning the discriminative model (Context-independent discriminative model).",
"cite_spans": [
{
"start": 927,
"end": 952,
"text": "(Karypis and Kumar, 1999)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 326,
"end": 335,
"text": "(Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An overview of related work is presented in Section 2. The corpus used in the experiments is discussed in Section 3. Then, in Section 4, the approach is discussed followed by Section 5 which presents the experiments done and their results. Finally, Section 6 concludes the paper with discussions and future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Aggregation has been employed since the early NLG systems. In PROTEUS, a computer program that generates commentaries on a tic-tac-toe game, Davey (1979) used conjunctions to express SE-QUENCE and CONTRASTIVE relations. Derr and McKeown (1984) showed how focus of attention helps in taking decisions related to choice between a sequence of simple sentences and a complex one. ANA (Kukich, 1983) , used financial domain specific aggregation rules to generate complex sentences upto 34 words. Logical derivations were used to combine clauses and to remove easily inferrable clauses in (Mann and Moore, 1980) . Hand-crafted aggregation rules developed as a result of corpus analysis are employed by (Scott and de Souza, 1990; Hovy, 1990; Dalianis, 1999; Shaw, 1998) . Walker et al. (2001) proposed a overgenerate-and-select approach in which the over-generate stage lists out large number of potential sentence plans while the ranking stage selects the top ranked sentence plan using rules that are learned automatically from the training data. Cheng and Mellish (2000) propose a genetic algorithm coupled with a preference function. Barzilay and Lapata (2006) view the problem of semantic grouping as a set partioning problem. They employ a local classifier that learns similarity between the propositions and then use ILP (Branchand-bound algorithm) to infer a globally optimal partition.",
"cite_spans": [
{
"start": 141,
"end": 153,
"text": "Davey (1979)",
"ref_id": "BIBREF4"
},
{
"start": 220,
"end": 243,
"text": "Derr and McKeown (1984)",
"ref_id": "BIBREF5"
},
{
"start": 380,
"end": 394,
"text": "(Kukich, 1983)",
"ref_id": "BIBREF9"
},
{
"start": 583,
"end": 605,
"text": "(Mann and Moore, 1980)",
"ref_id": "BIBREF10"
},
{
"start": 696,
"end": 722,
"text": "(Scott and de Souza, 1990;",
"ref_id": "BIBREF6"
},
{
"start": 723,
"end": 734,
"text": "Hovy, 1990;",
"ref_id": "BIBREF7"
},
{
"start": 735,
"end": 750,
"text": "Dalianis, 1999;",
"ref_id": "BIBREF3"
},
{
"start": 751,
"end": 762,
"text": "Shaw, 1998)",
"ref_id": "BIBREF13"
},
{
"start": 765,
"end": 785,
"text": "Walker et al. (2001)",
"ref_id": "BIBREF15"
},
{
"start": 1042,
"end": 1066,
"text": "Cheng and Mellish (2000)",
"ref_id": "BIBREF2"
},
{
"start": 1131,
"end": 1157,
"text": "Barzilay and Lapata (2006)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "This work is different from the earlier work in two aspects. We use contextual information to obtain better grouping that is applicable across different systems (even QA systems) while their work does not use the contextual information. Also, we assume a more generic hypergraph representation and use MLFM technique which works well even with large number of propositions. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "A total of 717 QA pairs are collected from various sources in the biology domain. Concepts are extracted from the question which acts as contex-tual information. For example, when the question is What is a binary fission?, the concept Binary-Fission becomes the context. The answer is converted into sets of triples, each set corresponding to a sentence. Each triple consists of two concepts (or instances of concepts) connected by a relation. For example, the triple (Mitosis next-event Cytokinesis) contains two concepts namely Mitosis and Cytokinesis connected by the relation next-event. Figure 4 shows a QA pair and its triple representation. The context and sets of triples are extracted from each QA pair manually. The manual annotation process uses the component library described in (Barker et al., 2001 ). 1 A total of 6337 triples are collected corresponding to 717 answers with each answer having 8.839 triples on an average. The highest number of triples for an answer is 46 while the lowest is 1. The total number of sentences in the answers is 1862, i.e., 2.596 sentences per an answer.",
"cite_spans": [
{
"start": 792,
"end": 812,
"text": "(Barker et al., 2001",
"ref_id": "BIBREF0"
},
{
"start": 816,
"end": 817,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 592,
"end": 600,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Corpus",
"sec_num": "3"
},
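{
"text": "To make the annotation concrete, the following is a minimal sketch of one possible in-memory layout for an annotated QA pair in Python; the field names (question, context, sentences) are illustrative assumptions, not the actual annotation format used in this work.\n\nfrom typing import Tuple\n\nTriple = Tuple[str, str, str]  # (concept, relation, concept)\n\n# One annotated QA pair; each inner list holds the triples of one sentence,\n# so the list of lists is the gold semantic grouping for the answer.\nqa_pair = {\n    'question': 'What is mitosis?',\n    'context': ['Mitosis'],  # concepts extracted from the question\n    'sentences': [\n        [('Mitosis', 'subevent', 'Prophase'),\n         ('Mitosis', 'subevent', 'Anaphase')],\n        [('Mitosis', 'next-event', 'Cytokinesis')],\n    ],\n}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": "3"
},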
{
"text": "A hypergraph (H) is a generic graph wherein edges can connect any number of vertices and are called hyperedges. In other words, each edge is a set of vertices. It is formally represented by a pair (V,E) where V is the set of vertices and E is the set of hyperedges. Each edge e i \ufffd E has associated weight w i . An edge with zero weight means that the the edge does not exist.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach 4.1 Hypergraphs",
"sec_num": "4"
},
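{
"text": "A minimal sketch of this representation in Python (an assumed encoding: each hyperedge is a frozenset of vertex ids mapped to its weight):\n\nfrom typing import Dict, FrozenSet\n\nHyperedge = FrozenSet[int]\nHypergraph = Dict[Hyperedge, float]  # hyperedge -> weight w_i\n\nV = {0, 1, 2, 3}\nE: Hypergraph = {\n    frozenset({0, 1}): 0.8,     # an ordinary pairwise edge\n    frozenset({0, 1, 2}): 0.6,  # a hyperedge over three vertices\n}\n# An edge with zero weight does not exist, so it can simply be dropped:\nE = {e: w for e, w in E.items() if w > 0.0}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypergraphs",
"sec_num": "4.1"
},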
{
"text": "Let P be a k-tuple (p 0 , p 1 , p 2 ...) where each p i is a set of vertices from V such that \u2229 i=k\u22121 i=0 p i = \u03c6 and \u222a i=k\u22121 i=0 p i = V . The k-way Hypergraph partitioning problem can be formulated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "k-way Hypergraph Partitioning problem",
"sec_num": "4.2"
},
{
"text": "Given a hypergraph H = (V,E), find a k-way partitionment \u03b4 : V \u2192 P that maps each of the vertices of H to one of the k disjoint partitions such that some cost function \u03b3 : P \u2192 R is minimized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "k-way Hypergraph Partitioning problem",
"sec_num": "4.2"
},
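{
"text": "The cost function \u03b3 is left abstract above; one common concrete choice, shown here only as an assumed example, is the total weight of the hyperedges cut by the partitioning:\n\ndef cut_cost(edges, partition_of):\n    # edges: dict mapping frozenset-of-vertices -> weight\n    # partition_of: the map delta, vertex -> partition id\n    # A hyperedge is cut when its vertices span more than one partition.\n    return sum(w for e, w in edges.items()\n               if len({partition_of[v] for v in e}) > 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "k-way Hypergraph Partitioning problem",
"sec_num": "4.2"
},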
{
"text": "Relationships among the propositions are often complex than pairwise. Assuming this complex relationship as pairwise ones reduces the fluency of the verbalized text in some cases. To deal with this complex relationship, it is better to directly use hypergraphs instead of pair-wise approximation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling Aggregation as hypergraph partitioning problem",
"sec_num": "4.3"
},
{
"text": "We view the problem of aggregation as a hypergraph partioning problem guided by a data-driven context sensitive discriminative model. The input to the algorithm is a conceptual graph which can be alternatively represented as a set of propositions. The goal is to find optimal partitions of the set of propositions given context, where each partition represents an aggregated sentence. The set of propositions is viewed as a graph where each proposition represents a vertex as shown in Figure 5 . Hyperedges are constructed on the graph obtained from propositions. Each hyperedge of this hypergraph connects one or more propositions. The weight w i of each hyperedge is given by the context sensitive discriminative model discussed in section 4.4. The hypergraph along with edge weights is the input to the multi-level kpartitioning algorithm. ",
"cite_spans": [
{
"start": 485,
"end": 493,
"text": "Figure 5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling Aggregation as hypergraph partitioning problem",
"sec_num": "4.3"
},
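{
"text": "The construction can be sketched as follows (illustrative only: model.prob and max_size are assumed names, the model stands for the one described in Section 4.4, and a real system would prune candidate subsets rather than enumerate them exhaustively):\n\nfrom itertools import combinations\n\ndef build_hypergraph(propositions, context, model, max_size=4):\n    # Vertices are proposition indices; every small candidate subset\n    # becomes a hyperedge weighted by the discriminative model (Eq. 1).\n    edges = {}\n    for size in range(2, max_size + 1):\n        for subset in combinations(range(len(propositions)), size):\n            w = model.prob(subset, context, propositions)\n            if w > 0.0:\n                edges[frozenset(subset)] = w\n    return edges",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling Aggregation as hypergraph partitioning problem",
"sec_num": "4.3"
},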
{
"text": "The weight w iA of a hyperedge (A) in the hypergraph formed from the inputs (S) is the probability of aggregation of propositions in A given contextual information (C) and S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Sensitive Discriminative Model",
"sec_num": "4.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w iA = p A = P (A|C, S)",
"eq_num": "(1)"
}
],
"section": "Context Sensitive Discriminative Model",
"sec_num": "4.4"
},
{
"text": "The contextual information include the communicative goal (the concepts in the question) The features that are used to predict the probability of aggregation of a proposition set are based on:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Sensitive Discriminative Model",
"sec_num": "4.4"
},
{
"text": "\u2022 Cohesion of the proposition set is the average score of similarities between each pair of propositions in A:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Sensitive Discriminative Model",
"sec_num": "4.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Coh A = \ufffd i=|A|,j=|A| i=1,j=1,i\ufffd =j sim(A i , A j ) |A|",
"eq_num": "(2)"
}
],
"section": "Context Sensitive Discriminative Model",
"sec_num": "4.4"
},
{
"text": "The similarity between each pair is the number of matches in the components of triples. For example, since the triples (Mitosis subevent Prophase), (Mitosis subevent Anaphase) match in two slots, the similarity score is 2/3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Sensitive Discriminative Model",
"sec_num": "4.4"
},
{
"text": "\u2022 Complexity of the realization is a cumulative weighted score of number of words, number of relative clauses, number of connectives, etc. and this score depicts how difficult it is to interpret the sentence corresponding to the proposition set A (if it is generated using the surface realizer). The score value is \u221e if the propositions cannot be realized as a single sentence because the surface realizer cannot find suitable structure that accomodates all the propositions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Sensitive Discriminative Model",
"sec_num": "4.4"
},
{
"text": "\u2022 Dissimilarity with rest of the propositions calculates how dissimilar the proposition set A is with the rest of the propositions (S-A). The maximum distance (or minimum similarity) of each proposition in S-A from A is calculated and averaged.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Sensitive Discriminative Model",
"sec_num": "4.4"
},
{
"text": "\u2022 Similarity with context C is the score of the extent of the cover of context by the triples. It is the ratio of number of concepts in the context C that occur in any of the triples in A to the total number of concepts in C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Sensitive Discriminative Model",
"sec_num": "4.4"
},
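{
"text": "A minimal sketch of the slot-matching similarity and the cohesion score of Equation (2):\n\nfrom itertools import permutations\n\ndef sim(t1, t2):\n    # Fraction of matching slots between two triples, e.g.\n    # (Mitosis, subevent, Prophase) vs (Mitosis, subevent, Anaphase) -> 2/3.\n    return sum(a == b for a, b in zip(t1, t2)) / 3.0\n\ndef cohesion(A):\n    # Equation (2): sum of sim over ordered pairs i != j, divided by |A|.\n    if len(A) < 2:\n        return 0.0\n    return sum(sim(x, y) for x, y in permutations(A, 2)) / len(A)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Sensitive Discriminative Model",
"sec_num": "4.4"
},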
{
"text": "A number of boolean features and their conjunctive features are generated using the above scores with score bounds. Such feature structures are generated for each hyperedge in the hypergraph formed from S. All the subsets of S which are in Z (the correct partitioning of S) are positive instances and rest are negative instances. A maximum entropy model is employed to predict the probabilities of aggregation of a set of propositions. While using the maximum entropy model to predict the aggregation probability, we can also utilize pattern matching rules to group propositions as a pre-processing step. The pattern matching rules can include domain specific rules, inference rules, etc. The motivations for this grouping are: (1) propositions are a mere representation of complex texts, (2) when the number of propositons is very high, optimization on the level of propositions becomes intractable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Sensitive Discriminative Model",
"sec_num": "4.4"
},
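{
"text": "A maximum entropy classifier over boolean features is equivalent to logistic regression, so the model can be sketched with scikit-learn as an assumed stand-in (the paper does not name an implementation, and the score bounds here are illustrative):\n\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\n\ndef featurize(scores, bounds=(0.25, 0.5, 0.75)):\n    # scores: (cohesion, complexity, dissimilarity, context similarity).\n    # Boolean threshold features plus their pairwise conjunctions.\n    base = [s >= b for s in scores for b in bounds]\n    conj = [x and y for i, x in enumerate(base) for y in base[i + 1:]]\n    return np.array(base + conj, dtype=float)\n\n# X: one row per candidate hyperedge; y: 1 iff the subset appears in Z.\n# model = LogisticRegression(max_iter=1000).fit(X, y)\n# p_A = model.predict_proba(featurize(s).reshape(1, -1))[0, 1]  # Eq. (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Sensitive Discriminative Model",
"sec_num": "4.4"
},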
{
"text": "Any constraint on the output can be expressed as features in the discriminative model. Transitivity constraint on set of propositions is automatically captured in the usage of hyperedges. External constraints like complexity of sentence is expressed in the features of the discriminative model (Complexity of the realization).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Sensitive Discriminative Model",
"sec_num": "4.4"
},
{
"text": "We use a n-fold cross validation on the corpus described in section II. We use two baselines for comparison: (1) Hand-Crafted rules (HC) and (2) Greedy clustering of hypergraph (GC). Handcrafted rules are pattern matching rules on sets of propositions. An example rule is to aggregate two triples if they share atleast two slots. In the second baseline, i.e., the greedy clustering of hypergraph, the graph is clustered using the probability scores of hyperedges based on the context sensitive model. The top scoring hyperedges that are non-overlapping and cover the entire input set are outputted. Also, in order to test the impact of context, we build a context independent discriminative model but follow the same hypergraph partitioning approach (HGP).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
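{
"text": "A sketch of the greedy clustering baseline (GC), under the added assumption that any propositions left uncovered at the end are emitted as singleton partitions:\n\ndef greedy_cluster(edges, vertices):\n    # Repeatedly take the top-scoring hyperedge that does not overlap\n    # anything chosen so far, until the input set is covered.\n    covered, partitions = set(), []\n    for e, w in sorted(edges.items(), key=lambda kv: -kv[1]):\n        if covered.isdisjoint(e):\n            partitions.append(set(e))\n            covered |= e\n    partitions += [{v} for v in vertices - covered]\n    return partitions",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},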
{
"text": "Let Y be the output partition of our approach and Z be the correct partitioning which is annotated manually. We use the following evaluation metrics:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "5.1"
},
{
"text": "\u2022 Precision: the ratio of correct pair-wise aggregations in Y and total pair-wise aggregations in Y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "5.1"
},
{
"text": "\u2022 Recall : the ratio of correct pair-wise aggregations in Y and total pair-wise aggregations in Z",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "5.1"
},
{
"text": "\u2022 F-score: the harmonic mean of Precision and Recall",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "5.1"
},
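{
"text": "These metrics compare the output and gold partitionings through the proposition pairs they aggregate; a minimal sketch:\n\nfrom itertools import combinations\n\ndef pairs(partitioning):\n    # All unordered pairs placed in the same partition, i.e. aggregated.\n    return {frozenset(p) for block in partitioning\n            for p in combinations(block, 2)}\n\ndef precision_recall_f(Y, Z):\n    py, pz = pairs(Y), pairs(Z)\n    if not py or not pz:\n        return 0.0, 0.0, 0.0\n    precision = len(py & pz) / len(py)\n    recall = len(py & pz) / len(pz)\n    if precision + recall == 0.0:\n        return precision, recall, 0.0\n    return precision, recall, 2 * precision * recall / (precision + recall)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "5.1"
},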
{
"text": "The results are shown in Table 1 . All the scores are average scores on a 5-fold cross validation.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "Hand-Crafted rules performed very poor because there are very few rules covering aggregation of more than five propositions while the corpus consisted of many such proposition sets. The effect of context is clear as the context dependent (HGPC) model outperforms context independent model by 7.15%. This proves that the usage of context is very important if the model has to be generic and adaptable to any kind of NLG system. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "The number of propositions in an answer in our corpus varied from 1 to as large as 46. We used an empirically proven scalable partitioning framework that works well when the number of propositions is huge. We presented a novel context sensitive aggregation algorithm for NLG systems. Also we presented a much natural hypergraph approach to semantic grouping than other methods that approximate the complicated relationships (among the entities that are checked for aggregation) with pair-wise approximations. The approach is adaptable to any domain and any representation. With a small corpus of 717 QA pairs, good results are obtained over the hand-crafted approaches. In our future work, we would like to test the described approach for scalability. The MLFM technique used in this work is proven to be the best technique for partitioning a set of more than 200 propositions. Also, the evaluations in this paper have been conducted in partial isolation from the actual output of the surface realizer. In our future work, we would also like to consider the impact of aggregation on the final textual outputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "The component library is available online at http://www.cs.utexas.edu/ mfkb/RKF/tree/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A library of generic concepts for composing knowledge bases",
"authors": [
{
"first": "Ken",
"middle": [],
"last": "Barker",
"suffix": ""
},
{
"first": "Bruce",
"middle": [],
"last": "Porter",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of First International Conference on Knowledge Capture",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barker, Ken , Bruce Porter, and Peter Clark. 2001 A library of generic concepts for composing knowl- edge bases. In Proceedings of First International Conference on Knowledge Capture, 2001.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Aggregation via Set Partitioning for Natural Language Generation",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of NAACL/HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barzilay, Regina and Mirella Lapata. 2006. Aggrega- tion via Set Partitioning for Natural Language Gen- eration. In Proc. of NAACL/HLT, 2006.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Capturing the interaction between aggregation and text planning in two generation systems",
"authors": [
{
"first": "Hua",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Mellish",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of INLG-2000",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cheng, Hua and Chris Mellish. 2000. Capturing the interaction between aggregation and text planning in two generation systems. In Proceedings of INLG- 2000, Israel.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Aggregation in Natural Language Generation",
"authors": [
{
"first": "Hercules",
"middle": [],
"last": "Dalianis",
"suffix": ""
}
],
"year": 1999,
"venue": "Journal of Computational Intelligence",
"volume": "15",
"issue": "4",
"pages": "384--414",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dalianis, Hercules. 1999. Aggregation in Natural Language Generation. Journal of Computational Intelligence, Volume 15, Number 4, pp 384-414, November 1999. Abstract",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Discourse Production",
"authors": [
{
"first": "Anthony",
"middle": [
"C"
],
"last": "Davey",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Davey, Anthony C. 1979. Discourse Production. Ed- inburgh University Press, Edinburgh.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Using focus to generate complex and simple sentences",
"authors": [
{
"first": "Marcia",
"middle": [
"A"
],
"last": "Derr",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kathleen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 1984,
"venue": "Proceedings of the Tenth International Conference on Computational Linguistics (COLING-84) and the 22nd Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Derr, Marcia A. and Kathleen R. McKeown. 1984. Using focus to generate complex and simple sen- tences. In Proceedings of the Tenth Interna- tional Conference on Computational Linguistics (COLING-84) and the 22nd Annual Meeting of the ACL, pages 319326, Stanford University, Stanford, CA.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Getting the message across in RST-based text generation",
"authors": [
{
"first": "Donia",
"middle": [
"R"
],
"last": "Scott",
"suffix": ""
},
{
"first": "Clarisse",
"middle": [
"S"
],
"last": "De Souza",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Donia R. Scott and Clarisse S. de Souza. 1990. Get- ting the message across in RST-based text genera- tion. In R. Dale, C. Mellish, and M. Zock, editors, Current Research in Natural Language Generation",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Unresolved issues in paragraph planning",
"authors": [
{
"first": "Eduard",
"middle": [
"H"
],
"last": "Hovy",
"suffix": ""
}
],
"year": 1990,
"venue": "Current Research in Natural Language Generation, 1741",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hovy, Eduard H. 1990. Unresolved issues in para- graph planning. In R. Dale, C. Mellish, M. Zock, eds., Current Research in Natural Language Gener- ation, 1741. Academic Press, New York.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Multilevel k-way hypergraph partitioning",
"authors": [
{
"first": "G",
"middle": [],
"last": "Karypis",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the Design and Automation Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karypis, G. and V. Kumar. 1999. Multilevel k-way hypergraph partitioning. In Proceedings of the De- sign and Automation Conference, 1999.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Design of a knowledge-based report generator",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Kukich",
"suffix": ""
}
],
"year": 1983,
"venue": "Proceedings of the 21st Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kukich, Karen. 1983. Design of a knowledge-based report generator. In Proceedings of the 21st Annual Meeting of the ACL, pages 145150, Cambridge, MA, June 15-17,.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Computer as author results and prospects",
"authors": [
{
"first": "William",
"middle": [
"C"
],
"last": "Mann",
"suffix": ""
},
{
"first": "James",
"middle": [
"A"
],
"last": "Moore",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mann, William C. and James A. Moore. 1980. Com- puter as author results and prospects. Technical Re- port RR-79-82, USC Information Science Institute, Marina del Rey, CA.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Building Natural Language Generation Systems",
"authors": [
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Dale",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reiter, Ehud and Robert Dale. 2000 Building Natural Language Generation Systems. Cambridge Univer- sity Press, Cambridge.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Has a consensus NL generation architecture appeared, and is it psycholinguistically plausible?",
"authors": [
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the Seventh International Workshop on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reiter, Ehud. 1994 Has a consensus NL genera- tion architecture appeared, and is it psycholinguis- tically plausible? In Proceedings of the Seventh In- ternational Workshop on Natural Language Genera- tion, pages 163170, Nonantum Inn, Kennebunkport, Maine.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Clause aggregation using linguistic knowledge",
"authors": [
{
"first": "James",
"middle": [],
"last": "Shaw",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 9th International Workshop on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shaw, James. 1998. Clause aggregation using linguis- tic knowledge. In Proceedings of the 9th Interna- tional Workshop on Natural Language Generation., pages 138147.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Aggregation in natural language generation: Another look. Co-op work term report",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wilkinson",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wilkinson, John. 1995. Aggregation in natural lan- guage generation: Another look. Co-op work term report, Department of Computer Science, University of Waterloo, Septembe",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Spot: A trainable sentence planner",
"authors": [
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
},
{
"first": "Monica",
"middle": [],
"last": "Rogati",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the second annual meeting of North American Chapter",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Walker, Marilyn, Owen Rambow and Monica Ro- gati. 2001. Spot: A trainable sentence planner. In Proceedings of the second annual meeting of North American Chapter of Association for Computational Linguistics, 17-24, Pittsburgh, PA",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Example showing de-aggregated text and its equivalent aggregated text.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "Example answer from a corpus of QAs in Biology domain.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "Example of a conceptual graph.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF3": {
"text": "Example of a QA pair and its triple representation.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF4": {
"text": "Example of a proposition set and its view as a graph",
"type_str": "figure",
"uris": null,
"num": null
}
}
}
}