|
{ |
|
"paper_id": "R11-1027", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:03:42.978131Z" |
|
}, |
|
"title": "Finding the Best Approach for Multi-lingual Text Summarisation: A Comparative Analysis", |
|
"authors": [ |
|
{ |
|
"first": "Elena", |
|
"middle": [], |
|
"last": "Lloret", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Alicante Apdo. de Correos", |
|
"location": { |
|
"postCode": "99 E-03080", |
|
"settlement": "Alicante", |
|
"country": "Spain" |
|
} |
|
}, |
|
"email": "elloret@dlsi.ua.es" |
|
}, |
|
{ |
|
"first": "Manuel", |
|
"middle": [], |
|
"last": "Palomar", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Alicante", |
|
"location": { |
|
"addrLine": "Apdo. de Correos 99 E-03080", |
|
"settlement": "Alicante", |
|
"country": "Spain" |
|
} |
|
}, |
|
"email": "mpalomar@dlsi.ua.es" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper addresses the problem of multilingual text summarisation. The goal is to analyse three approaches for generating summaries in four languages (English, Spanish, German and French), in order to determine the best one to adopt when tackling this issue. The proposed approaches rely on: i) language-independent techniques; ii) language-specific resources; and iii) machine translation resources applied to a mono-lingual summariser. The evaluation carried out employing the JRC corpus-a corpus specifically created for multilingual summarisation-shows that the approach which uses languagespecific resources is the most appropriate in our comparison framework, performing better than state-of-the-art multilingual summarisers. Moreover, the readability assessment conducted over the resulting summaries for this approach proves that they are also very competitive with respect to their quality.", |
|
"pdf_parse": { |
|
"paper_id": "R11-1027", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper addresses the problem of multilingual text summarisation. The goal is to analyse three approaches for generating summaries in four languages (English, Spanish, German and French), in order to determine the best one to adopt when tackling this issue. The proposed approaches rely on: i) language-independent techniques; ii) language-specific resources; and iii) machine translation resources applied to a mono-lingual summariser. The evaluation carried out employing the JRC corpus-a corpus specifically created for multilingual summarisation-shows that the approach which uses languagespecific resources is the most appropriate in our comparison framework, performing better than state-of-the-art multilingual summarisers. Moreover, the readability assessment conducted over the resulting summaries for this approach proves that they are also very competitive with respect to their quality.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In the current society, information plays a crucial role that brings competitive advantages to users, when it is managed correctly. However, due to the vast amount of available information, users cannot cope with it, and therefore research into new methods and approaches based on Natural Language Processing (NLP) is crucial, thus resulting in considerable benefits for the society. Specifically, one of these NLP research areas is Text Summarisation (TS) which is essential to condense information keeping, at the same time, the most relevant facts or pieces of information. However, to produce a summary automatically is very challenging. Issues such as redundancy, temporal dimension, coreference or sentence ordering, to name a few, have to be taken into consideration especially when summarising a set of documents (multidocument summarisation), thus making this field even more difficult (Goldstein et al., 2000) . Such difficulty also increases when the information is stated in several languages and we want to be capable of producing a summary in those languages, thus not restricting the summariser to a single language (multi-lingual summarisation). The generation of multi-lingual summaries improves considerably the capabilities of TS systems, allowing users to be able to understand the essence of documents in other languages by only reading their corresponding summaries.", |
|
"cite_spans": [ |
|
{ |
|
"start": 895, |
|
"end": 919, |
|
"text": "(Goldstein et al., 2000)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Therefore, the aim of this paper is to carry out a comparative analysis of several approaches for generating extractive 1 multi-lingual summaries in four languages (English, French, German and Spanish) . These approaches comprise the use of: i) language-independent techniques; ii) languagespecific resources; and iii) machine translation resources applied to a mono-lingual summariser. In this way, we can study the advantages and limitations of each approach, as well as to determine which is the most appropriate to adopt for this type of summaries. Although the languagespecific resources are limited and perform differently for each language, the results indicate that this approach is the best to adopt, since for each language, more specific information could be obtained, benefiting the final summaries.", |
|
"cite_spans": [ |
|
{ |
|
"start": 164, |
|
"end": 201, |
|
"text": "(English, French, German and Spanish)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The remaining of the paper is organised as follows. Section 2 introduces previous work on multi-lingual TS. Section 3 describes the proposed approaches for generating multi-lingual summaries in detail. Further on, the corpus used, the experiments carried out, the results obtained together with an in-depth discussion is provided in Section 4. Finally, the conclusions of the paper together with the future work are outlined in Section 5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Generating multi-lingual TS is a challenging task, due to the fact that we have to deal with multiple languages, each of which has its peculiarities. Attempts to produce multi-lingual summaries started with SUMMARIST (Hovy and Lin, 1999) , a system which extracted sentences from documents in a variety of languages, by using English, Japanese, Spanish, Indonesian, and Arabic preprocessing modules and lexicons. Another example of multi-lingual TS system is MEAD (Radev et al., 2004) , able to produce summaries in English and Chinese, relying on features, such as sentence position, sentence length, or similarity with the first sentence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 217, |
|
"end": 237, |
|
"text": "(Hovy and Lin, 1999)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 464, |
|
"end": 484, |
|
"text": "(Radev et al., 2004)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "More recently, research in multi-lingual TS has been focused on the analysis of languageindependent methods. For instance, in (Litvak et al., 2010b) a comparative analysis of 16 methods for language-independent extractive summarisation was performed in order to find the most efficient language-independent sentence scoring method in terms of summarisation accuracy and computational complexity across two different languages (English and Hebrew). Such methods relied on vector-, structure-and graph-based features (e.g. frequency, position, length, title-based features, pagerank, etc.), concluding that vector and graph-based approaches were among the top ranked methods for bilingual applications. From this analysis, MUSE -MUltilingual Sentence Extractor (Litvak et al., 2010a) was developed, where other language-independent features were added and a genetic algorithm was employed to find the optimal weighted linear combination of all the sentence scoring methods proposed. In (Patel et al., 2007) a multi-lingual extractive languageindependent TS approach was also suggested. The proposed algorithm was based on structural and statistical factors, such as location or identification of common and proper nouns. However, it also used stemming and stop word lists, which were dependent on the language. This TS approach was evaluated for English, Hindi, Gujarati and Urdu documents, obtaining encouraging results and showing that the proposed method performed equally well regardless of the language. News-Gist (Kabadjov et al., 2010 ) is a multi-lingual summariser that achieves better performance than state-of-the-art approaches. It relies on Singular Value Decomposition, which is also a languageindependent method, so it can be applied to a wide range of languages, although at the moment, it has been only tested for English, French and German.", |
|
"cite_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 148, |
|
"text": "(Litvak et al., 2010b)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 759, |
|
"end": 781, |
|
"text": "(Litvak et al., 2010a)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 984, |
|
"end": 1004, |
|
"text": "(Patel et al., 2007)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 1517, |
|
"end": 1539, |
|
"text": "(Kabadjov et al., 2010", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Furthermore, Wikipedia 2 is a multi-lingual resource, which has been used for many natural language applications. It contains more than 18 million articles in more than 270 languages, which have been written collaboratively by volunteers around the world. This valuable resource has also been used for developing multi-lingual TS approaches. For instance, (Filatova, 2009) took advantage of Wikipedia information stated across different languages with the purpose of creating summaries. The approach was based on the Pyramid method (Nenkova et al., 2007) in order to account for relevant information. The underlying idea was that sentences were placed on different levels of the pyramid, depending on the number of languages containing such sentence. Thus, the top levels were populated by the sentences that appeared in the most languages and the bottom level contained sentences appearing in the least number of languages. The summary was then generated by taking a specific number of sentences starting with the top level, until the desired length was reached. Moreover, although the multi-lingual approach proposed in (Yuncong and Fung, 2010) aimed at generating complete articles instead of summaries, it is very interesting and it can be perfectly applied to TS. Basically, this approach took an existing entry of Wikipedia as content guideline. Then, keywords were extracted from it, and translated into the target language. The translation was used to query the Web in the target language, so candidate fragments of information were obtained. Further on, these fragments were ranked and synthesised into a complete article.", |
|
"cite_spans": [ |
|
{ |
|
"start": 356, |
|
"end": 372, |
|
"text": "(Filatova, 2009)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 532, |
|
"end": 554, |
|
"text": "(Nenkova et al., 2007)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1122, |
|
"end": 1146, |
|
"text": "(Yuncong and Fung, 2010)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Different to the aforementioned approaches, in this paper we carried out a comparison between three approaches: i) a language-independent approach; ii) a language-specific approach; and iii) machine translation resources applied to a monolingual TS approach. Our final aim is to analyse them in order to find which is the most suitable for performing multi-lingual TS.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The objective of this section is to explain the three proposed approaches for generating multi-lingual summaries in four languages (English, French, German and Spanish). We developed an extractive TS approach for each case. In particular, we analysed: i) language-independent techniques (Subsection 3.1); ii) language-specific resources (Subsection 3.2); and iii) machine translation resources applied to a mono-lingual summariser (Subsection 3.3). Next, we describe each approach in detail.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-lingual Text Summarisation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "As a language-independent approach for tackling multi-lingual TS, we computed the relevance of sentences by using the term frequency technique. Term frequency was first proposed in (Luhn, 1958) , and, despite being a simple technique, it has been widely used in TS due to the good results it achieves (Gotti et al., 2007) , (Or\u0203san, 2009) , (Montiel et al., 2009) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 193, |
|
"text": "(Luhn, 1958)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 301, |
|
"end": 321, |
|
"text": "(Gotti et al., 2007)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 324, |
|
"end": 338, |
|
"text": "(Or\u0203san, 2009)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 341, |
|
"end": 363, |
|
"text": "(Montiel et al., 2009)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language-independent Approach", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The importance of a term in a document will be given by its frequency. At this point, it is worth mentioning that stop words, such as \"the\", \"a\", \"you\", etc. are not taken into account; otherwise the relevance of sentences could be wrongly calculated. In order to identify them, we need a specific list of stop words, depending on the language used. The language-specific processing in this approach is minimal, so it can be considered language-independent, since given a new language it would be very easy to obtain automatic summaries through this approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language-independent Approach", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For determining the relevance of sentences, a matrix is built. In this matrix M , the rows represent the terms of the document without considering the stop words, whereas the columns represent the sentences. Each cell M [i, j] contains the frequency of each term i in the document, provided that such term is included in the sentence; otherwise the cell contains a 0. Then, the importance of sentence S j is computed by means of Formula 1:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language-independent Approach", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Sc S j = \u2211 n i=1 M [i, j] |T erms| (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language-independent Approach", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language-independent Approach", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Sc S j = Score of sentence j M [i, j] = value of the cell [i,j]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language-independent Approach", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "|T erms| = total number of terms in the document.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language-independent Approach", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Once the score for each sentence is calculated, sentences will be ranked in descending order, and the top ones up to a desired length will be chosen to become part of the summary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language-independent Approach", |
|
"sec_num": "3.1" |
|
}, |
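
{

"text": "To make the procedure above concrete, the following is a minimal illustrative sketch (ours, not part of the original system description), assuming a naive whitespace tokeniser and a user-supplied stop word list for the target language:\n\nfrom collections import Counter\n\ndef summarise_tf(sentences, stop_words, max_sentences=3):\n    # Tokenise naively and drop stop words (assumption: whitespace tokenisation).\n    tokenised = [[w.lower() for w in s.split() if w.lower() not in stop_words]\n                 for s in sentences]\n    # Document-level term frequencies play the role of the non-zero cells M[i, j].\n    freq = Counter(w for sent in tokenised for w in sent)\n    # |Terms| is a constant normaliser, so its exact definition does not change the ranking;\n    # here we use the total number of term occurrences in the document.\n    total_terms = sum(freq.values()) or 1\n    # Formula 1: sum the document frequencies of the distinct terms present in each sentence.\n    scores = [sum(freq[w] for w in set(sent)) / total_terms for sent in tokenised]\n    # Rank sentences by score and keep the top ones, restoring the original order.\n    top = sorted(range(len(sentences)), key=lambda j: scores[j], reverse=True)[:max_sentences]\n    return [sentences[j] for j in sorted(top)]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Language-independent Approach",

"sec_num": "3.1"

},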
|
{ |
|
"text": "Apart from its simplicity, the advantage of this techniques is that it can be used in any language. However, its main limitation is that the relevance of the sentences is only determined through lexical surface analysis, and therefore, semantics aspects are not taken into account.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language-independent Approach", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Our second proposed approach is very similar to the first one, but instead of term frequency, it employs language-specific resources for each of the target languages. For determining the relevance of sentences, this approach analyses the use of Named Entity Recognisers (NER) and the identification of concepts, by means of their synsets in WordNet (Fellbaum, 1998) or EuroWordNet (Ellman, 2003 ). On the one hand, named entities can indicate important content, since they refer to specific people, organisations, places, etc. that may be related to the topic of the document. On the other hand, the identification of concepts involves semantic analysis, and therefore, we can identify synonyms or other types of semantic relationships.", |
|
"cite_spans": [ |
|
{ |
|
"start": 349, |
|
"end": 365, |
|
"text": "(Fellbaum, 1998)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 381, |
|
"end": 394, |
|
"text": "(Ellman, 2003", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language-specific Approach", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "These types of resources (NERs and resources like Wordnet) have been commonly employed for generating specific types of summaries (Hassel, 2003) , (Bellare et al., 2004) , (Chaves, 2001) . Moreover, in (Filatova and Hatzivassiloglou, 2004) it was proven that approaches that took into consideration named entities as well as frequent words were appropriate for TS. In light of this, we decided to develop a similar approach, but relying on named entities and concepts.", |
|
"cite_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 144, |
|
"text": "(Hassel, 2003)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 147, |
|
"end": 169, |
|
"text": "(Bellare et al., 2004)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 172, |
|
"end": 186, |
|
"text": "(Chaves, 2001)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 202, |
|
"end": 239, |
|
"text": "(Filatova and Hatzivassiloglou, 2004)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language-specific Approach", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In particular, we focus on four languages (English, French, German and Spanish). The named entities are identified using different NERs, depending on the language. In this way, we use LingPipe 3 for English, the Illinois Named Entity Tagger 4 (Ratinov and Roth, 2009) for French, the NER for German 5 proposed in (Faruqui and Pad\u00f3, 2010) , and Freeling 6 for Spanish. For detecting concepts, we rely on WordNet for English and Eu-roWordNet for the remaining languages. Thanks to these types of resources, this approach uses semantic knowledge, instead of only lexical, as in the case of the term frequency in the languageindependent approach.", |
|
"cite_spans": [ |
|
{ |
|
"start": 243, |
|
"end": 267, |
|
"text": "(Ratinov and Roth, 2009)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 313, |
|
"end": 337, |
|
"text": "(Faruqui and Pad\u00f3, 2010)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language-specific Approach", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "For computing the relevance of the sentences, a matrix (M ) is also built, where the rows represent the entities or concepts of the document and the columns, the sentences. Each cell M [i, j] contains the frequency of appearance of either each entity or concept. As in the previous approach, stop words are not taken into consideration, and in those cases where neither the entity nor the concept is included in the sentence, a 0 is assigned to the cell. Once the matrix has been filled in, Formula 2 is then used to compute the relevance of sentences:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language-specific Approach", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Sc S j = \u2211 n i=1 M [i, j] |N E + Concepts| (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language-specific Approach", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language-specific Approach", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Sc S j = Score of sentence j M [i, j] = value of the cell [i,j]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language-specific Approach", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "|N E + Concepts| = total number of named entities and concepts in the document.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language-specific Approach", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The highest scored sentences, up to a specific length, will be extracted to build the final summary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language-specific Approach", |
|
"sec_num": "3.2" |
|
}, |
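
{

"text": "As an illustration of how the counting unit changes in this approach, the sketch below (ours, not from the original system description) maps content words to WordNet synsets via NLTK so that synonyms are grouped under one concept before applying the scoring of Formula 2; it assumes NLTK and its WordNet data are installed, picks the first synset without word sense disambiguation, and omits the named-entity component, which the NER tools named above would supply in practice:\n\nfrom collections import Counter\nfrom nltk.corpus import wordnet as wn  # assumes the NLTK WordNet corpus has been downloaded\n\ndef concept_of(word):\n    # Map a word to its first WordNet synset name; fall back to the word itself (a simplification).\n    synsets = wn.synsets(word)\n    return synsets[0].name() if synsets else word.lower()\n\ndef score_sentences_by_concepts(sentences, stop_words):\n    units = [[concept_of(w) for w in s.split() if w.lower() not in stop_words]\n             for s in sentences]\n    freq = Counter(u for sent in units for u in sent)\n    total = sum(freq.values()) or 1  # stands in for |NE + Concepts| in Formula 2\n    return [sum(freq[u] for u in set(sent)) / total for sent in units]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Language-specific Approach",

"sec_num": "3.2"

},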
|
{ |
|
"text": "The advantages of this approach with respect to the previous one (i.e. the language-independent) is that semantic analysis is applied by using resources such as WordNet or EuroWordNet. This allows us to group synonyms under the same concept. For instance, the words harassment and molestation represent the same concepts (since they both belong to the same synset in WordNet), so they are grouped together in this approach, whereas in the previous one, where only the frequency of terms is taken into consideration, they are considered two distinct words. In contrast, the drawback of this approach is that such kind of resources may not be available for all languages, and therefore we might have problems in applying this approach. Moreover, the error these resources introduce (e.g. NERs) may negatively affect the performance of the summariser.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language-specific Approach", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The idea behind this approach is to use an existing mono-lingual summariser for a specific lan-guage and then employ a machine translation system for obtaining the summaries in the different languages. In particular, we employ the TS approach proposed in (Lloret and Palomar, 2009) that generates extractive summaries for English. The reason for employing such summariser is its competitive results achieved compared to the state of the art. Briefly, the main features of this approach are: i) redundant information is detected and removed by means of textual entailment; and ii) the Code Quantity Principle (Giv\u00f3n, 1990) is used for accounting relevant information from a cognitive perspective. Therefore, important sentences are identified by computing the number of words included in noun-phrases, taking also into consideration the relative frequency each word has in the document. Once the summaries have been generated, Google Translate 7 is used to translate the summaries into the different target languages (i.e., French, German and Spanish), since it is a free online language translation service that can translate text in more than 50 languages. The advantage of this approach is that we do not have to develop a particular approach for each language, because we can rely on existing monolingual summarisers. Although machine translation has been made great progress in the recent years, and they can translate text into a wide range of languages, the disadvantage associated to using such tools concerns their performance, since wrong translations can negatively affect the quality of the resulting summary.", |
|
"cite_spans": [ |
|
{ |
|
"start": 255, |
|
"end": 281, |
|
"text": "(Lloret and Palomar, 2009)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 608, |
|
"end": 621, |
|
"text": "(Giv\u00f3n, 1990)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Machine Translation Resources applied to a Mono-lingual Approach", |
|
"sec_num": "3.3" |
|
}, |
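
{

"text": "A minimal pipeline sketch (ours, under stated assumptions) of this third approach: summarise_en stands for any existing mono-lingual English summariser, and translate is a hypothetical helper wrapping a machine translation service; neither name comes from the paper.\n\ndef multilingual_summary(english_document, target_langs, summarise_en, translate):\n    # 1) Summarise the English source with an existing mono-lingual summariser.\n    english_summary = summarise_en(english_document)\n    # 2) Translate the resulting summary into each target language via the MT helper.\n    return {lang: translate(english_summary, lang) for lang in target_langs}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Machine Translation Resources applied to a Mono-lingual Approach",

"sec_num": "3.3"

},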
|
{ |
|
"text": "The goal of this section is to setup an experimental framework, thus allowing us to analyse the aforementioned approaches in a specific context. Therefore, the corpus employed and the languages used are described in Subsection 4.1. Then, the evaluation methodology proposed and the results obtained together with a discussion is provided in Subsection 4.2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Framework", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We used the JRC multi-lingual summary evaluation data 8 for carrying out the experiments, in order to determine which approach should be more appropriate for the task of multi-lingual summarisation. The corpus consists of 20 docu- ments grouped into four topics (genetics, Israeland-Palestine-conflict, malaria and science-andsociety). Each document is available in seven languages (Arabic, Czech, English, French, German, Russian and Spanish), and the corpus also contains the manual annotation of important sentences, so it is possible to have four model summaries for each of the documents. Four our purposes, four languages were selected (English, French, German and Spanish), thus dealing with 80 documents. The type of documents contained in the JRC corpus pertained to the news domain. Table 1 shows some properties of the corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 793, |
|
"end": 800, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Corpus", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "As it can be seen from the table, all the documents have a similar length, the shortest ones having more than 600 words, whereas the longest ones around 1,000 words. Regarding the statistics about the words, it is worth noting that the documents in Romance languages (Spanish and French) have similar characteristics. Analogously, the same happens for the Germanic languages (English and German). However, the highest differences between languages can be found in the number of NE and concepts detected. Whereas for English, the average number of NE is 25, for the remaining languages is at most 17. This depends on the NER employed. The language-specific resources used for detecting concepts (WordNet and EuroWordNet) also influence the number of concepts identified. In this way, Spanish and English are the languages with more concepts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The JRC corpus was used to generate extractive summaries in four languages (English, French, German, and Spanish), following our three proposed approaches. We generated 20 summaries for each approach and language, thus evaluating 240 different summaries in the end. Two types of evaluation were conducted. On the one hand, the content of the summaries was evaluated in an automatic manner (Subsubsection 4.2.1), whereas on the other hand, their readability was manually assessed (Subsubsection 4.2.2). In addition, a comparison with current multi-lingual TS systems was also carried out (Subsubsection 4.2.3).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The automatic summaries were compared to the model ones, using ROUGE (Lin, 2004) , a widespread tool for evaluating TS. In this way, the content of the summaries was assessed, since this tool allows to compute recall, precision and F-measure with respect to different metrics, all of them based on how much vocabulary overlap there is between an automatic and model summary. Table 2 shows the F-measure value for ROUGE-1 (R-1), ROUGE-2 (R-2), and ROUGE-SU4 (R-SU4) for each of the proposed multi-lingual TS approaches. R-1 computes the number of common unigram between the automatic and model summary; R-2 computes the number of bi-grams, whereas R-SU4 accounts for the number of bigrams with a maximum distance of four words inbetween.", |
|
"cite_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 80, |
|
"text": "(Lin, 2004)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Content Evaluation", |
|
"sec_num": "4.2.1" |
|
}, |
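
{

"text": "For readers unfamiliar with the metric, the following simplified sketch (ours, not the official ROUGE toolkit, which should be used for comparable scores) shows the spirit of the ROUGE-1 F-measure: clipped unigram overlap between a candidate and a model summary, combined from recall and precision:\n\nfrom collections import Counter\n\ndef rouge1_f(candidate, reference):\n    cand = Counter(candidate.lower().split())\n    ref = Counter(reference.lower().split())\n    overlap = sum((cand & ref).values())  # clipped unigram matches\n    if overlap == 0:\n        return 0.0\n    recall = overlap / sum(ref.values())\n    precision = overlap / sum(cand.values())\n    return 2 * precision * recall / (precision + recall)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Content Evaluation",

"sec_num": "4.2.1"

},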
|
{ |
|
"text": "Moreover, a t-test was performed in order to account for the significance of the results at a 95% level of confidence. Results statistically significant are marked with a star. As it can be seen from the table, the results for the languageindependent (LI) and language-specific (LS) approaches are statistically significant compared to the mono-lingual approach combined with machine translation (TS+MT) in all the cases, except for English. Furthermore, from the results obtained, it is worth noting that the LS approach Table 2 : F-measure results for the content evaluation using ROUGE (LI=languageindependent; LS=language-specific; TS= monolingual; TS+MT=mono-lingual and machine translation). obtains better results than the LI approach, in all ROUGE metrics, except R-1 for French, where LI and LS obtain very similar results. In addition, the differences between them are statistically significant for German and Spanish. As it can also be seen, the LS obtains the best results for English and Spanish. This may happens because these languages have a lot of specific resources for dealing with them. In contrast, the performance for French and German linguistic resources may not be as accurate as for the other languages, thus affecting the results. Moreover, it is also worth noting that the performance of the LI approach for German is quite low with respect to the other languages. This is due to the fact that the way of writing in German differs from the others in that it is more agglutinative (e.g. arbeitstag 9 ); consequently, the frequency for some of the words in the documents will be computed separately (in the previous example tag and arbeitstag will have different frequencies). This occurs because in the LI approach we do not rely on any specific resources, such as tokenisers or stemmers; we only use the corresponding stop word list for each language.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 522, |
|
"end": 529, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Content Evaluation", |
|
"sec_num": "4.2.1" |
|
}, |
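
{

"text": "The significance test mentioned above can be reproduced, under the assumption that paired per-document ROUGE scores for two systems are available, with a paired t-test; a minimal sketch (ours, using SciPy) is:\n\nfrom scipy.stats import ttest_rel  # paired t-test over per-document scores\n\ndef significantly_better(scores_a, scores_b, alpha=0.05):\n    # scores_a and scores_b are ROUGE scores of two systems on the same documents.\n    stat, p_value = ttest_rel(scores_a, scores_b)\n    return p_value < alpha and stat > 0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Content Evaluation",

"sec_num": "4.2.1"

},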
|
{ |
|
"text": "From Table 2 we can conclude that the LS approach is the most appropriate to tackle multilingual TS. However, we are interested in carrying out a readability assessment, so that the summaries generated by our best approach (LS) can be also assessed with respect to their quality. For conducting this type of assessment, we followed 9 day at work the DUC guidelines 10 , and we asked four people (two natives of Spanish and German and two with very advanced knowledge of English and French) to manually evaluate each summary, assigning values from 1 to 5 (1=very poor. . . 5=very good) with respect to five quality criteria: grammaticality, redundancy, clarity, focus and coherence. Results are shown in Table 3 Table 3 : Readability Assessment of the languagespecific (LS) multi-lingual TS approach.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 5, |
|
"end": 12, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 703, |
|
"end": 710, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 711, |
|
"end": 718, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Readability Evaluation", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "In general terms, the results obtained in the readability assessment are very good. This means that using the language-specific approach, the resulting summaries are also good with respect to their quality. Concerning this issue, German summaries obtains the best results, all of them above 4 out of 5. The summaries in the remaining languages perform also very good in the coherence and redundancy criteria. It is worth noting that we generated single-document summaries (i.e., the summaries were produced taking only a document as input), so the chances of redundant information decrease. However, in this criteria we also measured the repetition of named entities, so in this sense, despite relying on named entities and concepts, there was not much repeated information in the summaries.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Readability Evaluation", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "Multi-lingual Summarisers", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with Current", |
|
"sec_num": "4.2.3" |
|
}, |
|
{ |
|
"text": "With the purpose of widening the analysis and verifying our results, we compared our LS approach to several current multi-lingual TS systems, that also produce extractive summaries as a result. In particular, we selected:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with Current", |
|
"sec_num": "4.2.3" |
|
}, |
|
{ |
|
"text": "\u2022 Open Text Summarizer 11 (OTS). This is a multi-lingual summariser able to generate summaries in more than 25 languages, such as English, German, Spanish, Russian or Hebrew. In this approach, keywords are identified by means of word occurrence, and sen-tences are given a score based on the the keywords they contain. Some language-specific resources, such as stemmers and stop word lists are employed. It has been shown that this system obtains better performance than other multi-lingual TS systems (Yatsko and Vishnyakov, 2007) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 502, |
|
"end": 531, |
|
"text": "(Yatsko and Vishnyakov, 2007)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with Current", |
|
"sec_num": "4.2.3" |
|
}, |
|
{ |
|
"text": "\u2022 MS Word 2007 Summarizer 12 (MS Word). This summariser is integrated into Microsoft Word 2007 and it also generates summaries in several languages. Since it is a commercial system, the implementation details are not revealed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with Current", |
|
"sec_num": "4.2.3" |
|
}, |
|
{ |
|
"text": "\u2022 Essential Summarizer 13 (Essential). This TS system is a commercial version of the one presented in (Lehmam, 2010) . It relies on linguistic techniques to perform semantic analysis of written text, taking into account discursive elements of the text. It is able to produce summaries in twenty languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 102, |
|
"end": 116, |
|
"text": "(Lehmam, 2010)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with Current", |
|
"sec_num": "4.2.3" |
|
}, |
|
{ |
|
"text": "For conducting such comparison, summaries were generated using the aforementioned TS systems in the four languages we dealt with. Then, they were evaluated using ROUGE. Table 4 shows the F-measure results for the ROUGE-1 metric. As before, we performed a t-test in order to analyse the significance of the results for a 95% confidence level (significant results are marked with a star). In most of the cases, our LS approach performs better than the other multi-lingual TS systems, except the OTS which performs slightly better for French and German. Our approach (LS) and OTS performed statistically better than the Essential summariser for German, increasing the results by 20% compared to it. Moreover, for Spanish, LS improves the results of MS Word and Essential summarisers by 9% and 16%, respectively, and this improvement is also statistically significant. Table 4 : Comparison with current multi-lingual TS systems (F-measure results for ROUGE-1).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 169, |
|
"end": 176, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 865, |
|
"end": 872, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison with Current", |
|
"sec_num": "4.2.3" |
|
}, |
|
{ |
|
"text": "This paper presented a comparative analysis of three widespread multi-lingual summarisation approaches in order to determine which one would be more suitable to adopt when tackling this task. In particular, we studied: i) a languageindependent approach using the term frequency technique; ii) a language-specific approach, relying on specific linguistic resources for each of the target language (named entities recognisers and semantic resources); and finally, iii) a monolingual text summariser for English, whose output was then inputted to a machine translation system in order to generate summaries in the remaining languages. The experiments carried out in English, French, German and Spanish showed that by employing language-specific resources, the resulting summaries performed better than most of the state-of-the-art multi-lingual summarisers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In the future, we plan to extend our analysis to other languages as well as to investigate other ways of generating multi-lingual summaries, for instance, employing Wikipedia, as in (Filatova, 2009) . This would be the starting point to address cross-lingual summarisation, task that we would like to tackle in the long-term.", |
|
"cite_spans": [ |
|
{ |
|
"start": 182, |
|
"end": 198, |
|
"text": "(Filatova, 2009)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Extractive approaches are those ones which only detect important sentences in documents and extract them, without performing any kind of language generation or generalisation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://www.wikipedia.org/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://alias-i.com/lingpipe/ 4 http://cogcomp.cs.illinois.edu/page/software view/4 5 http://www.nlpado.de/ sebastian/ner german.html 6 http://nlp.lsi.upc.edu/freeling/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://translate.google.com/ 8 http://langtech.jrc.ec.europa.eu/JRC Resources.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://duc.nist.gov/duc2007/quality-questions.txt 11 http://libots.sourceforge.net/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://www.microsoft.com/education/autosummarize.aspx 13 https://essential-mining.com/es/index.jsp?ui.lang=en", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This research is funded by the Spanish Government thorugh the FPI grant (BES-2007-16268) and the projects TIN2006-15265-C06-01 and TIN2009-13391-C04-01; and by the Valencian Government (projects PROMETEO/2009/119 and ACOMP/2011/001). The authors would like to thank also Ra\u00fal Bernabeu, Hakan Ceylan, Sabine Klausner, and Violeta Seretan for their help in the manual evaluation of the summaries..", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Ganesh Ramakrishnan, and Pushpak Bhattacharyya", |
|
"authors": [ |
|
{ |
|
"first": "Kedar", |
|
"middle": [], |
|
"last": "Bellare", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anish", |
|
"middle": [ |
|
"Das" |
|
], |
|
"last": "Sarma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Atish", |
|
"middle": [ |
|
"Das" |
|
], |
|
"last": "Sarma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Navneet", |
|
"middle": [], |
|
"last": "Loiwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vaibhav", |
|
"middle": [], |
|
"last": "Mehta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 4th International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kedar Bellare, Anish Das Sarma, Atish Das Sarma, Navneet Loiwal, Vaibhav Mehta, Ganesh Ramakr- ishnan, and Pushpak Bhattacharyya. 2004. Generic text summarization using wordnet. In Proceedings of the 4th International Conference on Language Resources and Evaluation.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Wordnet and automated text summarization", |
|
"authors": [ |
|
{ |
|
"first": "Pedro", |
|
"middle": [], |
|
"last": "Rui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Chaves", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the 6th Natural Language Processing Pacific Rim Symposium", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "109--116", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rui Pedro Chaves. 2001. Wordnet and automated text summarization. In Proceedings of the 6th Natural Language Processing Pacific Rim Symposium, pages 109-116.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Eurowordnet: A multilingual database with lexical semantic networks", |
|
"authors": [ |
|
{ |
|
"first": "Jeremy", |
|
"middle": [ |
|
"Ellman" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Natural Language Engineering", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "427--430", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeremy Ellman. 2003. Eurowordnet: A multilingual database with lexical semantic networks. Natural Language Engineering, 9:427-430.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Training and evaluating a german named entity recognizer with semantic generalization", |
|
"authors": [ |
|
{ |
|
"first": "Manaal", |
|
"middle": [], |
|
"last": "Faruqui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Pad\u00f3", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of KONVENS 2010", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Manaal Faruqui and Sebastian Pad\u00f3. 2010. Train- ing and evaluating a german named entity recog- nizer with semantic generalization. In Proceedings of KONVENS 2010, Saarbr\u00fccken, Germany.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "WordNet: An Electronical Lexical Database", |
|
"authors": [ |
|
{ |
|
"first": "Christiane", |
|
"middle": [], |
|
"last": "Fellbaum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christiane Fellbaum. 1998. WordNet: An Electronical Lexical Database. The MIT Press, Cambridge, MA.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Event-Based Extractive Summarization", |
|
"authors": [ |
|
{ |
|
"first": "Elena", |
|
"middle": [], |
|
"last": "Filatova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vasileios", |
|
"middle": [], |
|
"last": "Hatzivassiloglou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Text Summarization Branches Out: Proceedings of the ACL-04 Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "104--111", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elena Filatova and Vasileios Hatzivassiloglou. 2004. Event-Based Extractive Summarization. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 104-111.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Multilingual wikipedia, summarization, and information trustworthiness", |
|
"authors": [ |
|
{ |
|
"first": "Elena", |
|
"middle": [], |
|
"last": "Filatova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the IGIR Workshop on Information Access in a Multilingual World", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elena Filatova. 2009. Multilingual wikipedia, summa- rization, and information trustworthiness. In Pro- ceedings of the IGIR Workshop on Information Ac- cess in a Multilingual World.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Syntax: A functional-typological introduction, II", |
|
"authors": [ |
|
{ |
|
"first": "Talmy", |
|
"middle": [], |
|
"last": "Giv\u00f3n", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Talmy Giv\u00f3n, 1990. Syntax: A functional-typological introduction, II. John Benjamins.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Multi-document Summarization by Sentence Extraction", |
|
"authors": [ |
|
{ |
|
"first": "Jade", |
|
"middle": [], |
|
"last": "Goldstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vibhu", |
|
"middle": [], |
|
"last": "Mittal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaime", |
|
"middle": [], |
|
"last": "Carbonell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Kantrowitz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "NAACL-ANLP Workshop on Automatic Summarization", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "40--48", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jade Goldstein, Vibhu Mittal, Jaime Carbonell, and Mark Kantrowitz. 2000. Multi-document Summa- rization by Sentence Extraction. In NAACL-ANLP Workshop on Automatic Summarization, pages 40- 48.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Gofaisum: A symbolic summarizer for duc", |
|
"authors": [ |
|
{ |
|
"first": "Fabrizio", |
|
"middle": [], |
|
"last": "Gotti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guy", |
|
"middle": [], |
|
"last": "Lapalme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luka", |
|
"middle": [], |
|
"last": "Nerima", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Wehrli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the Document Understanding Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fabrizio Gotti, Guy Lapalme, Luka Nerima, and Eric Wehrli. 2007. Gofaisum: A symbolic summarizer for duc. In Proceedings of the Document Under- standing Workshop.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Exploitation of named entities in automatic text summarization for swedish", |
|
"authors": [ |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Hassel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 14th Mnordic Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martin Hassel. 2003. Exploitation of named entities in automatic text summarization for swedish. In Pro- ceedings of the 14th Mnordic Conference on Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Automated text summarization in summarist", |
|
"authors": [ |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chin-Yew", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Advances in Automatic Text Summarization", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "81--94", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eduard Hovy and Chin-Yew Lin. 1999. Automated text summarization in summarist. In Inderjeet Mani and Mark Maybury, editors, Advances in Automatic Text Summarization, pages 81-94. MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "NewsGist: a multilingual statistical news summarizer", |
|
"authors": [ |
|
{ |
|
"first": "Mijail", |
|
"middle": [], |
|
"last": "Kabadjov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Atkinson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Steinberger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralf", |
|
"middle": [], |
|
"last": "Steinberger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erik", |
|
"middle": [], |
|
"last": "Van Der", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Goot", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the European conference on Machine learning and knowledge discovery in databases: Part III", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "591--594", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mijail Kabadjov, Martin Atkinson, Josef Steinberger, Ralf Steinberger, and Erik Van Der Goot. 2010. NewsGist: a multilingual statistical news summa- rizer. In Proceedings of the European conference on Machine learning and knowledge discovery in databases: Part III, pages 591-594.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Essential summarizer: innovative automatic text summarization software in twenty languages", |
|
"authors": [ |
|
{ |
|
"first": "Abderrafih", |
|
"middle": [], |
|
"last": "Lehmam", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Adaptivity, Personalization and Fusion of Heterogeneous Information", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "216--217", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abderrafih Lehmam. 2010. Essential summarizer: in- novative automatic text summarization software in twenty languages. In Adaptivity, Personalization and Fusion of Heterogeneous Information, pages 216-217.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "ROUGE: a Package for Automatic Evaluation of Summaries", |
|
"authors": [ |
|
{ |
|
"first": "Chin-Yew", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of Association of Computational Linguistics Text Summarization Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "74--81", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chin-Yew Lin. 2004. ROUGE: a Package for Auto- matic Evaluation of Summaries. In Proceedings of Association of Computational Linguistics Text Sum- marization Workshop, pages 74-81.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "A new approach to improving multilingual summarization using a genetic algorithm", |
|
"authors": [ |
|
{ |
|
"first": "Marina", |
|
"middle": [], |
|
"last": "Litvak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Last", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Menahem", |
|
"middle": [], |
|
"last": "Friedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "927--936", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marina Litvak, Mark Last, and Menahem Friedman. 2010a. A new approach to improving multilingual summarization using a genetic algorithm. In Pro- ceedings of the 48th Annual Meeting of the Associa- tion for Computational Linguistics, pages 927-936.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Towards multi-lingual summarization: A comparative analysis of sentence extraction methods on english and hebrew corpora", |
|
"authors": [ |
|
{ |
|
"first": "Marina", |
|
"middle": [], |
|
"last": "Litvak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Last", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Slava", |
|
"middle": [], |
|
"last": "Kisilevich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Keim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hagay", |
|
"middle": [], |
|
"last": "Lipman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Assaf Ben", |
|
"middle": [], |
|
"last": "Gur", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 4th Workshop on Cross Lingual Information Access", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "61--69", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marina Litvak, Mark Last, Slava Kisilevich, Daniel Keim, Hagay Lipman, and Assaf Ben Gur. 2010b. Towards multi-lingual summarization: A compara- tive analysis of sentence extraction methods on en- glish and hebrew corpora. In Proceedings of the 4th Workshop on Cross Lingual Information Access, pages 61-69.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "A gradual combination of features for building automatic summarisation systems", |
|
"authors": [ |
|
{ |
|
"first": "Elena", |
|
"middle": [], |
|
"last": "Lloret", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manuel", |
|
"middle": [], |
|
"last": "Palomar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 12th International Conference on Text, Speech and Dialogue", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "16--23", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elena Lloret and Manuel Palomar. 2009. A grad- ual combination of features for building automatic summarisation systems. In Proceedings of the 12th International Conference on Text, Speech and Dia- logue, pages 16-23.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "The automatic creation of literature abstracts", |
|
"authors": [ |
|
{ |
|
"first": "Hans", |
|
"middle": [], |
|
"last": "Peter Luhn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1958, |
|
"venue": "Advances in Automatic Text Summarization", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "15--22", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hans Peter Luhn. 1958. The automatic creation of lit- erature abstracts. In Inderjeet Mani and Mark May- bury, editors, Advances in Automatic Text Summa- rization, pages 15-22. MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Comparaci\u00f3n de tres modelos de texto para la generaci\u00f3n autom\u00e1tica de res\u00famenes. Sociedad Espa\u00f1ola para el Procesamiento del Lenguaje Natural", |
|
"authors": [ |
|
{ |
|
"first": "Romyna", |
|
"middle": [], |
|
"last": "Montiel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ren\u00e9", |
|
"middle": [], |
|
"last": "Garc\u00eda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yulia", |
|
"middle": [], |
|
"last": "Ledeneva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rafael", |
|
"middle": [ |
|
"Cruz" |
|
], |
|
"last": "Reyes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "43", |
|
"issue": "", |
|
"pages": "303--311", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Romyna Montiel, Ren\u00e9 Garc\u00eda, Yulia Ledeneva, and Rafael Cruz Reyes. 2009. Comparaci\u00f3n de tres modelos de texto para la generaci\u00f3n autom\u00e1tica de res\u00famenes. Sociedad Espa\u00f1ola para el Proce- samiento del Lenguaje Natural, 43:303-311.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "The pyramid method: Incorporating human content selection variation in summarization evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Ani", |
|
"middle": [], |
|
"last": "Nenkova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Passonneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kathleen", |
|
"middle": [], |
|
"last": "Mckeown", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "ACM Transactions on Speech and Language Processing", |
|
"volume": "4", |
|
"issue": "2", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ani Nenkova, Rebecca Passonneau, and Kathleen McKeown. 2007. The pyramid method: Incorpo- rating human content selection variation in summa- rization evaluation. ACM Transactions on Speech and Language Processing, 4(2):4.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Comparative Evaluation of Term-Weighting Methods for Automatic Summarization", |
|
"authors": [ |
|
{ |
|
"first": "Constantin", |
|
"middle": [], |
|
"last": "Or\u0203san", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Journal of Quantitative Linguistics", |
|
"volume": "16", |
|
"issue": "1", |
|
"pages": "67--95", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Constantin Or\u0203san. 2009. Comparative Evaluation of Term-Weighting Methods for Automatic Sum- marization. Journal of Quantitative Linguistics, 16(1):67-95.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "A language independent approach to multilingual text summarization", |
|
"authors": [ |
|
{ |
|
"first": "Alkesh", |
|
"middle": [], |
|
"last": "Patel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tanveer", |
|
"middle": [], |
|
"last": "Siddiqui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "U", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Tiwary", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Large Scale Semantic Access to Content (Text, Image, Video, and Sound), RIAO '07", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "123--132", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alkesh Patel, Tanveer Siddiqui, and U. S. Tiwary. 2007. A language independent approach to multi- lingual text summarization. In Large Scale Semantic Access to Content (Text, Image, Video, and Sound), RIAO '07, pages 123-132.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "MEAD -A Platform for Multidocument Multilingual Text Summarization", |
|
"authors": [ |
|
{ |
|
"first": "Dragomir", |
|
"middle": [], |
|
"last": "Radev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Allison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sasha", |
|
"middle": [], |
|
"last": "Blair-Goldensohn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Blitzer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arda", |
|
"middle": [], |
|
"last": "Celebi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elliott", |
|
"middle": [], |
|
"last": "Drabek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wai", |
|
"middle": [], |
|
"last": "Lam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danyu", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jahna", |
|
"middle": [], |
|
"last": "Otterbacher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hong", |
|
"middle": [], |
|
"last": "Qi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Horacio", |
|
"middle": [], |
|
"last": "Saggion", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simone", |
|
"middle": [], |
|
"last": "Teufel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Topper", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Winkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhu", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 4th International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dragomir Radev, Tim Allison, Sasha Blair- Goldensohn, John Blitzer, Arda Celebi, Elliott Drabek, Wai Lam, Danyu Liu, Jahna Otterbacher, Hong Qi, Horacio Saggion, Simone Teufel, Michael Topper, Adam Winkel, and Zhu Zhang. 2004. MEAD -A Platform for Multidocument Multilin- gual Text Summarization. In Proceedings of the 4th International Conference on Language Resources and Evaluation.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Design challenges and misconceptions in named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Lev", |
|
"middle": [], |
|
"last": "Ratinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 13th Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "147--155", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the 13th Conference on Computa- tional Natural Language Learning, pages 147-155.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "A method for evaluating modern systems of automatic text summarization", |
|
"authors": [ |
|
{ |
|
"first": "Viatcheslav", |
|
"middle": [], |
|
"last": "Yatsko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timur", |
|
"middle": [], |
|
"last": "Vishnyakov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Automatic Documentation and Mathematical Linguistics", |
|
"volume": "41", |
|
"issue": "", |
|
"pages": "93--103", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Viatcheslav Yatsko and Timur Vishnyakov. 2007. A method for evaluating modern systems of automatic text summarization. Automatic Documentation and Mathematical Linguistics, 41:93-103.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Unsupervised synthesis of multilingual wikipedia articles", |
|
"authors": [ |
|
{ |
|
"first": "Chen", |
|
"middle": [], |
|
"last": "Yuncong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pascale", |
|
"middle": [], |
|
"last": "Fung", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "197--205", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chen Yuncong and Pascale Fung. 2010. Unsupervised synthesis of multilingual wikipedia articles. In Pro- ceedings of the 23rd International Conference on Computational Linguistics, pages 197-205.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Statistical properties of the JRC corpus.", |
|
"html": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": ".", |
|
"html": null, |
|
"content": "<table><tr><td/><td colspan=\"4\">English French German Spanish</td></tr><tr><td colspan=\"2\">Grammaticality 3.4</td><td>4.3</td><td>4.6</td><td>3.1</td></tr><tr><td>Redundancy</td><td>3.8</td><td>5.0</td><td>4.3</td><td>4.8</td></tr><tr><td>Clarity</td><td>3.6</td><td>3.9</td><td>4.6</td><td>3.8</td></tr><tr><td>Focus</td><td>4.4</td><td>3.9</td><td>4.6</td><td>4.6</td></tr><tr><td>Coherence</td><td>4.0</td><td>3.5</td><td>4.0</td><td>3.5</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |