{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:57:55.182015Z" }, "title": "A multi-lingual and cross-domain analysis of features for text simplification", "authors": [ { "first": "Regina", "middle": [], "last": "Stodden", "suffix": "", "affiliation": { "laboratory": "", "institution": "Heinrich Heine University", "location": { "settlement": "D\u00fcsseldorf", "country": "Germany" } }, "email": "" }, { "first": "Laura", "middle": [], "last": "Kallmeyer", "suffix": "", "affiliation": { "laboratory": "", "institution": "Heinrich Heine University", "location": { "settlement": "D\u00fcsseldorf", "country": "Germany" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In text simplification and readability research, several features have been proposed to estimate or simplify a complex text, e.g., readability scores, sentence length, or proportion of POS tags. These features are however mainly developed for English. In this paper, we investigate their relevance for Czech, German, English, Spanish, and Italian text simplification corpora. Our multilingual and multi-domain corpus analysis shows that the relevance of different features for text simplification is different per corpora, language, and domain. For example, the relevance of the lexical complexity is different across all languages, the BLEU score across all domains, and 14 features within the web domain corpora. Overall, the negative statistical tests regarding the other features across and within domains and languages lead to the assumption that text simplification models may be transferable between different domains or different languages.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "In text simplification and readability research, several features have been proposed to estimate or simplify a complex text, e.g., readability scores, sentence length, or proportion of POS tags. 
These features, however, are mainly developed for English. In this paper, we investigate their relevance for Czech, German, English, Spanish, and Italian text simplification corpora. Our multi-lingual and multi-domain corpus analysis shows that the relevance of different features for text simplification differs per corpus, language, and domain. For example, the relevance of the lexical complexity differs across all languages, that of the BLEU score across all domains, and that of 14 features within the web domain corpora. Overall, the non-significant statistical tests for the remaining features across and within domains and languages suggest that text simplification models may be transferable between different domains or different languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In research regarding readability and text simplification, several features are mentioned which identify easy-to-read sentences or help to transform complex into simplified texts. However, features such as readability metrics are highly criticized because they only consider surface characteristics, e.g., word and sentence length, ignore other relevant factors, such as infrequent words (Collins-Thompson, 2014) , and are optimized only for English. Therefore, Collins-Thompson (2014) proposes more sophisticated features, e.g., parse tree height or word frequency, which might be applicable to non-English languages too. Similar to the research in text readability, most text simplification research is concerned with English, with some exceptions, e.g., Italian (Brunato et al., 2016) or Czech (Baran\u010d\u00edkov\u00e1 and Bojar, 2019) , or multi-lingual approaches, e.g., Scarton et al. (2017) . 
Text simplification or readability measurement models with the same feature set for all corpora have been shown to perform well on cross-lingual (Scarton et al., 2017) , multi-lingual (Yimam et al., 2017) , and cross-domain (Gasperin et al., 2009) corpora. However, due to language or domain characteristics, distinct features, e.g., parse tree height, proportion of added lemmas, or usage of passive voice, might be more or less relevant during the simplification process and also during its evaluation. So far, it has not been investigated whether the relevance of distinct text simplification features differs across languages and domains. We therefore address the following research questions (RQ) in this paper:", "cite_spans": [ { "start": 386, "end": 410, "text": "(Collins-Thompson, 2014)", "ref_id": "BIBREF2" }, { "start": 460, "end": 483, "text": "Collins-Thompson (2014)", "ref_id": "BIBREF2" }, { "start": 763, "end": 785, "text": "(Brunato et al., 2016)", "ref_id": "BIBREF22" }, { "start": 795, "end": 824, "text": "(Baran\u010d\u00edkov\u00e1 and Bojar, 2019)", "ref_id": "BIBREF21" }, { "start": 862, "end": 883, "text": "Scarton et al. (2017)", "ref_id": "BIBREF14" }, { "start": 1030, "end": 1052, "text": "(Scarton et al., 2017)", "ref_id": "BIBREF14" }, { "start": 1069, "end": 1089, "text": "(Yimam et al., 2017)", "ref_id": "BIBREF19" }, { "start": 1109, "end": 1132, "text": "(Gasperin et al., 2009)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "1. Do complex texts and their simplified versions differ significantly regarding linguistic features? Can language-independent linguistic features at least partially explain the simplification process?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "2. 
Is the simplification process consistent between corpora across and within domains?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "3. Is the simplification process consistent between corpora within and across languages?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Concretely, we analyze the relevance of features named in readability and text simplification research on aligned sentence simplification pairs in five languages, i.e., Czech, German, English, Spanish, and Italian, and in three domains, i.e., web data, Wikipedia articles, and news articles. This automated multi-lingual text simplification corpus analysis is implemented based on the analysis proposed in Martin et al. (2018) . For re-use on other corpora, our code is available on GitHub 1 .", "cite_spans": [ { "start": 406, "end": 426, "text": "Martin et al. (2018)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The paper is structured as follows: Section 2 gives an overview of related work; Section 3 describes our methods for addressing the above-mentioned research questions, including corpora, features, and evaluation methods. Section 4 discusses our results, and Section 5 concludes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Several studies of text readability/simplification analyze or compare texts or sentence pairs with different complexity levels, e.g., Collins-Thompson (2014) or Kauchak et al. (2014) in English, Hancke et al. (2012) in German, Gasperin et al. (2009) or Aluisio et al. (2010) in Portuguese, Pil\u00e1n and Volodina (2018) in Swedish, and Scarton et al. (2017) in English, Italian, and Spanish. 
However, in contrast to the paper at hand, they focus either on building complexity level assessment models using and comparing grouped feature sets, or on the theoretical justification of these features (Collins-Thompson, 2014) , rather than on a comparison of the relevance and statistical significance of the distinct features (see RQ1). Most of the text-level features proposed in these studies, e.g., parse tree height, passive voice, length of verb phrases, are also considered in our work. Unfortunately, we could not include discourse-level features, e.g., coherence, idea density, or logical argumentation, because of the lack of alignments at that level.", "cite_spans": [ { "start": 134, "end": 157, "text": "Collins-Thompson (2014)", "ref_id": "BIBREF2" }, { "start": 161, "end": 182, "text": "Kauchak et al. (2014)", "ref_id": "BIBREF6" }, { "start": 195, "end": 215, "text": "Hancke et al. (2012)", "ref_id": "BIBREF5" }, { "start": 227, "end": 249, "text": "Gasperin et al. (2009)", "ref_id": "BIBREF4" }, { "start": 253, "end": 274, "text": "Aluisio et al. (2010)", "ref_id": "BIBREF0" }, { "start": 290, "end": 315, "text": "Pil\u00e1n and Volodina (2018)", "ref_id": "BIBREF13" }, { "start": 319, "end": 353, "text": "Scarton et al. (2017)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2." }, { "text": "In the context of text simplification, several related corpus studies exist either to analyze the quality of a new corpus, e.g., (Xu et al., 2015) or Scarton et al. (2018) , or to build an evaluation metric, e.g., Martin et al. (2018) . Martin et al. (2018) implemented several features regarding English text simplification and tested whether they correlate with human judgments in order to build an evaluation metric which does not require gold simplifications. 
Their work is the most similar to ours, but we analyze simplification features from another perspective: instead of comparing them with human judgments, we evaluate the features with respect to simplification level, language, and domain. The analysis proposed here is based on their implementation, but extends it with more features and enables the analysis of languages other than English. Gasperin et al. (2009) built a classifier that predicts whether a sentence needs to be split in the context of Portuguese text simplification. Their basic feature set, including, e.g., word length, sentence length, and number of clauses, achieved good results in the news-article domain (F-score of 73.40) and the science-article domain (72.50), but performed best cross-domain (77.68).", "cite_spans": [ { "start": 129, "end": 146, "text": "(Xu et al., 2015)", "ref_id": "BIBREF26" }, { "start": 150, "end": 171, "text": "Scarton et al. (2018)", "ref_id": "BIBREF15" }, { "start": 214, "end": 234, "text": "Martin et al. (2018)", "ref_id": "BIBREF9" }, { "start": 237, "end": 257, "text": "Martin et al. (2018)", "ref_id": "BIBREF9" }, { "start": 877, "end": 899, "text": "Gasperin et al. (2009)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2." }, { "text": "We use similar features but analyze them separately and evaluate them regarding other domains, i.e., web data and Wikipedia (see RQ2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2." }, { "text": "The topic of multi-lingual text simplification is also related to this paper. For complex word identification, a sub-task of text simplification, a data set in German, English, and Spanish exists (Yimam et al., 2017) . On this data set, Finnimore et al. (2019) tested language-independent features as to whether they generalize in a cross-lingual setting. 
Their ablation tests identified the number of syllables, number of tokens, ratio of punctuation, and word probability as the best-performing features. In contrast, Scarton et al. (2017) focus on syntactic multi-lingual simplification. They proposed a multi-lingual classifier for deciding whether a sentence needs to be simplified for English, Italian, and Spanish, using the same features for all languages. For each language, the system achieved an F1-score of roughly 61%. In our study, we investigate whether their findings also hold for both syntactic and lexical simplifications and not only one of them (see RQ3).", "cite_spans": [ { "start": 196, "end": 216, "text": "(Yimam et al., 2017)", "ref_id": "BIBREF19" }, { "start": 520, "end": 541, "text": "Scarton et al. (2017)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2." }, { "text": "In order to compare text simplification corpora in different languages and domains, we have chosen eight corpora in five languages and three domains (see Section 3.1). For the analysis, we use a total of 104 language-independent features (see Section 3.2). In order to analyze the relevance of the features per corpus, language, and domain, we conduct several statistical tests (see Section 3.3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3." }, { "text": "Most text simplification research focuses on English, but research in other languages also exists, e.g., Bulgarian, French, Danish, Japanese, Korean. However, due to limited access, now-defunct links, non-parallel versions, or a missing statement regarding availability, we focus on the following four non-English text simplification corpora:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.1." 
}, { "text": "\u2022 German (DE) web data corpus (Klaper et al., 2013) , \u2022 Spanish (ES) news corpus Newsela (Xu et al., 2015) 2 , 2 https://newsela.com/data/", "cite_spans": [ { "start": 30, "end": 51, "text": "(Klaper et al., 2013)", "ref_id": "BIBREF24" }, { "start": 89, "end": 106, "text": "(Xu et al., 2015)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.1." }, { "text": "\u2022 Czech (CS) newspaper corpus COSTRA (Baran\u010d\u00edkov\u00e1 and Bojar, 2019) 3 , and", "cite_spans": [ { "start": 37, "end": 66, "text": "(Baran\u010d\u00edkov\u00e1 and Bojar, 2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.1." }, { "text": "\u2022 Italian (IT) web data corpus PaCCSS (Brunato et al., 2016) 4 .", "cite_spans": [ { "start": 38, "end": 60, "text": "(Brunato et al., 2016)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.1." }, { "text": "In contrast, several freely available corpora for English text simplification exist. We decided to use the following four:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.1." }, { "text": "\u2022 TurkCorpus (Xu et al., 2016) 5 ,", "cite_spans": [ { "start": 13, "end": 30, "text": "(Xu et al., 2016)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.1." }, { "text": "\u2022 QATS corpus (\u0160tajner et al., 2016) 6 , and", "cite_spans": [ { "start": 37, "end": 38, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.1." }, { "text": "\u2022 two current used versions of the Newsela corpus (Xu et al., 2015) 7 .", "cite_spans": [ { "start": 50, "end": 67, "text": "(Xu et al., 2015)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.1." 
}, { "text": "The first version of Newsela (2015-03-02) (Xu et al., 2015) is already sentence-wise aligned whereas the second version (2016-01-29) is not aligned. Therefore, the alignment is computed on all adjacent simplification levels (e.g., 0-1, 1-2, .., 4-5) with the alignment algorithm MASSAlign proposed in Paetzold et al. 20178 using a similarity value \u03b1 of 0.2 for the paragraph as well as for the sentence aligner. In addition to the language variation, the corpora chosen for this purpose differ in their domains, i.e., newspaper articles, web data, and Wikipedia data. An overview, including the license, domain, size, and alignment type of the corpora, is provided in Table 1 . As illustrated in Table 1 , the corpora largely differ in their size of pairs (CS-Costra: 293, EN-Newsela-15: 141,582) as well as in the distribution of simplification transformations (see Table 1 ), e.g., 15% of only syntactic simplifications in EN-QATS but only 0.03% in EN-Newsela-15.", "cite_spans": [ { "start": 42, "end": 59, "text": "(Xu et al., 2015)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 668, "end": 675, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 696, "end": 703, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 867, "end": 874, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Data", "sec_num": "3.1." }, { "text": "For the analysis, overall, 104 language-independent features are measured per corpus, domain, or language. 43 features, further called single features, are measured per item in the complex-simplified pair. For the domain and language comparison, the difference of each of the same 43 features between the complex and simplified text is measured, further called difference features. The remaining 18 features, paired features, describe respectively one feature per complex-simplified pair. The implementation of the features is in Python 3 and is based on the code provided by Martin et al. (2018) . 
In contrast to them, we offer SpaCy 9 and Stanza 10 instead of NLTK for pre-processing. In comparison to SpaCy, Stanza is slower but has a higher accuracy and supports more languages. In the following, the results using SpaCy are presented. The pre-processing with SpaCy includes sentence splitting, tokenization, lemmatization, POS tagging, dependency parsing, named entity recognition, and generating word embeddings. The SpaCy word embeddings are replaced in this study by pre-trained FastText word embeddings (Grave et al., 2018) to achieve a higher quality 11 . Unless otherwise stated, these data are used to measure the features.", "cite_spans": [ { "start": 576, "end": 596, "text": "Martin et al. (2018)", "ref_id": "BIBREF9" }, { "start": 1133, "end": 1153, "text": "(Grave et al., 2018)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "3.2." }, { "text": "The single features are grouped into proportion of part-of-speech (POS) tags, proportion of clauses & phrases, length of phrases, syntactic, lexical, word frequency, word length, sentence length, and readability features. An overview is provided in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 251, "end": 258, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Single Features", "sec_num": "3.2.1." }, { "text": "Proportion of POS Tags Features. Gasperin et al. (2009) and Kauchak et al. (2014) name the proportion of POS tags per sentence as a relevant feature for text simplification. According to Kercher (2013), a higher proportion of verbs in German indicates, for instance, a simpler text because it might be more colloquial. POS tag counts are normalized by dividing them by the number of tokens per text, as in Kauchak et al. (2014) . A list of all used POS tag features is provided in Table 2 .", "cite_spans": [ { "start": 33, "end": 55, "text": "Gasperin et al. 
(2009)", "ref_id": "BIBREF4" }, { "start": 60, "end": 81, "text": "Kauchak et al. (2014)", "ref_id": "BIBREF6" }, { "start": 404, "end": 425, "text": "Kauchak et al. (2014)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 480, "end": 487, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Single Features", "sec_num": "3.2.1." }, { "text": "Proportion of Clauses and Phrases Features. Gasperin et al. (2009) and recommend using the proportion of clauses and phrases. The clauses and phrases extend and complex a sentence, so they are often split (Gasperin et al., 2009) . The proportion of the clauses and phrases is measured using the dependency tree of the texts and differentiated, as shown in Table 2 .", "cite_spans": [ { "start": 44, "end": 66, "text": "Gasperin et al. (2009)", "ref_id": "BIBREF4" }, { "start": 205, "end": 228, "text": "(Gasperin et al., 2009)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 356, "end": 363, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Single Features", "sec_num": "3.2.1." }, { "text": "Length of Phrases Features. In a study regarding sentence splitting prediction (Gasperin et al., 2009) , the length of noun, verb, and prepositional phrases are used as features because the longer a phrase, the more complex the sentence and the higher the amount of processing. Syntactic Features. We use six syntactic features, computed based on the SpaCy dependency trees and POS tags. Inspired by Niklaus et al. (2019) , we measure whether the head of the text is a verb (Feature 1). If the text contains more than one sentence, at least one root must be a verb. Following Universal Dependencies 12 , a verb is most likely to be the head of a sentence in several languages. So, sentences whose heads are not verbs might be ungrammatical or hard to read due to their uncommon structure. Therefore, the feature of whether the head of the sentence is a noun is added (2). Niklaus et al. 
(2019) also state that a sentence is more likely to be ungrammatical and, hence, more difficult to read if no child of the root is a subject (3). According to Collins-Thompson (2014), a sentence with a higher parse tree is more difficult to read; we therefore add the parse tree height as a feature (4). Feature (5) indicates whether the parse tree is projective; a parse is non-projective if dependency arcs cross each other or, put differently, if the yield of a subtree is discontinuous in the sentence. In some languages, e.g., German and Czech, non-projective dependency trees are rather frequent, but we hypothesize that they decrease readability. Gasperin et al. (2009) suggest passive voice (6) as a further feature because text simplification often includes transforming passive to active voice, as recommended in easy-to-read text guidelines, since this can make the agent of the sentence clearer. Due to different dependency label sets in SpaCy for some languages, this feature is only implemented for German and English.", "cite_spans": [ { "start": 79, "end": 102, "text": "(Gasperin et al., 2009)", "ref_id": "BIBREF4" }, { "start": 400, "end": 421, "text": "Niklaus et al. (2019)", "ref_id": "BIBREF11" }, { "start": 872, "end": 893, "text": "Niklaus et al. (2019)", "ref_id": "BIBREF11" }, { "start": 1536, "end": 1558, "text": "Gasperin et al. (2009)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Single Features", "sec_num": "3.2.1." }, { "text": "Lexical Features. Further, six features are grouped into lexical features. The lexical complexity (Feature 1) might be a relevant feature because a word might be more familiar to a reader the more often it occurs in texts. In order to measure the lexical complexity of the input text, the third quartile of the log-ranks of each token in the frequency table is used (Alva-Manchego et al., 2019). 
The lexical density, i.e., the type-token ratio (2), is calculated using the ratio of lexical items to the total number of words in the input text (Martin et al., 2018; Collins-Thompson, 2014; Hancke et al., 2012; Scarton et al., 2018) . It is assumed that a more complex text has a larger vocabulary than a simplified text (Collins-Thompson, 2014).", "cite_spans": [ { "start": 533, "end": 554, "text": "(Martin et al., 2018;", "ref_id": "BIBREF9" }, { "start": 555, "end": 578, "text": "Collins-Thompson, 2014;", "ref_id": "BIBREF2" }, { "start": 579, "end": 599, "text": "Hancke et al., 2012;", "ref_id": "BIBREF5" }, { "start": 600, "end": 621, "text": "Scarton et al., 2018)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Single Features", "sec_num": "3.2.1." }, { "text": "Following Collins-Thompson (2014) , the proportion of function words is a relevant feature for readability and text simplification. In this study, function words (3) are defined using the universal dependency labels \"aux\", \"cop\", \"mark\" and \"case\". Additionally, we added the proportion of multi-word expressions (MWE, 4) using the dependency labels \"flat\", \"fixed\", and \"compound\" because it might be difficult for non-native speakers to identify and understand the separated components of an MWE, especially when considering long dependencies between its components. The ratio of referential expressions (5) is also added based on POS tags and dependency labels. The more referential expressions, the more difficult the text, because the reader has to connect previous or following tokens of the same or even another sentence. Lastly, the ratio of named entities (6) is examined because named entities might be difficult to understand for non-natives or non-experts of the topic.", "cite_spans": [ { "start": 10, "end": 33, "text": "Collins-Thompson (2014)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Single Features", "sec_num": "3.2.1." 
}, { "text": "Word Frequency Features. As another indication for lexical simplification, the word frequency can be used (Martin et al., 2018; Collins-Thompson, 2014) . Complex words are often infrequent, so word frequency features may help to identify difficult sentences. The frequency of the words is based on the ranks in the FastText Embeddings (Grave et al., 2018) . The average position of all tokens in the frequency table is measured as well as the position of the most infrequent word. ", "cite_spans": [ { "start": 106, "end": 127, "text": "(Martin et al., 2018;", "ref_id": "BIBREF9" }, { "start": 128, "end": 151, "text": "Collins-Thompson, 2014)", "ref_id": "BIBREF2" }, { "start": 335, "end": 355, "text": "(Grave et al., 2018)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Single Features", "sec_num": "3.2.1." }, { "text": "The paired features (see Table 3 ) are grouped into lexical, syntactic, simplification, word embeddings, and machine translation features.", "cite_spans": [], "ref_spans": [ { "start": 25, "end": 32, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Paired Features", "sec_num": "3.2.2." }, { "text": "Lexical Features. Inspired by Martin et al. (2018) and Alva-Manchego et al. (2019) , the following proportions relative to the simplified or complex texts are included as lexical features:", "cite_spans": [ { "start": 30, "end": 50, "text": "Martin et al. (2018)", "ref_id": "BIBREF9" }, { "start": 55, "end": 82, "text": "Alva-Manchego et al. (2019)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Paired Features", "sec_num": "3.2.2." }, { "text": "\u2022 Added Lemmas: Additional words can make the simplified sentence more precise and comprehensible by enriching it with, e.g., decorative adjectives or term definitions. \u2022 Deleted Lemmas: Deleting complex words might contribute to ease of readability. 
\u2022 Kept Lemmas: Keeping words, on the other hand, might contribute to preserving the meaning of the text (but also its complexity). Kept lemmas describe the words which occur in both texts but might be differently inflected. \u2022 Kept Words: Kept words are a subset of kept lemmas; they describe the proportion of words which occur in exactly the same inflection in both texts. \u2022 Rewritten Words: Words which are differently inflected in the simplified text, compared to the complex one, but have the same lemma are called rewritten words. Assuming that complex words are rewritten, a higher proportion of rewritten words represents a more simplified text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paired Features", "sec_num": "3.2.2." }, { "text": "The compression ratio is similar to the Levenshtein distance and measures how many characters are left in the simplified text compared to the complex text. The Levenshtein similarity measures the difference between complex and simplified texts in terms of insertions, substitutions, or deletions of characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paired Features", "sec_num": "3.2.2." }, { "text": "Syntactic Features. The features for split and joined sentences are based on Gasperin et al. (2009) ; both indicate an applied simplification transformation. The sentence is counted as split if the number of sentences of the complex text is lower than that of the simplified text. The sentence is counted as joined if the number of sentences of the complex text is higher than that of the simplified text.", "cite_spans": [ { "start": 88, "end": 110, "text": "Gasperin et al. (2009)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Paired Features", "sec_num": "3.2.2." }, { "text": "Simplification Features. In order to capture further simplification transformations, we measure lexical changes, syntactic changes, and no change. 
A complex-simplified pair is considered as a lexical simplification if tokens are added or rewritten in the simplified text. A complex-simplified pair is considered as a syntactic simplification if the text is split or joined. Also, a change from non-projective to projective, a change from passive to active voice, and a reduction of the parse tree height are considered as syntactic simplifications. A complex-simplified pair is considered as identical if both texts are the same, so no simplification has been applied. As each pair is analyzed in isolation, the standard text simplification evaluation metric SARI (Xu et al., 2016) , which requires several gold references, cannot be considered in the analysis.", "cite_spans": [ { "start": 718, "end": 735, "text": "(Xu et al., 2016)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Paired Features", "sec_num": "3.2.2." }, { "text": "Word Embedding Features. The similarity between the complex and the simplified text (Martin et al., 2018) is measured using pre-trained FastText embeddings (Grave et al., 2018) . We consider the cosine similarity as well as the dot product (Martin et al., 2018) . The higher the value, the more similar the sentences, the more the meaning might be preserved, and the higher the simplification quality might be. Machine Translation (MT) Features. Lastly, three MT features are added to the feature set, i.e., BLEU, ROUGE-L, and METEOR. As text simplification is a monolingual machine translation task, evaluation metrics from MT, in particular the BLEU score, are often used in text simplification. Similar to the word embedding features, the higher the value, the more meaning of the complex text is preserved in the simplified text. The BLEU score is a well-established measurement for MT based on n-grams. We use 12 different BLEU implementations, 8 from the Python package NLTK and 4 implemented in Sharma et al. 
(2017) .", "cite_spans": [ { "start": 84, "end": 105, "text": "(Martin et al., 2018)", "ref_id": "BIBREF9" }, { "start": 156, "end": 176, "text": "(Grave et al., 2018)", "ref_id": "BIBREF23" }, { "start": 235, "end": 256, "text": "(Martin et al., 2018)", "ref_id": "BIBREF9" }, { "start": 995, "end": 1015, "text": "Sharma et al. (2017)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Paired Features", "sec_num": "3.2.2." }, { "text": "The research questions stated in Section 1 are answered with non-parametric statistical tests on the previously described features for the eight corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.3." }, { "text": "In order to answer the first research question regarding differences between the simplified and the complex text, the complexity level is the independent variable (0: complex, 1: simple). The features previously named are the dependent variables, and the values per complex-simple pair are the samples. To evaluate whether the feature values differ between the simplified and complex texts, we use non-parametric statistical hypothesis tests for dependent samples, i.e., Wilcoxon signed-rank tests. Afterwards, we measure the effect size r, where r>=0.4 represents a strong effect, 0.25<=r<0.4 a moderate effect, and 0.1<=r<0.25 a low effect. For the analysis of research questions 2 and 3 regarding differences between the corpora regarding domains or languages, Kruskal-Wallis one-way analyses of variance are conducted. Here, the independent variables are the languages or domains, and the dependent variables are the paired and difference features. For the analysis within domains and languages, the tests are evaluated against all corpora of one domain or language, e.g., for Wikipedia data the values of EN-QATS and EN-TurkCorpus are analyzed. For the analysis within and across languages and domains, the tests are evaluated against stacked corpora. 
All corpora assigned to the same language or domain are stacked into one large corpus, e.g., the German corpus and IT-PaCCSS are stacked into a web data corpus and are tested against the stacked Wikipedia corpus and the stacked news article corpus. If there is a significant difference between the groups, a Dunn-Bonferroni post-hoc test is applied to identify which pair(s) differ. Afterwards, the effect size is again measured using the same interpretation levels as for the Wilcoxon signed-rank tests.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.3." }, { "text": "The results of the analysis are reported on eight corpora, five languages, three domains, and 104 features using Wilcoxon signed-rank tests and Kruskal-Wallis tests 13 . 13 All statistical characteristics are provided as supplementary material in the linked GitHub repository.", "cite_spans": [ { "start": 170, "end": 172, "text": "13", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4." }, { "text": "These results should be handled with caution because they might be biased due to errors in SpaCy's output, e.g., regarding dependency parsing and named entity recognition, or due to the unbalanced corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4." }, { "text": "The results concerning the question of whether the feature values of complex texts and their simplified versions differ significantly are summarized in Table 2 . For all three sentence length features, both readability features, and the parse tree height feature, Wilcoxon signed-rank tests indicate at least low but significant effects between the complex and simplified text pairs across all corpora when each corpus is analyzed individually.
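The per-corpus test behind these RQ1 results can be sketched as follows. This is a pure-Python normal-approximation version for illustration; in practice one would use scipy.stats.wilcoxon, and the effect-size formula r = |z|/sqrt(n) is an assumption consistent with the interpretation levels given above (r>=0.4 strong, 0.25<=r<0.4 moderate, 0.1<=r<0.25 low).

```python
import math

def wilcoxon_effect(complex_vals, simple_vals):
    """Wilcoxon signed-rank test (normal approximation, zero differences
    dropped, average ranks for ties) plus the effect size r = |z|/sqrt(n)."""
    diffs = [s - c for c, s in zip(complex_vals, simple_vals) if s != c]
    n = len(diffs)
    # assign average ranks to |diff|, handling ties
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    r = abs(z) / math.sqrt(n)
    label = ("strong" if r >= 0.4 else "moderate" if r >= 0.25 else
             "low" if r >= 0.1 else "negligible")
    return z, r, label

# hypothetical sentence lengths (words per sentence) for ten pairs
complex_len = [18, 25, 31, 22, 40, 27, 35, 19, 28, 33]
simple_len = [15, 20, 23, 20, 30, 23, 29, 18, 23, 26]
z, r, label = wilcoxon_effect(complex_len, simple_len)
print(label)  # consistently shorter simplified sentences give a strong effect
```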
The result is not surprising since sentence length has already been shown to be a relevant feature in different languages, e.g., in English (Napoles and Dredze, 2010; Martin et al., 2018), in German (Hancke et al., 2012), and in Portuguese (Aluisio et al., 2010). The parse tree height also differs significantly between the complex and simplified texts for all corpora. Pil\u00e1n and Volodina (2018) and Napoles and Dredze (2010) also conclude in their studies on Swedish and English that the parse tree height is a relevant complexity measurement feature.", "cite_spans": [ { "start": 598, "end": 617, "text": "Martin et al. (2018", "ref_id": "BIBREF9" }, { "start": 618, "end": 650, "text": "), in German Hancke et al. (2012", "ref_id": null }, { "start": 671, "end": 692, "text": "Aluisio et al. (2010)", "ref_id": "BIBREF0" }, { "start": 797, "end": 822, "text": "Pil\u00e1n and Volodina (2018)", "ref_id": "BIBREF13" }, { "start": 827, "end": 852, "text": "Napoles and Dredze (2010)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 146, "end": 153, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Differences between Complex and Simplified Texts (RQ1)", "sec_num": "4.1." }, { "text": "Considering differences between the proportion of verbs in complex and simplified texts, the Wilcoxon signed-rank tests indicate at least low but significant effects for each corpus except EN-QATS. So, the assumption of Kercher (2013) that a higher number of verbs simplifies a text can be generalized to languages other than German. In contrast, several features are only relevant for a few corpora and differ even more in their effect sizes. For example, Wilcoxon signed-rank tests indicate a strong significant effect for the lexical density in EN-Newsela-2015 (M comp =0.89\u00b10.08, M simp =0.93\u00b10.07, n=141,582, t(141,581)=1329762920.5, p <=.01) but only indicate at most moderate effects on three other corpora and no effect on the remaining four corpora.
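Parse tree height, which proves relevant for all corpora here, can be read directly off a dependency parse. A minimal sketch follows; the head-index encoding (each token stores the index of its head, with -1 marking the root) is an assumption for illustration, whereas the paper derives parses from spaCy, where the tree is traversed via token.head and token.children.

```python
def parse_tree_height(heads):
    """Height of a dependency tree: the longest token-to-root path,
    counted in nodes.  heads[i] is the index of token i's head; -1 is root."""
    def depth(i, seen=()):
        if heads[i] == -1 or i in seen:  # root reached (or cycle guard)
            return 1
        return 1 + depth(heads[i], seen + (i,))
    return max(depth(i) for i in range(len(heads)))

# hypothetical parse of "the report was shortened considerably":
# the->report, report->shortened, was->shortened,
# shortened->ROOT, considerably->shortened
heads = [1, 3, 3, -1, 3]
print(parse_tree_height(heads))  # 3: the -> report -> shortened (root)
```

A simplified sentence with a flatter structure would yield a smaller value, which is the direction of the effect reported above.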
Furthermore, for several features, the Wilcoxon signed-rank tests indicate no significant difference for even a single corpus, e.g., non-projectivity, proportion of symbols, or proportion of named entities (see Table 2 ). Overall, the results show that some of the proposed features help to explain the simplification processes in the selected corpora, even if the features might not be sufficient to fully explain the simplification process. In the next subsections, we follow up on these assumptions by comparing the consistency of the simplification process regarding domains and languages.", "cite_spans": [ { "start": 220, "end": 234, "text": "Kercher (2013)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 967, "end": 974, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Differences between Complex and Simplified Texts (RQ1)", "sec_num": "4.1." }, { "text": "Since the selected features are useful to explain the simplification process, the consistency of or differences in the simplification process are measured using the difference version of these features as well as the paired features. The results regarding domains are separated into differences within and across domains. Table 2 : The single and difference features are presented sorted by groups. In the 2nd, 5th, 8th and 11th columns, differences between the complex-simplified pairs are listed: \u2665 symbolizes differences in the pairs per corpus (RQ1), \u2663 in the pairs within domains, \u2660 in the pairs across domains, \u2666 in the pairs within languages, and \u25a0 in the pairs across languages. In the 3rd, 6th, 9th and 12th columns, the differences between the languages and the domains are shown in across and within settings using the same symbols. The color of the symbols indicates the distribution of the effects: black illustrates an effect for all languages or domains, gray for most of them, and light gray/white for only a few.
Table 3 : The paired features are presented sorted by their group label. The significant effects per features are highlighted using the following symbols per research question: The \u2663 symbol represents within domain results, \u2660 across domains, \u2666 within languages, and \u25a0 across languages. Black illustrates an effect for all languages or domains, gray for most of them and white for only a few.", "cite_spans": [], "ref_spans": [ { "start": 319, "end": 326, "text": "Table 2", "ref_id": null }, { "start": 1023, "end": 1030, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Domain Simplification Consistency (RQ2)", "sec_num": "4.2." }, { "text": "Within Domains. When the features are analyzed regarding the consistency within a domain, significant differences are indicated only between the corpora of the web text domain. The German and Italian corpora of this domain differ significantly with a low effect for 14 features (see Table 2 and Table 3 ), e.g., parse tree height difference, difference of non-projectivity, characters per word, and BLEU score. The parse tree height is significantly more reduced in German (Difference:", "cite_spans": [], "ref_spans": [ { "start": 283, "end": 290, "text": "Table 2", "ref_id": null }, { "start": 295, "end": 302, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Domain Simplification Consistency (RQ2)", "sec_num": "4.2." }, { "text": "M DE =1.16\u00b11.96, N DE =1,888) than in Italian (M IT =0.13\u00b10.64, N IT =63,012, H(1)=759", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Simplification Consistency (RQ2)", "sec_num": "4.2." }, { "text": ".71, p <=.01, r=.11) which might be due to a higher average parse tree height in the German corpus (M comp =4.74\u00b12.41, M simple =3.58\u00b11.35) than in the Italian corpus (M comp =3.14\u00b10.97, M simp =3.02\u00b10.94). 
Parse tree height and sentence length are reduced in the simplified texts of both corpora, but, surprisingly, the average word length in characters slightly increases in Italian (M comp =4.62\u00b10.91, M simp =4.64\u00b10.89). This effect might explain the significant difference between both corpora and should be considered in subsequent analyses. Overall, the differences between the two web data corpora may be tied to the high proportion of purely lexical simplification in the IT corpus and the high proportion of lexical and syntactic simplification in the DE corpus. The other corpora within one domain are more similar in their distributions, which may explain why they do not differ significantly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Simplification Consistency (RQ2)", "sec_num": "4.2." }, { "text": "Across Domains. The only significant difference across all domains is the BLEU score (H(2)=1429.0979, p <=.01, r=.12). A Dunn-Bonferroni post-hoc test indicates that the web (M =0.61\u00b10.15, N =64,900) and Wikipedia data (M =0.67\u00b10.22, N =19,377) differ. This confirms the findings of Sulem et al. (2018) that BLEU is not suitable for measuring text simplification. Furthermore, the domains also differ in further features, even if not significantly between all of them. Some features show a significant difference between complex and simplified texts in only one of the domains. In contrast, some features are relevant for text simplification in all domains, i.e., characters per sentence, syllables per sentence, words per sentence, parse tree height, proportions of auxiliary verbs and of verbs, FKGL, and FRE.
Overall, given that BLEU is the only feature differing significantly across domains, these results show that the simplification process seems to be consistent across the web, Wikipedia, and news article domains.", "cite_spans": [ { "start": 219, "end": 244, "text": "(M =0.67\u00b10.22, N =19,377)", "ref_id": null }, { "start": 290, "end": 309, "text": "Sulem et al. (2018)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Domain Simplification Consistency (RQ2)", "sec_num": "4.2." }, { "text": "The results of the differences in the simplification process regarding languages are separated into differences within and across languages. Within Languages. The comparison within a single language is done only for English because this is the only language for which we have more than one corpus. All English corpora 14 are combined into a large corpus of 230,144 complex-simplified pairs. Using a Kruskal-Wallis test, no significant difference is indicated between the English corpora, which leads to the conclusion that the simplification process, measured using several linguistic features, is consistent in these corpora. However, this must be handled with particular caution because the corpus sizes are unbalanced and, furthermore, the simplification processes applied have different focuses, varying between lexical and syntactic simplification, e.g., EN-QATS has 15.45% of syntactically simplified text pairs whereas EN-Newsela-15 has only 0.03% (see Table 1 ). Across Languages. The only significant difference between all languages is the lexical complexity difference (H(4)=425.1521, p <=.01, r=.12). A Dunn-Bonferroni post-hoc test indicates that only the German (M =0.33\u00b11.07) and the Czech corpus (M =-0.09\u00b11.19) differ significantly. Surprisingly, the lexical complexity seems to increase in Czech during simplification.
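The Kruskal-Wallis comparisons reported for RQ2 and RQ3 can be sketched as follows. This is an illustrative pure-Python version of the H statistic over ranked feature values; the usual tie correction is omitted for brevity, and in practice scipy.stats.kruskal plus a Dunn-Bonferroni post hoc (e.g., from the scikit-posthocs package) would be the tooling of choice.

```python
def kruskal_h(*groups):
    """Kruskal-Wallis H over k groups of feature values, with average
    ranks for ties (tie correction omitted as a simplification)."""
    pooled = [(v, g) for g, vals in enumerate(groups) for v in vals]
    pooled.sort(key=lambda t: t[0])
    n = len(pooled)
    # average ranks over runs of equal values
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        for k in range(i, j + 1):
            ranks[k] = (i + j) / 2 + 1  # ranks are 1-based
        i = j + 1
    rank_sums = [0.0] * len(groups)
    for (v, g), r in zip(pooled, ranks):
        rank_sums[g] += r
    return 12 / (n * (n + 1)) * sum(
        rs * rs / len(grp) for rs, grp in zip(rank_sums, groups)
    ) - 3 * (n + 1)

# hypothetical per-language BLEU values: identical groups give H = 0,
# well-separated groups give a large H
print(round(kruskal_h([0.61, 0.63, 0.60], [0.61, 0.63, 0.60]), 6))
```

A significant H only says that at least one group differs; the post-hoc step described above is what pins down which pair(s).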
On the one hand, Wilcoxon signed-rank tests also indicate some features with a significant difference between complex and simplified texts in the language-wise data for only one or two languages:", "cite_spans": [], "ref_spans": [ { "start": 958, "end": 965, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Language Simplification Consistency (RQ3)", "sec_num": "4.3." }, { "text": "\u2022 DE: lexical complexity (r=.31, p <=.01),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Simplification Consistency (RQ3)", "sec_num": "4.3." }, { "text": "\u2022 IT: proportion of function words (r=.32, p <=.01), proportion of numerals (r=.19, p <=.01),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Simplification Consistency (RQ3)", "sec_num": "4.3." }, { "text": "\u2022 DE and IT: proportion of pronouns (r DE =.31, r IT =.31, p <=.01),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Simplification Consistency (RQ3)", "sec_num": "4.3." }, { "text": "\u2022 EN and CS: proportion of relative phrases (r CS =.12, r EN =.15, p <=.01).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Simplification Consistency (RQ3)", "sec_num": "4.3." }, { "text": "On the other hand, the simplification processes of all languages are similar regarding the following 9 features: characters per sentence, syllables per sentence, words per sentence, parse tree height, proportion of adpositions, proportion of verbs, proportion of prepositional phrases, FKGL, and FRE. Given these results, together with lexical complexity being the sole difference regarding languages, the simplification process seems to be more or less consistent across Czech, German, English, Spanish, and Italian.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Simplification Consistency (RQ3)", "sec_num": "4.3."
}, { "text": "14 From EN-Newsela-2016 only level 0 to 1 is used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Simplification Consistency (RQ3)", "sec_num": "4.3." }, { "text": "This study investigated whether text simplification processes differ within or across five languages (Czech, German, English, Italian, and Spanish) and three domains (newspaper articles, web texts, and Wikipedia texts). To this end, we first tested linguistic features as to their relevance for characterizing the differences in complexsimplified text pairs of eight corpora. Statistical tests indicate significant differences for some of the features, e.g., sentence length, parse tree height, or proportion of verbs. So, these features are used to measure the simplification process in this study. However, the selected features might well not be sufficient to explain the whole simplification process. Other features, such as morphological or grammatical features could improve it in future work. Furthermore, our study shows differences in the relevance of features per corpus. This insight was further refined regarding differences within and across domains. For the newspaper and Wikipedia corpora, no differences were found within each of the two domains, the statistical tests indicated only differences for the web corpora. These results as well as the finding of only one differing feature across domains, led to the assumption that the simplification process is consistent across and within domains, such as similarly stated in Vajjala and Meurers (2014) . Our study regarding within and across language comparisons also supports the results of Scarton et al. (2017) and Finnimore et al. (2019) : text simplification seems to be consistent across languages, which indicates that cross-lingual text simplification based on a single languageindependent feature set is a viable approach. Nevertheless, features might be weighted differently per language. 
Overall, the negative statistical tests regarding differences across and within domains and languages lead to the assumption that the simplification process is robust across and within domains and languages. In particular, the parse tree height, readability, and sentence length features seem to be robust across domains and languages. In contrast, when evaluating and designing text simplification models, features such as lexical complexity and the BLEU score should be used with caution due to the differences found in the corpora. These findings might help to build a text simplification model or a text simplification metric that is aware of language or domain characteristics.", "cite_spans": [ { "start": 1339, "end": 1365, "text": "Vajjala and Meurers (2014)", "ref_id": "BIBREF18" }, { "start": 1456, "end": 1477, "text": "Scarton et al. (2017)", "ref_id": "BIBREF14" }, { "start": 1482, "end": 1505, "text": "Finnimore et al. (2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Works", "sec_num": "5." }, { "text": "https://github.com/rstodden/TS_corpora_analysis", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This has the disadvantage that the corpus analysis proposed here is only available for languages supported by SpaCy and FastText.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://universaldependencies.org/docs/en/dep/root.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research is part of the PhD-program \"Online Participation\", supported by the North Rhine-Westphalian funding scheme \"Forschungskolleg\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": "6."
} ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Readability assessment for text simplification", "authors": [ { "first": "S", "middle": [], "last": "Aluisio", "suffix": "" }, { "first": "L", "middle": [], "last": "Specia", "suffix": "" }, { "first": "C", "middle": [], "last": "Gasperin", "suffix": "" }, { "first": "C", "middle": [], "last": "Scarton", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the NAACL HLT 2010 5th Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aluisio, S., Specia, L., Gasperin, C., and Scarton, C. (2010). Readability assessment for text simplification. In Proceedings of the NAACL HLT 2010 5th Workshop on Innovative Use of NLP for Building Educational Ap- plications, pages 1-9, Los Angeles, California, June. ACL.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "EASSE: Easier automatic sentence simplification evaluation", "authors": [ { "first": "F", "middle": [], "last": "Alva-Manchego", "suffix": "" }, { "first": "L", "middle": [], "last": "Martin", "suffix": "" }, { "first": "C", "middle": [], "last": "Scarton", "suffix": "" }, { "first": "L", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on EMNLP and the 9th IJCNLP", "volume": "", "issue": "", "pages": "49--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alva-Manchego, F., Martin, L., Scarton, C., and Specia, L. (2019). EASSE: Easier automatic sentence simplifica- tion evaluation. In Proceedings of the 2019 Conference on EMNLP and the 9th IJCNLP, pages 49-54, Hong Kong, China, November. 
ACL.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Computational assessment of text readability: A survey of current and future research", "authors": [ { "first": "K", "middle": [], "last": "Collins-Thompson", "suffix": "" } ], "year": 2014, "venue": "ITL -International Journal of Applied Linguistics", "volume": "165", "issue": "2", "pages": "97--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collins-Thompson, K. (2014). Computational assessment of text readability: A survey of current and future re- search. ITL -International Journal of Applied Linguis- tics, 165(2):97-135.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Strong baselines for complex word identification across multiple languages", "authors": [ { "first": "P", "middle": [], "last": "Finnimore", "suffix": "" }, { "first": "E", "middle": [], "last": "Fritzsch", "suffix": "" }, { "first": "D", "middle": [], "last": "King", "suffix": "" }, { "first": "A", "middle": [], "last": "Sneyd", "suffix": "" }, { "first": "A", "middle": [], "last": "Ur Rehman", "suffix": "" }, { "first": "F", "middle": [], "last": "Alva-Manchego", "suffix": "" }, { "first": "A", "middle": [], "last": "Vlachos", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the NAACL HTL 2019", "volume": "", "issue": "", "pages": "970--977", "other_ids": {}, "num": null, "urls": [], "raw_text": "Finnimore, P., Fritzsch, E., King, D., Sneyd, A., Ur Rehman, A., Alva-Manchego, F., and Vlachos, A. (2019). Strong baselines for complex word identifica- tion across multiple languages. In Proceedings of the NAACL HTL 2019, pages 970-977, Minneapolis, Min- nesota, June. 
ACL.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Learning when to simplify sentences for natural text simplification", "authors": [ { "first": "C", "middle": [], "last": "Gasperin", "suffix": "" }, { "first": "L", "middle": [], "last": "Specia", "suffix": "" }, { "first": "T", "middle": [ "F" ], "last": "Pereira", "suffix": "" }, { "first": "Aluisio", "middle": [], "last": "", "suffix": "" }, { "first": "R", "middle": [ "M" ], "last": "", "suffix": "" } ], "year": 2009, "venue": "Proceedings of ENIA", "volume": "", "issue": "", "pages": "809--818", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gasperin, C., Specia, L., Pereira, T. F., and Aluisio, R. M. (2009). Learning when to simplify sentences for natural text simplification. In Proceedings of ENIA, pages 809- 818.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Readability classification for German using lexical, syntactic, and morphological features", "authors": [ { "first": "J", "middle": [], "last": "Hancke", "suffix": "" }, { "first": "S", "middle": [], "last": "Vajjala", "suffix": "" }, { "first": "D", "middle": [], "last": "Meurers", "suffix": "" } ], "year": 2012, "venue": "The COLING 2012 Organizing Committee", "volume": "", "issue": "", "pages": "1063--1080", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hancke, J., Vajjala, S., and Meurers, D. (2012). Read- ability classification for German using lexical, syntactic, and morphological features. In Proceedings of COLING 2012, pages 1063-1080, Mumbai, India, December. 
The COLING 2012 Organizing Committee.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Text simplification tools: Using machine learning to discover features that identify difficult text", "authors": [ { "first": "D", "middle": [], "last": "Kauchak", "suffix": "" }, { "first": "O", "middle": [], "last": "Mouradi", "suffix": "" }, { "first": "C", "middle": [], "last": "Pentoney", "suffix": "" }, { "first": "G", "middle": [], "last": "Leroy", "suffix": "" } ], "year": 2014, "venue": "47th Hawaii International Conference on System Sciences", "volume": "", "issue": "", "pages": "2616--2625", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kauchak, D., Mouradi, O., Pentoney, C., and Leroy, G. (2014). Text simplification tools: Using machine learn- ing to discover features that identify difficult text. In 2014 47th Hawaii International Conference on System Sciences, pages 2616-2625, Jan.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Verstehen und Verst\u00e4ndlichkeit von Politikersprache", "authors": [ { "first": "J", "middle": [], "last": "Kercher", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kercher, J. (2013). Verstehen und Verst\u00e4ndlichkeit von Politikersprache. Springer Fachmedien Wiesbaden.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel", "authors": [ { "first": "J", "middle": [ "P" ], "last": "Kincaid", "suffix": "" }, { "first": "R", "middle": [ "P" ], "last": "Fishburne", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Rogers", "suffix": "" }, { "first": "B", "middle": [ "S" ], "last": "Chissom", "suffix": "" } ], "year": 1975, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kincaid, J. P., Fishburne Jr, R. 
P., Rogers, R. L., and Chissom, B. S. (1975). Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Reference-less quality estimation of text simplification systems", "authors": [ { "first": "L", "middle": [], "last": "Martin", "suffix": "" }, { "first": "S", "middle": [], "last": "Humeau", "suffix": "" }, { "first": "P.-E", "middle": [], "last": "Mazar\u00e9", "suffix": "" }, { "first": "\u00c9", "middle": [], "last": "De La Clergerie", "suffix": "" }, { "first": "A", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "B", "middle": [], "last": "Sagot", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 1st Workshop on Automatic Text Adaptation", "volume": "", "issue": "", "pages": "29--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin, L., Humeau, S., Mazar\u00e9, P.-E., de La Clergerie,\u00c9., Bordes, A., and Sagot, B. (2018). Reference-less quality estimation of text simplification systems. In Proceedings of the 1st Workshop on Automatic Text Adaptation, pages 29-38, Tilburg, the Netherlands, November. ACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning simple Wikipedia: A cogitation in ascertaining abecedarian language", "authors": [ { "first": "C", "middle": [], "last": "Napoles", "suffix": "" }, { "first": "M", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the NAACL HLT 2010 Workshop on Computational Linguistics and Writing", "volume": "", "issue": "", "pages": "42--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Napoles, C. and Dredze, M. (2010). Learning simple Wikipedia: A cogitation in ascertaining abecedarian lan- guage. In Proceedings of the NAACL HLT 2010 Work- shop on Computational Linguistics and Writing, pages 42-50, Los Angeles, CA, USA, June. 
ACL.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Min-WikiSplit: A sentence splitting corpus with minimal propositions", "authors": [ { "first": "C", "middle": [], "last": "Niklaus", "suffix": "" }, { "first": "A", "middle": [], "last": "Freitas", "suffix": "" }, { "first": "S", "middle": [], "last": "Handschuh", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 12th INLG", "volume": "", "issue": "", "pages": "118--123", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niklaus, C., Freitas, A., and Handschuh, S. (2019). Min- WikiSplit: A sentence splitting corpus with minimal propositions. In Proceedings of the 12th INLG, pages 118-123, Tokyo, Japan, October-November. ACL.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "MASSAlign: Alignment and annotation of comparable documents", "authors": [ { "first": "G", "middle": [], "last": "Paetzold", "suffix": "" }, { "first": "F", "middle": [], "last": "Alva-Manchego", "suffix": "" }, { "first": "L", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the IJCNLP 2017", "volume": "", "issue": "", "pages": "1--4", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paetzold, G., Alva-Manchego, F., and Specia, L. (2017). MASSAlign: Alignment and annotation of comparable documents. In Proceedings of the IJCNLP 2017, pages 1-4, Tapei, Taiwan, November. ACL.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Investigating the importance of linguistic complexity features across different datasets related to language learning", "authors": [ { "first": "I", "middle": [], "last": "Pil\u00e1n", "suffix": "" }, { "first": "E", "middle": [], "last": "Volodina", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Workshop on Linguistic Complexity and Natural Language Processing", "volume": "", "issue": "", "pages": "49--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pil\u00e1n, I. and Volodina, E. (2018). 
Investigating the impor- tance of linguistic complexity features across different datasets related to language learning. In Proceedings of the Workshop on Linguistic Complexity and Natural Lan- guage Processing, pages 49-58, Santa Fe, New-Mexico, August. ACL.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "MUSST: A multilingual syntactic simplification tool", "authors": [ { "first": "C", "middle": [], "last": "Scarton", "suffix": "" }, { "first": "A", "middle": [], "last": "Palmero Aprosio", "suffix": "" }, { "first": "S", "middle": [], "last": "Tonelli", "suffix": "" }, { "first": "T", "middle": [], "last": "Mart\u00edn Wanton", "suffix": "" }, { "first": "L", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the IJCNLP 2017", "volume": "", "issue": "", "pages": "25--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "Scarton, C., Palmero Aprosio, A., Tonelli, S., Mart\u00edn Wan- ton, T., and Specia, L. (2017). MUSST: A multilin- gual syntactic simplification tool. In Proceedings of the IJCNLP 2017, pages 25-28, Tapei, Taiwan, November. ACL.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Text simplification from professionally produced corpora", "authors": [ { "first": "C", "middle": [], "last": "Scarton", "suffix": "" }, { "first": "G", "middle": [], "last": "Paetzold", "suffix": "" }, { "first": "L", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 11th LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Scarton, C., Paetzold, G., and Specia, L. (2018). Text simplification from professionally produced corpora. In Proceedings of the 11th LREC, Miyazaki, Japan, May. 
ELRA.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Relevance of unsupervised metrics in task-oriented dialogue for evaluating natural language generation", "authors": [ { "first": "S", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "L", "middle": [], "last": "El Asri", "suffix": "" }, { "first": "H", "middle": [], "last": "Schulz", "suffix": "" }, { "first": "J", "middle": [], "last": "Zumer", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sharma, S., El Asri, L., Schulz, H., and Zumer, J. (2017). Relevance of unsupervised metrics in task-oriented dia- logue for evaluating natural language generation. CoRR, abs/1706.09799.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "BLEU is not suitable for the evaluation of text simplification", "authors": [ { "first": "E", "middle": [], "last": "Sulem", "suffix": "" }, { "first": "O", "middle": [], "last": "Abend", "suffix": "" }, { "first": "A", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on EMNLP", "volume": "", "issue": "", "pages": "738--744", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sulem, E., Abend, O., and Rappoport, A. (2018). BLEU is not suitable for the evaluation of text simplification. In Proceedings of the 2018 Conference on EMNLP, pages 738-744, Brussels, Belgium, October-November. 
ACL.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Readability assessment for text simplification: from analyzing documents to identifying sentential simplifications", "authors": [ { "first": "S", "middle": [], "last": "Vajjala", "suffix": "" }, { "first": "D", "middle": [], "last": "Meurers", "suffix": "" } ], "year": 2014, "venue": "International Journal of Applied Linguistics, Special Issue on Current Research in Readability and Text Simplification", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vajjala, S. and Meurers, D. (2014). Readability assessment for text simplification: from analyzing documents to identifying sentential simplifications. International Journal of Applied Linguistics, Special Issue on Current Research in Readability and Text Simplification.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Multilingual and cross-lingual complex word identification", "authors": [ { "first": "S", "middle": [ "M" ], "last": "Yimam", "suffix": "" }, { "first": "S", "middle": [], "last": "\u0160tajner", "suffix": "" }, { "first": "M", "middle": [], "last": "Riedl", "suffix": "" }, { "first": "C", "middle": [], "last": "Biemann", "suffix": "" } ], "year": 2017, "venue": "Proceedings of RANLP 2017", "volume": "", "issue": "", "pages": "813--822", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yimam, S. M., \u0160tajner, S., Riedl, M., and Biemann, C. (2017). Multilingual and cross-lingual complex word identification. In Proceedings of RANLP 2017, pages 813-822, Varna, Bulgaria, September. INCOMA Ltd.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "COSTRA 1.0: A dataset of complex sentence transformations", "authors": [ { "first": "P", "middle": [], "last": "Baran\u010d\u00edkov\u00e1", "suffix": "" }, { "first": "O", "middle": [], "last": "Bojar", "suffix": "" } ], "year": 2019, "venue": "Faculty of Mathematics and Physics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baran\u010d\u00edkov\u00e1, P. and Bojar, O. (2019). COSTRA 1.0: A dataset of complex sentence transformations. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics, Charles University.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "PaCCSS-IT: A parallel corpus of complex-simple sentences for automatic text simplification", "authors": [ { "first": "D", "middle": [], "last": "Brunato", "suffix": "" }, { "first": "A", "middle": [], "last": "Cimino", "suffix": "" }, { "first": "F", "middle": [], "last": "Dell'orletta", "suffix": "" }, { "first": "G", "middle": [], "last": "Venturi", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on EMNLP", "volume": "", "issue": "", "pages": "351--361", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brunato, D., Cimino, A., Dell'Orletta, F., and Venturi, G. (2016). PaCCSS-IT: A parallel corpus of complex-simple sentences for automatic text simplification. In Proceedings of the 2016 Conference on EMNLP, pages 351-361, Austin, Texas, November.
ACL.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Learning word vectors for 157 languages", "authors": [ { "first": "E", "middle": [], "last": "Grave", "suffix": "" }, { "first": "P", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "P", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "A", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 11th LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grave, E., Bojanowski, P., Gupta, P., Joulin, A., and Mikolov, T. (2018). Learning word vectors for 157 languages. In Proceedings of the 11th LREC, Miyazaki, Japan, May. ELRA.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Building a German/simple German parallel corpus for automatic text simplification", "authors": [ { "first": "D", "middle": [], "last": "Klaper", "suffix": "" }, { "first": "S", "middle": [], "last": "Ebling", "suffix": "" }, { "first": "M", "middle": [], "last": "Volk", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2nd Workshop on Predicting and Improving Text Readability for Target Reader Populations", "volume": "", "issue": "", "pages": "11--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "Klaper, D., Ebling, S., and Volk, M. (2013). Building a German/simple German parallel corpus for automatic text simplification. In Proceedings of the 2nd Workshop on Predicting and Improving Text Readability for Target Reader Populations, pages 11-19, Sofia, Bulgaria, August.
ACL.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Shared task on quality assessment for text simplification", "authors": [ { "first": "S", "middle": [], "last": "Stajner", "suffix": "" }, { "first": "M", "middle": [], "last": "Popovi\u0107", "suffix": "" }, { "first": "H", "middle": [], "last": "Saggion", "suffix": "" }, { "first": "L", "middle": [], "last": "Specia", "suffix": "" }, { "first": "M", "middle": [], "last": "Fishel", "suffix": "" } ], "year": 2016, "venue": "qats2016: LREC 2016 Workshop & Shared Task on Quality Assessment for Text Simplification (QATS)", "volume": "", "issue": "", "pages": "22--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stajner, S., Popovi\u0107, M., Saggion, H., Specia, L., and Fishel, M. (2016). Shared task on quality assessment for text simplification. In qats2016: LREC 2016 Workshop & Shared Task on Quality Assessment for Text Simplification (QATS), 28th May 2016, Portoro\u017e, Slovenia; proceedings, pages 22-31, Paris. ELRA-ERDA. Online resource.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Problems in current text simplification research: New data can help", "authors": [ { "first": "W", "middle": [], "last": "Xu", "suffix": "" }, { "first": "C", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "C", "middle": [], "last": "Napoles", "suffix": "" } ], "year": 2015, "venue": "Transactions of the ACL", "volume": "3", "issue": "", "pages": "283--297", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu, W., Callison-Burch, C., and Napoles, C. (2015). Problems in current text simplification research: New data can help.
Transactions of the ACL, 3:283-297.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Optimizing statistical machine translation for text simplification", "authors": [ { "first": "W", "middle": [], "last": "Xu", "suffix": "" }, { "first": "C", "middle": [], "last": "Napoles", "suffix": "" }, { "first": "E", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Q", "middle": [], "last": "Chen", "suffix": "" }, { "first": "C", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2016, "venue": "Transactions of the ACL", "volume": "4", "issue": "", "pages": "401--415", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu, W., Napoles, C., Pavlick, E., Chen, Q., and Callison-Burch, C. (2016). Optimizing statistical machine translation for text simplification. Transactions of the ACL, 4:401-415.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "web data: word frequency avg. position (r=.26, p <=.01), word frequency max. position (r=.14, p <=.01), prop. of adjectives (r=.22, p <=.01), prop. of adverbs (r=.21, p <=.01), prop. of determiners (r=.52, p <=.01), prop. of function words (r=.31, p <=.01), and prop. of numerals (r=.18, p <=.01) \u2022 newspaper articles: prop. of clauses (r=.15, p <=.01), prop. of MWEs (r=.14, p <=.01), prop. of adpositions (r=.12, p <=.01), prop. of conjunctions (r=.2, p <=.01), prop. of prepositional phrases (r=.12, p <=.01), prop. of relative phrases (r=.16, p <=.01).", "type_str": "figure", "uris": null }, "TABREF1": { "text": "", "html": null, "num": null, "type_str": "table", "content": "" }, "TABREF2": { "text": "Word and Sentence Length Features. Word length and sentence length are well-established measurements used for readability measurement. Following Scarton et al. (2018), we distinguish word length in number of characters and syllables, and sentence length in number of characters, syllables, and words.", "html": null, "num": null, "type_str": "table", "content": "
" } } } }