{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:58:01.641690Z"
},
"title": "Benchmarking Data-driven Automatic Text Simplification for German",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "S\u00e4uberli",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Zurich",
"location": {
"addrLine": "Andreasstrasse 15",
"postCode": "8050",
"settlement": "Zurich",
"country": "Switzerland"
}
},
"email": "andreas.saeuberli@uzh.ch"
},
{
"first": "Sarah",
"middle": [],
"last": "Ebling",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Zurich",
"location": {
"addrLine": "Andreasstrasse 15",
"postCode": "8050",
"settlement": "Zurich",
"country": "Switzerland"
}
},
"email": "ebling@cl.uzh.ch"
},
{
"first": "Martin",
"middle": [],
"last": "Volk",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Zurich",
"location": {
"addrLine": "Andreasstrasse 15",
"postCode": "8050",
"settlement": "Zurich",
"country": "Switzerland"
}
},
"email": "volk@cl.uzh.ch"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automatic text simplification is an active research area, and there are first systems for English, Spanish, Portuguese, and Italian. For German, no data-driven approach exists to this date, due to a lack of training data. In this paper, we present a parallel corpus of news items in German with corresponding simplifications on two complexity levels. The simplifications have been produced according to a well-documented set of guidelines. We then report on experiments in automatically simplifying the German news items using state-of-the-art neural machine translation techniques. We demonstrate that despite our small parallel corpus, our neural models were able to learn essential features of simplified language, such as lexical substitutions, deletion of less relevant words and phrases, and sentence shortening.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Automatic text simplification is an active research area, and there are first systems for English, Spanish, Portuguese, and Italian. For German, no data-driven approach exists to this date, due to a lack of training data. In this paper, we present a parallel corpus of news items in German with corresponding simplifications on two complexity levels. The simplifications have been produced according to a well-documented set of guidelines. We then report on experiments in automatically simplifying the German news items using state-of-the-art neural machine translation techniques. We demonstrate that despite our small parallel corpus, our neural models were able to learn essential features of simplified language, such as lexical substitutions, deletion of less relevant words and phrases, and sentence shortening.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Simplified language is a variety of standard language characterized by reduced lexical and syntactic complexity, the addition of explanations for difficult concepts, and clearly structured layout. 1 Among the target groups of simplified language are persons with cognitive impairment and learning disabilities, prelingually deaf persons, functionally illiterate persons, and foreign language learners (Bredel and Maa\u00df, 2016) . Automatic text simplification, the process of automatically producing a simplified version of a standard-language text, was initiated in the late 1990s (Carroll et al., 1998; Chandrasekar et al., 1996) and since then has been approached by means of rule-based and statistical methods. As part of a rule-based approach, the operations carried out typically include replacing complex lexical and syntactic units by simpler ones. A statistical approach generally conceptualizes the simplification task as one of converting a standardlanguage into a simplified-language text using machine translation techniques. Research on automatic text simplification has been documented for English (Zhu et al., 2010) , Spanish (Saggion et al., 2015) , Portuguese (Aluisio and Gasperin, 2010) , French (Brouwers et al., 2014) , and Italian (Barlacchi and Tonelli, 2013) . To the authors' knowledge, the work of Suter (2015) and Suter et al. (2016) , who presented a prototype of a rulebased text simplification system, is the only proposal for German. The paper at hand presents the first experiments in datadriven simplification for German, relying on neural machine translation. The data consists of news items manually simplified according to a well-known set of guidelines. Hence, the contribution of the paper is twofold:",
"cite_spans": [
{
"start": 401,
"end": 424,
"text": "(Bredel and Maa\u00df, 2016)",
"ref_id": "BIBREF5"
},
{
"start": 579,
"end": 601,
"text": "(Carroll et al., 1998;",
"ref_id": "BIBREF7"
},
{
"start": 602,
"end": 628,
"text": "Chandrasekar et al., 1996)",
"ref_id": "BIBREF8"
},
{
"start": 1110,
"end": 1128,
"text": "(Zhu et al., 2010)",
"ref_id": "BIBREF37"
},
{
"start": 1139,
"end": 1161,
"text": "(Saggion et al., 2015)",
"ref_id": "BIBREF20"
},
{
"start": 1175,
"end": 1203,
"text": "(Aluisio and Gasperin, 2010)",
"ref_id": "BIBREF0"
},
{
"start": 1213,
"end": 1236,
"text": "(Brouwers et al., 2014)",
"ref_id": "BIBREF6"
},
{
"start": 1251,
"end": 1280,
"text": "(Barlacchi and Tonelli, 2013)",
"ref_id": "BIBREF1"
},
{
"start": 1322,
"end": 1334,
"text": "Suter (2015)",
"ref_id": "BIBREF32"
},
{
"start": 1339,
"end": 1358,
"text": "Suter et al. (2016)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. Introducing a parallel corpus as data for automatic text simplification for German 2. Establishing a benchmark for automatic text simplification for German Section 2 presents the research background with respect to parallel corpora (Section 2.1) and monolingual sentence alignment tools (Section 2.2) for automatic text simplification. Section 3 introduces previous approaches to datadriven text simplification. Section 4 presents our work on automatic text simplification for German, introducing the data (Section 4.1), the models (Section 4.2), the results (Section 4.3), and a discussion (Section 4.4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Parallel Corpora and Alignment Tools for Automatic Text Simplification",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Automatic text simplification via machine translation requires pairs of standard-language/simplified-language texts aligned at the sentence level, i.e., parallel corpora. A number of parallel corpora have been created to this end. Gasperin et al. (2010) compiled the PorSimples Corpus consisting of Brazilian Portuguese texts (2,116 sentences), each with two different levels of simplifications (\"natural\" and \"strong\"), resulting in around 4,500 aligned sentences. Bott and Saggion (2012) produced the Simplext Corpus consisting of 200 Spanish/simplified Spanish document pairs, amounting to a total of 1,149 (Spanish) and 1,808 (simplified Spanish) sentences (approximately 1,000 aligned sentences). A large parallel corpus for automatic text simplification is the Parallel Wikipedia Simplification Corpus (PWKP) compiled from parallel articles of the English Wikipedia and the Simple English Wikipedia (Zhu et al., 2010) , consisting of around 108,000 sentence pairs. Application of the corpus has been criticized for various reasons (\u0160tajner et al., 2018) ; the most important among these is the fact that Simple English Wikipedia articles are often not translations of articles from the English Wikipedia. Hwang et al. (2015) provided an updated version of the corpus that includes a total of 280,000 full and partial matches between the two Wikipedia versions. Another frequently used data collection, available for English and Spanish, is the Newsela Corpus (Xu et al., 2015) consisting of 1,130 news articles, each simplified into four school grade levels by professional editors. Klaper et al. (2013) created the first parallel corpus for German/simplified German, consisting of 256 texts each (approximately 70,000 tokens) downloaded from the Web. More recently, Battisti et al. (2020) ",
"cite_spans": [
{
"start": 231,
"end": 253,
"text": "Gasperin et al. (2010)",
"ref_id": "BIBREF10"
},
{
"start": 466,
"end": 489,
"text": "Bott and Saggion (2012)",
"ref_id": "BIBREF4"
},
{
"start": 905,
"end": 923,
"text": "(Zhu et al., 2010)",
"ref_id": "BIBREF37"
},
{
"start": 1037,
"end": 1059,
"text": "(\u0160tajner et al., 2018)",
"ref_id": null
},
{
"start": 1211,
"end": 1230,
"text": "Hwang et al. (2015)",
"ref_id": "BIBREF14"
},
{
"start": 1465,
"end": 1482,
"text": "(Xu et al., 2015)",
"ref_id": "BIBREF35"
},
{
"start": 1589,
"end": 1609,
"text": "Klaper et al. (2013)",
"ref_id": "BIBREF15"
},
{
"start": 1773,
"end": 1795,
"text": "Battisti et al. (2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parallel Corpora",
"sec_num": "2.1"
},
{
"text": "A freely available tool exists for generating sentence alignments of standard-language/simplified-language document pairs: Customized Alignment for Text Simplification (CATS) (\u0160tajner et al., 2018) . CATS requires a number of parameters to be specified:",
"cite_spans": [
{
"start": 175,
"end": 197,
"text": "(\u0160tajner et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Alignment Tools for Simplified Texts",
"sec_num": "2.2"
},
{
"text": "\u2022 Similarity strategy: CATS offers a lexical (charactern-gram-based, CNG) and two semantic similarity strategies. The two semantic similarity strategies, WAVG (Word Average) and CWASA (Continuous Word Alignment-based Similarity Analysis), both require pretrained word embeddings. WAVG averages the word vectors of a paragraph or sentence to obtain the final vector for the respective text unit. CWASA is based on the alignment of continuous words using directed edges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Alignment Tools for Simplified Texts",
"sec_num": "2.2"
},
{
"text": "\u2022 Alignment strategy: CATS allows for adhering to a monotonicity restriction, i.e., requiring the order of information to be identical on the standard-language and simplified-language side, or abandoning it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Alignment Tools for Simplified Texts",
"sec_num": "2.2"
},
{
"text": "3 Data-Driven Automatic Text Simplification Specia (2010) introduced statistical machine translation to the automatic text simplification task, using data from a small parallel corpus (roughly 4,500 parallel sentences) for Portuguese. Coster and Kauchak (2011) used the original PWKP Corpus (cf. Section 2.1) to train a machine translation system. Xu et al. (2016) performed syntax-based statistical machine translation on the English/simplified English part of the Newsela Corpus. Nisioi et al. (2017) introduced neural sequence-tosequence models to automatic text simplification, performing experiments on both the Wikipedia dataset of (Hwang et al., 2015) and the Newsela Corpus for English, with automatic alignments derived from CATS (cf. Section 2.2). The authors used a Long Short-term Memory (LSTM) architecture (Hochreiter and Schmidhuber, 1997) as instance of Recurrent Neural Networks (RNNs). Surya et al. (2019) proposed an unsupervised or partially supervised approach to text simplification. Their model is based on a neural encoder-decoder but differs from previous approaches by adding reconstruction, adversarial, and diversification loss, which allows for exploiting nonparallel data as well. However, the authors' results prove that some parallel data is still essential. Finally, Palmero Aprosio et al. 2019experimented with data augmentation methods for low-resource text simplification for Italian. Their unaugmented dataset is larger than the one presented in this paper but includes more lowquality simplifications due to automatic extraction of simplified sentences from the Web. Our work differs in that we benchmark and compare a wider variety of low-resource methods. The most commonly applied automatic evaluation metrics for text simplification are BLEU (Papineni et al., 2002) and SARI (Xu et al., 2016) . BLEU, the de-facto standard metric for machine translation, computes token n-gram overlap between a hypothesis and one or multiple references. 
A shortcoming of BLEU with respect to automatic text simplification is that it rewards hypotheses that do not differ from the input. By contrast, SARI was designed to punish such output. It does so by explicitly considering the input and rewarding tokens in the hypothesis that do not occur in the input but in one of the references (addition) and tokens in the input that are retained (copying) or removed (deletion) in both the hypothesis and one of the references. SARI is generally used with multiple reference sentences, which are hard to obtain. Due to this limitation, human evaluation is often needed. This mostly consists of three types of ratings: how well the content or meaning of the standard-language text is preserved, how fluent or natural the simplified output is, and how much simpler the output is compared to the standard-language original. Each simplified unit (in most cases, a sentence) is typically rated on a 5-point scale with respect to each of the three dimensions.",
"cite_spans": [
{
"start": 235,
"end": 260,
"text": "Coster and Kauchak (2011)",
"ref_id": "BIBREF9"
},
{
"start": 348,
"end": 364,
"text": "Xu et al. (2016)",
"ref_id": "BIBREF36"
},
{
"start": 482,
"end": 502,
"text": "Nisioi et al. (2017)",
"ref_id": "BIBREF16"
},
{
"start": 638,
"end": 658,
"text": "(Hwang et al., 2015)",
"ref_id": "BIBREF14"
},
{
"start": 820,
"end": 854,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF13"
},
{
"start": 904,
"end": 923,
"text": "Surya et al. (2019)",
"ref_id": "BIBREF30"
},
{
"start": 1784,
"end": 1807,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF18"
},
{
"start": 1817,
"end": 1834,
"text": "(Xu et al., 2016)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Alignment Tools for Simplified Texts",
"sec_num": "2.2"
},
{
"text": "Original Jedes Kalb erh\u00e4lt sp\u00e4testens sieben Tage nach der Geburt eine eindeutig identifizierbare Lebensnummer, die in Form von Ohrmarken beidseitig eingezogen wird. ('At the latest seven days after birth, each calf is given a unique identification number, which is recorded on ear tags on both sides.') B1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Alignment Tools for Simplified Texts",
"sec_num": "2.2"
},
{
"text": "In\u00d6sterreich bekommt jedes Kalb sp\u00e4testens 7 Tage nach seiner Geburt eine Nummer, mit der man es erkennen kann. ('In Austria, at the latest 7 days after birth, each calf receives a number, with which it can be identified.')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Alignment Tools for Simplified Texts",
"sec_num": "2.2"
},
{
"text": "Original Sie steht auch heute noch jeden Tag um 6 Uhr in der Fr\u00fch auf und geht um 21 Uhr schlafen. ('Even today, she still gets up at 6 every morning and goes to bed at 9.') We aligned the sentences from the original German news articles with the simplified articles using CATS (cf. Section 2.2). We chose the WAVG similarity strategy in conjunction with fastText embeddings (Bojanowski et al., 2017) . fastText offers pretrained word vectors in 157 languages, derived from Wikipedia and Common Crawl (Grave et al., 2018) . 4 As our alignment strategy, we dismissed the monotonicity restriction due to our observation that the order of information in a simplified-language text is not always preserved compared to that of the corresponding standard-language text. CATS is built on the heuristic that every simplifiedlanguage sentence is aligned with one or several standardlanguage sentences. For 1-to-n and n-to-1 alignments, each of the n sentences forms a separate sentence pair with its counterpart, i.e., the single counterpart is duplicated. This leads to oversampling of some sentences and-as we will discuss in Section 4.4-poses a significant challenge for learning algorithms, but it is inevitable because we cannot assume that the order of information is preserved after simplification. 5 Sentence pairs with a similarity score of less than 90% were discarded (this threshold was established based on empirical evaluation of the tool on a different dataset), which resulted in a total of 3,616 sentence pairs. Table 1 shows examples, which are also representative of the wide range of simplifications present in the texts. Table 2 shows the number of German and simplified German sentences that we used for training and evaluation. The sets are all disjoint, i.e., there are no cross-alignments between any of them. 
Another possibility to deal with 1-to-n and n-to-1 alignments would be to merge them into single alignments by concatenation. However, in our case, this would have resulted in many segments becoming too long to be processed by the sequence-to-sequence model.",
"cite_spans": [
{
"start": 375,
"end": 400,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 501,
"end": 521,
"text": "(Grave et al., 2018)",
"ref_id": "BIBREF11"
},
{
"start": 524,
"end": 525,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1520,
"end": 1527,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 1633,
"end": 1640,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentence Alignment Tools for Simplified Texts",
"sec_num": "2.2"
},
{
"text": "Usage German 3316 3316 1:1, 1:n, n:1 training 300 300 1:1 validation 3316 -data augmentation 50 -evaluation Table 2 : Number of sentences from the Austria Press Agency (APA) corpus in our experiments and the automatic alignments are not perfect, we decided not to use a parallel test set but to select models based on their best performance on the validation set and evaluate manually without a target reference. We chose the number of sentences for data augmentation to match the number of parallel sentences during training, in accordance with Sennrich et al. (2016a). We applied the following preprocessing steps:",
"cite_spans": [],
"ref_spans": [
{
"start": 108,
"end": 115,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "German Simplified Alignment",
"sec_num": null
},
{
"text": "\u2022 In the simplified German text, we replaced all hyphenated compounds (e.g., Premier-Ministerin 'female prime minister') with their unhyphenated equivalents (Premierministerin), but only if they never occur in hyphenated form in the original German corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "German Simplified Alignment",
"sec_num": null
},
{
"text": "\u2022 We converted all tokens to lowercase. This reduces the subword vocabulary and ideally makes morpheme/subword correspondences more explicit across different parts of speech, since nouns are generally capitalized in German orthography.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "German Simplified Alignment",
"sec_num": null
},
{
"text": "\u2022 We applied byte-pair encoding (BPE) (Sennrich et al., 2016b) , trained jointly on the source and target text. BPE splits tokens into subwords based on the frequencies of their character sequences. This decreases the total vocabulary size and increases overlap between source and target.",
"cite_spans": [
{
"start": 38,
"end": 62,
"text": "(Sennrich et al., 2016b)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "German Simplified Alignment",
"sec_num": null
},
{
"text": "All models in our experiments are based on the Transformer encoder-decoder architecture (Vaswani et al., 2017) . We used Sockeye version 1.18.106 (Hieber et al., 2017) for training and translation into simplified German. Unless otherwise stated, the hyperparameters are defaults defined by Sockeye. The following is an overview of the models:",
"cite_spans": [
{
"start": 88,
"end": 110,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF33"
},
{
"start": 146,
"end": 167,
"text": "(Hieber et al., 2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Models in Our Experiments",
"sec_num": "4.2"
},
{
"text": "BASE baseline model; embedding size of 256 BPE5K same as BASE but with less BPE merge operations (10,000 \u2192 5,000) (Sennrich and Zhang, 2019) BATCH1K same as BASE but with a smaller token-based batch size (4096 \u2192 1024) (Sennrich and Zhang, 2019) LINGFEAT same as BASE but extending embedding vectors with additional linguistic features (lemmas, partof-speech tags, morphological attributes, dependency tags, and BIEO tags marking where subwords begin or end) (Sennrich and Haddow, 2016) NULL2TRG same as BASE but with additional <null>-totarget sentence pairs generated from non-parallel simplified sentences, doubling the size of the training set (Sennrich et al., 2016a) TRG2TRG same as BASE but with additional target-totarget sentence pairs (same simplified sentence in source as in target), doubling the size of the training set (Palmero Aprosio et al., 2019) (cf. Section 3) BT2TRG same as BASE but with additional backtranslatedto-target sentence pairs (source sentence is machinetranslated from target sentence), doubling the size of the training set (Sennrich et al., 2016a) For LINGFEAT, all linguistic features were obtained with ParZu (Sennrich et al., 2013) , using clevertagger (Sennrich et al., 2013) for part-of-speech tags and Zmorge (Sennrich and Kunz, 2014) for morphological analysis. The embedding sizes for these features are: 221 for lemmas, 10 each for part-of-speech, morphology, and dependency tags, and 5 for subword BIEO tags, thus extending the total embedding size to 512.",
"cite_spans": [
{
"start": 1058,
"end": 1082,
"text": "(Sennrich et al., 2016a)",
"ref_id": "BIBREF25"
},
{
"start": 1146,
"end": 1169,
"text": "(Sennrich et al., 2013)",
"ref_id": "BIBREF24"
},
{
"start": 1191,
"end": 1214,
"text": "(Sennrich et al., 2013)",
"ref_id": "BIBREF24"
},
{
"start": 1250,
"end": 1275,
"text": "(Sennrich and Kunz, 2014)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Models in Our Experiments",
"sec_num": "4.2"
},
{
"text": "For the backtranslation system, we used the same architecture, the same method, and the same set of sentence pairs as in LINGFEAT, and the added non-parallel sentences were the same for all models trained with augmented data (NULL2TRG, TRG2TRG, BT2TRG) . Moreover, each model type was trained three times, with three different random seeds for shuffling and splitting the training and validation set, in order to reach statistical significance. After running preliminary trainings, it became clear that all of these models overfit quickly. Validation perplexity regularly reached its minimum before sentences of any kind of fluency were produced, and BLEU scores only started to increase after this point. Therefore, we decided to optimize for the BLEU score instead, i.e., stop training when BLEU scores on the validation set reached the maximum. We will discuss more specific implications of this decision in Section 4.4.",
"cite_spans": [
{
"start": 225,
"end": 235,
"text": "(NULL2TRG,",
"ref_id": null
},
{
"start": 236,
"end": 244,
"text": "TRG2TRG,",
"ref_id": null
},
{
"start": 245,
"end": 252,
"text": "BT2TRG)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Models in Our Experiments",
"sec_num": "4.2"
},
{
"text": "We report case-insensitive BLEU and SARI on the validation set, calculated using SacreBLEU (Post, 2018 ). Since we optimized the models for the BLEU score, these values may be taken as a kind of \"upper bound\" rather than true indicators of their performance. Figure 1 shows results for the models listed in Section 4.2.",
"cite_spans": [
{
"start": 91,
"end": 102,
"text": "(Post, 2018",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 259,
"end": 267,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results of Our Simplication Experiments",
"sec_num": "4.3"
},
{
"text": "TRG2TRG is the only model whose improvements compared to the baseline reached high statistical significance (p = 0.00014 for BLEU, p = 0.00050 for SARI), although improvements by LINGFEAT look promising (p = 0.10 for BLEU, p = 0.020 for SARI). The low performance of BT2TRG is surprising, considering the significant BLEU score improvements we observed in a previous experiment with a different German dataset (Battisti et al., 2020) . BPE5K and BATCH1K, both proposed as low-resource optimizations in machine translation, do not have much of an effect in this context, either. Table 3 : BLEU and SARI scores of final model configurations on the validation set (means and standard errors from three runs). Bold font indicates significant improvements (p < 0.05) with respect to BASE We also trained additional models which combined the data augmentation methods (TRG2TRG and BT2TRG) with the linguistic features (LINGFEAT) to see if there was a combined effect. The validation scores of all six configurations are presented in Table 3 . These results suggest that linguistic features are beneficial even with synthetic data, and that augmentation with target-to-target pairs is more effective than backtranslation. In addition to automatic evaluation, we translated a test set of 50 sentences using the above models and manually evaluated the output. This was done by the first author, a native speaker of German, with reference to the original sentence along the three criteria shown in Table 4 . These are based on Surya et al. (2019) but adapted to capture more specific weaknesses arising from the low-resource setting. The results are in Figure 2 . They provide a clearer picture of the strengths and weaknesses of the configurations. In general, the models have no difficulty producing fluent sentences. However, most of the time, these sentences have little in common with the original but are exact or partial copies of other sentences in the training set. 
In the worst cases, 60-80% of output sentences are exact copies from the training set. This is a direct consequence of overfitting. Only TRG2TRG (especially in combination with linguistic features) managed to preserve content in a significant portion of the cases. Very often, this was accompanied by decreased fluency in the produced sentences, as in the following examples from the test set, produced by TRG2TRG+LINGFEAT (non-words are marked with '*'):",
"cite_spans": [
{
"start": 410,
"end": 433,
"text": "(Battisti et al., 2020)",
"ref_id": "BIBREF2"
},
{
"start": 1517,
"end": 1536,
"text": "Surya et al. (2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 578,
"end": 585,
"text": "Table 3",
"ref_id": null
},
{
"start": 1027,
"end": 1034,
"text": "Table 3",
"ref_id": null
},
{
"start": 1488,
"end": 1495,
"text": "Table 4",
"ref_id": null
},
{
"start": 1643,
"end": 1651,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results of Our Simplication Experiments",
"sec_num": "4.3"
},
{
"text": "(1) Source:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results of Our Simplication Experiments",
"sec_num": "4.3"
},
{
"text": "Die\u00d6sterreichischen In these cases, the system attempts sentence shortening and lexical simplification (note the numeral replacement in Example 1). Generally, the model copies less from training targets (about 10%) and tends more towards transferring tokens from the input. The results for BT2TRG confirm that backtranslation was not effective in this setting. Given the low content preservation scores in our baseline model for backtranslating, this is not surprising.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results of Our Simplication Experiments",
"sec_num": "4.3"
},
{
"text": "As reported in Section 4.2, we optimized our models for BLEU scores. This resulted in models which strongly favored fluency over content preservation by mainly reproducing training material exactly and thus acted more like translation memories. The fact that augmenting the data with simple-to-simple pairs was relatively successful shows that the main difficulty for the other models was finding relevant correspondences between source and target. In the augmented data, these correspondences are trivial to find, and apparently, the model partly succeeded in combining knowledge from this trivial copying job with knowledge about sentence shortening and lexical simplification, as demonstrated by Examples 1-3. In higher-resource scenarios, a frequent problem is that neural machine translation systems used for text simplification tasks are \"over-conservative\" (Sulem et al., 2018; Wubben et al., 2012) , i.e., they tend to copy the input without simplifying anything. One possible solution to this is to enforce a less probable output during decoding, which is more likely to contain some changes to the input (\u0160tajner and Nisioi, 2018). However, in the present setting, it is Table 4 : Criteria and values for human evaluation quite the opposite: The models fail to reproduce most of the content, and adding simple-to-simple pairs can help in this case. However, as datasets grow larger, it may be challenging to balance the effects of real and synthetic data appropriately. To this end, approaches such as the semisupervised one by Surya et al. (2019) , where reconstruction of the input sequence is explicitly built into the model architecture, may be interesting to explore further. When inspecting the model predictions in the test set, it also became clear that there was a considerable bias towards reproducing one of a handful of sentences in the training set. 
These are simplified sentences which occur more than once in training, because they are aligned with multiple original sentences. This suggests that including n-to-1 alignments in this way is a bad idea for sentence-to-sentence simplification.",
"cite_spans": [
{
"start": 864,
"end": 884,
"text": "(Sulem et al., 2018;",
"ref_id": "BIBREF29"
},
{
"start": 885,
"end": 905,
"text": "Wubben et al., 2012)",
"ref_id": "BIBREF34"
},
{
"start": 1538,
"end": 1557,
"text": "Surya et al. (2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 1181,
"end": 1188,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.4"
},
{
"text": "Overall, even with a limited quantity of data, our models were able to learn essential features of simplified language, such as lexical substitutions, deletion of less relevant words and phrases, and sentence shortening. Although the performance of the models is not yet mature, these observations give a first idea about which types of texts are important in different settings. In particular, transformations of more complex syntactic structures require substantial amounts of data. When aiming for higher-quality output in lowresource settings, for example, it may be advisable to filter the texts to focus on lexical simplification and deletion, in order not to confuse the model with phenomena it will not learn anyway, and use the discarded sentences for data augmentation instead.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.4"
},
{
"text": "This paper introduces the first parallel corpus for datadriven automatic text simplification for German. The corpus consists of 3,616 sentence pairs. Since simplification of Austria Press Agency news items is ongoing, the size of our corpus will increase continuously. A parallel corpus of the current size is generally not sufficient to train a neural machine translation system that produces both adequate and fluent text simplifications. However, we demonstrated that even with the limited amount of data available, our models were able to learn some essential features of simplified language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The term plain language is avoided, as it refers to a specific level of simplification. Simplified language subsumes all efforts of reducing the complexity of a text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.capito.eu/ (last accessed: February 3, 2020)3 Note that while the CEFR was designed to measure foreign language skills, with simplified language, it is partly applied in the context first-language acquisition(Bredel and Maa\u00df, 2016).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Automatic Text Simplification for German4.1 Training DataAll data used in our experiments was taken from the Austria Press Agency (Austria Presse Agentur, APA) corpus built by our group. At this press agency, four to six news items covering the topics of politics, economy, culture, and sports are manually simplified into two language levels, B1 and A2, each day following the capito guidelines introduced in Section 2.1. The subset of data used for the experiments reported in this paper contains standard-language news items along with their simplifications on level B1 between August 2018 and December 2019. The dataset will be described in more detail in a separate publication.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://fasttext.cc/docs/en/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors are indebted to Austria Presse Agentur (APA) and capito for providing the parallel corpus of standardlanguage and simplified-language news items.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "6"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Fostering Digital Inclusion and Accessibility: The PorSimples project for Simplification of Portuguese Texts",
"authors": [
{
"first": "S",
"middle": [
"M"
],
"last": "Aluisio",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Gasperin",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NAACL HLT 2010 Young Investigators Workshop on Computational Approaches to Languages of the Americas",
"volume": "",
"issue": "",
"pages": "46--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aluisio, S. M. and Gasperin, C. (2010). Fostering Digi- tal Inclusion and Accessibility: The PorSimples project for Simplification of Portuguese Texts. In Proceedings of the NAACL HLT 2010 Young Investigators Workshop on Computational Approaches to Languages of the Ameri- cas, pages 46-53, Los Angeles, CA.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "ERNESTA: A Sentence Simplification Tool for Children's Stories in Italian",
"authors": [
{
"first": "G",
"middle": [],
"last": "Barlacchi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Tonelli",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 14th Conference on Intelligent Text Processing and Computational Linguistics (CI-CLing)",
"volume": "",
"issue": "",
"pages": "476--487",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barlacchi, G. and Tonelli, S. (2013). ERNESTA: A Sen- tence Simplification Tool for Children's Stories in Ital- ian. In Proceedings of the 14th Conference on Intelli- gent Text Processing and Computational Linguistics (CI- CLing), pages 476-487, Samos, Greece.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Corpus for Automatic Readability Assessment and Text Simplification of German",
"authors": [
{
"first": "A",
"middle": [],
"last": "Battisti",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Pf\u00fctze",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "S\u00e4uberli",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Kostrzewa",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ebling",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th International Conference on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Battisti, A., Pf\u00fctze, D., S\u00e4uberli, A., Kostrzewa, M., and Ebling, S. (2020). A Corpus for Automatic Readabil- ity Assessment and Text Simplification of German. In Proceedings of the 12th International Conference on Language Resources and Evaluation (LREC), Marseille, France.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "P",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bojanowski, P., Grave, E., Joulin, A., and Mikolov, T. (2017). Enriching word vectors with subword informa- tion. Transactions of the Association for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automatic simplification of Spanish text for e-Accessibility",
"authors": [
{
"first": "S",
"middle": [],
"last": "Bott",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Saggion",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 13th International Conference on Computers Helping People with Special Needs (ICCHP)",
"volume": "",
"issue": "",
"pages": "527--534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bott, S. and Saggion, H. (2012). Automatic simplification of Spanish text for e-Accessibility. In Proceedings of the 13th International Conference on Computers Help- ing People with Special Needs (ICCHP), pages 527-534, Linz, Austria.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Leichte Sprache: Theoretische Grundlagen. Orientierung f\u00fcr die Praxis. Duden",
"authors": [
{
"first": "U",
"middle": [],
"last": "Bredel",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Maa\u00df",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bredel, U. and Maa\u00df, C. (2016). Leichte Sprache: The- oretische Grundlagen. Orientierung f\u00fcr die Praxis. Du- den, Berlin.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Syntactic Sentence Simplification for French",
"authors": [
{
"first": "L",
"middle": [],
"last": "Brouwers",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Bernhard",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ligozat",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Fran\u00e7ois",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR)",
"volume": "",
"issue": "",
"pages": "47--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brouwers, L., Bernhard, D., Ligozat, A., and Francois, T. (2014). Syntactic Sentence Simplification for French. In Proceedings of the 3rd Workshop on Predicting and Im- proving Text Readability for Target Reader Populations (PITR), pages 47-56, Gothenburg, Sweden.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Practical simplification of English newspaper text to assist aphasic readers",
"authors": [
{
"first": "J",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Minnen",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Canning",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tait",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the AAAI'98 Workshop on Integrating aI and Assistive Technology",
"volume": "",
"issue": "",
"pages": "7--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carroll, J., Minnen, G., Canning, Y., Devlin, S., and Tait, J. (1998). Practical simplification of English newspa- per text to assist aphasic readers. In Proceedings of the AAAI'98 Workshop on Integrating aI and Assistive Tech- nology, pages 7-10.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Motivations and methods for text simplification",
"authors": [
{
"first": "R",
"middle": [],
"last": "Chandrasekar",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Doran",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Srinivas",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 16th Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1041--1044",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chandrasekar, R., Doran, C., and Srinivas, B. (1996). Mo- tivations and methods for text simplification. In Pro- ceedings of the 16th Conference on Computational Lin- guistics, pages 1041-1044, Copenhagen, Denmark.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Common European Framework of Reference for Languages: Learning, teaching, assessment",
"authors": [
{
"first": "W",
"middle": [],
"last": "Coster",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Kauchak",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Workshop on Monolingual Text-To-Text Generation (MTTG)",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Coster, W. and Kauchak, D. (2011). Learning to simplify sentences using Wikipedia. In Proceedings of the Work- shop on Monolingual Text-To-Text Generation (MTTG), pages 1-9, Portland, OR. Council of Europe. (2009). Common European Frame- work of Reference for Languages: Learning, teaching, assessment. Cambridge University Press, Cambridge.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Challenging Choices for Text Simplification",
"authors": [
{
"first": "C",
"middle": [],
"last": "Gasperin",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Maziero",
"suffix": ""
},
{
"first": "S",
"middle": [
"M"
],
"last": "Aluisio",
"suffix": ""
}
],
"year": 2010,
"venue": "Computational Processing of the Portuguese Language. Proceedings of the 9th International Conference, PROPOR 2010",
"volume": "",
"issue": "",
"pages": "40--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gasperin, C., Maziero, E., and Aluisio, S. M. (2010). Chal- lenging Choices for Text Simplification. In Computa- tional Processing of the Portuguese Language. Proceed- ings of the 9th International Conference, PROPOR 2010, pages 40-50, Porto Alegre, Brazil.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning word vectors for 157 languages",
"authors": [
{
"first": "E",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grave, E., Bojanowski, P., Gupta, P., Joulin, A., and Mikolov, T. (2018). Learning word vectors for 157 lan- guages. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Sockeye: A toolkit for neural machine translation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Hieber",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Domhan",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Vilar",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Sokolov",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Clifton",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1712.05690"
]
},
"num": null,
"urls": [],
"raw_text": "Hieber, F., Domhan, T., Denkowski, M., Vilar, D., Sokolov, A., Clifton, A., and Post, M. (2017). Sockeye: A toolkit for neural machine translation. arXiv preprint arXiv:1712.05690, December.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Long shortterm memory",
"authors": [
{
"first": "S",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hochreiter, S. and Schmidhuber, J. (1997). Long short- term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Aligning Sentences from Standard Wikipedia to Simple Wikipedia",
"authors": [
{
"first": "W",
"middle": [],
"last": "Hwang",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ostendorf",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "211--217",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hwang, W., Hajishirzi, H., Ostendorf, M., and Wu, W. (2015). Aligning Sentences from Standard Wikipedia to Simple Wikipedia. In Proceedings of NAACL-HLT, pages 211-217.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Building a German/Simple German Parallel Corpus for Automatic Text Simplification",
"authors": [
{
"first": "D",
"middle": [],
"last": "Klaper",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ebling",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Volk",
"suffix": ""
}
],
"year": 2013,
"venue": "ACL Workshop on Predicting and Improving Text Readability for Target Reader Populations",
"volume": "",
"issue": "",
"pages": "11--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klaper, D., Ebling, S., and Volk, M. (2013). Building a German/Simple German Parallel Corpus for Automatic Text Simplification. In ACL Workshop on Predicting and Improving Text Readability for Target Reader Popula- tions, pages 11-19, Sofia, Bulgaria.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Exploring Neural Text Simplification Models",
"authors": [
{
"first": "S",
"middle": [],
"last": "Nisioi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "\u0160tajner",
"suffix": ""
},
{
"first": "S",
"middle": [
"P"
],
"last": "Ponzetto",
"suffix": ""
},
{
"first": "L",
"middle": [
"P"
],
"last": "Dinu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "85--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nisioi, S.,\u0160tajner, S., Ponzetto, S. P., and Dinu, L. P. (2017). Exploring Neural Text Simplification Models. In Proceedings of the 55th Annual Meeting of the Associ- ation for Computational Linguistics, pages 85-91, Van- couver, Canada, July.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Neural text simplification in low-resource conditions using weak supervision",
"authors": [
{
"first": "A",
"middle": [],
"last": "Palmero Aprosio",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Tonelli",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Turchi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Di Gangi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation",
"volume": "",
"issue": "",
"pages": "37--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Palmero Aprosio, A., Tonelli, S., Turchi, M., Negri, M., and Di Gangi, M. A. (2019). Neural text simplification in low-resource conditions using weak supervision. In Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pages 37- 44, Minneapolis, Minnesota, June.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "BLEU: A method for automatic evaluation of machine translation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "W.-J",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. (2002). BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 311-318, Philadelphia, PA.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A call for clarity in reporting BLEU scores",
"authors": [
{
"first": "M",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "186--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Post, M. (2018). A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Ma- chine Translation: Research Papers, pages 186-191, Brussels, Belgium, October.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Making it Simplext: Implementation and evaluation of a text simplification system for Spanish",
"authors": [
{
"first": "H",
"middle": [],
"last": "Saggion",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Stajner",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bott",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Rello",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Drndarevi\u0107",
"suffix": ""
}
],
"year": 2015,
"venue": "ACM Transactions on Accessible Computing (TACCESS)",
"volume": "6",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saggion, H.,Stajner, S., Bott, S., Mille, S., Rello, L., and Drndarevi\u0107, B. (2015). Making it Simplext: Implemen- tation and evaluation of a text simplification system for Spanish. ACM Transactions on Accessible Computing (TACCESS), 6(4):14.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Linguistic input features improve neural machine translation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Haddow",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation",
"volume": "1",
"issue": "",
"pages": "83--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sennrich, R. and Haddow, B. (2016). Linguistic input fea- tures improve neural machine translation. In Proceed- ings of the First Conference on Machine Translation: Volume 1, Research Papers, pages 83-91, Berlin, Ger- many, August.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Zmorge: A German morphological lexicon extracted from Wiktionary",
"authors": [
{
"first": "R",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Kunz",
"suffix": ""
}
],
"year": 2014,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "1063--1067",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sennrich, R. and Kunz, B. (2014). Zmorge: A German morphological lexicon extracted from Wiktionary. In LREC, pages 1063-1067.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Revisiting lowresource neural machine translation: A case study",
"authors": [
{
"first": "R",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "211--221",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sennrich, R. and Zhang, B. (2019). Revisiting low- resource neural machine translation: A case study. In Proceedings of the 57th Annual Meeting of the Associa- tion for Computational Linguistics, pages 211-221, Flo- rence, Italy, July.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Exploiting synergies between open resources for german dependency parsing, pos-tagging, and morphological analysis",
"authors": [
{
"first": "R",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Volk",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Schneider",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing RANLP 2013",
"volume": "",
"issue": "",
"pages": "601--609",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sennrich, R., Volk, M., and Schneider, G. (2013). Exploit- ing synergies between open resources for german depen- dency parsing, pos-tagging, and morphological analysis. In Proceedings of the International Conference Recent Advances in Natural Language Processing RANLP 2013, pages 601-609.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "R",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sennrich, R., Haddow, B., and Birch, A. (2016a). Improv- ing neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "R",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2010,
"venue": "Computational Processing of the Portuguese Language. Proceedings of the 9th International Conference, PROPOR 2010",
"volume": "1",
"issue": "",
"pages": "30--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sennrich, R., Haddow, B., and Birch, A. (2016b). Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Asso- ciation for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany, August. Specia, L. (2010). Translating from Complex to Simpli- fied Sentences. In Computational Processing of the Por- tuguese Language. Proceedings of the 9th International Conference, PROPOR 2010, pages 30-39, Porto Alegre, Brazil.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A detailed evaluation of neural sequence-to-sequence models for in-domain and cross-domain text simplification",
"authors": [
{
"first": "S",
"middle": [],
"last": "Stajner",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Nisioi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 11th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stajner, S. and Nisioi, S. (2018). A detailed evaluation of neural sequence-to-sequence models for in-domain and cross-domain text simplification. In Proceedings of the 11th Language Resources and Evaluation Conference, Miyazaki, Japan.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "CATS: A Tool for Customized Alignment of Text Simplification Corpora",
"authors": [
{
"first": "S",
"middle": [],
"last": "Stajner",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Franco-Salvador",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ponzetto",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "3895--3903",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stajner, S., Franco-Salvador, M., Rosso, P., and Ponzetto, S. (2018). CATS: A Tool for Customized Alignment of Text Simplification Corpora. In Proceedings of the Eleventh International Conference on Language Re- sources and Evaluation (LREC 2018), pages 3895-3903, Miyazaki, Japan.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Simple and effective text simplification using semantic and neural methods",
"authors": [
{
"first": "E",
"middle": [],
"last": "Sulem",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "162--173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sulem, E., Abend, O., and Rappoport, A. (2018). Simple and effective text simplification using semantic and neu- ral methods. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 162-173, Melbourne, Aus- tralia, July.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Unsupervised neural text simplification",
"authors": [
{
"first": "S",
"middle": [],
"last": "Surya",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Laha",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Sankaranarayanan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2058--2068",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Surya, S., Mishra, A., Laha, A., Jain, P., and Sankara- narayanan, K. (2019). Unsupervised neural text sim- plification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2058-2068, Florence, Italy, July.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Rule-based Automatic Text Simplification for German",
"authors": [
{
"first": "J",
"middle": [],
"last": "Suter",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ebling",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Volk",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 13th Conference on Natural Language Processing (KONVENS 2016)",
"volume": "",
"issue": "",
"pages": "279--287",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suter, J., Ebling, S., and Volk, M. (2016). Rule-based Au- tomatic Text Simplification for German. In Proceedings of the 13th Conference on Natural Language Processing (KONVENS 2016), pages 279-287, Bochum, Germany.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Rule-based text simplification for German. Bachelor's thesis",
"authors": [
{
"first": "J",
"middle": [],
"last": "Suter",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suter, J. (2015). Rule-based text simplification for Ger- man. Bachelor's thesis, University of Zurich.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Attention is all you need",
"authors": [
{
"first": "A",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "A",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, \u0141., and Polosukhin, I. (2017). Attention is all you need. In Advances in neural infor- mation processing systems, pages 5998-6008.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Sentence simplification by monolingual machine translation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Wubben",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Van Den Bosch",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Krahmer",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1015--1024",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wubben, S., van den Bosch, A., and Krahmer, E. (2012). Sentence simplification by monolingual machine trans- lation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1015-1024, Jeju Island, Korea, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Problems in Current Text Simplification Research: New Data Can Help",
"authors": [
{
"first": "W",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Napoles",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "283--297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu, W., Callison-Burch, C., and Napoles, C. (2015). Prob- lems in Current Text Simplification Research: New Data Can Help. Transactions of the Association for Computa- tional Linguistics, 3:283-297.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Optimizing statistical machine translation for text simplification",
"authors": [
{
"first": "W",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Napoles",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "401--415",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu, W., Napoles, C., Pavlick, E., Chen, Q., and Callison- Burch, C. (2016). Optimizing statistical machine trans- lation for text simplification. Transactions of the Associ- ation for Computational Linguistics, 4:401-415.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A monolingual tree-based translation model for sentence simplification",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Bernhard",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1353--1361",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhu, Z., Bernhard, D., and Gurevych, I. (2010). A mono- lingual tree-based translation model for sentence simpli- fication. In Proceedings of the International Conference on Computational Linguistics, pages 1353-1361, Bei- jing, China.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Figure 2: Human evaluation results",
"type_str": "figure",
"uris": null
},
"TABREF1": {
"num": null,
"text": "US-Pr\u00e4sident Donald Trump hat in seiner mit Spannung erwarteten Rede zur Lage der Nation seine politischen Priorit\u00e4ten betont, ohne gro\u00dfe wirtschaftliche Initiativen vorzustellen. ('In his eagerly awaited State of the Union address, U.S. President Donald Trump stressed his political priorities without presenting any major economic initiatives.') B1 US-Pr\u00e4sident Donald Trump hat am Dienstag seine Rede zur Lage der Nation gehalten. ('U.S. President Donald Trump gave his State of the Union address on Tuesday.')Original Sie stehe noch immer jeden Morgen um 6.00 Uhr auf und gehe erst gegen 21.00 Uhr ins Bett, berichtete das Guinness-Buch der Rekorde. ('She still gets up at 6:00 a.m. every morning and does not go to bed until around 9:00 p.m., the Guinness Book of Records reported.') B1",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF2": {
"num": null,
"text": "Examples from the Austria Press Agency (APA) corpus",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF3": {
"num": null,
"text": "\u00b1 0.55 31.87 \u00b1 0.46 7.59 \u00b1 0.40 35.29 \u00b1 0.28 1.81 \u00b1 0.94 31.54 \u00b1 0.58 +LINGFEAT 2.94 \u00b1 0.60 33.13 \u00b1 0.56 9.75 \u00b1 0.63 36.88 \u00b1 0.67 3.11 \u00b1 0.56 32.96 \u00b1 0.59",
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td/><td/><td>BASE</td><td/><td/><td colspan=\"2\">+TRG2TRG</td><td/><td/><td/><td colspan=\"2\">+BT2TRG</td><td/></tr><tr><td/><td/><td/><td colspan=\"2\">BLEU</td><td/><td>SARI</td><td/><td>BLEU</td><td colspan=\"2\">SARI</td><td/><td colspan=\"2\">BLEU</td><td colspan=\"2\">SARI</td></tr><tr><td colspan=\"2\">BASE</td><td/><td>2.23</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>8</td><td/><td/><td/><td/><td/><td/><td/><td>36</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>35</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>6</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>34</td><td/><td/><td/><td/><td/><td/></tr><tr><td>BLEU</td><td>4</td><td/><td/><td/><td/><td/><td/><td>SARI</td><td>33</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>32</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>2</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>31</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>0</td><td>BASE</td><td>BPE5K</td><td>BATCH1K</td><td>LINGFEAT</td><td>NULL2TRG</td><td>TRG2TRG</td><td>BT2TRG</td><td>30</td><td>BASE</td><td>BPE5K</td><td>BATCH1K</td><td>LINGFEAT</td><td>NULL2TRG</td><td>TRG2TRG</td><td>BT2TRG</td></tr></table>"
}
}
}
}