{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:57:53.860594Z" }, "title": "Is it simpler? An Evaluation of an Aligned Corpus of Standard-Simple Sentences", "authors": [ { "first": "Evelina", "middle": [], "last": "Rennes", "suffix": "", "affiliation": { "laboratory": "", "institution": "Research Institutes of Sweden Link\u00f6ping", "location": {} }, "email": "evelina.rennes@liu.se" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Parallel monolingual resources are imperative for data-driven sentence simplification research. We present the work of aligning, at the sentence level, a corpus of all Swedish public authorities and municipalities web texts in standard and simple Swedish. We compare the performance of three alignment algorithms used for similar work in English (Average Alignment, Maximum Alignment, and Hungarian Alignment), and the best-performing algorithm is used to create a resource of 15,433 unique sentence pairs. We evaluate the resulting corpus using a set of features that has proven to predict text complexity of Swedish texts. The results show that the sentences of the simple sub-corpus are indeed less complex than the sentences of the standard part of the corpus, according to many of the text complexity measures.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Parallel monolingual resources are imperative for data-driven sentence simplification research. We present the work of aligning, at the sentence level, a corpus of all Swedish public authorities and municipalities web texts in standard and simple Swedish. We compare the performance of three alignment algorithms used for similar work in English (Average Alignment, Maximum Alignment, and Hungarian Alignment), and the best-performing algorithm is used to create a resource of 15,433 unique sentence pairs. We evaluate the resulting corpus using a set of features that has proven to predict text complexity of Swedish texts. The results show that the sentences of the simple sub-corpus are indeed less complex than the sentences of the standard part of the corpus, according to many of the text complexity measures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Automatic Text Simplification (ATS) denotes the process of transforming a text, semantically, syntactically or lexically, in order to make it easier while preserving meaning and grammaticality. The simplification of text can have different purposes. Historically, it has been used as a preprocessing step to facilitate other natural language processing tasks, such as machine translation and text summarisation. The intuition was that a simpler syntactic structure of input texts would lead to less ambiguity, which would improve text processing performance. Another purpose of ATS is to make texts available to a broader audience, for example by adapting texts for people with different kinds of reading difficulties (Saggion, 2017) . Examples of target groups that have been accounted for within the field are people with dyslexia, people with aphasia, children, the deaf and hearing-impaired, second language learners, and the elderly. Data-driven techniques have gained ground the last years within the field of natural language processing, and the simplification field is no exception. 
Recent approaches regard simplification as a task analogous to (monolingual) machine translation (Specia, 2010; Coster and Kauchak, 2011b; Coster and Kauchak, 2011a; Wubben et al., 2012; Xu et al., 2016; Nisioi et al., 2017; Zhang and Lapata, 2017). One well-recognised issue with data-driven techniques is that they typically demand large-scale high-quality data resources, which can be problematic for less-resourced languages. A widely used resource in previous automatic text simplification research is English Wikipedia paired with Simple English Wikipedia (Zhu et al., 2010; Coster and Kauchak, 2011b; Hwang et al., 2015; Kajiwara and Komachi, 2016), but its quality as a resource has been questioned (Xu et al., 2015). The collaborative and uncontrolled nature of Wikipedia makes it somewhat unreliable as a resource, and the authors pointed out that simple articles generally are not rewritten versions of the standard articles, which can be problematic when attempting to perform sentence alignment. Another commonly used resource is the Newsela corpus 1 . Newsela contains 1,130 original news articles in English, manually simplified to 3-4 complexity levels by professional writers. The readability levels correspond to education grade levels, thus targeting children of different reading levels. Although there are many advantages of Newsela, such as the high quality of the texts, there is one disadvantage: researchers are not allowed to publicly release model output based on this corpus, which in turn hinders model comparison. The Newsela corpus has been used in some studies for text simplification (Zhang and Lapata, 2017; Alva-Manchego et al., 2017; Scarton et al., 2018). The need for more and better resources for sentence simplification was highlighted by Alva-Manchego et al. (2020), and proposed as one of the key topics that should be addressed by the field. In Sweden, most websites of public authorities and municipalities have versions adapted to people in need of simple text. These texts are often based on guidelines learned from the professional experience of expert writers and editors.
The Swedish Agency for Accessible Media (MTM) describes some of these guidelines 2 :", "cite_spans": [ { "start": 718, "end": 733, "text": "(Saggion, 2017)", "ref_id": "BIBREF18" }, { "start": 1188, "end": 1202, "text": "(Specia, 2010;", "ref_id": "BIBREF21" }, { "start": 1203, "end": 1229, "text": "Coster and Kauchak, 2011b;", "ref_id": "BIBREF4" }, { "start": 1230, "end": 1256, "text": "Coster and Kauchak, 2011a;", "ref_id": "BIBREF3" }, { "start": 1257, "end": 1277, "text": "Wubben et al., 2012;", "ref_id": "BIBREF22" }, { "start": 1278, "end": 1294, "text": "Xu et al., 2016;", "ref_id": "BIBREF24" }, { "start": 1295, "end": 1315, "text": "Nisioi et al., 2017;", "ref_id": "BIBREF14" }, { "start": 1316, "end": 1339, "text": "Zhang and Lapata, 2017;", "ref_id": "BIBREF25" }, { "start": 1648, "end": 1666, "text": "(Zhu et al., 2010;", "ref_id": "BIBREF27" }, { "start": 1667, "end": 1693, "text": "Coster and Kauchak, 2011b;", "ref_id": "BIBREF4" }, { "start": 1694, "end": 1713, "text": "Hwang et al., 2015;", "ref_id": "BIBREF12" }, { "start": 1714, "end": 1741, "text": "Kajiwara and Komachi, 2016)", "ref_id": "BIBREF13" }, { "start": 1794, "end": 1811, "text": "(Xu et al., 2015)", "ref_id": "BIBREF23" }, { "start": 2706, "end": 2730, "text": "(Zhang and Lapata, 2017;", "ref_id": "BIBREF25" }, { "start": 2731, "end": 2758, "text": "Alva-Manchego et al., 2017;", "ref_id": "BIBREF0" }, { "start": 2759, "end": 2780, "text": "Scarton et al., 2018)", "ref_id": "BIBREF19" }, { "start": 2869, "end": 2896, "text": "Alva-Manchego et al. (2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 The text should be adapted to the type of reader that will read the text", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 The text should have a common thread and capture the interest of the reader immediately", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 The context should be clear, and the text should not demand any extensive prerequisites", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 The text should contain everyday words and the text rows should be short", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 If a picture is presented next to a text, it should interplay with the text", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 The language and presentation should be adapted to the specific demands and purposes of the specific type of media. These properties are, for obvious reasons, difficult to model in a concrete and unambiguous way to be fed into a system that automatically simplifies text. Professionally written texts do, however, provide concrete examples of sentences that adhere to these guidelines. They can therefore be used for learning how experts write simple text. This motivated us to collect a corpus of web texts from Swedish public authorities and municipalities (Rennes and J\u00f6nsson, 2016). The collected corpus contained a total of 1,629 pages in simple Swedish, and 136,501 pages in standard Swedish, with a total of 29.6 million tokens. The corpus was aligned using three different alignment algorithms, broadly following Kajiwara and Komachi (2016).
The alignment algorithms, Average Alignment (AA), Maximum Alignment (MA), and Hungarian Alignment (HA), were originally proposed by Song and Roth (2015). They align sentence pairs by calculating and combining the similarities of word embeddings to create a sentence similarity score. The AA algorithm bases the sentence similarity on the average of the pairwise word similarities of all words of a pair of sentences. The MA algorithm considers the word pairs that maximise the word similarity of all words of a pair of sentences, and the sentence similarity score is given by the sum of the word similarity scores. The HA algorithm determines the sentence similarity by finding the one-to-one word assignment with the lowest cost (in our case, the highest cosine value), and the resulting sum is normalised by the length of the shortest sentence in the sentence pair. For all algorithms, we could alter the word similarity threshold (the threshold above which a word pair is regarded as similar enough) and the sentence similarity threshold (the threshold above which a sentence pair is similar enough and should be aligned). A few modifications of the Kajiwara and Komachi (2016) implementation were made. The language was changed to Swedish, and unknown words, so-called Out-of-Vocabulary (OOV) words, were treated differently. Since Kajiwara and Komachi (2016) used word embeddings trained on a large-scale corpus, they disregarded the OOV words when calculating the sentence similarity scores. However, since we used a much smaller set of Swedish word embeddings, Swectors (Fallgren et al., 2016), ignoring OOV words was not a viable approach. Instead, we used Mimick (Pinter et al., 2017) to train a recurrent neural network at the character level, in order to predict OOV word vectors based on a word's spelling. The intuition behind this approach is that word embeddings generated from the spelling of a word provide a better vector estimation than other common methods (such as creating a randomised word embedding), since they capture features related to the shape of the word. In this article, we present detailed results on the nature of the different algorithms using a combination of evaluations. In Section 2.1., we investigate at what sentence similarity threshold humans perceive the aligned sentence pairs as semantically similar. In Section 2.2., we aim to find the algorithm and the best combination of parameters to maximise alignment performance. In Section 2.3., we investigate whether the sentences in the aligned sentence pairs differ in complexity. In Section 3., results and methodological considerations are discussed, and the conclusions are presented in Section 4.
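To make the three alignment scores concrete, the following is a minimal sketch of how such sentence similarities can be computed from word embeddings. It is an illustration of the general technique, not the authors' implementation: the embedding lookup (`emb`), the exact MA normalisation, and the word-threshold handling are assumptions, and the Mimick-based OOV handling is omitted.

```python
# Sketch of the three sentence-similarity scores (AA, MA, HA) described above.
# Assumes word vectors are available in `emb` (a dict from word to numpy array).
import numpy as np
from scipy.optimize import linear_sum_assignment

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def word_similarities(sent1, sent2, emb):
    # Matrix of pairwise word similarities between the two sentences.
    return np.array([[cosine(emb[w1], emb[w2]) for w2 in sent2] for w1 in sent1])

def average_alignment(sim):
    # AA: average of all pairwise word similarities.
    return float(sim.mean())

def maximum_alignment(sim, word_threshold=0.49):
    # MA: each word is matched with its most similar word in the other
    # sentence; scores under the word threshold are ignored, and the sum is
    # normalised by the total number of words (one possible normalisation).
    best1, best2 = sim.max(axis=1), sim.max(axis=0)
    total = best1[best1 >= word_threshold].sum() + best2[best2 >= word_threshold].sum()
    return float(total) / (sim.shape[0] + sim.shape[1])

def hungarian_alignment(sim):
    # HA: optimal one-to-one word assignment (lowest cost, i.e. highest
    # cosine), normalised by the length of the shorter sentence.
    rows, cols = linear_sum_assignment(-sim)
    return float(sim[rows, cols].sum()) / min(sim.shape)
```

A sentence pair is then kept as an alignment if its score exceeds the sentence similarity threshold.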
The main contribution of this work is the provision and evaluation of a new text simplification corpus for Swedish.", "cite_spans": [ { "start": 558, "end": 584, "text": "(Rennes and J\u00f6nsson, 2016)", "ref_id": "BIBREF17" }, { "start": 821, "end": 848, "text": "Kajiwara and Komachi (2016)", "ref_id": "BIBREF13" }, { "start": 900, "end": 920, "text": "Song and Roth (2015)", "ref_id": "BIBREF20" }, { "start": 1983, "end": 2010, "text": "Kajiwara and Komachi (2016)", "ref_id": "BIBREF13" }, { "start": 2166, "end": 2193, "text": "Kajiwara and Komachi (2016)", "ref_id": "BIBREF13" }, { "start": 2407, "end": 2430, "text": "(Fallgren et al., 2016)", "ref_id": "BIBREF8" }, { "start": 2503, "end": 2524, "text": "(Pinter et al., 2017)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "A total of three evaluations were performed. The first two evaluations aimed to tune the values of the word and sentence similarity thresholds to maximise the performance of the algorithms. An aligned corpus of sentence pairs was then created using the best-performing threshold values. The third evaluation aimed to investigate whether the aligned corpus consisted of sentence pairs that differed in complexity, i.e., whether we really had a corpus of standard and simple Swedish. Since the sentences are extracted from corpora consisting of standard and simple documents, it is intuitive that the extracted sentences are good representatives of standard and simple text segments. However, given the way the corpus was created, we cannot know that the sentence pairs are true alignments, that is, that the simple sentence is a simplified version of the standard sentence. The third evaluation thus investigates whether the sentences of the different parts of the corpus in fact differ in complexity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluations", "sec_num": "2." }, { "text": "The quality of the sentence pairs generated by the alignment algorithms was evaluated in a human evaluation conducted through a web survey. The word threshold value was set to 0.49 following Kajiwara and Komachi (2016). The aim of this evaluation was to see at what sentence threshold humans perceive the aligned sentences as semantically similar.", "cite_spans": [ { "start": 191, "end": 218, "text": "Kajiwara and Komachi (2016)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation I: Human Evaluation", "sec_num": "2.1." }, { "text": "2.1.1. Procedure
From the three corpora generated by the different algorithms, we randomly picked three sentence pairs per similarity interval (0.51-0.60, 0.61-0.70, 0.71-0.80, 0.81-0.90, 0.91-1.0). The number of sentence pairs aligned by the AA algorithm was, however, very small (<10). AA was therefore excluded from this evaluation. For MA and HA, a total of 30 sentence pairs were extracted. All extracted pairs from HA and MA were then included in a web survey, and participants were asked to grade the sentence pairs on a four-point scale regarding similarity. The grading was based on categories previously used to create a manually annotated data set (Hwang et al., 2015). For this evaluation, the categories were translated into Swedish and slightly reformulated to suit non-experts.
The reformulated categories were:", "cite_spans": [ { "start": 658, "end": 678, "text": "(Hwang et al., 2015)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation I: Human Evaluation", "sec_num": "2.1." }, { "text": "1. Meningarna handlar om helt olika saker
The sentences treat completely different things
2. Meningarna handlar om olika saker men delar en kortare fras
The sentences treat different things, but share a shorter phrase
3. En menings inneh\u00e5ll t\u00e4cks helt av den andra meningen, men inneh\u00e5ller \u00e4ven ytterligare information
The content of one sentence is completely covered by the other sentence, but the latter also contains additional information
4. Meningarnas inneh\u00e5ll matchar helt, m\u00f6jligtvis med sm\u00e5 undantag (t. ex. pronomen, datum eller nummer)
The content of the sentences matches completely, possibly with minor exceptions (such as pronouns, dates or numbers)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation I: Human Evaluation", "sec_num": "2.1." }, { "text": "Convenience sampling was used to gather responses, and 61 participants submitted a response to the web survey.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation I: Human Evaluation", "sec_num": "2.1." }, { "text": "The results of the human evaluation are presented in Table 1, and further illustrated in Figure 1. The sentence pairs in the corpus using the MA algorithm were generally considered more similar than the sentence pairs of the corpus aligned with the HA algorithm. For the MA algorithm, a sentence threshold over 0.71 seemed to produce similar sentences. The HA algorithm did not reach an average value above 2. The high standard deviation across all intervals shows that these results should be interpreted with caution.
Figure 1: Average grade per interval, according to the web survey (where a value of 0 means that the sentences are not considered similar, and a value of 3 means that the sentences are considered very similar).", "cite_spans": [], "ref_spans": [ { "start": 53, "end": 60, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 90, "end": 98, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "2.1.2." }, { "text": "The gold standard evaluation was performed to find the best parameter settings regarding word and sentence thresholds for all three alignment algorithms (AA, MA, HA).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation II: Gold Standard", "sec_num": "2.2." }, { "text": "All alignment algorithms used a threshold for word alignment and a threshold for sentence alignment. We used a gold standard to reveal the optimal combination of parameters that maximises the F1 score. The gold standard was collected broadly following the procedure in Hwang et al. (2015), annotated by one graduate student and two paid undergraduate students.", "cite_spans": [ { "start": 268, "end": 287, "text": "Hwang et al. (2015)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.2.1." }, { "text": "Document pairs (based on a title match) were presented to the annotators, and they were instructed to rate each sentence pair according to the descriptions of each point of the scale.
If there were any doubts, the annotators were instructed to focus on the semantic meaning rather than on specific words. A training example was given prior to the annotation. Only sentences with exactly three annotations were considered, which resulted in 4,548 sentence pairs. Of these pairs, 4,457 were rated as Bad, 37 were rated as Bad Partial, 24 were rated as Good Partial, and 30 were rated as Good.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.2.1." }, { "text": "The inter-annotator agreement was calculated using the Intra-class Correlation Coefficient (ICC), and revealed excellent agreement, ICC(2, 3) = 0.964. Since the gold standard was divided into four categories, we performed two experiments. In the first experiment (GGPO), the sentences rated as Good and Good Partial were considered correct alignments, and in the second experiment (GO) we restricted the correct alignments to only the sentences rated as Good.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.2.1." }, { "text": "As in the previous evaluation, the AA algorithm resulted in a very low number of aligned sentences for all given conditions when tested on the gold sentences. In the GGPO setting, presented in Table 2, the results were as follows:", "cite_spans": [], "ref_spans": [ { "start": 193, "end": 200, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "2.2.2." }, { "text": "\u2022 The AA algorithm maximised its performance at F1 = 0.034, aligning 3 sentences (no difference was observed when changing parameters or vector conditions).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "2.2.2." }, { "text": "\u2022 The MA algorithm maximised its performance at F1 = 0.758, aligning 39 sentences (Mimick vectors, word similarity threshold of 0.39, sentence similarity threshold of 0.7).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "2.2.2." }, { "text": "\u2022 The HA algorithm maximised its performance at F1 = 0.762, aligning 49 sentences (Mimick vectors, word similarity threshold of 0.79, sentence similarity threshold of 0.7).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "2.2.2." }, { "text": "Algorithm Max F1 No. sentences
AA 0.060 2
MA 0.892 33
HA 0.800 38
Table 3: The best-performing algorithm conditions in the GO setting.", "cite_spans": [], "ref_spans": [ { "start": 56, "end": 63, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "2.2.2." }, { "text": "In the GO setting, presented in Table 3, we saw similar tendencies:", "cite_spans": [], "ref_spans": [ { "start": 32, "end": 39, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "2.2.2." }, { "text": "\u2022 The AA algorithm maximised its performance at F1 = 0.060, aligning 2 sentences (Mimick vectors, word similarity threshold of \u2265 0.29 and sentence similarity threshold of \u2265 0.4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "2.2.2." }, { "text": "\u2022 The MA algorithm maximised its performance at F1 = 0.892, aligning 33 sentences (Mimick vectors, word similarity threshold of \u2265 0.39 and sentence similarity threshold of 0.8).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "2.2.2." }, { "text": "\u2022 The HA algorithm maximised its performance at F1 = 0.800, aligning 38 sentences (Mimick vectors, word similarity threshold of \u2265 0.59 and sentence similarity threshold of 0.9).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "2.2.2." }
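The parameter sweep behind these results can be pictured as a small grid search. The sketch below is hypothetical (the `align_corpus` callback and the gold-pair set are assumptions, not the authors' code), but it shows the mechanics of maximising F1 over the two thresholds.

```python
# Hypothetical sketch of the Evaluation II parameter sweep: try combinations
# of word and sentence similarity thresholds and keep the one maximising F1
# against the gold standard. `align_corpus(word_t, sent_t)` is assumed to
# return the set of sentence pairs aligned under those thresholds.
from itertools import product

def f1(predicted, gold):
    tp = len(predicted & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(predicted), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

def tune_thresholds(align_corpus, gold_pairs):
    best_score, best_params = 0.0, None
    word_grid = [t / 100 for t in range(29, 100, 10)]  # 0.29, 0.39, ..., 0.99
    sent_grid = [t / 10 for t in range(4, 10)]         # 0.4, 0.5, ..., 0.9
    for word_t, sent_t in product(word_grid, sent_grid):
        score = f1(align_corpus(word_t, sent_t), gold_pairs)
        if score > best_score:
            best_score, best_params = score, (word_t, sent_t)
    return best_score, best_params
```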
}, { "text": "\u2022 The HA algorithm maximised its performance at F 1 = 0.800, aligning 38 sentences (Mimick vectors, word similarity threshold of \u2265 0.59 and sentence similarity threshold of 0.9).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "2.2.2." }, { "text": "Generally, the conditions using Mimick for generating vectors for out-of-vocabulary words performed better in terms of precision, recall and number of aligned sentences. The best-performing algorithm was the MA in the GO setting, and HA in the GGPO setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "2.2.2." }, { "text": "After discovering the best-performing similarity thresholds for word and sentence alignment, the winning algorithm was re-run on the raw corpus of Swedish public authorities and municipalities web texts. The performance of MA and HA did not differ much in the GGPO setting, but MA was substantially better in the GO setting. Another benefit of MA is that it less computationally demanding, which could be important to consider when running on large corpora. We chose to run the alignment with the MA algorithm, using a word similarity threshold of 0.39 and a sentence similarity threshold of 0.7. This resulted in a resource of 45,671 sentence pairs. After removing duplicates, 15,433 sentence pairs remained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Corpus", "sec_num": "2.2.3." }, { "text": "The aligned corpus was further analysed based on text characteristics. In this evaluation, we were interested in whether the sentence pairs in the aligned resource in fact differed in complexity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation III: Text Characteristics", "sec_num": "2.3." }, { "text": "Since the aligned corpus contained duplicate sentences, we only considered the 15,433 unique sentence pairs for this analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "First, we performed a corpus-level surface analysis, using frequency and ratio measures to get a general overview of the corpus. The corpus-level measures have been previously used for analysing comparable corpora of texts in simple and standard Swedish (Heimann M\u00fchlenbock, 2013). However, since this corpus does not include documents, but rather sentences, some of the measures used by Heimann M\u00fchlenbock (2013) are not applicable. The measures we excluded from the analysis were LIX (Bj\u00f6rnsson, 1968) , type-token ratio and OVIX (Hultman and Westman, 1977) . The measures used for the corpus-level analysis were:", "cite_spans": [ { "start": 486, "end": 503, "text": "(Bj\u00f6rnsson, 1968)", "ref_id": "BIBREF3" }, { "start": 532, "end": 559, "text": "(Hultman and Westman, 1977)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 Total number of words, calculated as the number of all the alphanumeric word tokens in the sub-corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 Number of unique words, calculated as the number of all unique alphanumeric word tokens in the subcorpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." 
}, { "text": "\u2022 Ratio of long words, defined as the ratio of words longer than 6 characters to the total number of words in the sub-corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 Ratio of extra long words, defined as the ratio of words longer than 13 characters to the total number of words in the sub-corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "We then performed a sentence-level surface analysis of the collected corpora. The complexity measures were calculated for all sentences in the simple Swedish sub-corpus, and all sentences in the standard sub-corpus, and significance testing was performed using two-tailed t-test. The measures considered for the sentence-level surface analysis were:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 Word length (chars), calculated as the mean word length in number of characters. This value was calculated for each sentence, and then averaged over the entire sub-corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 Word length (syll), calculated as the mean word length in number of syllables. For simplicity, we let the number of vowels correspond to the number of syllables. This value was calculated for each sentence, and then averaged over the entire sub-corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 Sentence length (words), calculated as the number of tokens of a sentence. This value was calculated for each sentence, and then averaged over the entire subcorpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 Number of long words, defined as the number of words longer than 6 characters. This value was calculated for each sentence, and then averaged over the entire sub-corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 Number of extra long words, defined as the number of words longer than 13 characters. This value was calculated for each sentence, and then averaged over the entire sub-corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "Finally, we calculated the measures of a subset of a feature set used for text complexity classification (Falkenjack et al., 2013) . The subset (hereafter: SCREAM-sent) consisted of the measures that were suitable for sentence-level analysis. The selection was done according to Falkenjack (2018) . A new version of SAPIS (Fahlborg and Rennes, 2016) , an API service for text analysis and simplification, was used to calculate the linguistic measures used for the SCREAMsent analysis. The new version has the same functionality as the original version of SAPIS, but now uses efselab 3 (\u00d6stling, 2018) for part-of-speech tagging. 
, { "text": "Since the SCREAM-sent measures were calculated at the sentence level, all measures indicating an average should be regarded as absolute for a given sentence. The significance testing was performed using two-tailed t-tests, assuming non-equal variances. The selected features were:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 avg dep distance dependent, calculated as the average dependency distance in the document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 avg n syllables, calculated as the average number of syllables per word in the document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 avg prep comp, calculated as the average number of prepositional complements in the document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 avg sentence depth, calculated as the average sentence depth.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 avg word length, calculated as the average word length in a document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 n content words, calculated as the number of content words (nouns, verbs, adjectives and adverbs).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 n dependencies, calculated as the number of dependencies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 n lix long words, calculated as the number of long words as defined by the LIX formula; words with more than 6 characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 n nominal postmodifiers, calculated as the number of nominal post-modifiers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 n nominal premodifiers, calculated as the number of nominal pre-modifiers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 n right dependencies, calculated as the number of right dependencies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 n sub clauses, calculated as the number of sub-clauses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 Lemma frequencies, derived from the basic Swedish vocabulary SweVoc (Heimann M\u00fchlenbock and Johansson Kokkinakis, 2012):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1."
}, { "text": "3 https://github.com/robertostling/efselab -n swevoc c, calculated as the number of words that belong to the SweVoc C word list. SweVoc C contains lemmas that are fundamental for communication.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "-n swevoc d, calculated as the number of words that belong to the SweVoc D word list. SweVoc D contains lemmas for everyday use.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "-n swevoc h, calculated as the number of words that belong to the SweVoc H word list. SweVoc H contains other highly frequent lemmas.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "-n swevoc s, calculated as the number of words that belong to the SweVoc S word list. SweVoc S contains supplementary words from Swedish Base Vocabulary Pool.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "-n swevoc total, calculated as the number of words that belong to the total SweVoc word list. SweVoc Total contains SweVoc words of all categories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 n syllables, calculated as the number of syllables in the document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 n tokens, calculated as the number of tokens in the document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 n unique tokens, calculated as the number of unique tokens in the document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 n verbal roots, calculated as the number of sentences where the root is a verb.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 n verbs, calculated as the number of verbs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 right dependency ratio, calculated as the ratio of the number of right dependencies to the number of total dependencies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 sub clause ratio, calculated as the ratio of subclauses to the total amount of sub-clauses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "\u2022 total token length, calculated as the length of all tokens of a document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2.3.1." }, { "text": "We performed three sets of analyses: one corpus-level surface analysis, and two sentence-level analyses. The corpuslevel analysis and the first sentence-level analysis account for the measures previously used by Heimann M\u00fchlenbock (2013). The second sentence-level analysis accounts for the SCREAM-sent measures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "2.3.2." }, { "text": "The results of the corpus-level surface analysis are presented in Table 4 . The corpus of simple sentences is slightly smaller in size regarding the total number of words. 
, { "text": "We performed three sets of analyses: one corpus-level surface analysis, and two sentence-level analyses. The corpus-level analysis and the first sentence-level analysis account for the measures previously used by Heimann M\u00fchlenbock (2013). The second sentence-level analysis accounts for the SCREAM-sent measures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "2.3.2." }, { "text": "The results of the corpus-level surface analysis are presented in Table 4. The corpus of simple sentences is slightly smaller in size regarding the total number of words. The corpus of standard sentences exhibits a larger variety regarding word variation (number of unique word tokens), and has a slightly higher ratio of long and extra long word tokens.
Measure simple standard
Total number of words 177,011 181,111
Number of unique words 10,373 11,593
Ratio of long words 22.55% 22.97%
Ratio of extra long words 3.28% 3.44%
Table 4: Overview of the characteristics of the sentences in the simple part of the corpus (simple) and the standard part of the corpus (standard).", "cite_spans": [], "ref_spans": [ { "start": 66, "end": 73, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "2.3.2." }, { "text": "The results of the sentence-level surface analysis are presented in Table 5. This analysis also shows a tendency of the corpus of simple sentences to have shorter word length (in both number of characters and number of syllables), shorter sentence length, and a lower number of long and extra long words. The differences are statistically significant.
Table 5: Sentence-level surface analysis.", "cite_spans": [], "ref_spans": [ { "start": 68, "end": 75, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "2.3.2." }, { "text": "The results of the sentence-level analysis using the SCREAM-sent measures are presented in Table 6. Statistically significant p-values are marked in bold. Most measures show statistically significant differences. Measures related to the length of the sentence, such as the number of syllables and the number of tokens, are generally higher in the standard sentences. There is also a significant difference in sentence depth and number of right dependencies, which could indicate higher complexity in the standard sentences. The simple sentences generally exhibit shorter token length, and fewer long words (>6 characters). No difference could be observed regarding the SweVoc measures from category C (core vocabulary), D (words referring to everyday objects and actions), and H (highly frequent words). However, statistically significant differences were observed for the SweVoc category S (supplementary words from the Swedish Base Vocabulary Pool), and SweVoc Total.
Table 6: Results from the t-test comparing the sentences in the simple sub-corpus (simple) with the sentences in the standard sub-corpus (standard). The n lix long words differs from the Number of long words in Table 5, since the former uses the lemma form in its calculation.", "cite_spans": [], "ref_spans": [ { "start": 91, "end": 98, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "2.3.2." }, { "text": "We have presented results from three evaluations. The first and second evaluations were performed on the previously aligned corpus in order to find the optimal combination of settings for the corpus alignment. Then, the corpus was aligned with the best-performing parameter settings, and the third evaluation was conducted on the new resource of aligned sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3." }, { "text": "\u2022 Evaluation I, the human evaluation, indicated that sentence pairs produced by the MA algorithm were regarded as more similar than sentence pairs produced by the HA algorithm.
A sentence similarity threshold of 0.71 seemed to produce sentence pairs that were perceived as similar, but the results lack statistical power.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3." }, { "text": "\u2022 Evaluation II, the evaluation on the gold standard, indicated that the best-performing combination of settings for the alignment in the GGPO condition was the HA algorithm, using Mimick vector generation, a word similarity threshold of 0.79, and a sentence similarity threshold of 0.7. In the GO condition, the best-performing combination of settings was the MA algorithm, using Mimick vector generation, a word similarity threshold of \u2265 0.39 and a sentence similarity threshold of 0.8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3." }, { "text": "\u2022 Evaluation III, the evaluation of text characteristics, revealed that there are many statistically significant differences between the sentences in the simple sub-corpus and the sentences in the standard sub-corpus. The standard part of the corpus generally scores higher on features used to predict text complexity, when compared sentence-wise to the sentences collected from the material in simple Swedish.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3." }, { "text": "This work has resulted in a sentence-aligned Swedish corpus of sentence pairs that differ in complexity. Many of the differences observed in the final text complexity evaluation are to be expected if we accept the hypothesis that the sentences belonging to the standard part of the corpus are more complex than the sentences in the simple Swedish sub-corpus. Such measures include the number of long words (in characters and syllables), sentence length (in tokens and syllables), and sentence depth. However, some of the measures are not straightforward to interpret. For example, Falkenjack et al. (2013) argue that the ratio of content words is ambiguous, since a high ratio could be indicative of higher information density, while a low ratio could mean higher syntactic complexity. We did not observe any statistically significant differences in the majority of the SweVoc measures, and this could possibly be explained by the nature of the used alignment algorithm. Since the algorithm aims to find semantically similar sentence pairs, it is likely that the aligned sentences will also be lexically similar. The linguistic analysis of the different parts of the corpus in this study does not include pairwise comparison, which could reveal whether the complexity differs between the sentences within the sentence pairs. The human evaluation shows tendencies of when the sentences are perceived as similar. However, due to the low sample size, these tendencies cannot be confirmed without an additional study with a larger sample. It would also be interesting to see whether human readers experience differences in complexity when presented with the sentences in the sentence pairs. The collected corpus contains texts written by expert writers, following general guidelines on how to write simple text. However, even though there are some general traits of what makes a text easy to read, one must remember that the needs of the different target groups may vary. Second language learners face different problems than persons with dyslexia or aphasia, and there can be large variations within each target group.
The corpus collected in this study is restricted in this sense, and future work would benefit from a more target-centred approach. For the purpose of ATS, sentence-aligned resources can be sub-optimal, since simplification operations are not limited to the sentence level. The division of long or complex sentences into multiple shorter sentences is not an uncommon operation when simplifying text, nor is the addition of explanatory sentences to clarify one complex sentence. However, it has been pointed out that certain simplification approaches are best modelled with 1-to-1 alignments (see for example Alva-Manchego et al. (2017)), and that more complex operations might need other methods and data organised in a different manner. A resource aligned at the sentence level can be used to investigate specific sentence-level simplification operations, but it is important to be aware of the limitations, and that additional resources, such as aligned text fragments or even full documents, are needed for a complete ATS analysis.", "cite_spans": [ { "start": 581, "end": 605, "text": "Falkenjack et al. (2013)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3." }, { "text": "In this article, we have presented the work on creating and evaluating an aligned resource of Swedish sentence pairs that differ in complexity. The first two evaluations aimed to find the algorithm and the best combination of parameters to maximise alignment performance. The last evaluation investigated whether the sentences in the aligned sentence pairs in fact differed in complexity. The resulting corpus consisted of 45,671 sentence pairs, of which 15,433 were unique. The statistical analysis indicates that the sentences belonging to the simple Swedish sub-corpus are generally less complex than the sentences belonging to the standard part of the corpus, according to both surface-level measures and analysis at a deeper linguistic level. Future research includes further analysis of the sentence pairs to see which simplification operations are present in the data, as well as making use of this resource in data-driven text simplification research for Swedish.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4."
} ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Learning how to simplify from explicit labeling of complex-simplified text pairs", "authors": [ { "first": "F", "middle": [], "last": "Alva-Manchego", "suffix": "" }, { "first": "J", "middle": [], "last": "Bingel", "suffix": "" }, { "first": "G", "middle": [], "last": "Paetzold", "suffix": "" }, { "first": "C", "middle": [], "last": "Scarton", "suffix": "" }, { "first": "L", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "295--305", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alva-Manchego, F., Bingel, J., Paetzold, G., Scarton, C., and Specia, L. (2017). Learning how to simplify from explicit labeling of complex-simplified text pairs. In Proceedings of the Eighth International Joint Confer- ence on Natural Language Processing (Volume 1: Long Papers), pages 295--305, Taipei, Taiwan. Asian Feder- ation of Natural Language Processing.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Data-driven sentence simplification: Survey and benchmark", "authors": [], "year": null, "venue": "Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Data-driven sentence simplification: Survey and bench- mark. Computational Linguistics, pages 1-87, 01.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Learning to simplify sentences using wikipedia", "authors": [ { "first": "C", "middle": [ "H" ], "last": "Bj\u00f6rnsson", "suffix": "" }, { "first": "", "middle": [], "last": "L\u00e4sbarhet", "suffix": "" }, { "first": "", "middle": [], "last": "Liber", "suffix": "" }, { "first": "", "middle": [], "last": "Stockholm", "suffix": "" }, { "first": "W", "middle": [], "last": "Coster", "suffix": "" }, { "first": "D", "middle": [], "last": "Kauchak", "suffix": "" } ], "year": 1968, "venue": "Proceedings of the workshop on monolingual text-to-text generation", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bj\u00f6rnsson, C. H. (1968). L\u00e4sbarhet. Liber, Stockholm. Coster, W. and Kauchak, D. (2011a). Learning to simplify sentences using wikipedia. In Proceedings of the work- shop on monolingual text-to-text generation, pages 1-9. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Simple english wikipedia: a new text simplification task", "authors": [ { "first": "W", "middle": [], "last": "Coster", "suffix": "" }, { "first": "D", "middle": [], "last": "Kauchak", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers", "volume": "2", "issue": "", "pages": "665--669", "other_ids": {}, "num": null, "urls": [], "raw_text": "Coster, W. and Kauchak, D. (2011b). Simple english wikipedia: a new text simplification task. In Proceedings of the 49th Annual Meeting of the Association for Com- putational Linguistics: Human Language Technologies: short papers-Volume 2, pages 665-669. 
Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Introducing SAPIS - an API service for text analysis and simplification", "authors": [ { "first": "D", "middle": [], "last": "Fahlborg", "suffix": "" }, { "first": "E", "middle": [], "last": "Rennes", "suffix": "" } ], "year": 2016, "venue": "The second national Swe-Clarin workshop: Research collaborations for the digital age", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fahlborg, D. and Rennes, E. (2016). Introducing SAPIS - an API service for text analysis and simplification. In The second national Swe-Clarin workshop: Research collaborations for the digital age, Ume\u00e5, Sweden.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Features indicating readability in Swedish text", "authors": [ { "first": "J", "middle": [], "last": "Falkenjack", "suffix": "" }, { "first": "K", "middle": [], "last": "Heimann M\u00fchlenbock", "suffix": "" }, { "first": "J\u00f6nsson", "middle": [], "last": "", "suffix": "" }, { "first": "A", "middle": [], "last": "", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 19th Nordic Conference of Computational Linguistics (NoDaLiDa-2013)", "volume": "", "issue": "", "pages": "27--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Falkenjack, J., Heimann M\u00fchlenbock, K., and J\u00f6nsson, A. (2013). Features indicating readability in Swedish text. In Proceedings of the 19th Nordic Conference of Computational Linguistics (NoDaLiDa-2013), Oslo, Norway, number 085 in NEALT Proceedings Series 16, pages 27-40. Link\u00f6ping University Electronic Press.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Towards a standard dataset of swedish word vectors", "authors": [ { "first": "P", "middle": [], "last": "Fallgren", "suffix": "" }, { "first": "J", "middle": [], "last": "Segeblad", "suffix": "" }, { "first": "M", "middle": [], "last": "Kuhlmann", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Sixth Swedish Language Technology Conference (SLTC)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fallgren, P., Segeblad, J., and Kuhlmann, M. (2016). Towards a standard dataset of swedish word vectors. In Proceedings of the Sixth Swedish Language Technology Conference (SLTC), Ume\u00e5, Sweden.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "SweVoc - a Swedish vocabulary resource for CALL", "authors": [ { "first": "K", "middle": [], "last": "Heimann M\u00fchlenbock", "suffix": "" }, { "first": "S", "middle": [], "last": "Johansson Kokkinakis", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the SLTC 2012 workshop on NLP for CALL", "volume": "", "issue": "", "pages": "28--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heimann M\u00fchlenbock, K. and Johansson Kokkinakis, S. (2012). SweVoc - a Swedish vocabulary resource for CALL. In Proceedings of the SLTC 2012 workshop on NLP for CALL, pages 28-34, Lund. Link\u00f6ping University Electronic Press.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "I see what you mean. Assessing readability for specific target groups. Dissertation, Spr\u00e5kbanken", "authors": [ { "first": "K", "middle": [], "last": "Heimann M\u00fchlenbock", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heimann M\u00fchlenbock, K. (2013). I see what you mean.
Assessing readability for specific target groups. Dissertation, Spr\u00e5kbanken, Dept of Swedish, University of Gothenburg.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Gymnasistsvenska", "authors": [ { "first": "T", "middle": [ "G" ], "last": "Hultman", "suffix": "" }, { "first": "M", "middle": [], "last": "Westman", "suffix": "" }, { "first": "Lund", "middle": [], "last": "Liberl\u00e4romedel", "suffix": "" } ], "year": 1977, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hultman, T. G. and Westman, M. (1977). Gymnasistsvenska. LiberL\u00e4romedel, Lund.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Aligning sentences from standard wikipedia to simple wikipedia", "authors": [ { "first": "W", "middle": [], "last": "Hwang", "suffix": "" }, { "first": "H", "middle": [], "last": "Hajishirzi", "suffix": "" }, { "first": "M", "middle": [], "last": "Ostendorf", "suffix": "" }, { "first": "W", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2015, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "211--217", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hwang, W., Hajishirzi, H., Ostendorf, M., and Wu, W. (2015). Aligning sentences from standard wikipedia to simple wikipedia. In HLT-NAACL, pages 211-217.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Building a monolingual parallel corpus for text simplification using sentence similarity based on alignment between word embeddings", "authors": [ { "first": "T", "middle": [], "last": "Kajiwara", "suffix": "" }, { "first": "M", "middle": [], "last": "Komachi", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "1147--1158", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kajiwara, T. and Komachi, M. (2016). Building a monolingual parallel corpus for text simplification using sentence similarity based on alignment between word embeddings. In Proceedings of COLING, Osaka, Japan, pages 1147-1158.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Exploring neural text simplification models", "authors": [ { "first": "S", "middle": [], "last": "Nisioi", "suffix": "" }, { "first": "S", "middle": [], "last": "\u0160tajner", "suffix": "" }, { "first": "S", "middle": [ "P" ], "last": "Ponzetto", "suffix": "" }, { "first": "L", "middle": [ "P" ], "last": "Dinu", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "85--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nisioi, S., \u0160tajner, S., Ponzetto, S. P., and Dinu, L. P. (2017). Exploring neural text simplification models.
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 85-91.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "MaltParser: A language-independent system for data-driven dependency parsing", "authors": [ { "first": "J", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "J", "middle": [], "last": "Hall", "suffix": "" }, { "first": "J", "middle": [], "last": "Nilsson", "suffix": "" }, { "first": "A", "middle": [], "last": "Chanev", "suffix": "" }, { "first": "G", "middle": [], "last": "Eryigit", "suffix": "" }, { "first": "S", "middle": [], "last": "K\u00fcbler", "suffix": "" }, { "first": "S", "middle": [], "last": "Marinov", "suffix": "" }, { "first": "E", "middle": [], "last": "Marsi", "suffix": "" } ], "year": 2007, "venue": "Natural Language Engineering", "volume": "13", "issue": "2", "pages": "95--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nivre, J., Hall, J., Nilsson, J., Chanev, A., Eryigit, G., K\u00fcbler, S., Marinov, S., and Marsi, E. (2007). MaltParser: A language-independent system for data-driven dependency parsing. Natural Language Engineering, 13(2):95-135.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Mimicking word embeddings using subword rnns", "authors": [ { "first": "Y", "middle": [], "last": "Pinter", "suffix": "" }, { "first": "R", "middle": [], "last": "Guthrie", "suffix": "" }, { "first": "J", "middle": [], "last": "Eisenstein", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "102--112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pinter, Y., Guthrie, R., and Eisenstein, J. (2017). Mimicking word embeddings using subword rnns. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 102-112.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Towards a corpus of easy to read authority web texts", "authors": [ { "first": "E", "middle": [], "last": "Rennes", "suffix": "" }, { "first": "A", "middle": [], "last": "J\u00f6nsson", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Sixth Swedish Language Technology Conference (SLTC2016)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rennes, E. and J\u00f6nsson, A. (2016). Towards a corpus of easy to read authority web texts. In Proceedings of the Sixth Swedish Language Technology Conference (SLTC2016), Ume\u00e5, Sweden.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Automatic Text Simplification", "authors": [ { "first": "H", "middle": [], "last": "Saggion", "suffix": "" } ], "year": 2017, "venue": "Synthesis Lectures on Human Language Technologies", "volume": "32", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saggion, H. (2017). Automatic Text Simplification. Number Vol. 32 in Synthesis Lectures on Human Language Technologies.
Morgan & Claypool Publishers.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Text simplification from professionally produced corpora", "authors": [ { "first": "C", "middle": [], "last": "Scarton", "suffix": "" }, { "first": "G", "middle": [], "last": "Paetzold", "suffix": "" }, { "first": "L", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Scarton, C., Paetzold, G., and Specia, L. (2018). Text simplification from professionally produced corpora. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Unsupervised sparse vector densification for short text similarity", "authors": [ { "first": "Y", "middle": [], "last": "Song", "suffix": "" }, { "first": "D", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1275--1280", "other_ids": {}, "num": null, "urls": [], "raw_text": "Song, Y. and Roth, D. (2015). Unsupervised sparse vector densification for short text similarity. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1275-1280.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Translating from Complex to Simplified Sentences", "authors": [ { "first": "L", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 9th international conference on Computational Processing of the Portuguese Language (PROPOR)", "volume": "", "issue": "", "pages": "30--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Specia, L. (2010). Translating from Complex to Simplified Sentences. In Proceedings of the 9th international conference on Computational Processing of the Portuguese Language (PROPOR), pages 30-39.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Sentence simplification by monolingual machine translation", "authors": [ { "first": "S", "middle": [], "last": "Wubben", "suffix": "" }, { "first": "A", "middle": [], "last": "Van Den Bosch", "suffix": "" }, { "first": "E", "middle": [], "last": "Krahmer", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1015--1024", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wubben, S., van den Bosch, A., and Krahmer, E. (2012). Sentence simplification by monolingual machine translation. 
In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1015-1024, Jeju Island, Korea.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Problems in current text simplification research: New data can help", "authors": [ { "first": "W", "middle": [], "last": "Xu", "suffix": "" }, { "first": "C", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "C", "middle": [], "last": "Napoles", "suffix": "" } ], "year": 2015, "venue": "Transactions of the Association for Computational Linguistics", "volume": "3", "issue": "", "pages": "283--297", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu, W., Callison-Burch, C., and Napoles, C. (2015). Problems in current text simplification research: New data can help. Transactions of the Association for Computational Linguistics, 3:283-297.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Optimizing statistical machine translation for text simplification", "authors": [ { "first": "W", "middle": [], "last": "Xu", "suffix": "" }, { "first": "C", "middle": [], "last": "Napoles", "suffix": "" }, { "first": "E", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Q", "middle": [], "last": "Chen", "suffix": "" }, { "first": "C", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "401--415", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu, W., Napoles, C., Pavlick, E., Chen, Q., and Callison-Burch, C. (2016). Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401-415.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Sentence simplification with deep reinforcement learning", "authors": [ { "first": "X", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "M", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "584--594", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang, X. and Lapata, M. (2017). Sentence simplification with deep reinforcement learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 584-594, Copenhagen, Denmark. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A constrained sequence-to-sequence neural model for sentence simplification", "authors": [ { "first": "Y", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Z", "middle": [], "last": "Ye", "suffix": "" }, { "first": "Y", "middle": [], "last": "Feng", "suffix": "" }, { "first": "D", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "R", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang, Y., Ye, Z., Feng, Y., Zhao, D., and Yan, R. (2017). A constrained sequence-to-sequence neural model for sentence simplification. 
CoRR, abs/1704.02312.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A monolingual tree-based translation model for sentence simplification", "authors": [ { "first": "Z", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "D", "middle": [], "last": "Bernhard", "suffix": "" }, { "first": "I", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd international conference on computational linguistics", "volume": "", "issue": "", "pages": "1353--1361", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhu, Z., Bernhard, D., and Gurevych, I. (2010). A monolingual tree-based translation model for sentence simplification. In Proceedings of the 23rd international conference on computational linguistics, pages 1353-1361. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Part of speech tagging: Shallow or deep learning?", "authors": [ { "first": "R", "middle": [], "last": "\u00d6stling", "suffix": "" } ], "year": 2018, "venue": "North European Journal of Language Technology", "volume": "5", "issue": "", "pages": "1--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "\u00d6stling, R. (2018). Part of speech tagging: Shallow or deep learning? North European Journal of Language Technology, 5:1-15.", "links": null } }, "ref_entries": { "TABREF1": { "text": "Results of the human evaluation of MA and HA. Good=3, Good Partial=2, Partial=1, and Bad=0.", "num": null, "html": null, "content": "