{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:57:59.245162Z" }, "title": "Coreference-Based Text Simplification", "authors": [ { "first": "Rodrigo", "middle": [], "last": "Wilkens", "suffix": "", "affiliation": { "laboratory": "", "institution": "LiLPa -University of Strasbourg", "location": {} }, "email": "rswilkens@gmail.com" }, { "first": "Bruno", "middle": [], "last": "Oberle", "suffix": "", "affiliation": { "laboratory": "", "institution": "LiLPa -University of Strasbourg", "location": {} }, "email": "b.oberle@zoho.eu" }, { "first": "Amalia", "middle": [], "last": "Todirascu", "suffix": "", "affiliation": { "laboratory": "", "institution": "LiLPa -University of Strasbourg", "location": {} }, "email": "todiras@unistra.fr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Text simplification aims at adapting documents to make them easier to read by a given audience. Usually, simplification systems consider only lexical and syntactic levels, and, moreover, are often evaluated at the sentence level. Thus, studies on the impact of simplification in text cohesion are lacking. Some works add coreference resolution in their pipeline to address this issue. In this paper, we move forward in this direction and present a rule-based system for automatic text simplification, aiming at adapting French texts for dyslexic children. The architecture of our system takes into account not only lexical and syntactic but also discourse information, based on coreference chains. Our system has been manually evaluated in terms of grammaticality and cohesion. We have also built and used an evaluation corpus containing multiple simplification references for each sentence. It has been annotated by experts following a set of simplification guidelines, and can be used to run automatic evaluation of other simplification systems. Both the system and the evaluation corpus are freely available.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Text simplification aims at adapting documents to make them easier to read by a given audience. Usually, simplification systems consider only lexical and syntactic levels, and, moreover, are often evaluated at the sentence level. Thus, studies on the impact of simplification in text cohesion are lacking. Some works add coreference resolution in their pipeline to address this issue. In this paper, we move forward in this direction and present a rule-based system for automatic text simplification, aiming at adapting French texts for dyslexic children. The architecture of our system takes into account not only lexical and syntactic but also discourse information, based on coreference chains. Our system has been manually evaluated in terms of grammaticality and cohesion. We have also built and used an evaluation corpus containing multiple simplification references for each sentence. It has been annotated by experts following a set of simplification guidelines, and can be used to run automatic evaluation of other simplification systems. 
Both the system and the evaluation corpus are freely available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Text cohesion, a crucial feature for text understanding, is reinforced by explicit cohesive devices such as coreference (expressions referring to the same discourse entity: Dany Boon-the French actor-his film) and anaphoric (an anaphor and its antecedent: it-the fox) chains. Coreference chains involve at least 3 referring expressions (such as proper names, noun phrases (NP), pronouns) indicating the same discourse entity (Schnedecker, 1997) , while anaphoric chains involve a directed relation between the anaphor (the pronoun) and its antecedent. However, coreference and anaphora resolution is a difficult task for people with language disabilities, such as dyslexia (Vender, 2017; Jaffe et al., 2018; Sprenger-Charolles and Ziegler, 2019) . Moreover, when concurrent referents are present in the text, the pronoun resolution task is even more difficult (Giv\u00f3n, 1993; McMillan et al., 2012; Li et al., 2018) : the pronouns may be ambiguous and their resolution depends on user knowledge about the main topic (Le Bou\u00ebdec and Martins, 1998) . This poses special issues for some NLP tasks, such as text simplification. Automatic text simplification (ATS) adapts text for a specific target audience, such as L1 or L2 language learners or people with language or cognitive disabilities, such as autism (Yaneva and Evans, 2015) and dyslexia (Rello et al., 2013) . Existing simplification systems work at the lexical or syntactic level, or both. Lexical simplification aims to replace complex words with simpler ones (Rello et al., 2013; Fran\u00e7ois et al., 2016; Billami et al., 2018) , while syntactic simplification transforms complex structures (Seretan, 2012; Brouwers et al., 2014a) . However, these transformations change the discourse structure and might violate some cohesion or coherence constraints. Problems appear at the discourse level when lexical or syntactic simplifications ignore coreference. In the following example, the substitution of hy\u00e8ne 'hyena' by animal 'animal' introduces an ambiguity in coreference resolution, since the animal might be le renard 'the fox' or la hy\u00e8ne 'the hyena'. 
Original: Le renard se trouvait au fond du puits et appellait.", "cite_spans": [ { "start": 426, "end": 445, "text": "(Schnedecker, 1997)", "ref_id": "BIBREF30" }, { "start": 675, "end": 689, "text": "(Vender, 2017;", "ref_id": "BIBREF42" }, { "start": 690, "end": 709, "text": "Jaffe et al., 2018;", "ref_id": "BIBREF14" }, { "start": 710, "end": 747, "text": "Sprenger-Charolles and Ziegler, 2019)", "ref_id": "BIBREF36" }, { "start": 862, "end": 875, "text": "(Giv\u00f3n, 1993;", "ref_id": "BIBREF12" }, { "start": 876, "end": 898, "text": "McMillan et al., 2012;", "ref_id": "BIBREF21" }, { "start": 899, "end": 915, "text": "Li et al., 2018)", "ref_id": "BIBREF19" }, { "start": 1020, "end": 1046, "text": "Bou\u00ebdec and Martins, 1998)", "ref_id": "BIBREF17" }, { "start": 1296, "end": 1320, "text": "(Yaneva and Evans, 2015)", "ref_id": "BIBREF44" }, { "start": 1334, "end": 1354, "text": "(Rello et al., 2013)", "ref_id": "BIBREF28" }, { "start": 1507, "end": 1527, "text": "(Rello et al., 2013;", "ref_id": "BIBREF28" }, { "start": 1528, "end": 1550, "text": "Fran\u00e7ois et al., 2016;", "ref_id": "BIBREF11" }, { "start": 1551, "end": 1572, "text": "Billami et al., 2018)", "ref_id": "BIBREF3" }, { "start": 1636, "end": 1651, "text": "(Seretan, 2012;", "ref_id": "BIBREF32" }, { "start": 1652, "end": 1675, "text": "Brouwers et al., 2014a)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "La hy\u00e8ne l'approcha. 'The fox was at the bottom of the well. The hyena approached it.' Simplified: Le renard se trouvait au fond du puits. L'animal l'approcha. 'The fox was at the bottom of the well. The animal approached it.' However, few existing syntactic simplification systems (e.g. Siddharthan (2006) and Canning (2002) ) operate at the discourse level and replace pronouns by antecedents or fix these discourse constraints after the syntactic simplification process (Quiniou and Daille, 2018) . In this paper, we evaluate the influence of coreference in the text simplification task. In order to achieve this goal, we propose a rule-based text simplification architecture aware of coreference information, and we analyse its impact at the lexical and syntactic levels as well as for text cohesion and coherence. We also explore the use of coreference information as a simplification device in order to adapt NP accessibility and improve some coreference-related issues. For this purpose, we have developed an evaluation corpus, annotated by human experts, following discourse-based simplification guidelines. This paper is organised as follows. We present related work on cohesion markers such as coreference chains, as well as lexical and syntactic simplification systems that take into account these elements (Section 2). Then, we present the architecture of our rule-based simplification system alongside the corpus used to build it and the corpus used to evaluate it (Section 3). The rules themselves, and the system evaluation, are presented in Section 4. Finally, Section 5 presents final remarks.", "cite_spans": [ { "start": 288, "end": 306, "text": "Siddharthan (2006)", "ref_id": "BIBREF35" }, { "start": 311, "end": 325, "text": "Canning (2002)", "ref_id": "BIBREF7" }, { "start": 473, "end": 499, "text": "(Quiniou and Daille, 2018)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Few systems explore discourse-related features (e.g. 
entity densities and syntactic transitions) to evaluate text readability alongside other lexical or morphosyntactic properties (\u0160tajner et al., 2012; Todirascu et al., 2016; Pitler and Nenkova, 2008) . Linguistic theories such as Accessibility theory (Ariel, 2001) organise referring expressions and their surface forms into a hierarchy that predicts the structure of cohe-sion markers such as coreference chains. In this respect, a new discourse entity is introduced by a low accessibility referring expression, such as a proper noun or a full NP. On the contrary, pronouns and possessive determiners are used to recall already known entities. This theory is often used to explain coreference chain structure and properties (Todirascu et al., 2017) . Other linguistic theories such as Centering theory (Grosz et al., 1995) predict discourse centres following a typology of centre shift or maintenance and explains linguistic parameters related to coherence issues. Simplification systems frequently ignore existing cohesive devices. This aspect is however taken into account by, for instance, Siddharthan (2006) , Brouwers et al. (2014a) and Quiniou and Daille (2018) . Canning (2002) replaces anaphor by their antecedent for a specific target audience. Siddharthan (2004) first uses anaphora detection to replace pronouns by NP. Then a set of ordered hand-made syntactic rules is applied (e.g. conjunctions are simplified before relative clauses). Rhetorical Structure Theory (Mann and Thompson, 1988) is used to reorder the output of the syntactic simplification and anaphoric relations are checked after simplification. Moreover, Siddharthan (2006) proposes a model based on Centering theory (Grosz et al., 1995) to recover broken cohesion relations, by using a specific pronoun resolution system for English. The model allows the replacement of a pronoun by its immediate antecedent. Few systems use a coreference resolution module to solve coreference issues (Barbu et al., 2013) . For French, Quiniou and Daille (2018) develop a simple pronoun resolution module, inspired by (Mitkov, 2002 ) (e.g. searching antecedents in two sentences before the pronoun). This system previously detects expletive pronouns to exclude them from pronoun resolution. Brouwers et al. (2014a) mainly propose syntactic simplification using hand-made rules implemented with Tregex and Tsurgeon (Levy and Andrew, 2006) . The only rules handling anaphora replace pronouns with NP from the previous or the current sentence. To sum up, only a few ATS approaches, mostly for English, propose discourse simplification rules or rules checking discourse constraints. 1", "cite_spans": [ { "start": 180, "end": 202, "text": "(\u0160tajner et al., 2012;", "ref_id": null }, { "start": 203, "end": 226, "text": "Todirascu et al., 2016;", "ref_id": "BIBREF40" }, { "start": 227, "end": 252, "text": "Pitler and Nenkova, 2008)", "ref_id": "BIBREF25" }, { "start": 304, "end": 317, "text": "(Ariel, 2001)", "ref_id": "BIBREF1" }, { "start": 778, "end": 802, "text": "(Todirascu et al., 2017)", "ref_id": "BIBREF41" }, { "start": 856, "end": 876, "text": "(Grosz et al., 1995)", "ref_id": "BIBREF13" }, { "start": 1147, "end": 1165, "text": "Siddharthan (2006)", "ref_id": "BIBREF35" }, { "start": 1168, "end": 1191, "text": "Brouwers et al. 
(2014a)", "ref_id": "BIBREF5" }, { "start": 1196, "end": 1221, "text": "Quiniou and Daille (2018)", "ref_id": "BIBREF27" }, { "start": 1224, "end": 1238, "text": "Canning (2002)", "ref_id": "BIBREF7" }, { "start": 1308, "end": 1326, "text": "Siddharthan (2004)", "ref_id": "BIBREF34" }, { "start": 1531, "end": 1556, "text": "(Mann and Thompson, 1988)", "ref_id": "BIBREF20" }, { "start": 1687, "end": 1705, "text": "Siddharthan (2006)", "ref_id": "BIBREF35" }, { "start": 1749, "end": 1769, "text": "(Grosz et al., 1995)", "ref_id": "BIBREF13" }, { "start": 2018, "end": 2038, "text": "(Barbu et al., 2013)", "ref_id": "BIBREF2" }, { "start": 2135, "end": 2148, "text": "(Mitkov, 2002", "ref_id": "BIBREF22" }, { "start": 2308, "end": 2331, "text": "Brouwers et al. (2014a)", "ref_id": "BIBREF5" }, { "start": 2431, "end": 2454, "text": "(Levy and Andrew, 2006)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "Taking into account our goal of analysing the impact of coreference in text simplification, we compiled two different types of corpora (one for the evaluation and other for the simplification reference), described in Section 3.1., we propose a coreference-aware architecture in Section 3.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3." }, { "text": "One of the most critical elements in text simplification is the target audience since it defines what types of operations should be performed. In this regard, we compiled a reference corpus composed of parallel texts manually adapted for dyslexic children in the context of the Methodolodys association 2 . This corpus consists of five manually adapted paired tales (1,143 words and 84 sentences for the dyslexic texts and 1,969 words and 151 sentences for the original texts). This corpus helps us to better understand simplifications targeting dyslexic children both for coreference chains and at the lexical and syntactic levels. The reference corpus has been preprocessed in two steps. First, we aligned the corpus using the MEDITE tool (Fenoglio and Ganascia, 2006) . This process identifies the transformations performed (phrase deletion or insertion, sentence splitting, etc.) as well as their level (i.e. lexical, syntactic or discourse). The second step consisted in the manual annotation of coreference chains (mentions and coreference relations) (Todirascu et al., 2017; Schnedecker, 1997; Todirascu et al., 2016) and referring expressions accessibility (Ariel, 1990; Ariel, 2001 ). Then, we compared coreference chains properties: chain size, average distance between mentions, lexical diversity (with the stability coefficient defined by Perret (2000) ), annotation (mention) density, link (relation between consecutive mentions) count and density, grammatical categories of the mentions. The reference corpus provides several meaningful descriptions of the simplification phenomenon. However, it is limited in the sense of system evaluation since it provides only one valid simplification, and it may require resources other than those currently available in NLP technology. In order to build an evaluation corpus, we manually collected simplified alternatives to the original texts (3 texts from the reference corpus and 2 new texts). We used the online PsyToolkit tool 3 , and 25 annotators (master students in linguistics and computational linguistics) participated. 
They all provided information on age, mother tongue and education level, and replied to questionnaires to check reading time and text understanding. Additionally, we summarised the discursive observations identified in the reference corpus (presented in Sections 4.1. and 4.2.) as simplification guidelines 4 provided to the annotators. The purpose of these guidelines was to draw the annotators' attention to discourse operations. To create an evaluation corpus, the students proposed simplified alternatives to texts from the original corpus (we replaced 2 texts to broaden the text coverage). These alternatives had to follow the provided guidelines, but the students could also suggest other simplification proposals. Taking into account the task complexity and the time required to simplify a text, we asked them to simplify only some short paragraphs (894 words per person on average). We excluded from our data the responses from 6 students who did not fully understand the task. We aligned the source text, and then we identified ungrammatical transformations and typos and replaced these answers with the original text. The evaluation corpus also offers complementary simplifications for each text. Thus, it can also be used to select the most significant simplifications required. We obtained several simplified versions for each sentence. The analysis of the simplifications performed in both reference and evaluation corpora is presented in Section 4. Furthermore, the system that uses the result of this analysis is introduced in Section 3.2., and its evaluation is presented in Section 4.3.", "cite_spans": [ { "start": 741, "end": 770, "text": "(Fenoglio and Ganascia, 2006)", "ref_id": "BIBREF10" }, { "start": 1057, "end": 1081, "text": "(Todirascu et al., 2017;", "ref_id": "BIBREF41" }, { "start": 1082, "end": 1100, "text": "Schnedecker, 1997;", "ref_id": "BIBREF30" }, { "start": 1101, "end": 1124, "text": "Todirascu et al., 2016)", "ref_id": "BIBREF40" }, { "start": 1165, "end": 1178, "text": "(Ariel, 1990;", "ref_id": "BIBREF0" }, { "start": 1179, "end": 1190, "text": "Ariel, 2001", "ref_id": "BIBREF1" }, { "start": 1351, "end": 1364, "text": "Perret (2000)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "3.1." }, { "text": "In this paper, we propose a rule-based approach to the simplification task since, on the one hand, the original and simplified parallel corpora are small, which makes applying machine learning methods difficult; and, on the other hand, this kind of approach allows us to study the impact of each type of transformation on the comprehension and reading capabilities of the target audience. In this case, it is possible to decompose the simplification rules into various levels and to evaluate them separately. In this work, we are particularly interested in discursive simplification, which aims at preserving textual cohesion markers, such as coreference chains (Schnedecker, 2017) . The proposed architecture is composed of four modules, as illustrated in Figure 1 . The first module preprocesses the text in order to facilitate the application of the simplification rules. This module starts by annotating the text with a parser (Qi et al., 2019) and with coreference information, which consists of the delimitation of referring expressions (e.g. proper names or named entities, NP and pronouns) and identification of coreference relations between these expressions. 
It is based on the architecture proposed by Kantor and Globerson (2019) but trained on the DEMOCRAT corpus 5 (Landragin, 2016 ). Our trained model achieved 85,04% of CoNLL score (the standard evaluation metric for automatic coreference resolution) with predicted mentions 6 . The syntactic simplification module is inspired by the work of Siddharthan (2003) and Brouwers et al. (2014b) , applying deletion and rewriting rules described in the next sections. Then the data is processed by the third module, the discursive simplification module, which modifies the structure of coreference chains detected by the first module. Finally, the last module applies lexical and morphological simplifications by replacing words. This module is based on ReSyf (Billami et al., 2018) and its API 7 , which allows to query by the easiest alternative synonym to a given target word. Since ReSyf proposed different alternatives for each word sense, we selected as output only those that are the simplest in all senses and the most frequent across the senses. To evaluate the system, taking advantage of the alternative simplification references in the evaluation corpus, we used the SARI measure that correlates to some level with human judgement of simplicity (Xu et al., 2016) . Moreover, as a point of comparison, we also present the results of BLEU (Papineni et al., 2002) , used in MT-based simplification methods. A key element in our architecture is the rewriting tool (see Section 3.2.1.), that allows to search for both lexical and morphosyntactic patterns as well as to modify the syntactic parse structure.", "cite_spans": [ { "start": 658, "end": 677, "text": "(Schnedecker, 2017)", "ref_id": "BIBREF31" }, { "start": 927, "end": 944, "text": "(Qi et al., 2019)", "ref_id": "BIBREF26" }, { "start": 1208, "end": 1235, "text": "Kantor and Globerson (2019)", "ref_id": "BIBREF15" }, { "start": 1273, "end": 1289, "text": "(Landragin, 2016", "ref_id": "BIBREF16" }, { "start": 1503, "end": 1521, "text": "Siddharthan (2003)", "ref_id": "BIBREF33" }, { "start": 1526, "end": 1549, "text": "Brouwers et al. (2014b)", "ref_id": "BIBREF6" }, { "start": 1914, "end": 1936, "text": "(Billami et al., 2018)", "ref_id": "BIBREF3" }, { "start": 2411, "end": 2428, "text": "(Xu et al., 2016)", "ref_id": "BIBREF43" }, { "start": 2503, "end": 2526, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 753, "end": 761, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Architecture", "sec_num": "3.2." }, { "text": "The text rewriting tool applies several text transformations and changes to the structure of the sentences: deletion of secondary information, sentence splitting and phrase changes. However, we have to transform the text without violating the grammar. We compared available rewriting tools such as Tregex and Tsurgeon (Levy and Andrew, 2006) , Semgrex (Chambers et al., 2007) , and Semgrex-Plus (Tamburini, 2017) . Levy and Andrew (2006) provide tree query (Tregex) and manipulation (Tsurgeon) tools that can operate on constituent trees. Tree query tools have proven invaluable to NLP research for both data exploration and corpus-based research. Complementary to Tregex queries, Tsurgeon operates at node and edge levels, to change the structure of the trees, allowing, for example, node renaming, deletion, insertion, movement and replacement. Chambers et al. (2007) proposed Semgrex to handle dependencies instead of constituents. 
The tool identifies semantic patterns supporting inference in texts as an alternative to writing graph traversal coded by hand for each desired pattern. Semgrex allows inter-and extra-dependency graphs relations. For instance, queries may be used to identify direct or indirect governor associations, with or without limitation of the distance between the elements, or even the node positional relation (e.g. immediately precedes, right sibling, right immediate sibling, same nodes). Making a step forward into graph modification alike to Tsurgeon, Tamburini (2017) developed Semgrex-Plus to convert dependency treebanks into various formats. It supports three rewriting operations: replacing the tag of a graph node, and inserting or deleting a dependency edge between two graph nodes. Additionally to those, generic graph processing tools might be adapted for our task. For instance, Bonfante et al. (2018) present GREW, a graph rewriting tool that can perform similar queries to Semgrex, while providing graph operations close to those proposed by Tsurgeon. However, as pointed by Tamburini (2017) , intricacies of the generic tools might have a significant impact on the sentence rewriting process. For querying parsed data, we selected Semgrex because it precisely fits our needs. But, regarding the sentence rewriting goal, we opted to create a new Semgrex-based sentence processing tool, given the parser restrictions and the small set of operations available on Semgrex-Plus. Concerning the operations, we developed the following: (1) Insert injects a node (or tree) in another node; (2) Delete removes a node and its subtree from the sentence graph; (3) Split detaches a node and its subtree; (4) Move detaches a node and its subtree from a tree node, attaching it to another node of the same tree; (5) Replace tag label replaces the node information (e.g. surface and PoS-tag); (6) Replace node substitutes a node by another one; and (7) Copy subgraph creates a deep copy of a node or a tree. The insert, delete, move, and replace node operations are directly based on Tsurgeon while replace label is based both on Tsurgeon and Tsurgeon-plus. The split method is inspired by the Tsurgeon excise and adjoin operations. On the contrary, the copy operation was developed because we needed to copy parts of a sentence into different trees. In Figure 1 : The architecture of the simplification system. addition to these graph operations, we also extended Semgrex to read coreference information when available, and we simplified the morphology feature query by allowing to search by sub-elements without regular expressions. These operations are combined into rules in order to rewrite the text. We detail the process of defining the cohesion rules necessary for our discourse simplification system in Section 4.1. and Section 4.2.", "cite_spans": [ { "start": 318, "end": 341, "text": "(Levy and Andrew, 2006)", "ref_id": "BIBREF18" }, { "start": 352, "end": 375, "text": "(Chambers et al., 2007)", "ref_id": "BIBREF8" }, { "start": 395, "end": 412, "text": "(Tamburini, 2017)", "ref_id": "BIBREF39" }, { "start": 415, "end": 437, "text": "Levy and Andrew (2006)", "ref_id": "BIBREF18" }, { "start": 847, "end": 869, "text": "Chambers et al. (2007)", "ref_id": "BIBREF8" }, { "start": 1821, "end": 1843, "text": "Bonfante et al. 
(2018)", "ref_id": "BIBREF4" }, { "start": 2019, "end": 2035, "text": "Tamburini (2017)", "ref_id": "BIBREF39" } ], "ref_spans": [ { "start": 3284, "end": 3292, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Text rewriting tool", "sec_num": "3.2.1." }, { "text": "In this section, we present and explain the reference corpus analyses of cohesion changes that have been used to design simplification rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4." }, { "text": "The compilation of several observations from the reference corpus is presented in this section. It supports the discourse-level transformations applied in this work. For our purpose, we use Accessibility theory (Ariel, 1990; Ariel, 2001 ), which proposes a hierarchy of referring expressions, from those with low accessibility (such as proper nouns or definite NP; these are usually newly introduced expressions) to highly accessible ones (such as pronouns or determiners; these have usually been introduced previously). Moreover, we use Centering theory (Grosz et al., 1995) , which predicts situations when the attention centre shifts to a new one, resulting in a change of the syntactic function of the centre. By exploiting these observations, we propose three categories of rules. First, we present the results from our analysis, and then we detail these rules. As discussed in Section 3.1., we manually enriched the reference corpus with the annotation proposed in Todirascu et al. (2017) , Schnedecker (1997) and Todirascu et al. (2016) . These properties are presented in Section 3.1. and in Table 1 . Next, we compared these annotations in both the simplified and the original texts to find discourse simplification (cohesion) rules. The comparison between the coreference chain properties in the original and simplified texts is the first step in defining the cohesion rules. Next, we have to identify changes in the structure of coreference chains induced by simplifications before defining the cohesion rules. We start our study of the cohesive elements by comparing the properties and transformations of five text pairs. Each of those was manually annotated with coreference chains. Due to the lack of available data containing original and simplified texts for dyslexic people, our corpus is relatively small when compared to other simplification corpora. Moreover, manual coreference annotation is a time-consuming and challenging task, in terms of referring expression identification (delimiting expressions and finding their type) and of chain identification (linking all the referring expressions belonging to the same chain). The adapted texts present statistically significant differences for some specific coreference properties when compared to the original ones (Table 1) : link count (p=0.01), stability coefficient (p=0.01), chain density (p=0.04), link density (p=0.008), and annotation density (p=0.02). Additionally, the average distance between two consecutive referring expressions is higher in original than in adapted texts, as a consequence of text deletions. We also observe interesting correlations for most of the properties (0.74 for link count, 0.81 for stability coefficient, 0.72 for chain density, and 0.74 for link density). We observe differences between original and adapted texts at the coreference level, but despite this, the correlations between the properties are still valid. Besides, a negative correlation (-0.717) is found between the length of the chains and the number of chains. 
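To make these chain-level measures concrete, the sketch below (our own illustration, not the code used in the paper) derives link count, mention and link densities, average mention distance, a lexical-variety ratio standing in for the stability coefficient, and the part-of-speech distribution of mentions from a toy coreference annotation; the data format (chains as lists of mention dictionaries) and the normalisation by token count are assumptions made for the example.

```python
from collections import Counter

# Toy coreference annotation: each chain is a list of mentions; each mention
# records its surface form, a coarse category and its token offset in the text.
# This format and the normalisation by token count are assumptions made for the
# example, not the paper's actual data structures.
chains = [
    [{"form": "le renard", "pos": "NP.DEF", "offset": 0},
     {"form": "il", "pos": "PRO.PER", "offset": 12},
     {"form": "le renard", "pos": "NP.DEF", "offset": 25}],
    [{"form": "un puits", "pos": "NP.INDEF", "offset": 14},
     {"form": "le puits", "pos": "NP.DEF", "offset": 40}],
]
n_tokens = 60  # length of the toy text, in tokens

def chain_properties(chains, n_tokens):
    mentions = [m for chain in chains for m in chain]
    # A link is the relation between two consecutive mentions of the same chain.
    links = sum(len(chain) - 1 for chain in chains)
    gaps = [b["offset"] - a["offset"]
            for chain in chains for a, b in zip(chain, chain[1:])]
    # Lexical variety inside each chain, a rough stand-in for the stability
    # coefficient (the paper follows Perret (2000); this ratio is only an
    # illustrative approximation).
    variety = sum(len({m["form"].lower() for m in chain}) / len(chain)
                  for chain in chains) / len(chains)
    return {
        "chain_count": len(chains),
        "mention_density": len(mentions) / n_tokens,
        "link_count": links,
        "link_density": links / n_tokens,
        "avg_mention_distance": sum(gaps) / len(gaps) if gaps else 0.0,
        "lexical_variety": variety,
        "pos_distribution": Counter(m["pos"] for m in mentions),
    }

print(chain_properties(chains, n_tokens))
```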
In the adapted texts, longer chains are correlated with a lower number of chains (on average 10.62 against 7.0). Some referents were deleted in adapted versions, which explains this result.", "cite_spans": [ { "start": 211, "end": 224, "text": "(Ariel, 1990;", "ref_id": "BIBREF0" }, { "start": 225, "end": 236, "text": "Ariel, 2001", "ref_id": "BIBREF1" }, { "start": 555, "end": 575, "text": "(Grosz et al., 1995)", "ref_id": "BIBREF13" }, { "start": 971, "end": 994, "text": "Todirascu et al. (2017)", "ref_id": "BIBREF41" }, { "start": 997, "end": 1015, "text": "Schnedecker (1997)", "ref_id": "BIBREF30" }, { "start": 1020, "end": 1043, "text": "Todirascu et al. (2016)", "ref_id": "BIBREF40" } ], "ref_spans": [ { "start": 1099, "end": 1106, "text": "Table 1", "ref_id": null }, { "start": 2265, "end": 2274, "text": "(Table 1)", "ref_id": null } ], "eq_spans": [], "section": "Cohesion changes during the simplification", "sec_num": "4.1." }, { "text": "Adapted The composition of the chains varies with the complexity of the texts, as shown in Figure 2 . In the simplified texts, the pronouns have been deleted or replaced by their referent: this explains that the percentage of personal pronouns (PRO.PER) included in coreference chains is larger in the original texts (36.5% of the mentions) than in the adapted texts (19.4%). This observation is in line with the significant difference for definite noun (NP.DEF) usage (36.0% in the simplified texts but only 18.7% in the original ones) or for proper noun (NP.NAM) usage (3.95% in simplified and 1.91% in original texts). The possessive determiners represent 10.1% in simpler texts but 12.9% in the original texts. This observation is related to our third category (see at the end of the section), concerning possessive NP replacement by a specific referent. Moreover, concerning referring expression accessibility, we observed a significant change in determinant accessibility. This may be observed in the increase of indefinite NPs (NP.INDEF) from 3.11% to 5.03%, while demonstrative NPs (NP.DEM) decrease from 0.48% to 0.36% in simpler texts. This accessibility changing is related with our second category (see end of section), and it is exemplified in cases such as: Original: le 1 loup; cette 2 hy\u00e8ne. 'The 1 wolf; This 2 hyena' Simplified: un 1 loup; la 2 hy\u00e8ne. 'A 1 wolf; The 2 hyena.' Studying the stability coefficient 8 (Perret, 2000) , we observed more stable chains in the dyslexic texts (0.47) than in the original texts (0.60). Thus, the coreference chains present less lexical variation (i.e. more repetitions) in the simple text versions than in the original ones. These observations support the first but also the second category of cohesion rules (see below). To reduce coreference ambiguity, the pronoun il 'it' is replaced by the subject of the previous sentence (le h\u00e9risson 'the hedgehog'):", "cite_spans": [ { "start": 1432, "end": 1446, "text": "(Perret, 2000)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 91, "end": 99, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Properties", "sec_num": null }, { "text": "Original: Le h\u00e9risson voit le loup arriver, mais il 1 n'a pas le temps de se cacher. 'The hedgehog sees the wolf coming, but it 1 has no time to hide himself.' Simplified: Le h\u00e9risson voit le loup arriver, mais le h\u00e9risson 1 n'a pas le temps de se cacher. 'The hedgehog sees that the wolf arriving, but the hedgehog 1 has no time to hide himself.' 
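A minimal sketch of this substitution is given below, assuming the coreference module has already grouped mentions into chains and flagged pronominal mentions; the token-index data format and the function name are ours, for illustration only. The pronoun is rewritten with the closest preceding non-pronominal mention of its chain.

```python
# Illustrative sketch of a Category 1 substitution: rewrite a pronominal mention
# with the closest preceding non-pronominal mention of its coreference chain.
# The token list, the mention format and the function name are assumptions made
# for this example, not the system's actual implementation.
tokens = ["Le", "hérisson", "voit", "le", "loup", "arriver", ",", "mais",
          "il", "n'", "a", "pas", "le", "temps", "de", "se", "cacher", "."]

# One chain for the hedgehog: a full NP (tokens 0-1) and a pronoun (token 8).
chain = [
    {"start": 0, "end": 2, "pronoun": False},   # "Le hérisson"
    {"start": 8, "end": 9, "pronoun": True},    # "il"
]

def replace_pronoun(tokens, chain, mention):
    """Rewrite a pronominal mention using the closest previous NP of its chain."""
    antecedents = [m for m in chain
                   if not m["pronoun"] and m["end"] <= mention["start"]]
    if not antecedents:
        return tokens  # no explicit antecedent before the pronoun: leave it alone
    antecedent = antecedents[-1]
    np_tokens = tokens[antecedent["start"]:antecedent["end"]]
    # Rough handling of the article when the NP leaves sentence-initial position.
    np_tokens = [np_tokens[0].lower()] + np_tokens[1:]
    return tokens[:mention["start"]] + np_tokens + tokens[mention["end"]:]

print(" ".join(replace_pronoun(tokens, chain, chain[1])))
# -> Le hérisson voit le loup arriver , mais le hérisson n' a pas le temps de se cacher .
```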
To reduce working memory, the repeated pronoun is replaced by the referent: Original: Le renard 1 avait tr\u00e8s soif. Il 2 aper\u00e7ut un puits. Sur la poulie, il y avait une corde, et,\u00e0 chaque bout de la corde, il y avait un seau. Il 3 s'assit dans un des seaux et fut entra\u00een\u00e9 au fond. Heureux, il 4 but pendant de longues minutes. 'The fox 1 was very thirsty. It 2 saw a well. On the pulley, there was a rope, and at each end of the rope, there was a bucket. It 3 sat in one of the buckets and was dragged to the bottom. Happily, it 4 drank for long minutes.' Simplified: Le renard 1 avait tr\u00e8s soif. Le renard 2 aper\u00e7ut un puits. Sur la poulie, il y avait une corde, et,\u00e0 chaque bout de la corde, il y avait un seau. Le renard 3 s'assit dans un des seaux et fut entra\u00een\u00e9 au fond. Heureux, le renard 4 but pendant de longues minutes. 'The fox 1 was very thirsty. The fox 2 saw a well. On the pulley there was a rope, and at each end of the rope there was a bucket. The fox 3 sat in one of the buckets and was dragged to the bottom. Happily, the fox 4 drank for long minutes.' Moreover, we define rules from Category 2 to reflect differences between possessive determiners (12.95% vs 10.07%) and proper noun (1.91% vs 3.95%). For instance, the possessive NP (e.g. son mari) should be replaced by its referent (e.g. M. Dupont) in the example: Original: Mme Dupont a pr\u00e9par\u00e9 sa soupe. Son mari 1 dit, pour la premi\u00e8re fois, qu'il n'aime pas sa soupe. 'Mrs Dupont had prepared her soup. Her husband 1 says, for the first time, that he does not like her soup.' 8 A low stability coefficient means that there is a large variety of referring expressions in a given chain, in terms of synonyms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Properties", "sec_num": null }, { "text": "Simplified: Mme Dupont a fait sa soupe. M. Dupont 1 dit, pour la premi\u00e8re fois, qu'il n'aime pas sa soupe. 'Mrs Dupont cooked her soup. Mr. Dupont 1 said for the first time that he does not like her soup.'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Properties", "sec_num": null }, { "text": "The reference corpus alignment also contains pronoun deletions due to the suppression of secondary information. For example, the relative pronoun qui 'who' and the personal pronoun eux 'them' were deleted because the relative clause qui se dirigent vers eux 'who went to them' was deleted. We add a rule to Category 3, concerning information suppression:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Properties", "sec_num": null }, { "text": "Original: En chemin, ils aper\u00e7oivent, au loin, des bandits qui se dirigent vers eux. 'In their way, they saw, far away, bandits who went to them.' Simplified: En chemin, ils aper\u00e7oivent au loin des bandits. 'In their way, they saw, far away, bandits.' Figure 2 : The distribution of referring expression types in the chains for original and simplified texts.", "cite_spans": [], "ref_spans": [ { "start": 252, "end": 260, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Properties", "sec_num": null }, { "text": "All the differences observed in the corpus are summarised in the three following categories, each one containing different rules. Category 1 mark new or repeated entities. A referent should be found for ambiguous pronouns (i.e. several referents might be selected) or successive pronouns in the same chain. 
This operation decreases the number of processing inferences done by the reader to solve coreference relations. Category 2 specifies entities. New entities should be introduced by either an indefinite NP or a proper noun, while definite noun phrases (formed with a definite article or a demonstrative determiner), being highly accessible, refer to known entities. The change of determiner for a more highly accessible one modifies the accessibility of the referring expression. Category 3 makes NPs more accessible. Secondary information, such as relative or oblique clauses, should be removed. As a consequence, mentions of coreference chains are deleted (e.g. indefinite pronouns) as well as non-coreferent pronouns, such as chacun, quelqu'un.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Properties", "sec_num": null }, { "text": "Possessive NPs are replaced by their explicit referent (a proper noun or another NP).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Properties", "sec_num": null }, { "text": "The rules have been written as simplification guidelines and applied by human annotators (Master students from Linguistics and Computational Linguistics) to create an evaluation corpus for our system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Properties", "sec_num": null }, { "text": "Concerning the evaluation corpus, we annotated and ranked the multiple simplification references proposed by the annotators, who followed the simplification guidelines. The proposals are not unanimous; in other words, there is not a single case in which all the annotators agreed on the simplification. Moreover, we observed several parts of the texts without any simplification suggestion from the annotators. These observations are also supported by the low Krippendorff inter-annotator agreement value of 0.189, a measure which combines several annotations and annotators.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed simplifications", "sec_num": "4.2." }, { "text": "We built a typology for simplification that also includes the simplification rules. At the lexical level, one of the most applied rules concerns the deletion of modifiers (adjectives and adverbs) and the replacement of words by simpler synonyms. At the morphological level, we consistently observed a change in the tense of the verbs (usually replacing the simple past (pass\u00e9 simple) by the composed past (pass\u00e9 compos\u00e9), but sometimes replacing the simple/composed past by the present). The replacement of a word by the most frequent word of its morphological family is observed at a lower frequency. Concerning the syntactic modifications, the suppression of secondary information, such as relative or adverbial subordinate clauses, is noticeable, followed by sentence reduction (e.g. sentence splitting at conjunctions and punctuation marks). We also observed some cases of sentence rewriting in order to ensure an SVO (subject-verb-object) structure. These rewriting operations address cleft and passive sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed simplifications", "sec_num": "4.2." }, { "text": "Additionally, we also observed transformations of negative sentences into positive ones, but at a lower frequency. Furthermore, as expected, we identified several cases of discourse simplification. The most applied rules are from Category 2, followed by those from Category 1, and finally Category 3. 
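As an illustration of a Category 2 rule, the sketch below rewrites the determiner of a mention according to its position in the coreference chain: a chain-initial mention receives an indefinite article and a later mention a definite one, following Accessibility theory. The mention format, the tiny determiner table and the function name are assumptions made for this example, not the system's actual implementation.

```python
# Illustrative Category 2 rule: mark new vs. known entities through the determiner.
# A chain-initial mention gets an indefinite article, a later mention a definite one.
# The mention format, the determiner table and the function name are assumptions
# made for this example.
INDEFINITE = {("m", "sg"): "un", ("f", "sg"): "une", ("m", "pl"): "des", ("f", "pl"): "des"}
DEFINITE = {("m", "sg"): "le", ("f", "sg"): "la", ("m", "pl"): "les", ("f", "pl"): "les"}

def adjust_determiner(mention):
    """Return the mention tokens with a determiner matching its chain position."""
    table = INDEFINITE if mention["chain_position"] == 0 else DEFINITE
    det = table[(mention["gender"], mention["number"])]
    return [det] + mention["tokens"][1:]  # assumes the first token is the determiner

mentions = [
    # "le loup" opens its chain; "cette hyène" is a later mention of another chain.
    {"tokens": ["le", "loup"], "gender": "m", "number": "sg", "chain_position": 0},
    {"tokens": ["cette", "hyène"], "gender": "f", "number": "sg", "chain_position": 2},
]

for m in mentions:
    print(" ".join(adjust_determiner(m)))
# -> "un loup" (new entity) and "la hyène" (known entity), as in the corpus example above.
```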
Additionally, we also identified 10% of discourse simplifications, such as the insertion of a pronoun where there is a zero subject, that are not present in the guidelines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed simplifications", "sec_num": "4.2." }, { "text": "After these observations, we coded the most recurrent rules in the rewriting tool presented in Section 3.2.1. At the syntactic level, we addressed secondary information removal and sentence reduction. For the former, the system searches for conjunctions linking full sentences or NP, splitting them into two separate sentences. At the coreference level, the NP splits require repeating some elements to keep reference information. Adverbial clauses are deleted when they are not required by the sentence structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed simplifications", "sec_num": "4.2." }, { "text": "The coded rules at the discourse level consisted of five different strategies. First, non-coreferent pronouns (e.g. chaque and tout) and subordinate pronouns, together with their clauses, are removed. Then, determiners that are in a coreference chain are changed in order to indicate their position in the chain. Moreover, other determiners are changed following Accessibility theory. Similarly, the third rule makes coreference relations in possessive determiners explicit. The next rule searches for ambiguous pronouns and replaces them with their referents. The last rule solves all anaphoric relations of subject pronouns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed simplifications", "sec_num": "4.2." }, { "text": "The rules proposed in the last section feed the simplification system. They are coded using the operations presented in Section 3.2.1. and, as indicated in Section 3.2., the text is annotated with syntactic dependency and coreference information before the simplification pipeline starts. This pipeline stacks the syntactic, discursive and lexical simplifications shown in Figure 1 . Aiming to better understand the impact of the simplification on coreference, we analysed the errors produced by the system. This evaluation is based on the judgement of three judges (two native speakers and one non-native, but advanced, speaker) who evaluated the grammaticality and familiarity of the system output. During this process, they first focused on text cohesion (without lexical simplifications), and then judged the choice of words (lexical simplifications). This approach was adopted to help them to concentrate on the cohesion aspects without distractions from the lexical issues. The total inter-annotator agreement was 56.59% (41.78% for the cohesion and 68.34% for the lexical judgements). Furthermore, we considered only simplification errors spotted by at least two judges. Concerning the cohesion evaluation, we observed that most errors come from the application of rules from Category 2. These rules create referential inconsistencies since they change the determiner. These errors are caused by coreference annotation tool errors and misidentification of idioms and collocations. The coreference tool also contributed to errors in rules from Categories 1 and 2. These errors may have been caused either by coreference chain divisions (causing determination issues) or by chain merging (mixing different entities). Errors related to Category 3 rules were less frequent, and they are mostly related to coreference chain merging. 
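To make this kind of Category 3 behaviour concrete, here is an illustrative sketch of a secondary-information removal rule operating on a dependency parse of the bandit example from Section 4.1.: it locates a relative-clause dependent and deletes its subtree, which also removes the pronoun mentions (qui, eux) that the clause contributed to coreference chains. The toy graph format and the relation label are assumptions made for the example; the actual system expresses such rules as Semgrex-style patterns combined with the Delete operation of its rewriting tool.

```python
# Illustrative sketch of a Category 3 rule: delete the subtree of a relative
# clause from a dependency parse.  The toy graph format and the relation label
# are assumptions for this example, not the system's actual implementation.
sentence = {  # id: (surface form, head id, dependency relation); head 0 = root
    1: ("ils", 2, "nsubj"), 2: ("aperçoivent", 0, "root"),
    3: ("des", 4, "det"), 4: ("bandits", 2, "obj"),
    5: ("qui", 7, "nsubj"), 6: ("se", 7, "expl"),
    7: ("dirigent", 4, "acl:relcl"), 8: ("vers", 9, "case"),
    9: ("eux", 7, "obl"), 10: (".", 2, "punct"),
}

def subtree(sentence, root_id):
    """Collect a node and all of its (possibly indirect) dependents."""
    ids = {root_id}
    changed = True
    while changed:
        changed = False
        for tid, (_, head, _) in sentence.items():
            if head in ids and tid not in ids:
                ids.add(tid)
                changed = True
    return ids

def drop_relative_clauses(sentence):
    """Delete every subtree whose root is attached as a relative clause."""
    to_drop = set()
    for tid, (_, _, rel) in sentence.items():
        if rel == "acl:relcl":
            to_drop |= subtree(sentence, tid)
    return {tid: tok for tid, tok in sentence.items() if tid not in to_drop}

simplified = drop_relative_clauses(sentence)
print(" ".join(form for _, (form, _, _) in sorted(simplified.items())))
# -> "ils aperçoivent des bandits ." : the clause "qui se dirigent vers eux" is gone,
#    along with the pronoun mentions it contributed to coreference chains.
```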
The syntactic transformations do not generate noticeable errors. However, during preliminary evaluations, we identified that they mostly contribute to two error types: they cause cascade errors related to ambiguity if the coreference information is not kept in sentence splitting operations, and the sentence deletion transformations may over-delete central elements due to parsing errors. All these transformations generated a total of 180 errors spread across 207 sentences. Taking into account only the lexical simplifications, the system produced a total of 96 errors (62.35% accuracy). Considering that these errors have an undesirable impact on simplification evaluation, we reverted all incorrect transformations. Given the grammatical output and the evaluation corpus (described in Section 3), we can move to simplicity evaluation. We evaluate the simplification using the SARI measure (Xu et al., 2016 ) (presented in Table 2 ). However, this measure is still new and lacks in-depth studies. We selected random manual simplifications from the evaluation corpus and used them as a reference. The results of both the system output and the manual simplification are presented in Table 2 ; the same behaviour is observed in sentence-level measures.", "cite_spans": [ { "start": 2716, "end": 2732, "text": "(Xu et al., 2016", "ref_id": "BIBREF43" } ], "ref_spans": [ { "start": 373, "end": 381, "text": "Figure 1", "ref_id": null }, { "start": 2749, "end": 2756, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "System evaluation", "sec_num": "4.3." }, { "text": "To better understand the results, we analysed the best and worst SARI results by sentence, which led us to two sources of noise: syntactic and lexical. These issues expose a contradiction in the simplification evaluation. The syntactic noise is related to the removal of secondary information. On the one hand, the judges read and understand the texts without significant loss of information; on the other hand, the candidate simplifications tend to keep the secondary information, even though this operation is one of the most performed at the syntactic level. The lexical issues are related to ReSyf. This dictionary contains lexical information graded by complexity, although most of the replacements indicated by this resource are not present in the evaluation corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System evaluation", "sec_num": "4.3." }, { "text": "We have presented a study of discourse-level transformations to simplify French texts. 9 This study focuses on cohesion issues related to text simplification. From the analysis of a corpus of simplified vs. non-simplified texts, we have first written guidelines for discourse-level simplifications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and further work", "sec_num": "5." }, { "text": "We have then designed a system to automatically apply these simplification guidelines. Our system has been evaluated with a corpus containing alternative simplifications proposed by 19 annotators. This corpus also supported the selection of lexical and syntactic simplification rules. We also presented a proposal for a rule-based coreference-aware simplification system. It was evaluated in terms of text coherence and lexical substitutions by three judges. An automatic evaluation gives a SARI score of 38.13. During the system evaluation, we identified that most of the mis-simplifications are caused by a lack of language resources. 
This indicates that the proposed rules seem appropriate, but that extra-linguistic resources are required or should be improved, as the graded lexicon that we used. In a purely rule-based system like ours, tuning further the rules would require a significant development time. As future work, we intend to improve the system performance. We will explore other coreference properties, such as the negative correlation between the length and the number of chains. We will start with the inclusion of more language resources, but we also intend to explore other approaches than rule-based methods, as well as increase the number of rules through the analysis of other corpora and the use of rules tested in other works, such as Drndarevic and Saggion (2012) . A comparison with baseline systems will also complete the evaluation of our system.", "cite_spans": [ { "start": 1379, "end": 1393, "text": "Saggion (2012)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and further work", "sec_num": "5." }, { "text": "For an overview of simplification studies, including systems for different needs arguing for discourse phenomena processing, see Saggion (2017).2 methodolodys.ch/ is an association providing texts and exercises to improve reading and comprehension skills for", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "psytoolkit.org 4 The guidelines are available on the Web site of Alector project https://alectorsite.wordpress.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We trained the model on text from the 19th to the 21st century: 295,978 tokens, 81,506 relations, 43,211 chains.6 In NLP, a mention is a referring expressions. 7 gitlab.com/Cental-FR/resyf-package", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The corpora and systems are available at https:// github.com/rswilkens/text-rewrite.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We also plan to validate the simplification with a larger group of annotators, including dyslexic children. Moreover, we would like to include feedback from the simplification target-group.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Accessing noun-phrase antecedents", "authors": [ { "first": "M", "middle": [], "last": "Ariel", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ariel, M. (1990). Accessing noun-phrase antecedents. Routledge.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Accessibility theory: An overview. Text representation: Linguistic and psycholinguistic aspects", "authors": [ { "first": "M", "middle": [], "last": "Ariel", "suffix": "" } ], "year": 2001, "venue": "", "volume": "8", "issue": "", "pages": "29--87", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ariel, M. (2001). Accessibility theory: An overview. 
Text representation: Linguistic and psycholinguistic aspects, 8:29-87.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Open book: a tool for helping asd users' semantic comprehension", "authors": [ { "first": "E", "middle": [], "last": "Barbu", "suffix": "" }, { "first": "M", "middle": [ "T" ], "last": "Mart\u00edn-Valdivia", "suffix": "" }, { "first": "L", "middle": [ "A" ], "last": "Ure\u00f1a-L\u00f3pez", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Workshop on Natural Language Processing for Improving Textual Accessibility", "volume": "", "issue": "", "pages": "11--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barbu, E., Mart\u00edn-Valdivia, M. T., and Ure\u00f1a-L\u00f3pez, L. A. (2013). Open book: a tool for helping asd users' seman- tic comprehension. In Proceedings of the Workshop on Natural Language Processing for Improving Textual Ac- cessibility, pages 11-19.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "ReSyf: a French lexicon with ranked synonyms", "authors": [ { "first": "M", "middle": [ "B" ], "last": "Billami", "suffix": "" }, { "first": "T", "middle": [], "last": "Fran\u00e7ois", "suffix": "" }, { "first": "N", "middle": [], "last": "Gala", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "2570--2581", "other_ids": {}, "num": null, "urls": [], "raw_text": "Billami, M. B., Fran\u00e7ois, T., and Gala, N. (2018). ReSyf: a French lexicon with ranked synonyms. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2570-2581, Santa Fe, New Mexico, USA, August. Association for Computational Linguis- tics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Application of Graph Rewriting to Natural Language Processing", "authors": [ { "first": "G", "middle": [], "last": "Bonfante", "suffix": "" }, { "first": "B", "middle": [], "last": "Guillaume", "suffix": "" }, { "first": "G", "middle": [], "last": "Perrier", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bonfante, G., Guillaume, B., and Perrier, G. (2018). Ap- plication of Graph Rewriting to Natural Language Pro- cessing. Wiley Online Library.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Syntactic sentence simplification for french", "authors": [ { "first": "L", "middle": [], "last": "Brouwers", "suffix": "" }, { "first": "D", "middle": [], "last": "Bernhard", "suffix": "" }, { "first": "A.-L", "middle": [], "last": "Ligozat", "suffix": "" }, { "first": "T", "middle": [], "last": "Fran\u00e7ois", "suffix": "" } ], "year": 2014, "venue": "3rd International Workshop on Predicting and Improving Text Readability for Target Reader Populations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brouwers, L., Bernhard, D., Ligozat, A.-L., and Fran\u00e7ois, T. (2014a). Syntactic sentence simplification for french. 
In 3rd International Workshop on Predicting and Im- proving Text Readability for Target Reader Populations.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Syntactic sentence simplification for french", "authors": [ { "first": "L", "middle": [], "last": "Brouwers", "suffix": "" }, { "first": "D", "middle": [], "last": "Bernhard", "suffix": "" }, { "first": "A", "middle": [], "last": "Ligozat", "suffix": "" }, { "first": "T", "middle": [], "last": "Fran\u00e7ois", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations, PITR@EACL", "volume": "", "issue": "", "pages": "47--56", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brouwers, L., Bernhard, D., Ligozat, A., and Fran\u00e7ois, T. (2014b). Syntactic sentence simplification for french. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Popula- tions, PITR@EACL 2014, Gothenburg, Sweden, April 27, 2014, pages 47-56.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Syntactic simplification of Text", "authors": [ { "first": "Y", "middle": [ "M" ], "last": "Canning", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Canning, Y. M. (2002). Syntactic simplification of Text. Ph.D. thesis, University of Sunderland.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Learning alignments and leveraging natural logic", "authors": [ { "first": "N", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "D", "middle": [], "last": "Cer", "suffix": "" }, { "first": "T", "middle": [], "last": "Grenager", "suffix": "" }, { "first": "D", "middle": [], "last": "Hall", "suffix": "" }, { "first": "C", "middle": [], "last": "Kiddon", "suffix": "" }, { "first": "B", "middle": [], "last": "Maccartney", "suffix": "" }, { "first": "M.-C", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "D", "middle": [], "last": "Ramage", "suffix": "" }, { "first": "E", "middle": [], "last": "Yeh", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing", "volume": "", "issue": "", "pages": "165--170", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chambers, N., Cer, D., Grenager, T., Hall, D., Kiddon, C., MacCartney, B., De Marneffe, M.-C., Ramage, D., Yeh, E., and Manning, C. D. (2007). Learning align- ments and leveraging natural logic. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 165-170. Association for Compu- tational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Reducing text complexity through automatic lexical simplification: An empirical study for spanish", "authors": [ { "first": "B", "middle": [], "last": "Drndarevic", "suffix": "" }, { "first": "H", "middle": [], "last": "Saggion", "suffix": "" } ], "year": 2012, "venue": "Procesamiento del lenguaje natural", "volume": "49", "issue": "", "pages": "13--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "Drndarevic, B. and Saggion, H. (2012). Reducing text complexity through automatic lexical simplification: An empirical study for spanish. 
Procesamiento del lenguaje natural, 49:13-20.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Edite, un programme pour l'approche comparative de documents de gen\u00e8se", "authors": [ { "first": "I", "middle": [], "last": "Fenoglio", "suffix": "" }, { "first": "J.-G", "middle": [], "last": "Ganascia", "suffix": "" } ], "year": 2006, "venue": "Manuscrits-Recherche-Invention)", "volume": "27", "issue": "", "pages": "166--168", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fenoglio, I. and Ganascia, J.-G. (2006). Edite, un pro- gramme pour l'approche comparative de documents de gen\u00e8se. Genesis (Manuscrits-Recherche-Invention), 27(1):166-168.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Bleu, contusion, ecchymose: tri automatique de synonymes en fonction de leur difficult\u00e9 de lecture et compr\u00e9hension", "authors": [ { "first": "T", "middle": [], "last": "Fran\u00e7ois", "suffix": "" }, { "first": "M", "middle": [], "last": "Billami", "suffix": "" }, { "first": "N", "middle": [], "last": "Gala", "suffix": "" }, { "first": "Bernhard", "middle": [], "last": "", "suffix": "" }, { "first": "D", "middle": [], "last": "", "suffix": "" } ], "year": 2016, "venue": "JEP-TALN-RECITAL 2016", "volume": "", "issue": "", "pages": "15--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fran\u00e7ois, T., Billami, M., Gala, N., and Bernhard, D. (2016). Bleu, contusion, ecchymose: tri automatique de synonymes en fonction de leur difficult\u00e9 de lecture et compr\u00e9hension. In JEP-TALN-RECITAL 2016, vol- ume 2, pages 15-28.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Coherence in text, coherence in mind", "authors": [ { "first": "T", "middle": [], "last": "Giv\u00f3n", "suffix": "" } ], "year": 1993, "venue": "Pragmatics & Cognition", "volume": "1", "issue": "2", "pages": "171--227", "other_ids": {}, "num": null, "urls": [], "raw_text": "Giv\u00f3n, T. (1993). Coherence in text, coherence in mind. Pragmatics & Cognition, 1(2):171-227.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Centering: A framework for modeling the local coherence of discourse", "authors": [ { "first": "B", "middle": [ "J" ], "last": "Grosz", "suffix": "" }, { "first": "S", "middle": [], "last": "Weinstein", "suffix": "" }, { "first": "A", "middle": [ "K" ], "last": "Joshi", "suffix": "" } ], "year": 1995, "venue": "Computational linguistics", "volume": "21", "issue": "2", "pages": "203--225", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grosz, B. J., Weinstein, S., and Joshi, A. K. (1995). Cen- tering: A framework for modeling the local coherence of discourse. Computational linguistics, 21(2):203-225.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Coreference and focus in reading times", "authors": [ { "first": "E", "middle": [], "last": "Jaffe", "suffix": "" }, { "first": "C", "middle": [], "last": "Shain", "suffix": "" }, { "first": "W", "middle": [], "last": "Schuler", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2018)", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jaffe, E., Shain, C., and Schuler, W. (2018). Coreference and focus in reading times. 
In Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2018), pages 1-9.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Coreference resolution with entity equalization", "authors": [ { "first": "B", "middle": [], "last": "Kantor", "suffix": "" }, { "first": "A", "middle": [], "last": "Globerson", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "673--677", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kantor, B. and Globerson, A. (2019). Coreference resolu- tion with entity equalization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 673-677.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Description, mod\u00e9lisation et d\u00e9tection automatique des cha\u00eenes de r\u00e9f\u00e9rence (DEMO-CRAT)", "authors": [ { "first": "F", "middle": [], "last": "Landragin", "suffix": "" } ], "year": 2016, "venue": "", "volume": "92", "issue": "", "pages": "11--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Landragin, F. (2016). Description, mod\u00e9lisation et d\u00e9tection automatique des cha\u00eenes de r\u00e9f\u00e9rence (DEMO- CRAT). Bulletin de l'Association Fran\u00e7aise pour l'Intelligence Artificielle, 92:11-15.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "La production d'inf\u00e9rences lors de la compr\u00e9hension de textes chez des adultes : une analyse de la litt\u00e9rature. L'ann\u00e9 psychologique", "authors": [ { "first": "Le", "middle": [], "last": "Bou\u00ebdec", "suffix": "" }, { "first": "B", "middle": [], "last": "Martins", "suffix": "" }, { "first": "D", "middle": [], "last": "", "suffix": "" } ], "year": 1998, "venue": "", "volume": "98", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Le Bou\u00ebdec, B. and Martins, D. (1998). La production d'inf\u00e9rences lors de la compr\u00e9hension de textes chez des adultes : une analyse de la litt\u00e9rature. L'ann\u00e9 psy- chologique, 98.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Tregex and tsurgeon: tools for querying and manipulating tree data structures", "authors": [ { "first": "R", "middle": [], "last": "Levy", "suffix": "" }, { "first": "G", "middle": [], "last": "Andrew", "suffix": "" } ], "year": 2006, "venue": "LREC", "volume": "", "issue": "", "pages": "2231--2234", "other_ids": {}, "num": null, "urls": [], "raw_text": "Levy, R. and Andrew, G. (2006). Tregex and tsurgeon: tools for querying and manipulating tree data structures. In LREC, pages 2231-2234. Citeseer.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "The role of syntax during pronoun resolution: Evidence from fmri", "authors": [ { "first": "J", "middle": [], "last": "Li", "suffix": "" }, { "first": "M", "middle": [], "last": "Fabre", "suffix": "" }, { "first": "W.-M", "middle": [], "last": "Luh", "suffix": "" }, { "first": "J", "middle": [], "last": "Hale", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eight Workshop on Cognitive Aspects of Computational Language Learning and Processing", "volume": "", "issue": "", "pages": "56--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, J., Fabre, M., Luh, W.-M., and Hale, J. (2018). The role of syntax during pronoun resolution: Evidence from fmri. 
In Proceedings of the Eight Workshop on Cogni- tive Aspects of Computational Language Learning and Processing, pages 56-64.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Rethorical structure theory: Toward a functional theory of text organization. Text", "authors": [ { "first": "W", "middle": [], "last": "Mann", "suffix": "" }, { "first": "S", "middle": [], "last": "Thompson", "suffix": "" } ], "year": 1988, "venue": "", "volume": "8", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mann, W. and Thompson, S. (1988). Rethorical structure theory: Toward a functional theory of text organization. Text, 8:243-281, 01.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "fmri evidence for strategic decision-making during resolution of pronoun reference", "authors": [ { "first": "C", "middle": [ "T" ], "last": "Mcmillan", "suffix": "" }, { "first": "R", "middle": [], "last": "Clark", "suffix": "" }, { "first": "D", "middle": [], "last": "Gunawardena", "suffix": "" }, { "first": "N", "middle": [], "last": "Ryant", "suffix": "" }, { "first": "M", "middle": [], "last": "Grossman", "suffix": "" } ], "year": 2012, "venue": "Neuropsychologia", "volume": "50", "issue": "5", "pages": "674--687", "other_ids": {}, "num": null, "urls": [], "raw_text": "McMillan, C. T., Clark, R., Gunawardena, D., Ryant, N., and Grossman, M. (2012). fmri evidence for strategic decision-making during resolution of pronoun reference. Neuropsychologia, 50(5):674-687.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Anaphora Resolution", "authors": [ { "first": "R", "middle": [], "last": "Mitkov", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitkov, R. (2002). Anaphora Resolution. Oxford Univer- sity Press.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "K", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "T", "middle": [], "last": "Ward", "suffix": "" }, { "first": "W.-J", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th annual meeting on association for computational linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. (2002). Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311- 318. Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Quelques remarques sur l'anaphore nominale aux xive et xve si\u00e8cles. L'Information grammaticale", "authors": [ { "first": "M", "middle": [], "last": "Perret", "suffix": "" } ], "year": 2000, "venue": "", "volume": "87", "issue": "", "pages": "17--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Perret, M. (2000). Quelques remarques sur l'anaphore nominale aux xive et xve si\u00e8cles. 
L'Information gram- maticale, 87(1):17-23.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Revisiting readability: A unified framework for predicting text quality", "authors": [ { "first": "E", "middle": [], "last": "Pitler", "suffix": "" }, { "first": "A", "middle": [], "last": "Nenkova", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "186--195", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pitler, E. and Nenkova, A. (2008). Revisiting readabil- ity: A unified framework for predicting text quality. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 186-195.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Universal dependency parsing from scratch", "authors": [ { "first": "P", "middle": [], "last": "Qi", "suffix": "" }, { "first": "T", "middle": [], "last": "Dozat", "suffix": "" }, { "first": "Y", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1901.10457" ] }, "num": null, "urls": [], "raw_text": "Qi, P., Dozat, T., Zhang, Y., and Manning, C. D. (2019). Universal dependency parsing from scratch. arXiv preprint arXiv:1901.10457.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Towards a diagnosis of textual difficulties for children with dyslexia", "authors": [ { "first": "S", "middle": [], "last": "Quiniou", "suffix": "" }, { "first": "B", "middle": [], "last": "Daille", "suffix": "" } ], "year": 2018, "venue": "11th International Conference on Language Resources and Evaluation (LREC)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quiniou, S. and Daille, B. (2018). Towards a diagnosis of textual difficulties for children with dyslexia. In 11th International Conference on Language Resources and Evaluation (LREC).", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Simplify or help?: text simplification strategies for people with dyslexia", "authors": [ { "first": "L", "middle": [], "last": "Rello", "suffix": "" }, { "first": "R", "middle": [], "last": "Baeza-Yates", "suffix": "" }, { "first": "S", "middle": [], "last": "Bott", "suffix": "" }, { "first": "H", "middle": [], "last": "Saggion", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 10th International Cross-Disciplinary Conference on Web Accessibility", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rello, L., Baeza-Yates, R., Bott, S., and Saggion, H. (2013). Simplify or help?: text simplification strategies for people with dyslexia. In Proceedings of the 10th In- ternational Cross-Disciplinary Conference on Web Ac- cessibility, page 15. ACM.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Automatic text simplification: Synthesis lectures on human language technologies", "authors": [ { "first": "H", "middle": [], "last": "Saggion", "suffix": "" } ], "year": 2017, "venue": "", "volume": "10", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saggion, H. (2017). Automatic text simplification: Syn- thesis lectures on human language technologies, vol. 10 (1). 
California, Morgan & Claypool Publishers.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Nom propre et cha\u00eenes de r\u00e9f\u00e9rence", "authors": [ { "first": "C", "middle": [], "last": "Schnedecker", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schnedecker, C. (1997). Nom propre et cha\u00eenes de r\u00e9f\u00e9rence. Centre d'Etudes Linguistiques des Textes et Discours de l'Universit\u00e9 de Metz, Paris.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Les cha\u00eenes de r\u00e9f\u00e9rence: une configuration d'indices pour distinguer et identifier les genres textuels", "authors": [ { "first": "C", "middle": [], "last": "Schnedecker", "suffix": "" } ], "year": 2017, "venue": "Langue fran\u00e7aise", "volume": "195", "issue": "3", "pages": "53--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schnedecker, C. (2017). Les cha\u00eenes de r\u00e9f\u00e9rence: une configuration d'indices pour distinguer et identifier les genres textuels. Langue fran\u00e7aise, 195(3):53-72.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Acquisition of syntactic simplification rules for french", "authors": [ { "first": "V", "middle": [], "last": "Seretan", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012)", "volume": "", "issue": "", "pages": "4019--4026", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seretan, V. (2012). Acquisition of syntactic simplification rules for french. In Proceedings of the Eighth Interna- tional Conference on Language Resources and Evalua- tion (LREC-2012), pages 4019-4026.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Preserving discourse structure when simplifying text", "authors": [ { "first": "A", "middle": [], "last": "Siddharthan", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 9th European Workshop on Natural Language Generation (ENLG-2003) at EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siddharthan, A. (2003). Preserving discourse structure when simplifying text. In Proceedings of the 9th Eu- ropean Workshop on Natural Language Generation (ENLG-2003) at EACL 2003.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Syntactic simplification and Text Cohesion", "authors": [ { "first": "A", "middle": [], "last": "Siddharthan", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siddharthan, A. (2004). Syntactic simplification and Text Cohesion. Number 597 in Technical Reports. University of Cambridge, 10.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Syntactic simplification and text cohesion", "authors": [ { "first": "A", "middle": [], "last": "Siddharthan", "suffix": "" } ], "year": 2006, "venue": "Research on Language and Computation", "volume": "4", "issue": "1", "pages": "77--109", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siddharthan, A. (2006). Syntactic simplification and text cohesion. 
Research on Language and Computation, 4(1):77-109.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Apprendre\u00e0 lire : contr\u00f4le, automatismes et auto-apprentissage", "authors": [ { "first": "L", "middle": [], "last": "Sprenger-Charolles", "suffix": "" }, { "first": "J", "middle": [ "C" ], "last": "Ziegler", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sprenger-Charolles, L. and Ziegler, J. C. (2019). Appren- dre\u00e0 lire : contr\u00f4le, automatismes et auto-apprentissage.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "editor, L'apprentissage de la lecture", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "In A. Bentollila & B. Germain, editor, L'apprentissage de la lecture. Nathan, September.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "What can readability measures really tell us about text complexity", "authors": [ { "first": "S", "middle": [], "last": "Stajner", "suffix": "" }, { "first": "R", "middle": [], "last": "Evans", "suffix": "" }, { "first": "C", "middle": [], "last": "Orasan", "suffix": "" }, { "first": "R", "middle": [], "last": "Mitkov", "suffix": "" } ], "year": 2012, "venue": "Proceedings of workshop on natural language processing for improving textual accessibility", "volume": "", "issue": "", "pages": "14--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stajner, S., Evans, R., Orasan, C., and Mitkov, R. (2012). What can readability measures really tell us about text complexity. In Proceedings of workshop on natural language processing for improving textual accessibility, pages 14-22. Citeseer.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Semgrex-plus: a tool for automatic dependency-graph rewriting", "authors": [ { "first": "F", "middle": [], "last": "Tamburini", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Fourth International Conference on Dependency Linguistics", "volume": "", "issue": "", "pages": "248--254", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tamburini, F. (2017). Semgrex-plus: a tool for auto- matic dependency-graph rewriting. In Proceedings of the Fourth International Conference on Dependency Lin- guistics (Depling 2017), pages 248-254.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Are cohesive features relevant for text readability evaluation", "authors": [ { "first": "A", "middle": [], "last": "Todirascu", "suffix": "" }, { "first": "T", "middle": [], "last": "Fran\u00e7ois", "suffix": "" }, { "first": "D", "middle": [], "last": "Bernhard", "suffix": "" }, { "first": "N", "middle": [], "last": "Gala", "suffix": "" }, { "first": "A.-L", "middle": [], "last": "Ligozat", "suffix": "" } ], "year": 2016, "venue": "26th International Conference on Computational Linguistics (COLING 2016)", "volume": "", "issue": "", "pages": "987--997", "other_ids": {}, "num": null, "urls": [], "raw_text": "Todirascu, A., Fran\u00e7ois, T., Bernhard, D., Gala, N., and Ligozat, A.-L. (2016). Are cohesive features relevant for text readability evaluation? In 26th International Con- ference on Computational Linguistics (COLING 2016), pages 987-997.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Cha\u00eenes de r\u00e9f\u00e9rence et lisibilit\u00e9 des textes : Le projet ALLuSIF. 
Langue fran\u00e7aise", "authors": [ { "first": "A", "middle": [], "last": "Todirascu", "suffix": "" }, { "first": "T", "middle": [], "last": "Fran\u00e7ois", "suffix": "" }, { "first": "D", "middle": [], "last": "Bernhard", "suffix": "" }, { "first": "N", "middle": [], "last": "Gala", "suffix": "" }, { "first": "A.-L", "middle": [], "last": "Ligozat", "suffix": "" }, { "first": "R", "middle": [], "last": "Khobzi", "suffix": "" } ], "year": 2017, "venue": "", "volume": "195", "issue": "", "pages": "35--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Todirascu, A., Fran\u00e7ois, T., Bernhard, D., Gala, N., Ligozat, A.-L., and Khobzi, R. (2017). Cha\u00eenes de r\u00e9f\u00e9rence et lisibilit\u00e9 des textes : Le projet ALLuSIF. Langue fran\u00e7aise, 195(3):35-52, September.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Disentangling Dyslexia: Phonological and Processing Impairment in Developmental Dyslexia", "authors": [ { "first": "M", "middle": [], "last": "Vender", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vender, M. (2017). Disentangling Dyslexia: Phonological and Processing Impairment in Developmental Dyslexia. Frankfurt: Peter Lang.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Optimizing statistical machine translation for text simplification", "authors": [ { "first": "W", "middle": [], "last": "Xu", "suffix": "" }, { "first": "C", "middle": [], "last": "Napoles", "suffix": "" }, { "first": "E", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Q", "middle": [], "last": "Chen", "suffix": "" }, { "first": "C", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "401--415", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu, W., Napoles, C., Pavlick, E., Chen, Q., and Callison- Burch, C. (2016). Optimizing statistical machine trans- lation for text simplification. Transactions of the Associ- ation for Computational Linguistics, 4:401-415.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Six good predictors of autistic text comprehension", "authors": [ { "first": "V", "middle": [], "last": "Yaneva", "suffix": "" }, { "first": "R", "middle": [], "last": "Evans", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "697--706", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaneva, V. and Evans, R. (2015). Six good predictors of autistic text comprehension. In Proceedings of the Inter- national Conference Recent Advances in Natural Lan- guage Processing, pages 697-706.", "links": null } }, "ref_entries": { "TABREF1": { "num": null, "type_str": "table", "text": "This table shows the SARI and BLEU scores as well as other measures related to transformations at the sentence level. The result of the BLEU score points out a low n-gram variability in the evaluation corpus. Thus, a smaller number of operations may be a useful strategy for this corpus. The SARI score does not indicate a big difference. Moreover,", "content": "
Measure                  System    Manual annotation
SARI                     38.124    44.720
BLEU                     74.084    91.986
Compression ratio         0.984     1.008
Sentence splits           1.026     1.056
Additions proportion      0.124     0.108
Deletions proportion      0.126     0.104
", "html": null }, "TABREF2": { "num": null, "type_str": "table", "text": "System evaluation.", "content": "", "html": null } } } }