{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:57:56.114043Z"
},
"title": "Combining Expert Knowledge with Frequency Information to Infer CEFR Levels for Words",
"authors": [
{
"first": "Alice",
"middle": [],
"last": "Pintard",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Le Mans Universit\u00e9",
"location": {
"settlement": "Le Mans",
"country": "France"
}
},
"email": "alice.pintard.etu@univ-lemans.fr"
},
{
"first": "Thomas",
"middle": [],
"last": "Fran\u00e7ois",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Louvain",
"location": {
"settlement": "Louvain-la-Neuve",
"country": "Belgique"
}
},
"email": "thomas.francois@uclouvain.be"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Traditional approaches to set goals in second language (L2) vocabulary acquisition relied either on word lists that were obtained from large L1 corpora or on collective knowledge and experience of L2 experts, teachers, and examiners. Both approaches are known to offer some advantages, but also to have some limitations. In this paper, we try to combine both sources of information, namely the official reference level description for French language and the FLElex lexical database. Our aim is to train a statistical model on the French RLD that would be able to turn the distributional information from FLElex into one of the six levels of the Common European Framework of Reference for languages (CEFR). We show that such approach yields a gain of 29% in accuracy compared to the method currently used in the CEFRLex project. Besides, our experiments also offer deeper insights into the advantages and shortcomings of the two traditional sources of information (frequency vs. expert knowledge).",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Traditional approaches to set goals in second language (L2) vocabulary acquisition relied either on word lists that were obtained from large L1 corpora or on collective knowledge and experience of L2 experts, teachers, and examiners. Both approaches are known to offer some advantages, but also to have some limitations. In this paper, we try to combine both sources of information, namely the official reference level description for French language and the FLElex lexical database. Our aim is to train a statistical model on the French RLD that would be able to turn the distributional information from FLElex into one of the six levels of the Common European Framework of Reference for languages (CEFR). We show that such approach yields a gain of 29% in accuracy compared to the method currently used in the CEFRLex project. Besides, our experiments also offer deeper insights into the advantages and shortcomings of the two traditional sources of information (frequency vs. expert knowledge).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Second language acquisition (SLA) research established a strong relationship between the development of reading abilities and the knowledge of vocabulary (Laufer, 1992) . For Grabe (2014, 13) : \"The real goal for more advanced L2 reading is an L2 recognition vocabulary level anywhere above 10,000 [words]\". It is no surprise that vocabulary resources used by designers of L2 curricula, publishers of educational materials, and teachers to set vocabulary learning goals are close to such size. For French language, the popular \"Fran\u00e7ais Fondamental\" (Gougenheim et al., 1964) , which was built from a corpus of authentic documents and influenced a whole generation of French teachers and SLA researchers, includes about 8800 words. Similarly, the currently most popular lexical resources are the Reference Level Descriptions (RLDs), based on the Common European Framework of Reference for languages (CEFR), and available in various languages. The French version, designed by a team of experts, also amounts to about 9,000 words and expressions. However, both type of lists -either built from language data or from the expertise of language and teaching experts -are faced with the issue of identifying the most important words to teach at each stage of the learning process.",
"cite_spans": [
{
"start": 154,
"end": 168,
"text": "(Laufer, 1992)",
"ref_id": "BIBREF13"
},
{
"start": 175,
"end": 191,
"text": "Grabe (2014, 13)",
"ref_id": null
},
{
"start": 550,
"end": 575,
"text": "(Gougenheim et al., 1964)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The most common answers to that challenge have been (1) to use frequency lists obtained from a large corpus of texts intended for native readers and split the list into N frequency bands, each of which is related to one of the stage of the learning process; or (2) to rely on expert knowledge, such as teacher expertise or linguists' recommendations, to assign each word to a given level of reading proficiency. This classification of words in developmental stages is a delicate process whose reliability has hardly been assessed in a systematic manner on L2 learners. Besides, the two main sources of information to build vocabulary lists -word frequency in massive corpora or the knowledge of L2 teach-ing experts -have hardly been exploited together 1 . Recently, an alternative research avenue was investigated within the framework of the CEFRLex project. It offers receptive vocabulary lists for 5 languages: English (D\u00fcrlich and Fran\u00e7ois, 2018) , French (Fran\u00e7ois et al., 2014) , Swedish (Fran\u00e7ois et al., 2016) , Dutch (Tack et al., 2018) , and Spanish. Its innovative side resides in the fact that it does not provide a single frequency for each word, but rather a frequency distribution across the six levels of the CEFR. Moreover, frequencies have been estimated on documents intended for L2 learners, i.e. textbooks and simplified readers, instead of L1 texts. As a result, the resource provides further insights about the way a given word is used across the various development stages of the L2 curriculum. It is also possible to compare word frequency at a given level (e.g. A2) in order to define priorities in terms of vocabulary learning goals. Unfortunately, when it comes to assigning a CEFR level at which a given word should be learned, it is not obvious how the frequency distributions should be transformed in a single CEFR level. In this paper, we aim to investigate two main issues. First, we will test whether we can leverage the knowledge from the French RLD to train a mathematical function, based on machine learning algorithms, able to transform any CEFR-Lex distribution into a CEFR level. Second, we will take advantage of these experiments to further characterize the linguistic and pedagogical differences between these two approaches -building a frequency list from a corpus vs. assigning words to proficiency levels based on expert knowledge -to set vocabulary learning goals. The paper is organized as follows: Section 2. provides more details about the two approaches we will compare (frequency lists and RLD) and reports previous attempts to transform CEFR frequency distributions into a unique CEFR level. Section 3. introduces all methodological details related to our experiments: the three lexical resources used in the study (French RLD, Lexique3, and FLELex) and the process by which these resources were prepared for training machine learning algorithms. This section ends up with the description of the experiments carried out. In Section 4., we report the results of the various experiments before taking advantage of a manual error analysis to discuss the differences between expert knowledge and frequency-based lists at Section 5..",
"cite_spans": [
{
"start": 922,
"end": 950,
"text": "(D\u00fcrlich and Fran\u00e7ois, 2018)",
"ref_id": "BIBREF6"
},
{
"start": 960,
"end": 983,
"text": "(Fran\u00e7ois et al., 2014)",
"ref_id": "BIBREF7"
},
{
"start": 994,
"end": 1017,
"text": "(Fran\u00e7ois et al., 2016)",
"ref_id": "BIBREF8"
},
{
"start": 1026,
"end": 1045,
"text": "(Tack et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 2768,
"end": 2780,
"text": "(French RLD,",
"ref_id": null
},
{
"start": 2781,
"end": 2790,
"text": "Lexique3,",
"ref_id": null
},
{
"start": 2791,
"end": 2802,
"text": "and FLELex)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The process of setting vocabulary goals for L2 learners generally relies on graded lexical resources in which words are assigned to one proficiency level. Such resources are usually built based on the two approaches we have previously outlined: leveraging word frequency information estimated on a corpus or using L2 teaching experts knowledge. Frequency lists, built from a corpus, have been used since the seminal work of Thorndike (1921) who laboriously built the first significant vocabulary for English, including 20,000 words, without the help of any computer. The first computational list was obtained by Ku\u010dera and Francis (1967) from the Brown corpus and has a large influence in education and psychology. At the same time, Gougenheim et al. (1964) published the Fran\u00e7ais Fondamental that would impact a generation of L2 French teachers. More recently, other lists have been developed from larger corpora, such as the CELEX database (Baayen et al., 1993) , the list based on the British National Corpus (Leech et al., 2001) , or SUBTLEX (Brysbaert and New, 2009) . The main shortcomings of such lists for L2 education are that (1) they represent the native distribution of words, which is not fully compatible with the distribution of words in books and textbooks intended for L2 learners; (2) they do not specify at which proficiency level a given word is supposed to be learned. As regards expert knowledge, the most recent and influential resource is connected to the CEFR framework. Since 2001, this framework has been widely adopted within Europe to help standardizing L2 curricula, which involves defining a proficiency scale ranging from A1 (beginners) to C2 (mastery). However, textbook designers, assessment experts and language teachers have agreed that it lacks precision when it comes to describing the linguistic forms that should be learned at a given proficiency level. In a number of countries, efforts have been made to interpret the CEFR guidelines in the form of reference level descriptions 2 . These books describe the language competences expected from an L2 learner in each of the CEFR levels, including lists of words, syntactic structures, and expressions associated with specific communicative functions or themes. Finally, a few papers specifically investigated methods to transform CEFRLex word distribution into a CEFR level",
"cite_spans": [
{
"start": 424,
"end": 440,
"text": "Thorndike (1921)",
"ref_id": "BIBREF20"
},
{
"start": 612,
"end": 637,
"text": "Ku\u010dera and Francis (1967)",
"ref_id": "BIBREF12"
},
{
"start": 733,
"end": 757,
"text": "Gougenheim et al. (1964)",
"ref_id": null
},
{
"start": 942,
"end": 963,
"text": "(Baayen et al., 1993)",
"ref_id": "BIBREF1"
},
{
"start": 1012,
"end": 1032,
"text": "(Leech et al., 2001)",
"ref_id": null
},
{
"start": 1046,
"end": 1071,
"text": "(Brysbaert and New, 2009)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work",
"sec_num": "2."
},
{
"text": "coherent from a pedagogical perspective. (Gala et al., 2013) suggested two approaches. The first one, to which we will refer as First occ, assigns to a given word the level of the textbook it was first observed in. In other words, the level of a word corresponds to the first CEFR level for which FLELex reports a non null frequency. Although simplistic, this rule appeared to be the most effective to predict unknown words reported by four Dutch learners of FFL (Tack et al., 2016) and was consequently used in the CEFRLex interface. The second approach was a variant of the First occ that yields continuous scores and prove to be inferior to the first one. More recently, Alfter et al. (2016) introduced the concept of significant onset of use, that consists in selecting the first level having a sufficiently large enough delta compared to its previous level. All of these studies used mathematical rules to transform distribution into CEFR levels and later use those level as gold-standard for further process. So far, no experiments were reported that tried to cross-validate such mathematical rules, for instance using learners data.",
"cite_spans": [
{
"start": 41,
"end": 60,
"text": "(Gala et al., 2013)",
"ref_id": "BIBREF9"
},
{
"start": 463,
"end": 482,
"text": "(Tack et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 674,
"end": 694,
"text": "Alfter et al. (2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work",
"sec_num": "2."
},
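To make the two transformation rules discussed above concrete, here is a minimal Python sketch; it is an illustration rather than the authors' code, the function names are hypothetical, and the delta threshold is an arbitrary placeholder rather than the value used by Alfter et al. (2016).

```python
# CEFR levels in increasing order, as used in the CEFRLex distributions
CEFR_LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]

def first_occ(freqs):
    """Return the first CEFR level with a non-null frequency (First occ rule)."""
    for level in CEFR_LEVELS:
        if freqs.get(level, 0.0) > 0.0:
            return level
    return None

def significant_onset(freqs, delta=0.3):
    """Select the first level whose frequency increase over the previous level
    is at least `delta` (a simplified reading of the significant onset of use);
    the threshold value here is an arbitrary placeholder."""
    previous = 0.0
    for level in CEFR_LEVELS:
        current = freqs.get(level, 0.0)
        if current - previous >= delta:
            return level
        previous = current
    return first_occ(freqs)  # fall back when no increase is large enough

# Made-up per-million frequencies, for illustration only
example = {"A1": 0.1, "A2": 0.2, "B1": 5.0, "B2": 4.2, "C1": 3.8, "C2": 3.5}
print(first_occ(example))          # -> "A1"
print(significant_onset(example))  # -> "B1"
```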
{
"text": "Our approach consists in considering the French RLD as a gold standard regarding the assignment of words to a given CEFR level. We then infer, from this pedagogical information, a statistical model able to transform the word frequency distribution from FLELex into a CEFR level. To carry out this experiment, the following steps had to be realized. The acquisition and digitization of the French RLD word list is described at Section 3.1., which also briefly reminds the reader of the main characteristics of the two other lexical resources used in our study, namely Lexique3 (New et al., 2007) and FLELex (Fran\u00e7ois et al., 2014) . In the next section (Section 3.2.), we describe a preliminary step prior to the statistical modelling, which consists in delineating the intersection between the three resources. This stage aims at ensuring that missing words would not lead to results biased towards one of the resources. We also took advantage of this step to investigate the coverage discrepancies between the French RLD and FLELex as a first way to characterize the differences between the expert knowledge approach and the frequency-based one. Section 3.3. describes the design of two datasets used for our experiments, whereas Section 3.4. presents the different baselines and models tested.",
"cite_spans": [
{
"start": 576,
"end": 594,
"text": "(New et al., 2007)",
"ref_id": "BIBREF15"
},
{
"start": 606,
"end": 629,
"text": "(Fran\u00e7ois et al., 2014)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3."
},
{
"text": "3.1.1. The French RLD word list The RLD for French language was created by Beacco and his collaborators between 2004 and 2016 (Beacco et al., 2008; Riba, 2016) . Each level -corresponding to a distinct book -is split into 10 chapters representing various dimensions of the linguistic knowledge (e.g. vocabulary, syntactic structures, phonemes, graphemes, fonctional skills, etc.), except for C1 and C2 levels which share the same volume and have a different structure 3 . The classification of linguistic forms to a given level was performed based on crite- ria selected by the authors for their relevance and objectivity: essentially the official descriptors from the CEFR, collective knowledge and experience of experts, teachers and examiners, and examples of learner productions deemed to be at a particular level (Beacco et al., 2008) . To our knowledge, the French RLD, also refered to as \"Beacco\" in this study, has not been used so far in any NLP approaches as it was published in paper format only and is not available in a digitized version. As a consequence, we had to digitize the two chapters relative to lexicon, namely chapter 4, focusing on general notions (e.g. quantity, space), and chapter 6 that focuses on specific notions (e.g. human body, feelings, sports). Those chapters share the same architecture across all levels, organizing words within semantic categories, then specifying the part-of-speech (POS) categories and sometimes providing a context. Polysemous words can therefore have up to 8 entries across the four levels (e.g. \"\u00eatre\", to be). However, as FLELex and Lexique3 do not provide fine-grained semantic distinctions for forms (all meanings are gathered under the same orthographic form), we decided to drop the information on semantic category from the French RLD. When a form had several CEFR levels associated to it, we kept the lowest one, which is in line with the way polysemy is handled in FLELex. This process led us to drop about 2,968 entries, going from 8,486 to 5,518 entries. The number of entries per CEFR level is described in Table 1 (#Beacco).",
"cite_spans": [
{
"start": 126,
"end": 147,
"text": "(Beacco et al., 2008;",
"ref_id": null
},
{
"start": 148,
"end": 159,
"text": "Riba, 2016)",
"ref_id": "BIBREF16"
},
{
"start": 818,
"end": 839,
"text": "(Beacco et al., 2008)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 2079,
"end": 2086,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Source Word Lists",
"sec_num": "3.1."
},
{
"text": "As previous approaches relying on word frequencies to assign proficiency levels to words relied on a L1 corpus, we decided to compare the performance obtained with FLELex with a word list whose frequencies were estimated on a large L1 corpus. We used Lexique3 (New et al., 2007) for this purpose, as it is a rather modern database. The lexicon includes about 50,000 lemmas and 125,000 inflected forms whose frequencies were obtained from movie subtitles.",
"cite_spans": [
{
"start": 260,
"end": 278,
"text": "(New et al., 2007)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Lexique3 word list",
"sec_num": "3.1.2."
},
{
"text": "FLELex (Fran\u00e7ois et al., 2014) is one of the resources being part of the CEFRLex project described above. Similarly to the other languages, it offers frequency distributions for French words across the six CEFR levels. There are discourse in terms of rhetorical effectiveness, natural sequencing or adherence to collaborative principles.",
"cite_spans": [
{
"start": 7,
"end": 30,
"text": "(Fran\u00e7ois et al., 2014)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "FLELex",
"sec_num": "3.1.3."
},
{
"text": "two versions of FLELex: one is based on the TreeTagger (FLELex-TT) and includes 14,236 entries, but no multiword expressions as they cannot be detected by the Tree-Tagger; the second one is based on a conditional random field (CRF) tagger and amounts to 17,871 entries, including 2,037 multi-word expressions. However, the second version has not yet been manually checked and includes various problematic forms. This is why we decided to carry out our experiments based on the FLELex-TT version. Table 1 summarizes the total number of entries having a non null frequency per level (#FLELex), along with the number of new entries per level, currently used in the CEFRLex project to assign a unique level to a given word (#First occ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FLELex",
"sec_num": "3.1.3."
},
{
"text": "As explained above, in order to ensure a comparability of results for each of the three word lists, we delineated their intersection. A prerequisite step was to arrange the POS tagsets compatibility. The main differences regarding those tagsets are that Beacco divides conjunctions in two categories (coordination and subordination), whereas FLELex and Lexique3 split determiners and prepositions (DET:ART vs. DET:POS and PRP vs. PRP:det). We merged all split categories, keeping the TreeTagger labels (Schmid, 1994) . After standardization, nine POS remained: ADJ, ADV, KON, DET, INT, NOM, PRP, PRO, and VER. Second, we identified words in common between Beacco and FLELex: their intersection contains 4,020 entries. This leaves 1,498 words from Beacco that do not appear in FLELex and 10,216 FLELex words absent from Beacco. Such figures were expected as the coverage of FLELex is larger due to its building principles. However, we were concerned by the fact that so many words from Beacco were not found in FLELex and carried out a manual investigation of these. Most missing words can be related to the following causes:",
"cite_spans": [
{
"start": 502,
"end": 516,
"text": "(Schmid, 1994)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping the RLD to FLELex and Lexique3",
"sec_num": "3.2."
},
{
"text": "\u2022 Beacco includes 113 past participle forms of verbs that have not been lemmatized, whereas it is the case in FLELex (e.g. \"assis\" sat, \"\u00e9pic\u00e9\" seasoned);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping the RLD to FLELex and Lexique3",
"sec_num": "3.2."
},
{
"text": "\u2022 Similarly, Beacco also includes 103 feminine or plural forms which are lemmatized in FLELex (e.g. \"vacances\" holiday, \"lunettes\" glasses, \"serveuse\" waitress, etc.);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping the RLD to FLELex and Lexique3",
"sec_num": "3.2."
},
{
"text": "\u2022 Words were sometimes shared by both resources, but were assigned a different POS-tag, preventing automatic matching (e.g. \"bonjour\" hi ! or \"vite\" be quick are interjections in Beacco, but are tagged as nouns or adverbs in FLELex);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping the RLD to FLELex and Lexique3",
"sec_num": "3.2."
},
{
"text": "\u2022 61 entries were kept with capital letters in Beacco as a way to provide information about the word in use (e.g. \"Attention\" Look up !, \"Courage\" Cheer up !);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping the RLD to FLELex and Lexique3",
"sec_num": "3.2."
},
{
"text": "\u2022 Unlike Beacco, FLELex does not include acronyms (e.g.: \"CD\", \"DVD\", \"CV\", etc.);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping the RLD to FLELex and Lexique3",
"sec_num": "3.2."
},
{
"text": "\u2022 Some words were not captured in FLELex despite their presence in FFL textbooks, because they appear in the instructions, grammatical boxes, maps, or calendars rather than in texts related to comprehension tasks (e.g. \"fois\" time, \"adjectif\" adjective, \"virgule\" comma, \"Asie\" Asia, etc.);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping the RLD to FLELex and Lexique3",
"sec_num": "3.2."
},
{
"text": "\u2022 Other words refer to very salient objects in the real world that are poorly represented in corpora. Since Mich\u00e9a (1953) , they are known as available words and, as was expected, some of them were not found in the corpus used to build FLELex (e.g. \"cuisini\u00e8re\" cooker, \"s\u00e8che-cheveux\" hair-dryer, etc.);",
"cite_spans": [
{
"start": 108,
"end": 121,
"text": "Mich\u00e9a (1953)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping the RLD to FLELex and Lexique3",
"sec_num": "3.2."
},
{
"text": "\u2022 Finally, a few words in Beacco were extremely specific (e.g. \"humagne\", a type of grape or \"escrimeur\" fencer).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping the RLD to FLELex and Lexique3",
"sec_num": "3.2."
},
{
"text": "This manual investigation was, to some extent, reassuring, as a fair amount of missing words from Beacco were due to discrepancies in the lemmatization process between a systematic tool and a human. Lexical availability was also an issue, but a predictable one as it concerns all frequencybased approaches. Finally, it appears that restricting the selection of textbook materials to texts related to receptive tasks might help to better model receptive knowledge of L2 learners, but also comes at a cost as regards coverage. We manually solved some of these issues by lemmatizing the entries; converting the POS of all interjections that were nouns or adverbs in FLELex, and replacing capital letters by lowercase letters. In this process, we lost precious information from the RLD about the function of some linguistic forms, but were able to reintroduce 314 words that were not considered as shared by both lexicons before. As a result, the intersection between both resources amounts to 4,334 words. Finally, in order to compare FLELex with Lexique3, we computed the intersection between all three lexicons. Lexique3 having the larger coverage (51,100), there were only 38 words missing from it. The final intersection therefore includes 4,296 entries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping the RLD to FLELex and Lexique3",
"sec_num": "3.2."
},
{
"text": "Based on this intersection between the three resources, we defined two datasets that will be used for our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preparing datasets for experiments",
"sec_num": "3.3."
},
{
"text": "This first dataset corresponds to the intersection between FLELex, Lexique3 and Beacco as defined at Section 3.2.. It contains 4,296 entries, shared by the three lexicons, and classified from A1 to B2 according to Beacco. In this dataset, each entry (word + POS-tag) is related to its CEFR reference level from Beacco and is described with 8 frequency variables, as shown in Table 2 . The frequency variables includes the 7 frequencies provided by FLELex along with the frequency from Lexique3. The latter will however be used only for the computation of the Lexique3 baseline (see Section 4.).",
"cite_spans": [],
"ref_spans": [
{
"start": 375,
"end": 382,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "BeaccoFLELexAtoB",
"sec_num": "3.3.1."
},
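As a rough illustration of the dataset layout described above (one gold Beacco level plus 8 frequency variables per entry), the following sketch uses assumed field names and made-up values; the actual column names and frequencies are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    word: str               # lemma
    pos: str                # standardized POS tag (e.g. "VER")
    flelex_freqs: dict      # per-level FLELex frequencies: A1 ... C2
    flelex_total: float     # total FLELex frequency over the six levels
    lexique3_freq: float    # L1 frequency, used only for the Lexique3 baseline
    beacco_level: str       # gold label from the French RLD (A1 to B2)

    def features(self, use_lexique3: bool = False):
        """Return the feature vector fed to the classifiers."""
        if use_lexique3:
            return [self.lexique3_freq]
        levels = ["A1", "A2", "B1", "B2", "C1", "C2"]
        return [self.flelex_freqs[l] for l in levels] + [self.flelex_total]

# Hypothetical entry for "plier" (to fold); numbers are invented
row = Entry("plier", "VER",
            {"A1": 0.0, "A2": 2.1, "B1": 4.5, "B2": 3.2, "C1": 1.0, "C2": 0.8},
            11.6, 23.4, "A2")
print(row.features())  # the 7 FLELex variables used by the main models
```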
{
"text": "The main application of this study is to develop a more accurate mathematical model to transform FLELex frequencies into a single CEFR level, with the purpose of integrating this model within the web interface of the CEFR-Lex project instead of the First occ heuristic currently used. Therefore, training our model on the intersection described above has a main shortcoming: it is not able to classify any entries beyond B2 level, since it would not have seen any word from the C levels. In the FLELex interface, we nevertheless want to be able to classify words at those levels, as FLELex likely contains more difficult words than Beacco.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BeaccoFLELexC",
"sec_num": "3.3.2."
},
{
"text": "To create this second dataset (BeaccoFLELexC), we first assumed that the 9,903 FLELex entries missing from Beacco can be considered as C level. However, before adding these entries to the 4,296 word intersection, we manually investigated them and noticed that about 2% present high frequencies in A levels textbooks, which is not expected for C words. We thus considered these cases as anomalies. Some causes of these anomalies were already discussed previously, but new issues also arose:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BeaccoFLELexC",
"sec_num": "3.3.2."
},
{
"text": "\u2022 Function words appearing in Beacco's chapter 5, i.e. the grammar section, were not digitized, but they were logically captured in FLELex. They include personal pronouns (\"je\", \"tu\", \"toi\"), interrogative pronouns (\"combien\", \"o\u00f9\", \"comment\", \"quand\"), determiners (\"la\"), prepositions (\"en\", \"sur\"), conjunctions (\"apr\u00e8s\", \"pour\", \"que\"), modals (\"devoir\", \"pouvoir\"), and negative particles (\"non\", \"ne\", \"pas\");",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BeaccoFLELexC",
"sec_num": "3.3.2."
},
{
"text": "\u2022 We also identified a few words appearing in chapter 3, linked to particular communicative functions, that were also excluded from our digitizing process (e.g. \"cher\" dear, \"bise\" kiss, \"peut-\u00eatre\" maybe, \"d'accord\" all right, etc.);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BeaccoFLELexC",
"sec_num": "3.3.2."
},
{
"text": "\u2022 Other words are very likely part of the A levels even if they are not included in Beacco's chapters we digitized (e.g. \"joli\" pretty, \"dormir\" to sleep, \"anglais\" English, or \"espagnol\" Spanish);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BeaccoFLELexC",
"sec_num": "3.3.2."
},
{
"text": "\u2022 Finally, we identified a few remaining tagging problems in FLELex that escaped the manual cleaning process (e.g. \"\u00e9tudiant\" student, \"ami\" friend were found as adjectives in FLELex instead of nouns).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BeaccoFLELexC",
"sec_num": "3.3.2."
},
{
"text": "To resolve some of these issues, we manually corrected tagging problems in FLELex and added the missing words appearing in chapters 3 and 5, assigning them their correct Beacco level. In total, 87 words were thus corrected, but some problems remain for a few entries. The last step in the preparation of this dataset Beac-coFLELexC consisted in creating a balanced dataset. Adding 9,903 C entries obviously produced a classimbalanced issue within the data, which we rebalanced using undersampling of overrepresented categories (C and B2). We used a random undersampling technique based on the number of entries in B1, reducing the size of this dataset from 14,236 to 4,878 words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BeaccoFLELexC",
"sec_num": "3.3.2."
},
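The random undersampling step could look like the sketch below, which reduces the overrepresented classes to the size of the B1 class; this is an assumed implementation, not the authors' script.

```python
import random
from collections import defaultdict

def undersample(dataset, reference_level="B1", levels_to_reduce=("B2", "C"), seed=42):
    """Randomly reduce overrepresented classes to the size of the reference class."""
    by_level = defaultdict(list)
    for features, level in dataset:
        by_level[level].append((features, level))
    target_size = len(by_level[reference_level])  # number of B1 entries
    rng = random.Random(seed)
    balanced = []
    for level, entries in by_level.items():
        if level in levels_to_reduce and len(entries) > target_size:
            entries = rng.sample(entries, target_size)  # drop the excess at random
        balanced.extend(entries)
    rng.shuffle(balanced)
    return balanced
```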
{
"text": "For our experiments, we decided to use three standard machine learning algorithms, namely tree classification, boosting, and support vector machine (SVM). Neural networks were not considered due to the limited amount of data. We also defined four baselines to compare with, that are described below. All experiments were conducted following the same methodology. We first split each dataset into a training Table 2 : Examples of entries for \"plier\" to fold, \"chanteur\" singer, \"humide\" humid and \"entre\" between from the first dataset, illustrating the variables used in our experiments.",
"cite_spans": [],
"ref_spans": [
{
"start": 407,
"end": 414,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3.4."
},
{
"text": "(and validation) set including 80% of the entries and a test set including 20% of the entries. We then applied a grid search on the training set using a stratified 10-fold cross-validation setup to estimate the performance of each set of meta-parameters tested. Once the best set of metaparameters was chosen, we estimated the classification accuracy of the model on the test set. This procedure is more reliable than a standard 10-fold cross-validation setup as the meta-parameters and the parameters are not optimized on the same data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3.4."
},
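The evaluation protocol described above (80/20 split, grid search with stratified 10-fold cross-validation on the training part only, final accuracy on the held-out test set) can be sketched with scikit-learn as follows; the parameter grid and the use of scikit-learn itself are assumptions, since the paper does not name a library.

```python
from sklearn.model_selection import train_test_split, GridSearchCV, StratifiedKFold
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def evaluate_svm(X, y, seed=42):
    # 80/20 split; the test set is never used for meta-parameter tuning
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=seed)
    grid = GridSearchCV(
        SVC(kernel="rbf"),
        param_grid={"C": [0.01, 0.1, 1, 10], "gamma": [1e-4, 1e-3, 1e-2]},
        cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=seed),
        scoring="accuracy")
    grid.fit(X_train, y_train)                     # grid search with stratified 10-fold CV
    y_pred = grid.best_estimator_.predict(X_test)  # final evaluation on the held-out 20%
    return grid.best_params_, accuracy_score(y_test, y_pred)
```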
{
"text": "Four baselines were used in this study. The first one (Maj class) assigns to all words the level of the majority class. It is a common baseline for all classification tasks. The second baseline (First occ) assigns to a given word the level of the textbook it was first observed in. The third baseline (Most freq), used for instance in Todirascu et al. (Todirascu et al., 2019) , assigns to each word the level with the highest frequency. For the fourth baseline, we trained three models (SVM, tree, and boosting) based only on Lexique3 frequencies, as a way to assess whether the L2-specific and more fine-grained frequency information from FLELex would lead to some improvements on the task.",
"cite_spans": [
{
"start": 335,
"end": 376,
"text": "Todirascu et al. (Todirascu et al., 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3.4.1."
},
{
"text": "We applied the three above-mentioned algorithms to both our datasets: BeaccoFLELexAtoB and BeaccoFLELexC.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The models",
"sec_num": "3.4.2."
},
{
"text": "\u2022 On the former, the optimal meta-parameters found by the grid search for Lexique 3 were: Tree (max depth = 4, min sample leaf = 40, and min sample split = 50); SVM (RBF kernel with C = 0.01 and \u03b3 = 0.001); Boosting with 5 iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The models",
"sec_num": "3.4.2."
},
{
"text": "\u2022 The meta-parameters found for FLELex frequencies were: Tree (max depth = 3, min sample leaf = 20, and min sample split = 50); SVM (RBF kernel with C = 1 and \u03b3 = 0.0001); Boosting with 5 iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The models",
"sec_num": "3.4.2."
},
{
"text": "\u2022 On the latter, BeaccoFLELexC, the optimal metaparameters found using the grid search were: Tree (max depth = 3, min sample leaf = 20, and min sample split = 50); SVM (RBF kernel with C = 1 and \u03b3 = 0.001); Boosting with 5 iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The models",
"sec_num": "3.4.2."
},
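For reference, the meta-parameters reported above for the FLELex features on BeaccoFLELexAtoB could be plugged into scikit-learn estimators as in the sketch below; the paper does not specify the library or the exact boosting variant, so scikit-learn and AdaBoost are assumptions.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier

# Meta-parameters reported for the FLELex frequencies on BeaccoFLELexAtoB
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20, min_samples_split=50)
svm = SVC(kernel="rbf", C=1, gamma=1e-4)
boosting = AdaBoostClassifier(n_estimators=5)  # "5 iterations"; AdaBoost is an assumption
```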
{
"text": "In this study, we aim to predict L2 expert knowledge based on word frequencies and thus obtain a machine learning algorithm able to transform a given word's frequency into a unique CEFR level. First, our systematic evaluation on the BeaccoFLELexAtoB dataset, whose results are reported in Table 3 , reveals that the First occ rule, currently used in the CEFRLex interface, yields poor performance. Its accuracy is as low as 25%, which is actually lower than the accuracy reached by a majority class classifier (40%) and its mean absolute error is 1.25, which means that this classification rule can miss targeted levels by even more than one level on average. Similarly, the Most freq rule, sometimes used as a simple and intuitive solution by some researchers, appears to be quite disappointing: its accuracy of 18% reveals that it is actually biased towards wrong answers. Using a machine learning algorithm to train a non-linear and more complex mathematical rule to transform FLELex distributions into CEFR levels seems to be a better path. We were able to reach 54% for the Boosting classifier and a mean absolute error of 0.66. The SVM model is more than twice as good as the First occ rule, and it outperforms the majority class classifier by 13%. On the second dataset, that corresponds better to the pragmatic problem we want to solve, it is interesting to notice that First occ outperforms the dummy baseline using majority class by 5%. The Most freq rule remains the worst option, whereas machine learning remains the best with the boosting algorithm reaching 48% of accuracy and a MAE of 0.75. Performance are slightly behind for the second dataset, but this is generally the case when one increases the number of classes.",
"cite_spans": [],
"ref_spans": [
{
"start": 289,
"end": 296,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4."
},
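Since the paper reports both accuracy and mean absolute error (MAE) on CEFR labels, the sketch below shows one plausible way to compute them by mapping the ordinal levels to integers; the level-to-integer mapping and the use of scikit-learn metrics are assumptions, not the authors' exact setup.

```python
from sklearn.metrics import accuracy_score, mean_absolute_error

# Map ordinal CEFR labels to integers so that MAE counts how many levels off a prediction is
LEVEL_TO_INT = {"A1": 0, "A2": 1, "B1": 2, "B2": 3, "C": 4}

def evaluate(y_true, y_pred):
    acc = accuracy_score(y_true, y_pred)
    mae = mean_absolute_error([LEVEL_TO_INT[l] for l in y_true],
                              [LEVEL_TO_INT[l] for l in y_pred])
    return acc, mae

# e.g. evaluate(["A1", "B2", "C"], ["A1", "B1", "A2"]) -> (0.333..., 1.333...)
```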
{
"text": "We also performed an ablation study on both datasets in order to find which frequency level contributed the most to the predictions. Results are presented in Table 4 and clearly shows that the frequency from the A1 to B1 levels are the more informative, especially the A1 level. Furthermore, one can notice that the total frequency (computed over all six levels) is also a good source of information.",
"cite_spans": [],
"ref_spans": [
{
"start": 158,
"end": 165,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4."
},
{
"text": "In our experiments, we wanted to know whether the L2specific and fine-grained frequency information provided in the CEFRLex resources would be better able to predict expert knowledge than a L1 frequency list. Table 3 shows that the models trained with FLELex slightly outperform (+5% in accuracy) the ones trained with Lexique3. However, this comparison is unfair, as the models leveraging FLELex information include more variables than the Lex-ique3 ones (7 vs. 1). Looking at the ablation study table, we can see performance when only the total frequency variable of FLELex is used. In such configuration, FLELex still outperforms Lexique3 by 1% accuracy, which seems to mean that L2 frequencies -even estimated on a much smaller corpus -might be used instead of L1 frequencies. This is, per se, a very interesting result, as the second language acquisition literature tends to believe the opposite and L2 intended word list created from a L1 corpus still remains the standard. In any case, those similar results can also be explained by the high correlation between the frequencies of these two lists, as was already reported in Fran\u00e7ois et al. (2014) . If we consider the performance of the full model (54%) compared to that of the model based only on the total frequency, the 4% improvement could be interpreted as a confirmation of the greater informational richness provided by a frequency distribution over proficiency levels compared to a unique word frequency. ",
"cite_spans": [
{
"start": 1132,
"end": 1154,
"text": "Fran\u00e7ois et al. (2014)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 209,
"end": 216,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "FLELex vs. Lexique 3",
"sec_num": "4.1."
},
{
"text": "The analysis of the precision, recall, and F1 values for each level reveals that models predictions are affected by one level in particular, level A2, which is already underrepresented in Beacco. Hence, the BeaccoFLELexAtoB dataset only includes 573 words at this level, whereas A1, B1 and B2 levels contain respectively 788, 1158 and 1777 words. Table 5 shows that extreme levels score much better than the middle ones, a recurrent outcome in readability classification tasks. It also reveals that, despite its lower accuracy score compare to the boosting model, the classification Tree model takes less drastic choices when assigning words to a class, which makes it a better option if we want a system that assigns words to all levels. We also noticed that, besides their under-representation in the RLD, A2 words are difficult to predict due to a high correlation between word frequencies in A1, A2 and B1 levels. Another problematic level is the C level, specially from a reading comprehension perspective. According to the CEFR descriptors, a C1 user \"can understand a wide range of demanding, longer texts, and recognise implicit meaning\", while a C2 user \"can understand with ease virtually everything heard or read\". Trying to translate these descriptors into actual words is difficult, as testified by the fact that Riba, who wrote the RLD opus for C levels, expressed some reserves concerning those descriptors, mainly because of the notion of perfection which emanate from them (Riba, 2016) , and the fact that C users depicted by the CEFR are only highly educated individuals, outperforming most of the native speakers. Consequently, we had to use a simple decision to define our C words in the gold-standard: considering everything minus words from levels A1 to B2. A final issue regarding C words is the fact that textbooks for those levels are less numerous than for the lower ones, providing FLELex with fewer words to observe and count.",
"cite_spans": [
{
"start": 1490,
"end": 1502,
"text": "(Riba, 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 347,
"end": 354,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Problematic levels",
"sec_num": "4.2."
},
{
"text": "In this section, we carried out a manual error analysis of some misclassification errors as a way to bring up to light some strengths and weaknesses of both approaches that can be used for selecting appropriate vocabulary in L2 learning: frequency vs. expert knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5."
},
{
"text": "One characteristic of the RLDs that is worth remembering is the fact that lexical chapters are organised semantically, as the authors agreed that giving a list of words ranked alphabetically is of little use when it comes to design syllabus or build a teaching sequence (Beacco et al., 2008) . Hence, words evolving around the same notional scope come together, along with their synonyms, antonyms and as a matter of fact words belonging to the same family as well (e.g. \"heureux/malheureux\" happy/unhappy, \"maigre, gros/ maigrir, grossir\" skinny, fat / to loose weight, to put on weight). This conveys the idea that they should be taught together -in other words, at the same CEFR level -since building strong lexical networks is critical for vocabulary retention. Conversely, FLELex does not have such structure and is likely to estimate different frequency distributions for the various terms from a given semantic field. When we transform those distributions using either the First occ rule or even a machine learning algorithm, they are prone to end up at different levels (e.g. of predictions using the SVM: \"gros\"A2 / \"grossir\"B2, \"heureux\"A1 / \"malheureux\"A2). In this regard, using frequency lists to establish a vocabulary progression through several learning stages is limited because words are seen as isolated.",
"cite_spans": [
{
"start": 270,
"end": 291,
"text": "(Beacco et al., 2008)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical approach",
"sec_num": "5.1."
},
{
"text": "Beacco's semantic organisation also enables it to better capture the effect of situation frequency, usually referred to as lexical availability (Mich\u00e9a, 1953) . The B2 level was the first RLD written, and it consists of extensive lists of words relating to specific centers of interest. The lower levels were compiled later, gradually picking up words from the B2 RLD according to the learning stage they could be taught at. As a result of this semantically-driven procedure, Beacco includes more available words than FLELex (e.g. of missing available words are \"soutien-gorge\" bra, \"cuisini\u00e8re\" cooker, \"s\u00e8che-cheveux\" hair-dryer, etc.).",
"cite_spans": [
{
"start": 144,
"end": 158,
"text": "(Mich\u00e9a, 1953)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical approach",
"sec_num": "5.1."
},
{
"text": "The question of topics covered on the L2 learning path is very relevant for this study, because it highlights the limits of both methods. FLELex computational approach aims to report words frequencies in the CEFR levels in an objective and descriptive way, but by using L2 material, it is compelled to favour certain topics and exclude others. Compared to Beacco, we found that textbooks mainly avoid potentially polemic themes such as religion or death, or subjects in which students could have not much to say such as DIY, and topics commonly considered as complicated, for instance economics or sciences. In contrast, the topics found in FLELex are highly influenced by the choice of texts comprised in the corpus and can sometimes be overrepresented. A clear materialization of this shortcoming appeared when we looked at FLELex frequent words absent from Beacco and discovered that words related to fairytales were abundant (e.g. \"ch\u00e2teau\" castle, \"reine\" queen, \"prince\" prince, \"chevalier\" knigth, \"dragon\" dragon, \"magique\" magic, and even \"\u00e9pouser\" to marry or \"r\u00eaver\" dream). This can be explained by the inclusion of a simplified reader dedicated to King Arthur legend in the corpus. On the other hand, the RLD's semantic structure has a downside since it may lead to loose sight of the CEFR descriptors, specially in topics where finding a progression between the items in terms of language acts is arduous. The most iconic theme we came across is food and drinks, with 150 words absent from FLELex, but geography and the human body also share the same characteristics at a lesser degree. We distinguished those topics from the others because they are mostly composed of nouns with closely related meaning (e.g. in B2, \"pain fran\u00e7ais\", \"baguette\", \"boule\", \"b\u00e2tard\", \"pain de campagne\", \"carr\u00e9\", \"pain int\u00e9gral\", \"pain complet\", \"pistolet\", \"petit pain\", \"sandwich\", \"petit pain au lait\", all being different types of bread). The large number of words in these topics is a reflection of reality usually bypassed in textbooks, since these nouns don't offer a wide variety of communicative situations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topics",
"sec_num": "5.2."
},
{
"text": "FLELex is a descriptive tool built from texts related to reading comprehension tasks in FFL materials, illustrating therefore the contents of written reception activities. The RLD also presents its contents as to be mastered in comprehension tasks, leaving the decision to teachers and curriculum designers regarding what learners should be able to produce (Beacco et al., 2008) . However, we identified four POS in which the ability to produce words seems to be the selection criteria for Beacco: determiners, conjunctions (e.g. \"comme\" in B1), pronouns (e.g. \"le\" in B2), and prepositions. We detected them because the frequencies of those POS are among the highest of the corpus while their levels nevertheless vary from A1 to B2 in Beacco. Even though words belonging to these POS are probably understood at early stages due to repeated exposure, the RLD proposes a gradation in the different learning stages they should be taught at, which is likely motivated either by the CEFR descriptors regarding production and interaction or by intrinsic characteristics of the word. We therefore found that the two approaches are not compatible for those specific POS, as the prescriptive aspect of the RLD implies to take into account learners objectives and abilities in production tasks as well, while FLELex only illustrates the language used in reception tasks.",
"cite_spans": [
{
"start": 357,
"end": 378,
"text": "(Beacco et al., 2008)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reception or production",
"sec_num": "5.3."
},
{
"text": "Beacco's intent is to propose a reference calibration for CEFR levels, but not a list of words that would be mandatory and identical, in all places and at all times. In the introduction, the authors minimize the inherent normative aspect of their lists, presenting them as only a starting point to insure compatibility between syllabus and exams of different educational systems. Therefore, they display vocabulary in three possible ways:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normative and adaptable",
"sec_num": "5.4."
},
{
"text": "\u2022 closed lists, e.g. \"b\u00e9b\u00e9, enfant, lait\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normative and adaptable",
"sec_num": "5.4."
},
{
"text": "\u2022 open lists, e.g. \"[...] agr\u00e9able, b\u00eate, calme, content\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normative and adaptable",
"sec_num": "5.4."
},
{
"text": "\u2022 list descriptors, e.g. \"[...] noms de nationalit\u00e9s\" Such behavior, intended to human readers, however raises some issues for an automatic approach. Facing list descriptors, we generally ignored them in the digitizing process, which explains why words such as \"anglais\" English and \"espagnol\" Spanish -which are nationalities -were not found in our version of Beacco, although present in FLELex. For our study, open lists and list descriptors are very problematic in the sense that the absence of a word from a level cannot be considered as 100% certain. From a teacher's perspective though, those open lists and item descriptions are coherent with the authors goal to provide content adaptable to all contexts, and indications that the items are to be chosen according to the geographic, cultural and educational situation (e.g. for the nationalities, \"japonais\", \"cor\u00e9en\" and \"vietnamien\" are likely to be taught in A1 to Asian learners, whereas they might not be needed from A1 in a South American classroom).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normative and adaptable",
"sec_num": "5.4."
},
{
"text": "In this research, we aimed to infer CEFR levels from CE-FRLex word frequency distribution using expert knowledge from the French RLD as a gold-standard. Such approach enabled us to apply, for the first time, machine learning algorithms to such task whereas previous work used simple mathematical rules. After standardisation of the data, we trained three machine learning models on two sets, reaching an accuracy score of 0.54 for the dataset BeaccoFLELexA-toB and of 0.48 for the BeaccoFLELexC dataset. These results clearly outperforms results reached by the First occ rule currently used in the CEFRLex interface. Our work has direct repercussions on this project, as our best classifier has been integrated in the interface 4 , offering now the choice between Beacco or First occ to classify words. Our experiments also yield other interesting results. First, comparing our results with those of a L1 frequency word list revealed that the distributional information contained in FLELex indeed seems richer and finer-grained than the one of a standard L1 list. Second, we carried out an analysis on the most important classification errors as a way to sharpen our understanding of the differences existing between the two approaches we compared: frequency and expert knowledge. This analysis stressed the importance of lexical networks in L2 learning to ensure a better representation of available words and of words connected to topics generally avoided in textbooks. We also noticed that although CEFRLex resources only represent receptive skills, Beacco might have sometimes classified words based on criteria relative to both receptive and productive skills. Finally, the presence of list descriptors in RLD is a serious issue for their automatic exploitation, as they contain some implicit knowledge. We believe that all these discrepancies partially explain why our statistical model is not able to better predict Beacco's level. In other words, although a better option than the First occ rule, using expert knowledge also has shortcomings. In the future, we plan to investigate the use of L2 learners data as an alternative source of information to transform CEFRLex distribution into levels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "In the case of the English Vocabulary Profile (EVP), the designers have indeed combined lexicographical and pedagogical knowledge with word frequency information(Capel, 2010). However, the frequencies were estimated from a learner corpus and therefore are representative of productive skills rather than receptive ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See the list of concerned languages at http://www.coe.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The RLD book for the C levels(Riba, 2016) was not used in this study, as it doesn't provide lists of lexical items, but rather describe more conceptual abilities, like managing and structuring",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "From distributions to labels: A lexical proficiency analysis using learner corpora",
"authors": [
{
"first": "D",
"middle": [],
"last": "Alfter",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Bizzoni",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Agebj\u00f3rn",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Volodina",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Pil\u00e1n",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the joint workshop on NLP4CALL and NLP for Language Acquisition at SLTC, number 130",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alfter, D., Bizzoni, Y., Agebj\u00f3rn, A., Volodina, E., and Pil\u00e1n, I. (2016). From distributions to labels: A lexical proficiency analysis using learner corpora. In Proceed- ings of the joint workshop on NLP4CALL and NLP for Language Acquisition at SLTC, number 130, pages 1-7. Link\u00f6ping University Electronic Press.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The {CELEX} lexical data base on {CD-ROM}. Linguistic Data Consortium",
"authors": [
{
"first": "R",
"middle": [],
"last": "Baayen",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Piepenbrock",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Van Rijn",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baayen, R., Piepenbrock, R., and van Rijn, H. (1993). The {CELEX} lexical data base on {CD-ROM}. Linguistic Data Consortium, Philadelphia: Univ. of Pennsylvania.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Niveau A2 pour le fran\u00e7ais: Un r\u00e9f\u00e9rentiel. Didier",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Niveau A2 pour le fran\u00e7ais: Un r\u00e9f\u00e9rentiel. Didier.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Moving beyond ku\u010dera and francis: A critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for american english",
"authors": [
{
"first": "M",
"middle": [],
"last": "Brysbaert",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "New",
"suffix": ""
}
],
"year": 2009,
"venue": "Behavior research methods",
"volume": "41",
"issue": "4",
"pages": "977--990",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brysbaert, M. and New, B. (2009). Moving beyond ku\u010dera and francis: A critical evaluation of current word fre- quency norms and the introduction of a new and im- proved word frequency measure for american english. Behavior research methods, 41(4):977-990.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A1-b2 vocabulary: Insights and issues arising from the english profile wordlists project",
"authors": [
{
"first": "A",
"middle": [],
"last": "Capel",
"suffix": ""
}
],
"year": 2010,
"venue": "English Profile Journal",
"volume": "1",
"issue": "1",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Capel, A. (2010). A1-b2 vocabulary: Insights and issues arising from the english profile wordlists project. En- glish Profile Journal, 1(1):1-11.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "EFLLex: A Graded Lexical Resource for Learners of English as a Foreign Language",
"authors": [
{
"first": "L",
"middle": [],
"last": "D\u00fcrlich",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Fran\u00e7ois",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of LREC 2018",
"volume": "",
"issue": "",
"pages": "873--879",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D\u00fcrlich, L. and Fran\u00e7ois, T. (2018). EFLLex: A Graded Lexical Resource for Learners of English as a Foreign Language. In Proceedings of LREC 2018, pages 873- 879.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "FLELex: a graded lexical resource for French foreign learners",
"authors": [
{
"first": "T",
"middle": [],
"last": "Fran\u00e7ois",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Gala",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Watrin",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Fairon",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of LREC 2014",
"volume": "",
"issue": "",
"pages": "3766--3773",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fran\u00e7ois, T., Gala, N., Watrin, P., and Fairon, C. (2014). FLELex: a graded lexical resource for French foreign learners. In Proceedings of LREC 2014, pages 3766- 3773.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "SVALex: a CEFR-graded lexical resource for Swedish foreign and second language learners",
"authors": [
{
"first": "T",
"middle": [],
"last": "Fran\u00e7ois",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Volodina",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Ildik\u00f3",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Tack",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of LREC 2016",
"volume": "",
"issue": "",
"pages": "213--219",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fran\u00e7ois, T., Volodina, E., Ildik\u00f3, P., and Tack, A. (2016). SVALex: a CEFR-graded lexical resource for Swedish foreign and second language learners. In Proceedings of LREC 2016, pages 213-219.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Towards a french lexicon with difficulty measures: Nlp helping to bridge the gap between traditional dictionaries and specialized lexicons",
"authors": [
{
"first": "N",
"middle": [],
"last": "Gala",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Fran\u00e7ois",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Fairon",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of eLex2013",
"volume": "",
"issue": "",
"pages": "132--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gala, N., Fran\u00e7ois, T., and Fairon, C. (2013). Towards a french lexicon with difficulty measures: Nlp helping to bridge the gap between traditional dictionaries and spe- cialized lexicons. In Proceedings of eLex2013, pages 132-151.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Key issues in l2 reading development",
"authors": [
{
"first": "W",
"middle": [],
"last": "Grabe",
"suffix": ""
}
],
"year": 2014,
"venue": "CELC Symposium Bridging Research and Pedagogy",
"volume": "",
"issue": "",
"pages": "8--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grabe, W. (2014). Key issues in l2 reading development. In CELC Symposium Bridging Research and Pedagogy, pages 8-18.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Computational analysis of present-day American English",
"authors": [
{
"first": "H",
"middle": [],
"last": "Ku\u010dera",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Francis",
"suffix": ""
}
],
"year": 1967,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ku\u010dera, H. and Francis, W. (1967). Computational analy- sis of present-day American English. Brown University Press, Providence.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "How much lexis is necessary for reading comprehension?",
"authors": [
{
"first": "B",
"middle": [
"; G"
],
"last": "Laufer",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Rayson",
"suffix": ""
},
{
"first": "Wilson",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1992,
"venue": "Word frequencies in written and spoken english: based on the british national corpus",
"volume": "",
"issue": "",
"pages": "126--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laufer, B. (1992). How much lexis is necessary for reading comprehension? In Vocabulary and applied linguistics, pages 126-132. Springer. Leech, G., Rayson, P., and Wilson, A. (2001). Word fre- quencies in written and spoken english: based on the british national corpus.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Mots fr\u00e9quents et mots disponibles. un aspect nouveau de la statistique du langage. Les langues modernes",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mich\u00e9a",
"suffix": ""
}
],
"year": 1953,
"venue": "",
"volume": "47",
"issue": "",
"pages": "338--344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mich\u00e9a, R. (1953). Mots fr\u00e9quents et mots disponibles. un aspect nouveau de la statistique du langage. Les langues modernes, 47(4):338-344.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The use of film subtitles to estimate word frequencies",
"authors": [
{
"first": "B",
"middle": [],
"last": "New",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Brysbaert",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Veronis",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Pallier",
"suffix": ""
}
],
"year": 2007,
"venue": "Applied Psycholinguistics",
"volume": "28",
"issue": "04",
"pages": "661--677",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "New, B., Brysbaert, M., Veronis, J., and Pallier, C. (2007). The use of film subtitles to estimate word frequencies. Applied Psycholinguistics, 28(04):661-677.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Niveaux C1 / C2 pour le fran\u00e7ais: el\u00e9ments pour un r\u00e9f\u00e9rentiel",
"authors": [
{
"first": "P",
"middle": [],
"last": "Riba",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Riba, P. (2016). Niveaux C1 / C2 pour le fran\u00e7ais: el\u00e9ments pour un r\u00e9f\u00e9rentiel. Didier.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Probabilistic part-of-speech tagging using decision trees",
"authors": [
{
"first": "H",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of International Conference on New Methods in Language Processing",
"volume": "12",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schmid, H. (1994). Probabilistic part-of-speech tagging using decision trees. In Proceedings of International Conference on New Methods in Language Processing, volume 12. Manchester, UK.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Evaluating lexical simplification and vocabulary knowledge for learners of french: possibilities of using the flelex resource",
"authors": [
{
"first": "A",
"middle": [],
"last": "Tack",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Fran\u00e7ois",
"suffix": ""
},
{
"first": "A.-L",
"middle": [],
"last": "Ligozat",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Fairon",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of LREC2016",
"volume": "",
"issue": "",
"pages": "230--236",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tack, A., Fran\u00e7ois, T., Ligozat, A.-L., and Fairon, C. (2016). Evaluating lexical simplification and vocabulary knowledge for learners of french: possibilities of using the flelex resource. In Proceedings of LREC2016, pages 230-236.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "NT2Lex: A CEFR-Graded Lexical Resource for Dutch as a Foreign Language Linked to Open Dutch WordNet",
"authors": [
{
"first": "A",
"middle": [],
"last": "Tack",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Fran\u00e7ois",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Desmet",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Fairon",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tack, A., Fran\u00e7ois, T., Desmet, P., and Fairon, C. (2018). NT2Lex: A CEFR-Graded Lexical Resource for Dutch as a Foreign Language Linked to Open Dutch WordNet. In Proceedings of BEA 2018.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Word knowledge in the elementary school",
"authors": [
{
"first": "E",
"middle": [],
"last": "Thorndike",
"suffix": ""
}
],
"year": 1921,
"venue": "The Teachers College Record",
"volume": "22",
"issue": "4",
"pages": "334--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorndike, E. (1921). Word knowledge in the elementary school. The Teachers College Record, 22(4):334-370.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Polylexfle: une base de donn\u00e9es d'expressions polylexicales pour le fle",
"authors": [
{
"first": "A",
"middle": [],
"last": "Todirascu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Cargill",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Fran\u00e7ois",
"suffix": ""
}
],
"year": 2019,
"venue": "Actes de la conf\u00e9rence TALN 2019",
"volume": "",
"issue": "",
"pages": "143--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Todirascu, A., Cargill, M., and Fran\u00e7ois, T. (2019). Polylexfle: une base de donn\u00e9es d'expressions polylexi- cales pour le fle. In Actes de la conf\u00e9rence TALN 2019, pages 143-156.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td colspan=\"4\">Level # FLELex # First occ # Beacco</td></tr><tr><td>A1</td><td>4,097</td><td>4,097</td><td>827</td></tr><tr><td>A2</td><td>5,768</td><td>2,699</td><td>615</td></tr><tr><td>B1</td><td>9,074</td><td>3,980</td><td>1334</td></tr><tr><td>B2</td><td>6,309</td><td>1,299</td><td>2742</td></tr><tr><td>C1</td><td>7,267</td><td>1,665</td><td>x</td></tr><tr><td>C2</td><td>3,932</td><td>496</td><td>x</td></tr></table>",
"num": null,
"text": "Distribution of entries per CEFR level, including the total number of items per level in FLELex, the number of items per level calculated with First occ, and the number of words per level in Beacco.",
"html": null
},
"TABREF3": {
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Test results on both datasets.",
"html": null
},
"TABREF5": {
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Variable ablation study on both datasets, using the boosting model.",
"html": null
},
"TABREF7": {
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "F1 scores per level for the three models, on the BeaccoFLELexAtoB dataset.",
"html": null
}
}
}
}