{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:57:59.914703Z" }, "title": "Incorporating Multiword Expressions in Phrase Complexity Estimation", "authors": [ { "first": "Sian", "middle": [], "last": "Gooding", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Cambridge", "location": {} }, "email": "" }, { "first": "Shiva", "middle": [], "last": "Taslimipoor", "suffix": "", "affiliation": { "laboratory": "", "institution": "ALTA Institute", "location": {} }, "email": "" }, { "first": "Ekaterina", "middle": [], "last": "Kochmar", "suffix": "", "affiliation": { "laboratory": "", "institution": "ALTA Institute", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Multiword expressions (MWEs) were shown to be useful in a number of NLP tasks. However, research on the use of MWEs in lexical complexity assessment and simplification is still an under-explored area. In this paper, we propose a text complexity assessment system for English, which incorporates MWE identification. We show that detecting MWEs using state-of-the-art systems improves predicting complexity on an established lexical complexity dataset.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Multiword expressions (MWEs) were shown to be useful in a number of NLP tasks. However, research on the use of MWEs in lexical complexity assessment and simplification is still an under-explored area. In this paper, we propose a text complexity assessment system for English, which incorporates MWE identification. We show that detecting MWEs using state-of-the-art systems improves predicting complexity on an established lexical complexity dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Complex Word Identification (CWI) is a well-established task in natural language processing, which deals with automated identification of words that a reader might find difficult to understand (Shardlow, 2013) . As such, it is often considered the first step in a lexical simplification pipeline. For instance, after a CWI system identifies sweeping in:", "cite_spans": [ { "start": 193, "end": 209, "text": "(Shardlow, 2013)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "(1) Prime Minister's government took the sweeping action as complex, a simplification system might suggest replacing it with a simpler alternative, for example with wide or broad. However, CWI systems so far have been focusing on complexity identification at the level of individual words (Shardlow, 2013; Gooding and Kochmar, 2018; Yimam et al., 2018) . At the same time, there is extensive evidence that complexity often pertains to expressions consisting of more than one word. Consider ballot stuffing in the following example from the dataset of Yimam et al. (2017) :", "cite_spans": [ { "start": 306, "end": 332, "text": "Gooding and Kochmar, 2018;", "ref_id": "BIBREF9" }, { "start": 333, "end": 352, "text": "Yimam et al., 2018)", "ref_id": null }, { "start": 551, "end": 570, "text": "Yimam et al. (2017)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "(2) There have been numerous falsifications and ballot stuffing", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "A CWI system aimed at individual complex word identification would be of a limited use in this case, as trying to simplify ballot stuffing on an individual word basis is likely to produce nonsensical or semantically different expressions like ballot *filling or vote stuffing. Ballot stuffing is an example of a multiword expression (MWE), which has idiosyncratic interpretation that crosses word boundaries or spaces (Sag et al., 2002) . Despite the fact that special consideration of MWEs has been shown to improve results in parsing (Constant et al., 2017) , machine translation (Constant et al., 2017; Carpuat and Diab, 2010) , keyphrase/index term extraction (Newman and Baldwin, 2012) , and sentiment analysis (Williams et al., 2015) and is likely to improve the quality of lexical simplification approaches (Hmida et al., 2018) , not much research addressed complexity identification in MWEs (Ozasa et al., 2007; Fran\u00e7ois and Watrin, 2011) . In this paper, we show that identification of MWEs is a crucial step in a lexical simplification pipeline, and in particular it is important at the stage of lexical complexity assessment. In addition, MWEs span a wide range of various expressions, including verbal constructions (wind down, set aside), nominal compounds (sledge hammers, peace treaty), named entities (Barack Obama, Los Angeles), and fixed phrases (brothers in arms, show of force), among others. Such expressions can be challenging, with various degrees of complexity, for both native and non-native readers. We show that identifying the type of an MWE is helpful at the complexity assessment stage. We also argue that knowing types of MWEs can further assist in selecting an appropriate simplification strategy: for instance, in case of many named entity MWEs and some nominal compounds like prime minister the best simplification strategy might consist in providing a reader with a link to a Wikipedia entry. We present a comprehensive system that:", "cite_spans": [ { "start": 418, "end": 436, "text": "(Sag et al., 2002)", "ref_id": "BIBREF17" }, { "start": 536, "end": 559, "text": "(Constant et al., 2017)", "ref_id": "BIBREF4" }, { "start": 582, "end": 605, "text": "(Constant et al., 2017;", "ref_id": "BIBREF4" }, { "start": 606, "end": 629, "text": "Carpuat and Diab, 2010)", "ref_id": "BIBREF3" }, { "start": 664, "end": 690, "text": "(Newman and Baldwin, 2012)", "ref_id": "BIBREF13" }, { "start": 716, "end": 739, "text": "(Williams et al., 2015)", "ref_id": "BIBREF22" }, { "start": 814, "end": 834, "text": "(Hmida et al., 2018)", "ref_id": "BIBREF11" }, { "start": 899, "end": 919, "text": "(Ozasa et al., 2007;", "ref_id": "BIBREF14" }, { "start": 920, "end": 946, "text": "Fran\u00e7ois and Watrin, 2011)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 discovers MWEs in text;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 identifies MWE type using linguistic patterns; and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 incorporates MWE type into a lexical complexity assessment system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "Our system is trained on a novel lexical complexity dataset for English annotated with the types of MWEs (Kochmar et al., 2020) , 1 consisting of 4732 expressions extracted from the complexity-annotated dataset of Yimam et al. (2017) . We discuss this dataset in Section 2. Section 3. details our approach to MWE identification. We then present our lexical complexity assessment system in Section 4., and discuss the results of both MWE detection and complexity assessment systems in Section 5. collapsed property sector, interior ministry troops 9.21 Table 1 : Classes of MWEs annotated in the dataset of Kochmar et al. (2020) and WIKIPEDIA articles), and were asked to highlight words and sequences of words up to 50 characters in length that they considered difficult to understand. As a result, Yimam et al. 2017collected a dataset of 30147 individual words and 4732 \"phrases\" annotated as simple or complex in context. The annotation follows one of the two settings: under binary setting a lexeme receives a label of 1 even if a single annotator selected it as complex (0 if none of the annotators considered it complex), and under probabilistic setting a lexeme receives a label on the scale of [0.0, 0.05, ..., 1.0] representing the proportion of annotators among 20 that selected an item as complex.", "cite_spans": [ { "start": 105, "end": 127, "text": "(Kochmar et al., 2020)", "ref_id": "BIBREF12" }, { "start": 214, "end": 233, "text": "Yimam et al. (2017)", "ref_id": "BIBREF23" }, { "start": 606, "end": 627, "text": "Kochmar et al. (2020)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 552, "end": 559, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "During annotation, annotators were allowed to select any sequence of words, which resulted in selection of expressions that do not form MWEs proper (for instance, his drive), as well as sentence fragments and sequences of unrelated words (for instance, authorities should annul the). Since the annotators in Yimam et al. (2017) were not instructed to select proper MWEs in this data, Kochmar et al. (2020) first re-annotated the selection of 4732 sequences longer than one word from the original dataset with their MWE status and type. In this annotation experiment, Kochmar et al. (2020) followed the annotation instructions and distinguished between the MWE types from Schneider et al. 2014, with a few modifications:", "cite_spans": [ { "start": 308, "end": 327, "text": "Yimam et al. (2017)", "ref_id": "BIBREF23" }, { "start": 384, "end": 405, "text": "Kochmar et al. (2020)", "ref_id": "BIBREF12" }, { "start": 567, "end": 588, "text": "Kochmar et al. (2020)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Complex Phrase Identification Dataset", "sec_num": "2." }, { "text": "\u2022 Additional types for \"phrases\" that are not MWE proper were introduced. These types include Not MWE for cases like authorities should annul the, and Not MWE but contains MWE(s) for longer non-MWE expressions that contain MWEs as sub-units: for example, collapsed property sector.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complex Phrase Identification Dataset", "sec_num": "2." 
}, { "text": "\u2022 Two categories, verb-particle and other phrasal verb, were merged into one due to lack of distinguishing power between the two from the simplification point of view.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complex Phrase Identification Dataset", "sec_num": "2." }, { "text": "\u2022 Categories phatic and proverb were not used because examples of these types do not occur in this data. Table 1 presents the full account of MWE types with examples and their distribution in the dataset of Kochmar et al. (2020) . The dataset was annotated by 3 annotators, all trained in linguistics, over a series of rounds. The annotators achieved observed agreement of at least 0.70 and Fleiss \u03ba (Fleiss, 1981) of at least 0.7145 across the annotation rounds, which suggests substantial agreement. We refer the readers to the original publication (Kochmar et al., 2020) for more details on the annotation procedure.", "cite_spans": [ { "start": 207, "end": 228, "text": "Kochmar et al. (2020)", "ref_id": "BIBREF12" }, { "start": 400, "end": 414, "text": "(Fleiss, 1981)", "ref_id": "BIBREF7" }, { "start": 551, "end": 573, "text": "(Kochmar et al., 2020)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 105, "end": 112, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Complex Phrase Identification Dataset", "sec_num": "2." }, { "text": "We first need to train an MWE identification system to detect the expressions of interest for our study. MWE identification is the task of discriminating, in context, and linking those tokens that together develop a special meaning. This can be modestly modelled using sequence tagging systems. We experiment with two systems: one is BERT-based transformer (Devlin et al., 2018) for token classification, and the other is the publicly available graph convolutional neural network (GCN) based system, which is reported to achieve state-of-the-art results on MWE identification (Rohanian et al., 2019) . The BERT-based token classification system is designed by adding a linear classification layer on top of the hiddenstates output of the BERT architecture. We use the pretrained model of bert-base provided by 'Hugging Face' developers 2 and fine-tune the weights of the whole architecture for a few iterations (i.e. 5 epochs). We use the same configurations that they use for named entity recognition. Among various systems designed to tag corpora for MWEs (Ramisch et al., 2018) the best systems incorporate dependency parse information (Al Saied et al., 2017; Rohanian et al., 2019) . The GCN-based system that we employ consists of GCN and LSTM layers with a linear classification layer on top. As in the original system, we use ELMo for input representation.", "cite_spans": [ { "start": 357, "end": 378, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF6" }, { "start": 576, "end": 599, "text": "(Rohanian et al., 2019)", "ref_id": "BIBREF16" }, { "start": 1058, "end": 1080, "text": "(Ramisch et al., 2018)", "ref_id": "BIBREF15" }, { "start": 1143, "end": 1162, "text": "Saied et al., 2017;", "ref_id": "BIBREF2" }, { "start": 1163, "end": 1185, "text": "Rohanian et al., 2019)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Multiword Expression Identification", "sec_num": "3." }, { "text": "Since our complexity estimation dataset is not originally designed for MWE identification, we augment our training data with the STREUSLE dataset which is comprehensively annotated for MWEs (Schneider and Smith, 2015) . 
In Section 5. we show how this data augmentation improves the identification of MWEs.", "cite_spans": [ { "start": 190, "end": 217, "text": "(Schneider and Smith, 2015)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Multiword Expression Identification", "sec_num": "3." }, { "text": "Once MWEs are identified in text, their types are predicted based on linguistic patterns. For instance, an MWE detection system identifies woke up as an MWE in He woke up in the morning as usual. A linguistic patterns-based system then uses information about the parts of speech in this expression to predict its type as verb-particle or other phrasal verb. Next, the predicted MWE together with its type is passed on to the lexical complexity assessment system, which assesses the complexity of the expression (see Section 4.). In Section 5. we first compare the results of the two MWE identification systems. Then we use the better one to evaluate the performance of complexity assessment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multiword Expression Identification", "sec_num": "3." }, { "text": "We build a baseline MWE complexity system, whose goal is to assign a complexity score to identified MWEs. The complexity assessment system is trained on phrases that have been annotated as MWEs in our dataset, and tested on the MWEs extracted from the test portion of the shared task dataset (Yimam et al., 2018) .", "cite_spans": [ { "start": 295, "end": 315, "text": "(Yimam et al., 2018)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "MWE Complexity Assessment Systems", "sec_num": "4." }, { "text": "We run experiments using the probabilistic labels, which represent the complexity of phrases on a scale of [0.0...0.70], 3 representing the proportion of the 20 annotators that found a phrase complex. The MWE complexity assessment system is a supervised feature-based model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MWE Complexity Assessment Systems", "sec_num": "4." }, { "text": "Our complexity assessment system relies on 6 features. First, we include two traditional features found to correlate highly with word complexity in previous research: length and frequency. These are adapted for phrases by considering (1) the number of words instead of the number of characters for length, and (2) the average frequency of bigrams within the phrase, calculated using the Corpus of Contemporary American English (Davies, 2009) , for frequency. Average bigram frequency is used rather than n-gram frequency to account for the differences in MWE lengths and to increase feature coverage. The second category of features focuses on the complexity of the words contained within the MWE. We use the open-source system of Gooding and Kochmar (2019) to tag words with a complexity score. Since this system does not directly assign complexity scores to MWEs, we use the highest word complexity within the phrase as well as the average word complexity as features. The source genre of the sentence where a phrase occurs (NEWS, WIKINEWS or WIKIPEDIA) is used as another feature, as we hypothesise that different domains (e.g., more general for the NEWS vs. more technical for the WIKIPEDIA articles) may challenge readers to different extents. Finally, following Kochmar et al. (2020) , who show that different types of MWEs exhibit different complexity levels, we use the type of MWE predicted by the linguistic patterns-based system as a feature.
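As a rough sketch, the pattern-based type prediction and the assembly of these 6 features could look as follows (the rule set, the lookup tables and all helper names are our illustrative assumptions, not the released implementation):

PATTERNS = {
    ('NOUN', 'NOUN'): 'MW compound',
    ('VERB', 'ADP'): 'verb-particle or other phrasal verb',
    ('PROPN', 'PROPN'): 'MW named entity',
}   # a tiny excerpt of POS-based rules

def predict_type(pos_tags):
    return PATTERNS.get(tuple(pos_tags), 'other')

def extract_features(words, pos_tags, genre, bigram_freq, word_complexity):
    # bigram_freq: COCA bigram-frequency lookup; word_complexity: per-word
    # scores from the Gooding and Kochmar (2019) tagger (both assumed helpers)
    bigrams = list(zip(words, words[1:]))
    scores = [word_complexity[w] for w in words]
    return {
        'length': len(words),   # number of words, not characters
        'avg_bigram_freq': sum(bigram_freq[b] for b in bigrams) / max(len(bigrams), 1),
        'max_word_complexity': max(scores),
        'avg_word_complexity': sum(scores) / len(scores),
        'genre': genre,   # NEWS, WIKINEWS or WIKIPEDIA
        'mwe_type': predict_type(pos_tags),
    }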
An example of the feature set for the phrase sledge hammers is shown in Table 2 . 3 The upper bound on this scale reflects the fact that at most 14 annotators agreed that a particular phrase is complex. ", "cite_spans": [ { "start": 442, "end": 456, "text": "(Davies, 2009)", "ref_id": "BIBREF5" }, { "start": 739, "end": 765, "text": "Gooding and Kochmar (2019)", "ref_id": "BIBREF10" }, { "start": 1277, "end": 1298, "text": "Kochmar et al. (2020)", "ref_id": "BIBREF12" }, { "start": 1542, "end": 1543, "text": "3", "ref_id": null } ], "ref_spans": [ { "start": 1532, "end": 1539, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Features", "sec_num": "4.1." }, { "text": "A set of standard regression algorithms from the scikit-learn 4 library is applied to the dataset. Model predictions are rounded to the closest 0.05 interval. The best-performing model, identified via stratified 5-fold cross-validation, uses a Multi-layer Perceptron regressor with 6 hidden layers and the lbfgs optimiser, chosen due to the size of the dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Implementation", "sec_num": "4.2." }, { "text": "We report the results of our MWE identification systems compared to the gold-standard annotation described in Section 2. We evaluate the systems in terms of MWE-based precision, recall and F1-score as defined in Savary et al. (2017) . MWE-based evaluation measures require strict matching between the predictions and the gold labels: every component of an MWE must be correctly tagged for it to be counted as a true positive. In Table 3 , we report the MWE-based measures for both the positive (MWE) and negative (non-MWE) classes. 5 As can be seen in Table 3 , the graph convolutional neural network (GCN) based system outperforms BERT-based token classification in identifying MWEs. We can also see that the addition of external MWE-annotated data from STREUSLE improves the overall results. As expected, the data augmentation is especially effective in increasing recall as well as the overall F-measure. The best-performing system, GCN trained on both our MWE data and the STREUSLE dataset, achieves the highest F1-scores of 0.72 on the not MWE and 0.60 on the MW compounds classes, which are also the most prevalent in our data. At the same time, it finds the detection of less frequent classes like verb-preposition, verb-noun(-preposition) and conjunction/connective more challenging.", "cite_spans": [ { "start": 232, "end": 252, "text": "Savary et al. (2017)", "ref_id": "BIBREF18" }, { "start": 563, "end": 564, "text": "5", "ref_id": null } ], "ref_spans": [ { "start": 464, "end": 471, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 583, "end": 590, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "MWE Identification Results", "sec_num": "5.1." }, { "text": "We use a pipeline system consisting of three stages: (1) MWE identification, (2) MWE type prediction, and (3) MWE complexity prediction. In Table 4 we report the results on the MWE portion of the 2018 shared task test sets (Yimam et al., 2018) for each stage of the pipeline. We compare our results to the strategy used by the winning shared task system CAMB (Gooding and Kochmar, 2018) , where all phrases are simply assigned the complexity value of 0.05. This baseline is highly competitive, as 1074 of the 2551 examples have a probabilistic score of 0.05, with 61% of MWEs having a value of 0.00 or 0.05.
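A minimal sketch of the regressor training and the baseline comparison described above (the hidden-layer sizes and the toy arrays are our assumptions; the MLP regressor, the lbfgs solver, the rounding to the 0.05 grid and the constant-0.05 baseline follow the description in the text):

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

# toy stand-ins for the 6-feature vectors and probabilistic labels
X_train, y_train = np.random.rand(200, 6), np.random.rand(200) * 0.7
X_test, y_test = np.random.rand(50, 6), np.random.rand(50) * 0.7

model = MLPRegressor(hidden_layer_sizes=(32,) * 6, solver='lbfgs', max_iter=500)
model.fit(X_train, y_train)

pred = np.round(model.predict(X_test) / 0.05) * 0.05   # round to the closest 0.05 interval
baseline = np.full_like(y_test, 0.05)                  # CAMB strategy: always predict 0.05

print(mean_absolute_error(y_test, pred))       # lower MAE is better
print(mean_absolute_error(y_test, baseline))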
We use Mean Absolute Error (MAE) as our evaluation metric, following the official evaluation strategy of the 2018 Shared Task (Yimam et al., 2018) . This metric estimates the average absolute difference between pairs of predicted and gold-standard complexity scores. The initial results in Table 4 consider complexity prediction in isolation, by testing on valid MWEs and providing the gold labels for the MWE types. Our system achieves a lower absolute error than the baseline on both the NEWS and WIKIPEDIA test sets, but not on the WIKINEWS test set. However, the distribution of probabilistic scores in the WIKINEWS test set is highly skewed, with 79% having scores of 0.05 or 0.00 and the highest complexity score in the dataset being only 0.35; Figure 1 illustrates the distribution of labels across test sets. In practice, we do not have gold-standard labels for the MWE types; therefore, we use linguistic pattern analysis to predict the MWE labels. The results of combining type and complexity prediction (2,3) follow the same trend as complexity prediction alone; however, they also show a decrease in performance across test sets. As Kochmar et al. (2020) show, the type of MWE is highly informative when considering phrase complexity; therefore, misclassification at this stage negatively impacts subsequent complexity prediction. We note that our MWE-type detection system achieves F1-scores of around 0.70 on the MW named entities, PP modifier and verb-particle or other phrasal verb classes, followed by F1-scores of around 0.60 for the MW compounds and verb-preposition classes. The classes that our system most struggles to identify include conjunction/connective, coordinated phrase and verb-noun(-preposition).", "cite_spans": [ { "start": 226, "end": 235, "text": "(Yimam et", "ref_id": null }, { "start": 236, "end": 246, "text": "al., 2018)", "ref_id": null }, { "start": 362, "end": 389, "text": "(Gooding and Kochmar, 2018)", "ref_id": "BIBREF9" }, { "start": 730, "end": 750, "text": "(Yimam et al., 2018)", "ref_id": null }, { "start": 1755, "end": 1776, "text": "Kochmar et al. (2020)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 140, "end": 147, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 898, "end": 905, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 1364, "end": 1372, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "End-to-end Complexity System Results", "sec_num": "5.2." }, { "text": "Finally, we consider the entire pipeline, including the initial step (1) of MWE identification. As complexity prediction can only be performed on the MWEs identified by our system, the size of the test set is reduced; therefore, the results are not directly comparable to those of the previous stages. However, we note that our system outperforms the baseline across all genres. The baseline performs worse on the MWEs identified by our system, as their probabilistic average is higher (0.14 compared to 0.09). A point of interest is that, of the MWEs identified by the system, only 0.08% have a complexity value of 0, compared to 18% of the initial test sets. This suggests that the MWE identification step is identifying 'strong' MWEs that are more likely to be considered complex by annotators.
This further supports our hypothesis that an MWE identification system can be combined with complexity features into a unified system to provide better complexity identification at the level of phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "End-to-end Complexity System Results", "sec_num": "5.2." }, { "text": "In this paper, we propose a complexity assessment system for predicting the complexity of MWEs rather than of single-word units. We show that augmenting the system with information about the type of expression improves its performance. Research on lexical complexity assessment would greatly benefit from the proposed data and system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6." }, { "text": "https://github.com/ekochmar/MWE-CWI", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/huggingface/transformers", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://scikit-learn.org 5 The negative class (non-MWEs) includes expressions (sequences of words) that are present in the dataset of Yimam et al. (2018) but are not tagged as MWEs in Kochmar et al. (2020), e.g. authorities should annul the.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The second and third authors' research is supported by Cambridge Assessment, University of Cambridge, via the ALTA Institute. We are grateful to the anonymous reviewers for their valuable feedback. [Figure 1 : Probabilistic label counts across test sets]", "cite_spans": [], "ref_spans": [ { "start": 115, "end": 123, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF2": { "ref_id": "b2", "title": "The ATILF-LLF System for Parseme Shared Task: a Transition-based Verbal Multiword Expression Tagger", "authors": [ { "first": "Al", "middle": [], "last": "Saied", "suffix": "" }, { "first": "H", "middle": [], "last": "Candito", "suffix": "" }, { "first": "M", "middle": [], "last": "Constant", "suffix": "" }, { "first": "M", "middle": [], "last": "", "suffix": "" } ], "year": 2017, "venue": "13th Workshop on Multiword Expressions", "volume": "", "issue": "", "pages": "127--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Al Saied, H., Candito, M., and Constant, M. (2017). The ATILF-LLF System for Parseme Shared Task: a Transition-based Verbal Multiword Expression Tagger.
In Proceedings of the 13th Workshop on Multiword Expressions (MWE 2017), pages 127-132.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Task-based evaluation of multiword expressions: a pilot study in statistical machine translation", "authors": [ { "first": "M", "middle": [], "last": "Carpuat", "suffix": "" }, { "first": "M", "middle": [], "last": "Diab", "suffix": "" } ], "year": 2010, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "242--245", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carpuat, M. and Diab, M. (2010). Task-based evaluation of multiword expressions: a pilot study in statistical machine translation. In Proceedings of NAACL-HLT, pages 242-245.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Multiword expression processing: A survey", "authors": [ { "first": "M", "middle": [], "last": "Constant", "suffix": "" }, { "first": "G", "middle": [], "last": "Eryigit", "suffix": "" }, { "first": "J", "middle": [], "last": "Monti", "suffix": "" }, { "first": "L", "middle": [], "last": "Van Der Plas", "suffix": "" }, { "first": "C", "middle": [], "last": "Ramisch", "suffix": "" }, { "first": "M", "middle": [], "last": "Rosner", "suffix": "" }, { "first": "A", "middle": [], "last": "Todirascu", "suffix": "" } ], "year": 2017, "venue": "Computational Linguistics", "volume": "43", "issue": "4", "pages": "837--892", "other_ids": {}, "num": null, "urls": [], "raw_text": "Constant, M., Eryigit, G., Monti, J., van der Plas, L., Ramisch, C., Rosner, M., and Todirascu, A. (2017). Multiword expression processing: A survey. Computational Linguistics, 43(4):837-892.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The 385+ million word corpus of contemporary American English (1990-2008+): Design, architecture, and linguistic insights", "authors": [ { "first": "M", "middle": [], "last": "Davies", "suffix": "" } ], "year": 2009, "venue": "International Journal of Corpus Linguistics", "volume": "14", "issue": "2", "pages": "159--190", "other_ids": {}, "num": null, "urls": [], "raw_text": "Davies, M. (2009). The 385+ million word corpus of contemporary American English (1990-2008+): Design, architecture, and linguistic insights. International Journal of Corpus Linguistics, 14(2):159-190.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "J", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "M.-W", "middle": [], "last": "Chang", "suffix": "" }, { "first": "K", "middle": [], "last": "Lee", "suffix": "" }, { "first": "K", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Statistical methods for rates and proportions", "authors": [ { "first": "J", "middle": [ "L" ], "last": "Fleiss", "suffix": "" } ], "year": 1981, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fleiss, J. L. (1981). Statistical methods for rates and proportions.
New York: John Wiley, 2nd edition.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "On the contribution of MWE-based features to a readability formula for French as a foreign language", "authors": [ { "first": "T", "middle": [], "last": "Fran\u00e7ois", "suffix": "" }, { "first": "P", "middle": [], "last": "Watrin", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing 2011", "volume": "", "issue": "", "pages": "441--447", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fran\u00e7ois, T. and Watrin, P. (2011). On the contribution of MWE-based features to a readability formula for French as a foreign language. In Proceedings of the International Conference Recent Advances in Natural Language Processing 2011, pages 441-447, Hissar, Bulgaria, September. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "CAMB at CWI shared task 2018: Complex word identification with ensemble-based voting", "authors": [ { "first": "S", "middle": [], "last": "Gooding", "suffix": "" }, { "first": "E", "middle": [], "last": "Kochmar", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "184--194", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gooding, S. and Kochmar, E. (2018). CAMB at CWI shared task 2018: Complex word identification with ensemble-based voting. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 184-194, New Orleans, Louisiana, June. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Complex word identification as a sequence labelling task", "authors": [ { "first": "S", "middle": [], "last": "Gooding", "suffix": "" }, { "first": "E", "middle": [], "last": "Kochmar", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1148--1153", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gooding, S. and Kochmar, E. (2019). Complex word identification as a sequence labelling task. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1148-1153, Florence, Italy, July. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Assisted lexical simplification for French native children with reading difficulties", "authors": [ { "first": "F", "middle": [], "last": "Hmida", "suffix": "" }, { "first": "M", "middle": [], "last": "Billami", "suffix": "" }, { "first": "T", "middle": [], "last": "Fran\u00e7ois", "suffix": "" }, { "first": "N", "middle": [], "last": "Gala", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Workshop of Automatic Text Adaptation, 11th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hmida, F., Billami, M., Fran\u00e7ois, T., and Gala, N. (2018). Assisted lexical simplification for French native children with reading difficulties.
In Proceedings of the Workshop of Automatic Text Adaptation, 11th International Conference on Natural Language Generation.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Detecting multiword expression type helps lexical complexity assessment", "authors": [ { "first": "E", "middle": [], "last": "Kochmar", "suffix": "" }, { "first": "S", "middle": [], "last": "Gooding", "suffix": "" }, { "first": "M", "middle": [], "last": "Shardlow", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC 2020)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kochmar, E., Gooding, S., and Shardlow, M. (2020). Detecting multiword expression type helps lexical complexity assessment. In Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC 2020).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Bayesian text segmentation for index term identification and keyphrase extraction", "authors": [ { "first": "David", "middle": [], "last": "Newman", "suffix": "" }, { "first": "K", "middle": [ "N L J H" ], "last": "Baldwin", "suffix": "" }, { "first": "T", "middle": [], "last": "", "suffix": "" } ], "year": 2012, "venue": "Proceedings of COLING 2012", "volume": "", "issue": "", "pages": "2077--2092", "other_ids": {}, "num": null, "urls": [], "raw_text": "Newman, D., Koilada, N., Lau, J. H., and Baldwin, T. (2012). Bayesian text segmentation for index term identification and keyphrase extraction. In Proceedings of COLING 2012, pages 2077-2092.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Measuring readability for Japanese learners of English", "authors": [ { "first": "T", "middle": [], "last": "Ozasa", "suffix": "" }, { "first": "G", "middle": [], "last": "Weir", "suffix": "" }, { "first": "M", "middle": [], "last": "Fukui", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 12th Conference of Pan-Pacific Association of Applied Linguistics", "volume": "", "issue": "", "pages": "122--125", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ozasa, T., Weir, G., and Fukui, M. (2007). Measuring readability for Japanese learners of English. In Proceedings of the 12th Conference of Pan-Pacific Association of Applied Linguistics, pages 122-125.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Edition 1.1 of the PARSEME Shared Task on Automatic Identification of Verbal Multiword Expressions", "authors": [ { "first": "C", "middle": [], "last": "Ramisch", "suffix": "" }, { "first": "S", "middle": [], "last": "Cordeiro", "suffix": "" }, { "first": "A", "middle": [], "last": "Savary", "suffix": "" }, { "first": "V", "middle": [], "last": "Vincze", "suffix": "" }, { "first": "V", "middle": [], "last": "Mititelu", "suffix": "" }, { "first": "A", "middle": [], "last": "Bhatia", "suffix": "" }, { "first": "M", "middle": [], "last": "Buljan", "suffix": "" }, { "first": "M", "middle": [], "last": "Candito", "suffix": "" }, { "first": "P", "middle": [], "last": "Gantar", "suffix": "" }, { "first": "V", "middle": [], "last": "Giouli", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramisch, C., Cordeiro, S., Savary, A., Vincze, V., Mititelu, V., Bhatia, A., Buljan, M., Candito, M., Gantar, P., Giouli, V., et al. (2018).
Edition 1.1 of the PARSEME Shared Task on Automatic Identification of Verbal Multiword Expressions.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Bridging the gap: Attending to discontinuity in identification of multiword expressions", "authors": [ { "first": "O", "middle": [], "last": "Rohanian", "suffix": "" }, { "first": "S", "middle": [], "last": "Taslimipoor", "suffix": "" }, { "first": "S", "middle": [], "last": "Kouchaki", "suffix": "" }, { "first": "L", "middle": [ "A" ], "last": "Ha", "suffix": "" }, { "first": "R", "middle": [], "last": "Mitkov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2692--2698", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rohanian, O., Taslimipoor, S., Kouchaki, S., Ha, L. A., and Mitkov, R. (2019). Bridging the gap: Attending to discontinuity in identification of multiword expressions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2692-2698.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Multiword expressions: A pain in the neck for NLP", "authors": [ { "first": "I", "middle": [ "A" ], "last": "Sag", "suffix": "" }, { "first": "T", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "F", "middle": [], "last": "Bond", "suffix": "" }, { "first": "A", "middle": [], "last": "Copestake", "suffix": "" }, { "first": "D", "middle": [], "last": "Flickinger", "suffix": "" } ], "year": 2002, "venue": "Lecture Notes in Computer Science", "volume": "2276", "issue": "", "pages": "1--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sag, I. A., Baldwin, T., Bond, F., Copestake, A., and Flickinger, D. (2002). Multiword expressions: A pain in the neck for NLP. Lecture Notes in Computer Science, 2276:1-15.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The PARSEME shared task on automatic identification of verbal multiword expressions", "authors": [ { "first": "A", "middle": [], "last": "Savary", "suffix": "" }, { "first": "C", "middle": [], "last": "Ramisch", "suffix": "" }, { "first": "S", "middle": [], "last": "Cordeiro", "suffix": "" }, { "first": "F", "middle": [], "last": "Sangati", "suffix": "" }, { "first": "V", "middle": [], "last": "Vincze", "suffix": "" }, { "first": "B", "middle": [], "last": "Qasemizadeh", "suffix": "" }, { "first": "M", "middle": [], "last": "Candito", "suffix": "" }, { "first": "F", "middle": [], "last": "Cap", "suffix": "" }, { "first": "V", "middle": [], "last": "Giouli", "suffix": "" }, { "first": "I", "middle": [], "last": "Stoyanova", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 13th Workshop on Multiword Expressions", "volume": "", "issue": "", "pages": "31--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Savary, A., Ramisch, C., Cordeiro, S., Sangati, F., Vincze, V., Qasemizadeh, B., Candito, M., Cap, F., Giouli, V., Stoyanova, I., et al. (2017). The PARSEME shared task on automatic identification of verbal multiword expressions.
In Proceedings of the 13th Workshop on Multiword Expressions (MWE 2017), pages 31-47.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A corpus and model integrating multiword expressions and supersenses", "authors": [ { "first": "N", "middle": [], "last": "Schneider", "suffix": "" }, { "first": "N", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1537--1547", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schneider, N. and Smith, N. A. (2015). A corpus and model integrating multiword expressions and supersenses. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1537-1547.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Comprehensive annotation of multiword expressions in a social web corpus", "authors": [ { "first": "N", "middle": [], "last": "Schneider", "suffix": "" }, { "first": "S", "middle": [], "last": "Onuffer", "suffix": "" }, { "first": "N", "middle": [], "last": "Kazour", "suffix": "" }, { "first": "E", "middle": [], "last": "Danchik", "suffix": "" }, { "first": "M", "middle": [ "T" ], "last": "Mordowanec", "suffix": "" }, { "first": "H", "middle": [], "last": "Conrad", "suffix": "" }, { "first": "N", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)", "volume": "", "issue": "", "pages": "455--461", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schneider, N., Onuffer, S., Kazour, N., Danchik, E., Mordowanec, M. T., Conrad, H., and Smith, N. A. (2014). Comprehensive annotation of multiword expressions in a social web corpus. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 455-461. European Language Resources Association (ELRA), May.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A comparison of techniques to automatically identify complex words", "authors": [ { "first": "M", "middle": [], "last": "Shardlow", "suffix": "" } ], "year": 2013, "venue": "51st Annual Meeting of the Association for Computational Linguistics Proceedings of the Student Research Workshop", "volume": "", "issue": "", "pages": "103--109", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shardlow, M. (2013). A comparison of techniques to automatically identify complex words. In 51st Annual Meeting of the Association for Computational Linguistics, Proceedings of the Student Research Workshop, pages 103-109, Sofia, Bulgaria, August.
Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "The role of idioms in sentiment analysis", "authors": [ { "first": "L", "middle": [], "last": "Williams", "suffix": "" }, { "first": "C", "middle": [], "last": "Bannister", "suffix": "" }, { "first": "M", "middle": [], "last": "Arribas-Ayllon", "suffix": "" }, { "first": "A", "middle": [], "last": "Preece", "suffix": "" }, { "first": "I", "middle": [], "last": "Spasi\u0107", "suffix": "" } ], "year": 2015, "venue": "Expert Systems with Applications", "volume": "42", "issue": "21", "pages": "7375--7385", "other_ids": {}, "num": null, "urls": [], "raw_text": "Williams, L., Bannister, C., Arribas-Ayllon, M., Preece, A., and Spasi\u0107, I. (2015). The role of idioms in sentiment analysis. Expert Systems with Applications, 42(21):7375-7385.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "CWIG3G2 -complex word identification task across three text genres and two user groups", "authors": [ { "first": "S", "middle": [ "M" ], "last": "Yimam", "suffix": "" }, { "first": "S", "middle": [], "last": "\u0160tajner", "suffix": "" }, { "first": "M", "middle": [], "last": "Riedl", "suffix": "" }, { "first": "C", "middle": [], "last": "Biemann", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing", "volume": "2", "issue": "", "pages": "401--407", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yimam, S. M., \u0160tajner, S., Riedl, M., and Biemann, C. (2017). CWIG3G2 - complex word identification task across three text genres and two user groups. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 401-407, Taipei, Taiwan, November. Asian Federation of Natural Language Processing.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "A report on the complex word identification shared task 2018", "authors": [], "year": null, "venue": "Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "66--78", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yimam, S. M., Biemann, C., Malmasi, S., Paetzold, G., Specia, L., \u0160tajner, S., Tack, A., and Zampieri, M. (2018). A report on the complex word identification shared task 2018. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 66-78, New Orleans, Louisiana, June. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "TABREF2": { "content": "", "type_str": "table", "text": "Complexity prediction feature set for sledge hammers", "html": null, "num": null }, "TABREF3": { "content": "
", "type_str": "table", "text": "", "html": null, "num": null }, "TABREF4": { "content": "
training data              model             MWE class P / R / F1     non-MWE class P / R / F1
Our data train             GCN               93.67 / 37.37 / 53.43    66.03 / 97.97 / 78.89
Our data train             BERT-transformer  90.62 / 29.29 / 44.27    63.16 / 97.56 / 76.68
Our data train + STREUSLE  GCN               90.80 / 39.90 / 55.44    66.67 / 96.75 / 78.94
Our data train + STREUSLE  BERT-transformer  95.95 / 35.86 / 52.21    65.68 / 98.78 / 78.90
", "type_str": "table", "text": "Performance of MWE identification systems in the development phase", "html": null, "num": null }, "TABREF5": { "content": "
Test Set        System MAE   CAMB baseline MAE
(3) Complexity Prediction
News (133)      0.0688       0.0767
Wikipedia (84)  0.0671       0.0734
WikiNews (79)   0.0375       0.0327
(2,3) MWE Type Prediction + Complexity Prediction
News (133)      0.0745       0.0767
Wikipedia (84)  0.0720       0.0734
WikiNews (79)   0.0474       0.0327
(1,2,3) MWE Identification + MWE Type Prediction + Complexity Prediction
News (61)       0.0889       0.0984
Wikipedia (27)  0.1221       0.1283
WikiNews (23)   0.0572       0.0595
", "type_str": "table", "text": "Complexity assessment system results", "html": null, "num": null } } } }