{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:59:45.228404Z" }, "title": "A Language-Based Approach to Fake News Detection Through Interpretable Features and BRNN", "authors": [ { "first": "Yu", "middle": [], "last": "Qiao", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Amsterdam", "location": {} }, "email": "yu.qiao@rwth-aachen.de" }, { "first": "Daniel", "middle": [], "last": "Wiechmann", "suffix": "", "affiliation": {}, "email": "d.wiechmann@uva.nl" }, { "first": "Elma", "middle": [], "last": "Kerz", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Amsterdam", "location": {} }, "email": "elma.kerz@ifaar.rwth-aachen.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "'Fake news'-succinctly defined as false or misleading information masquerading as legitimate news-is a ubiquitous phenomenon and its dissemination weakens the fact-based reporting of the established news industry, making it harder for political actors, authorities, media and citizens to obtain a reliable picture. State-of-the-art language-based approaches to fake news detection that reach high classification accuracy typically rely on black-box models based on word embeddings. At the same time, there are increasing calls for moving away from black-box models towards white-box (explainable) models in critical industries such as healthcare, finance, the military and the news industry. In this paper we perform a series of experiments in which bi-directional recurrent neural network classification models are trained on interpretable features derived from multidisciplinary integrated approaches to language. We apply our approach to two benchmark datasets. We demonstrate that our approach is promising, as it achieves results on these two datasets similar to those of the best-performing black-box models reported in the literature. 
In a second step we report on ablation experiments geared towards assessing the relative importance of the human-interpretable features in distinguishing fake news from real news.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "'Fake news'-succinctly defined as false or misleading information masquerading as legitimate news-is a ubiquitous phenomenon and its dissemination weakens the fact-based reporting of the established news industry, making it harder for political actors, authorities, media and citizens to obtain a reliable picture. State-of-the-art language-based approaches to fake news detection that reach high classification accuracy typically rely on black-box models based on word embeddings. At the same time, there are increasing calls for moving away from black-box models towards white-box (explainable) models in critical industries such as healthcare, finance, the military and the news industry. In this paper we perform a series of experiments in which bi-directional recurrent neural network classification models are trained on interpretable features derived from multidisciplinary integrated approaches to language. We apply our approach to two benchmark datasets. We demonstrate that our approach is promising, as it achieves results on these two datasets similar to those of the best-performing black-box models reported in the literature. 
In a second step we report on ablation experiments geared towards assessing the relative importance of the human-interpretable features in distinguishing fake news from real news.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The topic of 'disinformation' -an umbrella term used to encompass a wide range of types of information disorder, \"including 'fake news', rumors, deliberately factually incorrect information, inadvertently factually incorrect information, politically slanted information, and 'hyperpartisan' news\" (Tucker et al., 2018) -is attracting more and more attention. This reflects a deeper concern that the prevalence of disinformation leads to increased political polarization, decreases trust in public institutions, and undermines democracy. For example, the spread of 'fake news' -concisely defined as intentionally false information masquerading as genuine news -for financial and political gains had a potential impact on the contentious Brexit referendum or the 2016 U.S. presidential election (Allcott and Gentzkow, 2017; Ward, 2018) . Against this background, it is hardly surprising that there has been an increased interest in the development of methods, measures and computational tools that efficiently and effectively detect disinformation using machine learning and deep learning techniques. Among different approaches to fake news detection, language-based approaches have emerged as promising (for more details, see Section 2). Here the term 'language-based' is used in a broad sense to include a variety of approaches, such as those that employ traditional linguistic features, readability features, style-based features, discourse and rhetorical features or those that draw on word embedding techniques. The latter have proven to be particularly successful in detecting fake news. 
Despite their success, however, their detection is based on latent features that are not human-interpretable and thus cannot explain why a piece of news was detected as fake news. As has recently been pointed out, white-box (explainable) approaches to fake news detection are desirable, since model-derived explanations can (1) provide valuable insights originally hidden from different stakeholders, such as policy makers, professional journalists and citizens, and (2) contribute to further improvement of fake news detection systems. This paper seeks to respond to recent calls for more explainable (white-box) approaches to fake news detection by performing a series of experiments where bi-directional recurrent neural network classifiers were trained on interpretable features derived from multi-disciplinary integrated approaches to language. The data come from two benchmark datasets, and fake news detection is formulated as a binary classification task and as a multi-class classification task, respectively. The results of our experiments are promising, as our classification models achieve performance similar to that of the best-performing black-box models reported in the literature. In a second step we report on ablation experiments geared towards assessing the relative importance of the human-interpretable features in distinguishing fake news from real news. The remainder of the paper is organized as follows: After a concise overview of related work in Section 2, Section 3 introduces the two data sets, Section 4 describes our approach to automated text analysis and the six groups of language features used in the paper, Section 5 describes the model architecture, the training procedure and the method used to assess the relative feature importance. 
Section 6 presents and discusses the main results and concluding remarks follow in Section 7.", "cite_spans": [ { "start": 793, "end": 821, "text": "(Allcott and Gentzkow, 2017;", "ref_id": "BIBREF2" }, { "start": 822, "end": 833, "text": "Ward, 2018)", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Here we provide a concise overview of recent approaches geared towards fake news detection that employ machine learning and deep learning techniques and we focus in particular on language-based approaches that are most pertinent to the purposes of this paper (for more systematic and comprehensive overviews, see recent reviews and surveys by Shu et al. (2018) , Oshikawa et al. (2020) , Zhang and Ghorbani (2020) and Zhou and Zafarani (2020) ). Fake news detection is most often formulated as a binary classification task. However, categorizing all the news into two classes (fake vs real) is not the only conceivable way, since there are cases where the news is partially real and partially fake. A common practice is to add more classes distinguishing between several degrees of truthfulness, thus formulating fake news detection as a multi-class classification task. As will become evident later in this paper, we apply our approach to both scenarios. Three approaches to fake news detection frequently described in the literature are: (1) knowledge-based fake news detection (commonly using techniques from information retrieval to determine the veracity/truthfulness of news), (2) language-based fake news detection (drawing on traditional linguistic, style-related, readability or rhetorical features or using word embedding methods to distinguish between fake and real news) and (3) propagation-based fake news detection (typically using network analyses to determine the credibility of news sources at various stages, as news is created, published online and spread via social media). 
Compared to knowledge-based and propagation-based approaches, language-based approaches are advantageous for several reasons, including: (1) they enable near real-time feedback (proactive rather than retroactive), i.e. they are not restricted to being applied only a posteriori (Potthast et al., 2017) and (2) they are scalable. A guiding assumption of language-based approaches is that there are statistical regularities inherent in natural languages and distributional patterns of language use indicative of fake news that are not consciously accessible to fake news creators. Space limitations prevent us from going into further details (but see the reviews and surveys cited above). In what follows, we will zoom in on previous studies on fake news detection conducted on the basis of the publicly available benchmark datasets used in the corpus study: the ISOT dataset, an 'entire article' dataset comprising 20k+ real and fake news texts (Ahmed et al., 2018) , and the LIAR dataset, a 'claims dataset' comprising 12k+ real-world short statements collected from a variety of online sources (Wang, 2017) (see Section 3 for details). Upon introduction of the ISOT dataset, (Ahmed et al., 2018) report on the results of experiments using n-gram features with two different feature extraction techniques -Term Frequency (TF) and Term Frequency-Inverse Document Frequency (TF-IDF) -and six different machine learning techniques -Stochastic Gradient Descent, Support Vector Machines, Linear Support Vector Machines (LSVM), K-Nearest Neighbour and Decision Trees. Their best-performing model reached a classification accuracy of 92% using TF-IDF for feature extraction and an LSVM classifier, showing that real and fake news can be discriminated with high accuracy on the basis of the use of multiword sequences. 
However, subsequent studies have demonstrated that classification accuracy on this dataset can be pushed even higher -beyond the 99% accuracy mark -through the employment of deep neural networks trained on word embedding vectors: (Kula et al., 2020) reported classification accuracy between 95.04% and 99.86% using an LSTM neural network trained on different word embeddings (glove, news, Twitter, crawl) implemented in the Flair NLP framework (Akbik et al., 2019) . Goldani et al. (2020) achieved a classification accuracy of 99.8% using a non-static capsule network and 'glove.6B.300d' word embeddings (Pennington et al., 2014) . While the ISOT dataset involves a binary classification (fake vs. real), the LIAR dataset presents a six-way multiclass classification problem, where each individual claim statement was evaluated for its truthfulness and received a much more fine-grained veracity label. In the experiments presented upon publication of the LIAR dataset, (Wang, 2017) provided several benchmarks based on several shallow learning classifiers (e.g. logistic regression and support vector machines) trained on n-gram features and deep learning classifiers (bi-directional long short-term memory and convolutional neural network architectures) using pre-trained 300-dimensional word2vec embeddings from Google News (Mikolov et al., 2013) . The latter reached a classification accuracy of up to 27%. Incorporating available meta-data about the subject, speaker and context raised classification accuracy to 27.4%. Subsequent studies have shown that the classification accuracy on the LIAR set can be further increased to just over 45% by more complex hybrid models that integrate the linguistic information with speaker profiles into an attention-based LSTM model (Long, 2017) , by supplementing the data with verdict reports written by annotators (Karimi et al., 2018) or by replacing the credibility history in LIAR with a larger credibility source (Kirilin and Strube, 2018) . 
Importantly, however, all state-of-the-art models designed to detect the veracity of a news article or claim exploit the information contained in high-dimensional word embeddings that are uninterpretable to humans, thereby severely limiting our ability to understand 'why' a given claim or news article was predicted to be fake or real.", "cite_spans": [ { "start": 345, "end": 362, "text": "Shu et al. (2018)", "ref_id": null }, { "start": 365, "end": 387, "text": "Oshikawa et al. (2020)", "ref_id": null }, { "start": 390, "end": 415, "text": "Zhang and Ghorbani (2020)", "ref_id": "BIBREF49" }, { "start": 420, "end": 444, "text": "Zhou and Zafarani (2020)", "ref_id": null }, { "start": 1877, "end": 1900, "text": "(Potthast et al., 2017)", "ref_id": "BIBREF36" }, { "start": 2538, "end": 2558, "text": "(Ahmed et al., 2018)", "ref_id": "BIBREF0" }, { "start": 2770, "end": 2790, "text": "(Ahmed et al., 2018)", "ref_id": "BIBREF0" }, { "start": 3637, "end": 3656, "text": "(Kula et al., 2020)", "ref_id": "BIBREF24" }, { "start": 3852, "end": 3872, "text": "(Akbik et al., 2019)", "ref_id": "BIBREF1" }, { "start": 3875, "end": 3897, "text": "Goldani et al., (2020)", "ref_id": null }, { "start": 4013, "end": 4038, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF35" }, { "start": 4732, "end": 4754, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF31" }, { "start": 5185, "end": 5197, "text": "(Long, 2017)", "ref_id": "BIBREF26" }, { "start": 5269, "end": 5290, "text": "(Karimi et al., 2018)", "ref_id": "BIBREF19" }, { "start": 5372, "end": 5398, "text": "(Kirilin and Strube, 2018)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The experiments were conducted on two recently released datasets for fake news detection, the ISOT dataset compiled by the Information Security and Object Technology research lab (Ahmed et al., 2018) and the LIAR dataset introduced in (Wang, 2017). 
The datasets were selected based on their complementary attributes in terms of text types (full articles with an average length of about 400 words vs. short statements with an average length of just under 20 words) and the granularity of the veracity labels (binary labels based on source selection and six-way classification based on ratings by politifact.com editors). Both datasets are sufficiently large for training deep models. The ISOT dataset consists of 40,000+ real and fake news articles collected from real-world sources between 2016 and 2017. The real (truthful) news articles were obtained by crawling articles from Reuters.com. The fake news articles were collected from unreliable websites that were flagged by politifact.com, a fact-checking organization in the USA, and by Wikipedia. The ISOT dataset contains articles on a variety of topics with a focus on political and world news topics (see Table 2 ). For each article the following information is provided: article title, text, type (topic) and publication date. Close inspection of the dataset revealed that all and only instances of real news were introduced by the words \"WASHINGTON (Reuters)\", indicating the place and name of the news agency that provided the news article. To prevent our models from capitalizing on this information, all instances of this string were deleted. We also checked for and removed all duplicates in the dataset (N = 6251). Table 2 presents the distribution of articles across news types (real/fake) and topics before and after deduplication (original/cleaned). The dataset was split into training, development and testing sets using an 80/10/10 split. 
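The cleaning steps described above (stripping the agency marker, deduplicating, and an 80/10/10 split) can be sketched in a few lines. This is an illustrative reconstruction only, not the authors' code; the function name `clean_and_split` and the dictionary keys are our own assumptions.

```python
# Illustrative sketch (not the authors' code) of the ISOT cleaning steps:
# remove the "WASHINGTON (Reuters)" marker, drop duplicate texts, and
# split 80/10/10. 'articles' is assumed to be a list of {"text": ...} dicts.
import random

def clean_and_split(articles, seed=42):
    for a in articles:
        # strip the agency marker so the model cannot key on it
        a["text"] = a["text"].replace("WASHINGTON (Reuters) - ", "")
        a["text"] = a["text"].replace("WASHINGTON (Reuters)", "")
    # remove exact duplicates by text
    seen, deduped = set(), []
    for a in articles:
        if a["text"] not in seen:
            seen.add(a["text"])
            deduped.append(a)
    random.Random(seed).shuffle(deduped)
    n = len(deduped)
    train = deduped[: int(0.8 * n)]
    dev = deduped[int(0.8 * n): int(0.9 * n)]
    test = deduped[int(0.9 * n):]
    return train, dev, test
```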
The LIAR dataset is a recent benchmark dataset for fake news detection that includes 12,836 real-world short statements collected from a variety of online sources -including Facebook posts, tweets, news releases, TV/radio interviews, campaign speeches, TV ads and debates -on a range of topics -including economy, healthcare, taxes, federal-budget, education, jobs, state budget, candidates-biography, elections, and immigration. Each statement was labeled by an editor from politifact.com on a six-level ordinal scale of truthfulness ranging from \"True\", for completely accurate statements, to \"Pants on Fire\" (from the taunt \"Liar, liar, pants on fire\") for false and ludicrous claims. The distribution of the six labels is relatively well-balanced: with the exception of 1,050 instances of the 'pants-fire' category, the instances for all other labels range from 2,063 to 2,638. The LIAR set further includes a rich set of meta-data for each speaker including party affiliation, current job and home state. The statements in the dataset are also fairly balanced across the two major political parties of the US -Democrats and Republicans -and also contain a significant number of posts from online social media. The dataset is distributed into training, validation and testing sets in an 80/10/10 manner.", "cite_spans": [ { "start": 179, "end": 199, "text": "(Ahmed et al., 2018)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 1155, "end": 1162, "text": "Table 2", "ref_id": null }, { "start": 1676, "end": 1683, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "The raw texts from the two datasets were automatically analyzed using CoCoGen, a computational tool that implements a sliding window technique to calculate within-text distributions of feature scores (for recently published papers that use this tool, see Str\u00f6bel et al., 2018; Kerz et al., 2020b; Kerz et al., 2020a) . 
In contrast to the standard approach implemented in other tools for automated text analysis that rely on aggregate scores representing the average value of a feature in a text, the sliding-window approach generates a series of measurements representing the 'local' distributions of scores. A sliding window can be conceived of as a window of size ws, which is defined by the number of sentences it contains. The window is moved across a text sentence-by-sentence, computing one value per window for a given feature. The series of measurements faithfully captures a typically non-uniform distribution of features within a text and is referred to here as a 'contour'. 1 To compute the value of a given feature in a given window m, w(m), a measurement function is called for each sentence in the window and returns a fraction (wn_m / wd_m). CoCoGen uses the Stanford CoreNLP suite for performing tokenization, sentence splitting, part-of-speech tagging, lemmatization and syntactic parsing (Probabilistic Context Free Grammar Parser (Klein and Manning, 2003) ). In its current version, CoCoGen supports a total of 154 features that fall into six categories: (1) features of syntactic complexity (N=19), (2) features of lexical density, sophistication and variation (N=12), (3) information-theoretic features (N=3), (4) register-based n-gram frequency features (N=25), (5) LIWC-style (Linguistic Inquiry and Word Count) features (N=61) and (6) Word-Prevalence measures (N=36). A brief overview of the features and their short descriptions is provided in Table 4 in the Appendix. The inclusion of these features 2 is motivated by contemporary language and cognitive sciences characterized by an integrated, multi-method, and transdisciplinary approach needed to advance our understanding of the human processing and learning mechanisms (Christiansen and Chater, 2017). 
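The sliding-window contour computation can be illustrated with a minimal sketch. The names `contour` and `long_word_ratio` are our own, and the toy measure stands in for CoCoGen's actual CoreNLP-based feature functions, which are far richer.

```python
# Minimal sketch of the sliding-window 'contour' described above: a window
# of ws sentences moves one sentence at a time; each window yields one score
# computed as a fraction wn_m / wd_m from per-sentence (numerator, denominator)
# measurements. Names are ours; CoCoGen's implementation differs.
def contour(sentences, measure, ws=2):
    """sentences: list of tokenized sentences; measure(s) -> (num, denom)."""
    scores = []
    for m in range(len(sentences) - ws + 1):
        window = sentences[m: m + ws]
        num = sum(measure(s)[0] for s in window)    # wn_m
        denom = sum(measure(s)[1] for s in window)  # wd_m
        scores.append(num / denom if denom else 0.0)
    return scores

# toy feature: proportion of long words (more than 6 characters) per window
long_word_ratio = lambda sent: (sum(len(w) > 6 for w in sent), len(sent))
```

A text of n sentences thus yields n - ws + 1 measurements per feature, i.e. the 'contour' fed to the classifier.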
The first three sets of features are derived from the literature on language development showing that, in the course of their lifespan, humans learn to produce and understand complex syntactic structures, more sophisticated and diverse vocabulary and informationally denser language (see, e.g., Berman, 2007; Lu, 2010, 2012; Hartshorne and Germine, 2015; Ehret and Szmrecsanyi, 2019) . The fourth set of features is derived from research on language adaptation (Chang et al., 2012) and research that looks at language from the perspective of complex adaptive systems (Beckner et al., 2009; Christiansen and Chater, 2016) indicating that, based on accumulated language knowledge emerging from lifelong exposure to various types of language inputs, humans learn to adapt their language to meet the functional requirements of different communicative contexts. The features in set five are based on insights from many years of research conducted by Pennebaker and colleagues (Pennebaker et al., 2003; Tausczik and Pennebaker, 2010) , showing that the words people use in their everyday life provide important psychological cues to their thought processes, emotional states, intentions, and motivations. And finally, the inclusion of features in group six is motivated by recent efforts to estimate how well words are known in the population through crowdsourcing and corpus-based techniques. 
An accumulating body of evidence shows that such word prevalence measures are good predictors of human performance on various language tasks (Brysbaert et al., 2019; Johns et al., 2020) .", "cite_spans": [ { "start": 251, "end": 273, "text": "(Str\u00f6bel et al., 2018;", "ref_id": "BIBREF43" }, { "start": 274, "end": 293, "text": "Kerz et al., 2020b;", "ref_id": "BIBREF21" }, { "start": 294, "end": 313, "text": "Kerz et al., 2020a)", "ref_id": "BIBREF20" }, { "start": 982, "end": 983, "text": "1", "ref_id": null }, { "start": 1347, "end": 1372, "text": "(Klein and Manning, 2003)", "ref_id": "BIBREF23" }, { "start": 2477, "end": 2490, "text": "Berman, 2007;", "ref_id": "BIBREF5" }, { "start": 2491, "end": 2499, "text": "Lu, 2010", "ref_id": "BIBREF28" }, { "start": 2500, "end": 2510, "text": "Lu, , 2012", "ref_id": "BIBREF29" }, { "start": 2511, "end": 2540, "text": "Hartshorne and Germine, 2015;", "ref_id": "BIBREF16" }, { "start": 2541, "end": 2569, "text": "Ehret and Szmrecsanyi, 2019)", "ref_id": "BIBREF14" }, { "start": 2647, "end": 2667, "text": "(Chang et al., 2012)", "ref_id": "BIBREF7" }, { "start": 2753, "end": 2775, "text": "(Beckner et al., 2009;", "ref_id": null }, { "start": 2776, "end": 2806, "text": "Christiansen and Chater, 2016)", "ref_id": "BIBREF10" }, { "start": 3157, "end": 3182, "text": "(Pennebaker et al., 2003;", "ref_id": "BIBREF34" }, { "start": 3183, "end": 3213, "text": "Tausczik and Pennebaker, 2010)", "ref_id": "BIBREF44" }, { "start": 3720, "end": 3744, "text": "(Brysbaert et al., 2019;", "ref_id": "BIBREF6" }, { "start": 3745, "end": 3764, "text": "Johns et al., 2020)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 1868, "end": 1875, "text": "Table 4", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Automated Text Analysis", "sec_num": "4" } ], "body_text_continued": [ { "text": "For the classification, we used Bi-directional Recurrent Neural Network (BRNN) classifiers with Gated Recurrent Unit (GRU) cells. 
BRNNs have been shown to outperform unidirectional RNNs in application areas ranging from acoustic modeling (Sak et al., 2014) to machine translation. Bi-directional neural network models have also been employed in previous studies on the two datasets investigated here (Wang, 2017; Kula et al., 2020) , making them well suited for purposes of comparison and, more specifically, for examining whether and to what extent a classifier trained on human-interpretable features can approximate the performance of a state-of-the-art classifier trained on word embeddings. Since the two datasets differ in terms of the availability of meta-data (ISOT: no meta-data, LIAR: rich information on subject, speaker and context) and with respect to the granularity at which truthfulness was assessed (ISOT: binary, LIAR: 6-way multiclass), the BRNN classifiers were adapted so as to take these differences into account. Figure 1 shows the architecture of the models used in the present paper. X = (x_1, x_2, . . . , x_n) is the output from CoCoGen, which is a sequence of 154-dimensional vectors. To integrate the context information, the words in the context description were mapped to 300-dimensional word embedding vectors using the dependency-based word embeddings implemented in spaCy (Honnibal and Montani, 2017) , represented by C = (c_1, c_2, . . . , c_n). Instead of one-hot encoding, we use word embeddings and a BRNN to encode the context meta information here, as otherwise the feature vector for context information would result in 5075-dimensional sparse one-hot vectors. J = (j_1, j_2, . . . , j_n) is a sequence of word embeddings for the job title of the speaker of a given text, following the same reasoning as above. S = (s_1, s_2, . . . , s_n) and P = (p_1, p_2, . . . , p_n) are 70- and 25-dimensional one-hot vectors for the state information and party affiliation of the speaker. The structure of the classifier for the ISOT dataset is shown in sub-figure 1 on the left-hand side of Figure 1. 
The lower part, encircled by the dashed red line, represents the recurrent network, where the CoCoGen output for a given text is fed into a 2-layer BRNN consisting of GRU cells with 200 hidden units in each layer. h_{1,0} and h_{2,0} denote the initial hidden states of the first and second layer of the BRNN in the forward direction, with the analogous states for the backward direction; h_{2,n} denotes the last hidden state of the second layer of the BRNN in the forward and in the backward direction, respectively. These final hidden states are concatenated and passed through a feed-forward neural network, encircled by the blue dashed line in Figure 1. This network consists of three linear layers, whose output dimensions are 200, 100 and 2. Between layers 1 and 2 as well as between layers 2 and 3 we inserted a Batch Normalization (BN) layer, a Parametric ReLU (PReLU) activation function layer and a Dropout layer with a dropout rate of 0.5. A softmax layer is applied before the final output \u0177. 
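The ISOT classifier described above can be sketched in PyTorch. This is a reconstruction under our assumptions (class and variable names are ours, not the authors' released code): a 2-layer bidirectional GRU over the sequence of 154-dimensional CoCoGen vectors, final forward/backward hidden states concatenated, and a 200-100-2 feed-forward head with BatchNorm, PReLU and dropout 0.5.

```python
# Sketch (our assumptions, not the authors' code) of the ISOT BRNN classifier.
import torch
import torch.nn as nn

class BRNNClassifier(nn.Module):
    def __init__(self, in_dim=154, hidden=200, n_classes=2):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, num_layers=2,
                          bidirectional=True, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, 200),
            nn.BatchNorm1d(200), nn.PReLU(), nn.Dropout(0.5),
            nn.Linear(200, 100),
            nn.BatchNorm1d(100), nn.PReLU(), nn.Dropout(0.5),
            nn.Linear(100, n_classes),
        )

    def forward(self, x):                        # x: (batch, seq_len, 154)
        _, h = self.gru(x)                       # h: (layers*2, batch, hidden)
        last = torch.cat([h[-2], h[-1]], dim=1)  # fwd/bwd states of layer 2
        return torch.log_softmax(self.head(last), dim=1)
```

Note that `h[-2]` and `h[-1]` are the final forward and backward hidden states of the top GRU layer in PyTorch's layout, matching the concatenation described in the text.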
For the LIAR dataset, we built three BRNN models: (1) a model using only the CoCoGen output (X), (2) a model using CoCoGen output and the context information (X + C) and", "cite_spans": [ { "start": 239, "end": 257, "text": "(Sak et al., 2014)", "ref_id": "BIBREF40" }, { "start": 415, "end": 433, "text": "Kula et al., 2020)", "ref_id": "BIBREF24" }, { "start": 1407, "end": 1435, "text": "(Honnibal and Montani, 2017)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 1040, "end": 1048, "text": "Figure 1", "ref_id": null }, { "start": 2111, "end": 2119, "text": "Figure 1", "ref_id": null }, { "start": 2840, "end": 2848, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Classification Models", "sec_num": "5" }, { "text": "(3) a model using CoCoGen output, the context information and the speaker profile, which comprises information about the job, the state and the party of a speaker (X + C + J + S + P). The structure of the CoCoGen-only model is identical to the model built for the ISOT dataset, with the exception that the output layer has a size of 6 instead of 2. In the CoCoGen + Context model shown in sub-figure 2 in Figure 1 , the sequence vector X = (x_1, x_2, . . . , x_n) represents the CoCoGen output as described for the ISOT model above. The BRNN blocks in sub-figure 2 have the same structure as the lower part of sub-figure 1, i.e. a 2-layer bidirectional RNN whose output is a concatenation of the last hidden states of the uppermost layer in the forward and backward directions. The BRNN on the left side in sub-figure 2 has a hidden state size of 200, while the BRNN on the right side has one of 10. The Feed-forward 1 block is identical to the Feed-forward part shown in sub-figure 1. Sub-figure 3 shows the structure of the model making use of CoCoGen features + context + speaker profiles. S and P are the one-hot encoded vectors described above. 
They are squeezed to 10-dimensional vectors through a feed-forward neural network, Feed-forward 2, whose structure is shown in the lower right part of Figure 1. Feed-forward 2 consists of two linear layers, whose output sizes are 20 for Linear 1 and 10 for Linear 2, respectively. The BRNNs for the CoCoGen output and the context are identical to the corresponding BRNN blocks mentioned above. The BRNN for encoding the job title information has the same structure and hidden state size as the BRNN for the context. All outputs from the BRNN blocks and the Feed-forward 2 block are concatenated and fed into the Feed-forward 1 block, whose structure is shown in the upper part of sub-figure 1, with the exception that the linear layers have output sizes of 210, 105 and 5, respectively. Since the labels of the LIAR dataset are ordinal in nature, i.e. pants-fire < false < barely-true < half-true < mostly-true < true, the classification of instances in the LIAR dataset can be treated as an ordinal classification problem. To adapt the neural network classifier to the ordinal classification task, we followed the NNRank approach described in (Cheng et al., 2008) , which", "cite_spans": [ { "start": 2249, "end": 2269, "text": "(Cheng et al., 2008)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 404, "end": 412, "text": "Figure 1", "ref_id": null }, { "start": 994, "end": 1002, "text": "figure 3", "ref_id": null }, { "start": 1300, "end": 1308, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Classification Models", "sec_num": "5" }, { "text": "Figure 1 : Structure of the BRNN classifiers built for the ISOT and LIAR datasets: The structure in 1 represents the model architecture used for ISOT and LIAR that makes use of textual information only (all CoCoGen features). 
The structures in 2 and 3 represent the model extensions that incorporate contextual meta-data (C) and speaker profiles (J = job title, P = party affiliation, S = state).", "cite_spans": [], "ref_spans": [ { "start": 35, "end": 43, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Classification Models", "sec_num": "5" }, { "text": "is a generalization of ordinal perceptron learning in neural networks (Crammer and Singer, 2002) and outperforms a standard neural network classifier on several benchmark datasets. Instead of one-hot encoding of class labels and using softmax as the output layer of a neural network, in NNRank a class label for class k is encoded as (y_1, y_2, . . . , y_i, . . . , y_{C\u22121}), in which y_i = 1 for i \u2264 k and y_i = 0 otherwise, where C is the number of classes. For the output layer, a sigmoid function was used. For prediction, the output of the neural network (o_1, o_2, . . . , o_{C\u22121}) is scanned from left to right. The scan stops after encountering o_i, the first element of the output vector that is smaller than a threshold T (e.g. 0.5), or when there is no element left to be scanned. The predicted class is the index k of the last element whose value is greater than or equal to T. Finally, for the purpose of comparison, we also recreated the convolutional neural network (CNN) model described in previous work. This CNN model consists of filters of size 2, 3 and 4. Each size has 128 filters, with a max-pooling operation being performed on each output filter. The result of the max-pooling was fed into a feed-forward neural network for the classification. As an additional baseline, we further built structurally equivalent BRNN classifiers based on sentence embeddings from Sentence-BERT (SBERT) (Reimers and Gurevych, 2019) . 3 All models are implemented using PyTorch (Pytorch, 2019) . 
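The NNRank encoding and threshold-scan decoding described above can be sketched as follows; the function names are ours, and this is a minimal rendering of the scheme in Cheng et al. (2008), not their implementation.

```python
# Sketch of the NNRank ordinal encoding/decoding described above.
# Classes are 0-indexed (k = 0 .. C-1); the target for class k has k ones.
def encode_ordinal(k, C):
    """Class k -> (C-1)-dimensional 0/1 target vector."""
    return [1 if i < k else 0 for i in range(C - 1)]

def decode_ordinal(outputs, T=0.5):
    """Scan sigmoid outputs left to right; stop at the first element below
    the threshold T. The predicted class is the number of leading elements
    whose value is greater than or equal to T."""
    k = 0
    for o in outputs:
        if o < T:
            break
        k += 1
    return k
```

For the six LIAR labels this yields 5-dimensional targets, which matches the output size of 5 in the final linear layer described above.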
For the BRNNs and the CNN that do not use the ordinal information, cross-entropy loss was used as the loss function:", "cite_spans": [ { "start": 70, "end": 96, "text": "(Crammer and Singer, 2002)", "ref_id": "BIBREF12" }, { "start": 1418, "end": 1446, "text": "(Reimers and Gurevych, 2019)", "ref_id": "BIBREF38" }, { "start": 1449, "end": 1450, "text": "3", "ref_id": null }, { "start": 1492, "end": 1507, "text": "(Pytorch, 2019)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Classification Models", "sec_num": "5" }, { "text": "L(\hat{Y}, c) = -\sum_{i=1}^{C} p(y_i) \log(p(\hat{y}_i))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Models", "sec_num": "5" }, { "text": "where c is the true class label of the current observation, C is the number of classes, (p(y_1), . . . , p(y_C)) is a one-hot vector with p(y_i) = 1 if i = c and p(y_i) = 0 otherwise", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Models", "sec_num": "5" }, { "text": "and \hat{Y} = (p(\hat{y}_1), p(\hat{y}_2), . . . , p(\hat{y}_C))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Models", "sec_num": "5" }, { "text": "is the output vector of the softmax layer, which can be viewed as the predicted probabilities of the observed instance falling into each of the classes. For training BRNNs using the ordinal information, binary cross-entropy was used:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Models", "sec_num": "5" }, { "text": "L(\hat{Y}, c) = -\frac{1}{C} \sum_{i=1}^{C} (y_i \log(\hat{y}_i) + (1 - y_i) \log(1 - \hat{y}_i))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Models", "sec_num": "5" }, { "text": "in which c = (y_1, y_2, . . . , y_C), C = 14 is the number of responses and \hat{Y} = (\hat{y}_1, \hat{y}_2, . . . , \hat{y}_C) is the output vector of the sigmoid layer rounded to the closest integer.
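For illustration, the two loss functions above can be written out in plain Python (a sketch of the standard formulas only, not the authors' PyTorch code; function names are ours):

```python
import math

def cross_entropy(probs, true_class):
    """Categorical cross-entropy against a one-hot target reduces to
    -log of the softmax probability assigned to the true class."""
    return -math.log(probs[true_class])

def binary_cross_entropy(outputs, targets):
    """Mean binary cross-entropy over the C sigmoid outputs and their
    cumulative binary (ordinal) targets."""
    C = len(targets)
    return -sum(y * math.log(o) + (1 - y) * math.log(1 - o)
                for y, o in zip(targets, outputs)) / C
```

For instance, `cross_entropy([0.1, 0.7, 0.2], 1)` equals -log 0.7, the penalty for assigning probability 0.7 to the true class.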
We tuned all hyperparameters on the validation set using a grid search over sets of optimizers S = {Adamax, Adagrad, RMSprop}, learning rates L = {0.01, 0.001, 0.0001} and normalization methods N = {Standardization, Min-max}. The optimal hyperparameter combinations are provided in Table 5 in the Appendix.", "cite_spans": [], "ref_spans": [ { "start": 455, "end": 462, "text": "Table 5", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Classification Models", "sec_num": "5" }, { "text": "To determine the relative importance of the language feature groups, we conducted feature ablation experiments. Classical forward or backward sequential selection algorithms that proceed by sequentially adding or discarding features require a quadratic number of model trainings and evaluations in order to obtain a feature ranking (Langley, 1994) . In the context of neural network models, training a quadratic number of models can become prohibitive. To alleviate this problem, we used an adapted version of the iterative sensitivity-based pruning algorithm proposed by (Díaz-Villanueva et al., 2010) . This algorithm ranks the features based on a 'sensitivity measure' (Moody, 1994; Utans and Moody, 1991) and removes the least relevant variables one at a time. The classifier is then retrained on the resulting subset and a new ranking is calculated over the remaining features. This process is repeated until all features are removed. In this fashion, rather than training the n(n+1)/2 models required for sequential algorithms, the number of models trained is reduced to n/m, where m is the number of features or feature groups that can be removed at each step. We report the results obtained after the removal of a single feature group at each step. At step t, a neural network model M_t is trained on the training set. The training set at step t consists of instances with feature groups F_t = {f_1, f_2, . . . , f_{D_t}}, where f_1, . . .
f_{D_t} are the remaining feature groups at the current step, whose importance rank is to be determined. We define X_t as the test set with feature set F_t and X_t^i as the same dataset as X_t, except that the i-th feature group f_i of each instance within the dataset is set to its average. Furthermore, we define g(X) as the classification accuracy of M_t for a dataset X. The sensitivity of a feature group f_i at step t is obtained from:", "cite_spans": [ { "start": 331, "end": 346, "text": "(Langley, 1994)", "ref_id": "BIBREF25" }, { "start": 571, "end": 601, "text": "(Díaz-Villanueva et al., 2010)", "ref_id": "BIBREF13" }, { "start": 671, "end": 684, "text": "(Moody, 1994;", "ref_id": "BIBREF32" }, { "start": 685, "end": 707, "text": "Utans and Moody, 1991)", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "Classification Models", "sec_num": "5" }, { "text": "S_{i,t} = g(X_t) - g(X_t^i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Models", "sec_num": "5" }, { "text": "The most important feature group at step t can be found by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Models", "sec_num": "5" }, { "text": "f_{\hat{i}} : \hat{i} = \operatorname{argmax}_{i : f_i \in F_t} (S_{i,t})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Models", "sec_num": "5" }, { "text": "Then we set the rank for feature group f_{\hat{i}}:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Models", "sec_num": "5" }, { "text": "Rank_{\hat{i}} = t", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Models", "sec_num": "5" }, { "text": "In the end, feature group f_{\hat{i}} is dropped from F_t and the corresponding columns in the training and test datasets are dropped simultaneously:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Models", "sec_num": "5" }, { "text": "F_{t+1} = F_t - \{f_{\hat{i}}\}", "cite_spans": [], "ref_spans": [],
"eq_spans": [], "section": "Classification Models", "sec_num": "5" }, { "text": "This procedure is repeated, until |F t | = 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Models", "sec_num": "5" }, { "text": "The performance metrics of the classification models for both datasets (global accuracy, precision and recall) are presented in Table 1 , along with comparisons with the results of previous studies (a extended version of the table with performance data of additional models is provided in the Appendix). The results of our BRNN classifiers trained on interpretable features are highly competitive with those obtained from state-of-the-art RNN, CNN and capsule networks that exploit word embeddings to represent textual contents. In fact, in both datasets, our classifiers match the performance of the best-performing models within half a percent: For ISOT, the CAPSULE-glove (Goldani et al., 2020) and LSTM-glove (Kula et al., 2020) both achieve an accuracy of 99.8%, while BRNN CoCoGen achieves 99.3%. Moreover, the BRNN CoCoGen model outperformed the LSTMs presented in Kula et al. (2020) that utilize three other word embeddings implemented in the Flair library (news, Twitter, crawl) by up to 4.3% and improved on the performance on the n-gram-based LSVM model by 7.3%. For the LIAR data set, the difference in classification accuracy between BRNN CoCoGen and the CNN utilizing 300-dimensional word embeddings trained on Google News presented in Wang (2017) amounts to 0.2%, when meta-data on context and speaker profiles is taken into account. Excluding all meta-data, the BRNN CoCoGen (ordered) model reached an accuracy of 27.7%, which is even slightly higher than the performance of the Bi-LSTM 300dim word2vec embeddings (Google news) model. 
Our CNN CoCoGen model achieved a classification accuracy of 25.6%, which is 1.4% below the performance of the corresponding CNN model presented in Wang (2017) , the CNN with 300-dim word2vec embeddings (Google News). Interestingly, however, this model suffered a substantial drop in accuracy to 24.8% once it was infused with contextual meta-data. In contrast, all BRNN CoCoGen models invariably benefited from the addition of any type of meta-data. While comparability with the CAPSULE-glove networks presented in Goldani et al. (2020) is limited by their selective integration of meta-data, it is worth noting that the CNN CoCoGen model outperformed all their models without recourse to meta-data. Taken together, these results present strong evidence that successful detection of fake news can be achieved without sacrificing transparency. It is also worth pointing out that approaching the fake news detection task as an ordinal classification problem had considerable effects on a classifier's performance. Specifically, we observed (1) that classification accuracy slightly increased, by 0.6%, relative to an unordered classification approach and (2) that classification behavior shifted from a bias towards recall to a bias towards precision. Furthermore, comparison of the confusion matrices of our classifiers revealed that changing to the ordinal classification approach had positive effects on the distribution of errors: the ordinal classification problem is monotonic, meaning that the further a misclassification is from the main diagonal of a confusion matrix, the more severe it is. The confusion matrix of the best-performing BRNN CoCoGen model shows that for five out of the six classes (pants-fire, false, half-true, mostly-true, true) the most frequent prediction was the true class and the number of misclassifications decreases with increasing distance to the true class.
In contrast, in the case of the unordered classifiers, we observed that the extreme categories ('pants-fire' and 'true') were avoided and predictions of the intermediate categories were preferred, especially in classifiers without meta-data information (confusion matrices for all models are provided in the Appendix). To the best of the authors' knowledge, current models for multi-class fake news detection do not take the order of the labels into account (Oshikawa et al., 2018) . Our results indicate that future work can benefit from taking an ordinal classification approach. The results of our feature ablation experiments revealed a similar rank order in feature importance in both datasets (detailed results can be found in Table 14 in the Appendix): in each case, classification performance was mainly driven by features from the groups Lexical, LIWC, Syntactic and register-based n-grams, and to a lesser extent by information-theoretic and word-prevalence-based features. Specifically, Table 14 indicates that, in the case of the ISOT dataset, dropping the features from the LIWC group results in the largest decrease in classification accuracy of 5.1% on the validation set, resulting in a drop in accuracy on the test set to 93.8%. Re-training the model without the LIWC features yields a new baseline of 99.1%, indicating that the remaining features contained enough information to allow the retrained model to compensate for the loss of the LIWC information. After the elimination of the next two most important feature groups, the syntactic and lexical groups, the retrained model at iteration 3 is still able to achieve an accuracy on the validation set of 97.9%. However, after the drop of the n-gram feature group, classification accuracy drops to 76.3% (validation) and 76.2% (test), indicating that the lost information from the four top feature groups cannot be compensated for by information from the remaining feature groups, i.e.
information-theoretic and word-prevalence-based features. In the case of the LIAR dataset, the relative influence of the six feature groups is more even and the predictive power of the model (27.2% accuracy on the test set) appears to stem from exploiting information from all six feature groups. For a closer examination of how individual features within each feature group distinguished between real and fake news, we derived standard scores by performing z-standardization on all indicators and determined the difference between the mean standard scores of real and fake news (DeltaScore_i = Score_{i, fake news} - Score_{i, real news}). Inspection of the Delta Scores revealed some interesting patterns. For example, real news articles and claims are characterized by (1) relatively higher lexical diversity (as measured by type-token ratio features), (2) stronger reliance on multiword sequences from the news and academic registers (measured by register-based n-gram frequency measures), (3) greater phrasal syntactic complexity (as measured, e.g., by the number of complex nominals per clause) and (4) more frequent use of words from particular domains, such as work, money and power, or word classes, such as prepositions and quantifiers. In contrast, fake news is characterized by (1) greater syntactic complexity (as measured, e.g., by the number of clauses per sentence), (2) frequent use of multiword sequences from the domain of fiction, (3) higher lexical sophistication scores (as measured in terms of relatively infrequent words) and (4) a strong reliance on personal pronouns, adverbs and emotion words.
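The DeltaScore computation described above can be sketched in a few lines (a minimal illustration with made-up values; the function name and the toy data are ours):

```python
from statistics import mean, pstdev

def delta_score(values, labels):
    """z-standardize one feature across all documents, then return
    mean(z | fake) - mean(z | real), i.e. the DeltaScore for that feature."""
    m, s = mean(values), pstdev(values)
    z = [(v - m) / s for v in values]           # z-standardized scores
    fake = [zi for zi, lab in zip(z, labels) if lab == "fake"]
    real = [zi for zi, lab in zip(z, labels) if lab == "real"]
    return mean(fake) - mean(real)
```

A positive DeltaScore for a feature then indicates that fake news scores higher on it, a negative one that real news does.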
While limitations of space preclude an in-depth discussion, these results demonstrate that the use of interpretable features can provide new insights and knowledge about the characteristics of fake news and explain \"why\" a piece of news was detected as fake news; see the literature on explainable fake news detection for further discussion.", "cite_spans": [ { "start": 713, "end": 732, "text": "(Kula et al., 2020)", "ref_id": "BIBREF24" }, { "start": 872, "end": 890, "text": "Kula et al. (2020)", "ref_id": "BIBREF24" }, { "start": 3879, "end": 3902, "text": "(Oshikawa et al., 2018)", "ref_id": "BIBREF33" } ], "ref_spans": [ { "start": 128, "end": 135, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 4154, "end": 4162, "text": "Table 14", "ref_id": "TABREF1" }, { "start": 4419, "end": 4427, "text": "Table 14", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "6" }, { "text": "In recent years, there has been a growing recognition of the need to move away from black-box models towards white-box models for solving practical problems, in particular in the context of critical industries, including healthcare, criminal justice, and news (Rudin, 2019) . This is due to the fact that human experts in a given application domain need models that are both accurate and understandable (Loyola-Gonzalez, 2019) . In this paper, we have made a contribution to this development in the domain of fake news detection. We have demonstrated that models trained on human-interpretable features in combination with deep learning classifiers can compete with black-box models based on word embeddings.
In the future we intend to extend this work in two directions: First, we plan to apply our approach to fake news detection in German, where research still lags far behind that available for English. Second, we also plan to apply our approach to the detection of rumours and conspiracy theories to tackle and combat the ongoing Covid-19 infodemic. Table 6 : Evaluation results on the ISOT and LIAR datasets on the validation and test sets. Models indexed as \"CoCoGen\" comprise textual features only. Models with \"+\" are hybrid models with textual and meta-data. The labels \"ordered\" and \"unordered\" indicate whether an ordinal or a nominal classification method was applied. Confusion matrix (rows = true class, columns = predicted class): pants-fire: 1 23 24 31 11 2; false: 2 49 55 84 53 6; barely-true: 3 36 62 80 27 4; half-true: 1 38 58 108 55 5; mostly-true: 1 19 41 104 74 2; true: 3 19 43 71 66 6", "cite_spans": [ { "start": 254, "end": 267, "text": "(Rudin, 2019)", "ref_id": "BIBREF39" }, { "start": 393, "end": 416, "text": "(Loyola-Gonzalez, 2019)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 1045, "end": 1052, "text": "Table 6", "ref_id": null }, { "start": 1371, "end": 1577, "text": "-true true pants-fire 1 23 24 31 11 2 false 2 49 55 84 53 6 barely-true 3 36 62 80 27 4 half-true 1 38 58 108 55 5 mostly-true 1 19 41 104 74 2 true 3 19 43 71 66 6", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7" }, { "text": "In general, for a text comprising n sentences, there are w = n - ws + 1 windows. Given the constraint that there has to be at least one window, a text has to comprise at least as many sentences as the window size ws (n ≥ ws). 2 CoCoGen was designed with extensibility in mind, so that additional features can easily be implemented.
It uses an abstract measure class for the implementation of additional features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "SBERT is a finetuned BERT network using siamese and triplet network structures. It has been shown to outperform other state-of-the-art sentence embedding methods on common semantic textual similarity and transfer learning tasks (Reimers and Gurevych, 2019).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Detecting opinion spams and fake news using text classification", "authors": [ { "first": "Hadeer", "middle": [], "last": "Ahmed", "suffix": "" }, { "first": "Sherif", "middle": [], "last": "Issa Traore", "suffix": "" }, { "first": "", "middle": [], "last": "Saad", "suffix": "" } ], "year": 2018, "venue": "Security and Privacy", "volume": "1", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hadeer Ahmed, Issa Traore, and Sherif Saad. 2018. Detecting opinion spams and fake news using text classification.
Security and Privacy, 1(1):e9.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Flair: An easy-to-use framework for state-of-the-art nlp", "authors": [ { "first": "Alan", "middle": [], "last": "Akbik", "suffix": "" }, { "first": "Tanja", "middle": [], "last": "Bergmann", "suffix": "" }, { "first": "Duncan", "middle": [], "last": "Blythe", "suffix": "" }, { "first": "Kashif", "middle": [], "last": "Rasul", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Schweter", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Vollgraf", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)", "volume": "", "issue": "", "pages": "54--59", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. 2019. Flair: An easy-to-use framework for state-of-the-art nlp. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 54-59.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Social media and fake news in the 2016 election", "authors": [ { "first": "Hunt", "middle": [], "last": "Allcott", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Gentzkow", "suffix": "" } ], "year": 2017, "venue": "Journal of economic perspectives", "volume": "31", "issue": "2", "pages": "211--247", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hunt Allcott and Matthew Gentzkow. 2017. Social media and fake news in the 2016 election. 
Journal of economic perspectives, 31(2):211-36.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1409.0473" ] }, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Developing linguistic knowledge and language use across adolescence", "authors": [ { "first": "A", "middle": [], "last": "Ruth", "suffix": "" }, { "first": "", "middle": [], "last": "Berman", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruth A Berman. 2007. Developing linguistic knowledge and language use across adolescence.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Word prevalence norms for 62,000 english lemmas", "authors": [ { "first": "Marc", "middle": [], "last": "Brysbaert", "suffix": "" }, { "first": "Pawe\u0142", "middle": [], "last": "Mandera", "suffix": "" }, { "first": "F", "middle": [], "last": "Samantha", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Mccormick", "suffix": "" }, { "first": "", "middle": [], "last": "Keuleers", "suffix": "" } ], "year": 2019, "venue": "Behavior research methods", "volume": "51", "issue": "2", "pages": "467--479", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marc Brysbaert, Pawe\u0142 Mandera, Samantha F McCormick, and Emmanuel Keuleers. 2019. Word prevalence norms for 62,000 english lemmas. 
Behavior research methods, 51(2):467-479.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Language adaptation and learning: Getting explicit about implicit learning", "authors": [ { "first": "Franklin", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Marius", "middle": [], "last": "Janciauskas", "suffix": "" }, { "first": "Hartmut", "middle": [], "last": "Fitz", "suffix": "" } ], "year": 2012, "venue": "Language and Linguistics Compass", "volume": "6", "issue": "5", "pages": "259--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franklin Chang, Marius Janciauskas, and Hartmut Fitz. 2012. Language adaptation and learning: Getting explicit about implicit learning. Language and Linguistics Compass, 6(5):259-278.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A neural network approach to ordinal regression", "authors": [ { "first": "Jianlin", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Zheng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Gianluca", "middle": [], "last": "Pollastri", "suffix": "" } ], "year": 2008, "venue": "IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence)", "volume": "", "issue": "", "pages": "1279--1284", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jianlin Cheng, Zheng Wang, and Gianluca Pollastri. 2008. A neural network approach to ordinal regression. In 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), pages 1279-1284. 
IEEE.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "On the properties of neural machine translation: Encoder-decoder approaches", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merrienboer", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "KyungHyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. CoRR, abs/1409.1259.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Creating language: Integrating evolution, acquisition, and processing", "authors": [ { "first": "H", "middle": [], "last": "Morten", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Christiansen", "suffix": "" }, { "first": "", "middle": [], "last": "Chater", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Morten H Christiansen and Nick Chater. 2016. Creating language: Integrating evolution, acquisition, and pro- cessing. MIT Press.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Towards an integrated science of language", "authors": [ { "first": "H", "middle": [], "last": "Morten", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Christiansen", "suffix": "" }, { "first": "", "middle": [], "last": "Chater", "suffix": "" } ], "year": 2017, "venue": "Nature Human Behaviour", "volume": "1", "issue": "8", "pages": "1--3", "other_ids": {}, "num": null, "urls": [], "raw_text": "Morten H Christiansen and Nick Chater. 2017. Towards an integrated science of language. 
Nature Human Behaviour, 1(8):1-3.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Pranking with ranking", "authors": [ { "first": "Koby", "middle": [], "last": "Crammer", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2002, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "641--647", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koby Crammer and Yoram Singer. 2002. Pranking with ranking. In Advances in neural information processing systems, pages 641-647.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Learning improved feature rankings through decremental input pruning for support vector based drug activity prediction", "authors": [ { "first": "Wladimiro", "middle": [], "last": "D\u00edaz-Villanueva", "suffix": "" }, { "first": "J", "middle": [], "last": "Francesc", "suffix": "" }, { "first": "Vicente", "middle": [], "last": "Ferri", "suffix": "" }, { "first": "", "middle": [], "last": "Cerver\u00f3n", "suffix": "" } ], "year": 2010, "venue": "International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems", "volume": "", "issue": "", "pages": "653--661", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wladimiro D\u00edaz-Villanueva, Francesc J Ferri, and Vicente Cerver\u00f3n. 2010. Learning improved feature rank- ings through decremental input pruning for support vector based drug activity prediction. In International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, pages 653-661. 
Springer.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Compressing learner language: An information-theoretic measure of complexity in sla production data", "authors": [ { "first": "Katharina", "middle": [], "last": "Ehret", "suffix": "" }, { "first": "Benedikt", "middle": [], "last": "Szmrecsanyi", "suffix": "" } ], "year": 2019, "venue": "Second Language Research", "volume": "35", "issue": "1", "pages": "23--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katharina Ehret and Benedikt Szmrecsanyi. 2019. Compressing learner language: An information-theoretic measure of complexity in sla production data. Second Language Research, 35(1):23-45.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Saeedeh Momtazi, and Reza Safabakhsh. 2020. Detecting fake news with capsule neural networks", "authors": [ { "first": "Mohammad", "middle": [], "last": "Hadi", "suffix": "" }, { "first": "Goldani", "middle": [], "last": "", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.01030" ] }, "num": null, "urls": [], "raw_text": "Mohammad Hadi Goldani, Saeedeh Momtazi, and Reza Safabakhsh. 2020. Detecting fake news with capsule neural networks. arXiv preprint arXiv:2002.01030.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "When does cognitive functioning peak? the asynchronous rise and fall of different cognitive abilities across the life span", "authors": [ { "first": "K", "middle": [], "last": "Joshua", "suffix": "" }, { "first": "Laura", "middle": [ "T" ], "last": "Hartshorne", "suffix": "" }, { "first": "", "middle": [], "last": "Germine", "suffix": "" } ], "year": 2015, "venue": "Psychological science", "volume": "26", "issue": "4", "pages": "433--443", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joshua K Hartshorne and Laura T Germine. 2015. When does cognitive functioning peak? 
the asynchronous rise and fall of different cognitive abilities across the life span. Psychological science, 26(4):433-443.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing", "authors": [ { "first": "Matthew", "middle": [], "last": "Honnibal", "suffix": "" }, { "first": "Ines", "middle": [], "last": "Montani", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Estimating the prevalence and diversity of words in written language", "authors": [ { "first": "T", "middle": [], "last": "Brendan", "suffix": "" }, { "first": "Melody", "middle": [], "last": "Johns", "suffix": "" }, { "first": "Michael N", "middle": [], "last": "Dye", "suffix": "" }, { "first": "", "middle": [], "last": "Jones", "suffix": "" } ], "year": 2020, "venue": "Quarterly Journal of Experimental Psychology", "volume": "73", "issue": "6", "pages": "841--855", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brendan T Johns, Melody Dye, and Michael N Jones. 2020. Estimating the prevalence and diversity of words in written language. 
Quarterly Journal of Experimental Psychology, 73(6):841-855.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Multi-source multi-class fake news detection", "authors": [ { "first": "Hamid", "middle": [], "last": "Karimi", "suffix": "" }, { "first": "Proteek", "middle": [], "last": "Roy", "suffix": "" }, { "first": "Sari", "middle": [], "last": "Saba-Sadiya", "suffix": "" }, { "first": "Jiliang", "middle": [], "last": "Tang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1546--1557", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hamid Karimi, Proteek Roy, Sari Saba-Sadiya, and Jiliang Tang. 2018. Multi-source multi-class fake news detection. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1546- 1557.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Understanding the dynamics of second language writing through keystroke logging and complexity contours", "authors": [ { "first": "Elma", "middle": [], "last": "Kerz", "suffix": "" }, { "first": "Fabio", "middle": [], "last": "Pruneri", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Wiechmann", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Qiao", "suffix": "" }, { "first": "Marcus", "middle": [], "last": "Str\u00f6bel", "suffix": "" } ], "year": 2020, "venue": "Proceedings of The 12th Language Resources and Evaluation Conference (LREC2020))", "volume": "", "issue": "", "pages": "182--188", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elma Kerz, Fabio Pruneri, Daniel Wiechmann, Yu Qiao, and Marcus Str\u00f6bel. 2020a. Understanding the dynamics of second language writing through keystroke logging and complexity contours. 
In Proceedings of The 12th Language Resources and Evaluation Conference (LREC2020)), pages 182-188.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Becoming linguistically mature: Modeling english and german children's writing development across school grades", "authors": [ { "first": "Elma", "middle": [], "last": "Kerz", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Qiao", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Wiechmann", "suffix": "" }, { "first": "Marcus", "middle": [], "last": "Str\u00f6bel", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications (BEA2020))", "volume": "", "issue": "", "pages": "65--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elma Kerz, Yu Qiao, Daniel Wiechmann, and Marcus Str\u00f6bel. 2020b. Becoming linguistically mature: Model- ing english and german children's writing development across school grades. In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications (BEA2020)), pages 65-74.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Exploiting a speaker's credibility to detect fake news", "authors": [ { "first": "Angelika", "middle": [], "last": "Kirilin", "suffix": "" }, { "first": "Micheal", "middle": [], "last": "Strube", "suffix": "" } ], "year": 2018, "venue": "Proceedings of Data Science, Journalism & Media workshop at KDD (DSJM'18)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Angelika Kirilin and Micheal Strube. 2018. Exploiting a speaker's credibility to detect fake news. 
In Proceedings of Data Science, Journalism & Media workshop at KDD (DSJM'18).", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Accurate unlexicalized parsing", "authors": [ { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st annual meeting of the association for computational linguistics", "volume": "", "issue": "", "pages": "423--430", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Klein and Christopher D Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st annual meeting of the association for computational linguistics, pages 423-430.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Sentiment analysis for fake news detection by means of neural networks", "authors": [ { "first": "Sebastian", "middle": [], "last": "Kula", "suffix": "" }, { "first": "Micha\u0142", "middle": [], "last": "Chora\u015b", "suffix": "" }, { "first": "Rafa\u0142", "middle": [], "last": "Kozik", "suffix": "" }, { "first": "Pawe\u0142", "middle": [], "last": "Ksieniewicz", "suffix": "" }, { "first": "Micha\u0142", "middle": [], "last": "Wo\u017aniak", "suffix": "" } ], "year": 2020, "venue": "International Conference on Computational Science", "volume": "", "issue": "", "pages": "653--666", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Kula, Micha\u0142 Chora\u015b, Rafa\u0142 Kozik, Pawe\u0142 Ksieniewicz, and Micha\u0142 Wo\u017aniak. 2020. Sentiment analysis for fake news detection by means of neural networks. In International Conference on Computational Science, pages 653-666. 
Springer.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Selection of relevant features in machine learning", "authors": [ { "first": "Pat", "middle": [], "last": "Langley", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the AAAI Fall symposium on relevance", "volume": "", "issue": "", "pages": "1--5", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pat Langley. 1994. Selection of relevant features in machine learning. In Proceedings of the AAAI Fall symposium on relevance, pages 1-5.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Fake news detection through multi-perspective speaker profiles. Association for Computational Linguistics", "authors": [ { "first": "Yunfei", "middle": [], "last": "Long", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yunfei Long. 2017. Fake news detection through multi-perspective speaker profiles. Association for Computa- tional Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Black-box vs. white-box: Understanding their advantages and weaknesses from a practical point of view", "authors": [ { "first": "Octavio", "middle": [], "last": "Loyola-Gonzalez", "suffix": "" } ], "year": 2019, "venue": "IEEE Access", "volume": "7", "issue": "", "pages": "154096--154113", "other_ids": {}, "num": null, "urls": [], "raw_text": "Octavio Loyola-Gonzalez. 2019. Black-box vs. white-box: Understanding their advantages and weaknesses from a practical point of view. IEEE Access, 7:154096-154113.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Automatic analysis of syntactic complexity in second language writing", "authors": [ { "first": "Xiaofei", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2010, "venue": "International Journal of Corpus Linguistics", "volume": "15", "issue": "4", "pages": "474--496", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaofei Lu. 2010. 
Automatic analysis of syntactic complexity in second language writing. International Journal of Corpus Linguistics, 15(4):474-496.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "The relationship of lexical richness to the quality of ESL learners' oral narratives. The Modern Language", "authors": [ { "first": "Xiaofei", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2012, "venue": "Journal", "volume": "96", "issue": "2", "pages": "190--208", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaofei Lu. 2012. The relationship of lexical richness to the quality of ESL learners' oral narratives. The Modern Language Journal, 96(2):190-208.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "The stanford corenlp natural language processing toolkit", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Prismatic", "middle": [], "last": "Inc", "suffix": "" }, { "first": "Steven", "middle": [ "J" ], "last": "Bethard", "suffix": "" }, { "first": "David", "middle": [], "last": "Mcclosky", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Prismatic Inc, Steven J. Bethard, and David Mcclosky. 2014. The stanford corenlp natural language processing toolkit. 
In ACL System Demonstrations.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Prediction risk and architecture selection for neural networks", "authors": [ { "first": "John", "middle": [], "last": "Moody", "suffix": "" } ], "year": 1994, "venue": "From statistics to neural networks", "volume": "", "issue": "", "pages": "147--165", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Moody. 1994. Prediction risk and architecture selection for neural networks. In From statistics to neural networks, pages 147-165. Springer.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "A survey on natural language processing for fake news detection", "authors": [ { "first": "Ray", "middle": [], "last": "Oshikawa", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Qian", "suffix": "" }, { "first": "William", "middle": [ "Yang" ], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1811.00770" ] }, "num": null, "urls": [], "raw_text": "Ray Oshikawa, Jing Qian, and William Yang Wang. 2018. A survey on natural language processing for fake news detection.
arXiv preprint arXiv:1811.00770.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Psychological aspects of natural language use: Our words, our selves. Annual review of psychology", "authors": [ { "first": "Matthias", "middle": [ "R" ], "last": "James W Pennebaker", "suffix": "" }, { "first": "Kate", "middle": [ "G" ], "last": "Mehl", "suffix": "" }, { "first": "", "middle": [], "last": "Niederhoffer", "suffix": "" } ], "year": 2003, "venue": "", "volume": "54", "issue": "", "pages": "547--577", "other_ids": {}, "num": null, "urls": [], "raw_text": "James W Pennebaker, Matthias R Mehl, and Kate G Niederhoffer. 2003. Psychological aspects of natural language use: Our words, our selves. Annual review of psychology, 54(1):547-577.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word represen- tation. 
In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "A stylometric inquiry into hyperpartisan and fake news", "authors": [ { "first": "Martin", "middle": [], "last": "Potthast", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Kiesel", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Reinartz", "suffix": "" }, { "first": "Janek", "middle": [], "last": "Bevendorff", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1702.05638" ] }, "num": null, "urls": [], "raw_text": "Martin Potthast, Johannes Kiesel, Kevin Reinartz, Janek Bevendorff, and Benno Stein. 2017. A stylometric inquiry into hyperpartisan and fake news. arXiv preprint arXiv:1702.05638.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Pytorch: Tensors and dynamic neural networks in python with strong gpu acceleration", "authors": [ { "first": "", "middle": [], "last": "Pytorch", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pytorch. 2019. Pytorch: Tensors and dynamic neural networks in python with strong gpu acceleration. https: //github.com/pytorch/pytorch.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Sentence-BERT: Sentence embeddings using siamese BERT-networks", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2019. 
Sentence-BERT: Sentence embeddings using siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 11.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead", "authors": [ { "first": "Cynthia", "middle": [], "last": "Rudin", "suffix": "" } ], "year": 2019, "venue": "Nature Machine Intelligence", "volume": "1", "issue": "5", "pages": "206--215", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cynthia Rudin. 2019. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5):206-215.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Long short-term memory recurrent neural network architectures for large scale acoustic modeling", "authors": [ { "first": "Hasim", "middle": [], "last": "Sak", "suffix": "" }, { "first": "W", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Fran\u00e7oise", "middle": [], "last": "Senior", "suffix": "" }, { "first": "", "middle": [], "last": "Beaufays", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hasim Sak, Andrew W Senior, and Fran\u00e7oise Beaufays. 2014. 
Long short-term memory recurrent neural network architectures for large scale acoustic modeling.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Fake news detection on social media: A data mining perspective", "authors": [ { "first": "Kai", "middle": [], "last": "Shu", "suffix": "" }, { "first": "Amy", "middle": [], "last": "Sliva", "suffix": "" }, { "first": "Suhang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jiliang", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Huan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2017, "venue": "ACM SIGKDD explorations newsletter", "volume": "19", "issue": "1", "pages": "22--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kai Shu, Amy Sliva, Suhang Wang, Jiliang Tang, and Huan Liu. 2017. Fake news detection on social media: A data mining perspective. ACM SIGKDD explorations newsletter, 19(1):22-36.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "defend: Explainable fake news detection", "authors": [ { "first": "Kai", "middle": [], "last": "Shu", "suffix": "" }, { "first": "Limeng", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Suhang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Dongwon", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Huan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining", "volume": "", "issue": "", "pages": "395--405", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kai Shu, Limeng Cui, Suhang Wang, Dongwon Lee, and Huan Liu. 2019. defend: Explainable fake news detection. 
In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 395-405.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Text genre classification based on linguistic complexity contours using a recurrent neural network", "authors": [ { "first": "Marcus", "middle": [], "last": "Str\u00f6bel", "suffix": "" }, { "first": "Elma", "middle": [], "last": "Kerz", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Wiechmann", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Qiao", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Tenth International Workshop 'Modelling and Reasoning in Context' co-located with the 27th International Joint Conference on Artificial Intelligence (IJCAI 2018) and the 23rd European Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "56--63", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcus Str\u00f6bel, Elma Kerz, Daniel Wiechmann, and Yu Qiao. 2018. Text genre classification based on linguistic complexity contours using a recurrent neural network. In Proceedings of the Tenth International Workshop 'Modelling and Reasoning in Context' co-located with the 27th International Joint Conference on Artificial Intelligence (IJCAI 2018) and the 23rd European Conference on Artificial Intelligence, pages 56-63.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "The psychological meaning of words: Liwc and computerized text analysis methods", "authors": [ { "first": "R", "middle": [], "last": "Yla", "suffix": "" }, { "first": "James", "middle": [ "W" ], "last": "Tausczik", "suffix": "" }, { "first": "", "middle": [], "last": "Pennebaker", "suffix": "" } ], "year": 2010, "venue": "Journal of language and social psychology", "volume": "29", "issue": "1", "pages": "24--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yla R Tausczik and James W Pennebaker. 2010. 
The psychological meaning of words: Liwc and computerized text analysis methods. Journal of language and social psychology, 29(1):24-54.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Social media, political polarization, and political disinformation: A review of the scientific literature. Political polarization, and political disinformation: a review of the scientific literature", "authors": [ { "first": "A", "middle": [], "last": "Joshua", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Tucker", "suffix": "" }, { "first": "Pablo", "middle": [], "last": "Guess", "suffix": "" }, { "first": "Cristian", "middle": [], "last": "Barber\u00e1", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Vaccari", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Siegel", "suffix": "" }, { "first": "Denis", "middle": [], "last": "Sanovich", "suffix": "" }, { "first": "Brendan", "middle": [], "last": "Stukal", "suffix": "" }, { "first": "", "middle": [], "last": "Nyhan", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joshua A Tucker, Andrew Guess, Pablo Barber\u00e1, Cristian Vaccari, Alexandra Siegel, Sergey Sanovich, Denis Stukal, and Brendan Nyhan. 2018. Social media, political polarization, and political disinformation: A review of the scientific literature. 
Political polarization, and political disinformation: a review of the scientific literature (March 19, 2018).", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Selecting neural network architectures via the prediction risk: Application to corporate bond rating prediction", "authors": [ { "first": "Joachim", "middle": [], "last": "Utans", "suffix": "" }, { "first": "John", "middle": [], "last": "Moody", "suffix": "" } ], "year": 1991, "venue": "Proceedings First International Conference on Artificial Intelligence Applications on Wall Street", "volume": "", "issue": "", "pages": "35--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joachim Utans and John Moody. 1991. Selecting neural network architectures via the prediction risk: Application to corporate bond rating prediction. In Proceedings First International Conference on Artificial Intelligence Applications on Wall Street, pages 35-41. IEEE.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "liar, liar pants on fire\": A new benchmark dataset for fake news detection", "authors": [ { "first": "William", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Wang", "middle": [], "last": "", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1705.00648" ] }, "num": null, "urls": [], "raw_text": "William Yang Wang. 2017. \" liar, liar pants on fire\": A new benchmark dataset for fake news detection. arXiv preprint arXiv:1705.00648.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Social networks, the 2016 us presidential election, and kantian ethics: applying the categorical imperative to cambridge analytica's behavioral microtargeting", "authors": [ { "first": "Ken", "middle": [], "last": "Ward", "suffix": "" } ], "year": 2018, "venue": "Journal of media ethics", "volume": "33", "issue": "3", "pages": "133--148", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ken Ward. 2018. 
Social networks, the 2016 us presidential election, and kantian ethics: applying the categorical imperative to cambridge analytica's behavioral microtargeting. Journal of media ethics, 33(3):133-148.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "An overview of online fake news: Characterization, detection, and discussion", "authors": [ { "first": "Xichen", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Ali", "middle": [ "A" ], "last": "Ghorbani", "suffix": "" } ], "year": 2020, "venue": "Information Processing & Management", "volume": "57", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xichen Zhang and Ali A Ghorbani. 2020. An overview of online fake news: Characterization, detection, and discussion. Information Processing & Management, 57(2):102025.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Fake news: Fundamental theories, detection strategies and challenges", "authors": [ { "first": "Xinyi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Reza", "middle": [], "last": "Zafarani", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Shu", "suffix": "" }, { "first": "Huan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the twelfth ACM international conference on web search and data mining", "volume": "", "issue": "", "pages": "836--837", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinyi Zhou, Reza Zafarani, Kai Shu, and Huan Liu. 2019. Fake news: Fundamental theories, detection strategies and challenges. In Proceedings of the twelfth ACM international conference on web search and data mining, pages 836-837.", "links": null } }, "ref_entries": { "TABREF0": { "num": null, "html": null, "content": "
                                                 Validation set                   Test set
Dataset  Model                                   Accuracy  Precision  Recall     Accuracy  Precision  Recall
ISOT     LSVM unigram 50k 1                      -         -          -          0.920     -          -
         LSTM-glove 2                            -         -          -          0.998     -          -
         CAPSULE-glove 3                         -         -          -          0.998     -          -
         BRNN SBERT                              0.998     0.998      0.998      0.997     0.997      0.997
         BRNN CoCoGen                            0.994     0.994      0.994      0.993     0.993      0.993
LIAR     Bi-LSTM 300-dim word2vec 4              0.223     -          -          0.233     -          -
         embeddings (Google News)
         CNN 300-dim word2vec 4                  0.247     -          -          0.274     -          -
         embeddings (Google News)
         + context + speaker profile
         CAPSULE-glove + Party 3                 0.261     -          -          0.240     -          -
         CAPSULE-glove + State 3                 0.240     -          -          0.243     -          -
         CAPSULE-glove + Job 3                   0.254     -          -          0.251     -          -
         BRNN SBERT (ordered)                    0.292     0.272      0.327      0.270     0.296      0.249
         BRNN CoCoGen (ordered)                  0.251     0.281      0.218      0.237     0.217      0.207
         BRNN CoCoGen (ordered)                  0.264     0.280      0.241      0.253     0.281      0.238
         + context
         BRNN CoCoGen (ordered)                  0.284     0.305      0.263      0.272     0.304      0.258
         + context + speaker profile
Inspection
", "text": "DeltaScores for the top-20 features for both datasets are provided in the Appendix).", "type_str": "table" }, "TABREF1": { "num": null, "html": null, "content": "", "text": "Evaluation results on the ISOT and LIAR datasets on the validation and test sets. 1 = Ahmed et al., 2018; 2 =
Feature group         Size  Subtypes                          Example/Description
Syntactic complexity  18    Length of production unit         e.g. mean length of clause
                            Subordination                     e.g. clauses per sentence
                            Coordination                      e.g. coordinate phrases per clause
                            Particular structures             e.g. complex nominals per clause
Lexical richness      12    Lexical density                   e.g. ratio of content words to all words
                            Lexical diversity                 e.g. type-token ratio
                            Lexical sophistication            e.g. words on General Service List
Register-based        25    Spoken (n ∈ [1, 5])               measures of frequencies
n-gram frequency            Fiction (n ∈ [1, 5])              of n-grams of order 1-5
                            Magazine (n ∈ [1, 5])             from five language registers
                            News (n ∈ [1, 5])
                            Academic (n ∈ [1, 5])
Information theory    3     Kolmogorov Deflate                measures use the Deflate algorithm
                            Kolmogorov Deflate Syntactic      and relate the size of the compressed file
                            Kolmogorov Deflate Morphological  to the size of the original file
LIWC-style            60    2300 words from > 70 classes      classes include e.g.
                                                              function, grammar,
                                                              perceptual, cognitive
                                                              and biological processes,
                                                              personal concerns, affect,
                                                              social, basic drives, ...
Word-Prevalence       36    crowdsourcing-based               measures capture information on
                            corpus-based                      word frequency, contextual
                                                              diversity and semantic
                                                              distinctiveness differentiated
                                                              across language variety (US, UK)
                                                              and gender (male, female)
Dataset  Model                                          Optimizer  Learning rate  Normalization method
ISOT     BRNN                                           Adamax     0.001          Standardization
LIAR     BRNN (ordered)                                 Adamax     0.001          Standardization
         BRNN (unordered)                               RMSprop    0.001          Min-Max
         CNN                                            RMSprop    0.01           Standardization
         BRNN + context                                 Adamax     0.0001         Standardization
         BRNN + context + speaker profile (unordered)   RMSprop    0.0001         Standardization
         BRNN + context + speaker profile (ordered)     Adamax     0.001          Standardization
", "text": "Concise overview of the six feature groups", "type_str": "table" }, "TABREF3": { "num": null, "html": null, "content": "
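The "Information theory" feature group in the table above approximates Kolmogorov complexity via the Deflate algorithm: the text is compressed and the size of the compressed output is related to the size of the original. The idea can be sketched in a few lines of Python; note this is an illustrative sketch of the compression-ratio idea, and the function name is ours, not taken from the authors' CoCoGen tool.

```python
import zlib


def kolmogorov_deflate(text: str) -> float:
    """Approximate Kolmogorov complexity as a Deflate compression ratio:
    size of the compressed text divided by the size of the original.
    Higher values mean less compressible, i.e. more 'complex' text.
    (Illustrative sketch; not the paper's exact implementation.)"""
    data = text.encode("utf-8")
    if not data:
        return 0.0
    # zlib uses the Deflate algorithm; level 9 = maximum compression.
    return len(zlib.compress(data, 9)) / len(data)


# Repetitive text compresses well and scores low; varied text scores high.
print(kolmogorov_deflate("the cat sat on the mat " * 50))
print(kolmogorov_deflate("Interpretable linguistic features complement opaque embeddings."))
```

The syntactic and morphological variants listed in the table apply the same ratio to transformed representations of the text (e.g. part-of-speech or lemmatized versions) rather than to the raw character stream.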
Validation set    Test set
", "text": "Optimal combinations of optimizer, learning rate and normalization methods identified via grid search.", "type_str": "table" }, "TABREF4": { "num": null, "html": null, "content": "
Measure                         Delta Score     Measure                         Delta Score
1   Lexical Density                 0.183   LIWC Adverb                     0.894
2   LIWC Focus future               0.132   LIWC Ipron                      0.733
3   LIWC Relig                      0.117   ngram 2 fic                     0.728
4   Lexical Div CNDW                0.115   LIWC You                        0.681
5   Lexical Div TTR                 0.115   LIWC Focuspresent               0.677
6   Lexical Soph BNC                0.114   Mor Kolmogorov                  0.670
7   LIWC Verb                       0.106   Syntactic ClausesPerSentence    0.664
8   LIWC Hear                       0.099   Base Kolmogorov                 0.652
9   MeanLengthWord                  0.098   LIWC Certain                    0.630
10  Base Kolmogorov                 0.095   Syntactic Kolmogorov            0.629
11  LIWC Negate                     0.091   ngram 3 fic                     0.620
12  Lexical Soph ANC                0.091   LIWC See                        0.561
13  LIWC Posemo                     0.087   Syntactic DepClausesPerTUnit    0.502
14  Morphological Kolmogorov        0.087   LIWC Interrog                   0.498
15  Syntactic Kolmogorov            0.084   LIWC I                          0.438
16  Syntactic VerbPhrasesPerTUnit   0.081   LIWC They                       0.431
17  MeanSyllablesPerWord            0.076   LIWC Shehe                      0.420
18  Lexical Soph NGSL               0.075   LIWC Female                     0.385
19  LIWC Risk                       0.062   LIWC Swear                      0.380
20  LIWC Focuspresent               0.059   ngram 1 fic                     0.370
Top-20 Measures Real News
LIAR                                            ISOT
Measure                         Delta Score     Measure                         Delta Score
1   LIWC Quant          -0.212   Syntactic ComplexNomPerClause     -0.940
2   LIWC Compare        -0.197   Syntactic MeanLengthClause        -0.929
3   LIWC Adj            -0.171   LIWC Prep                         -0.763
4   ngram 2 news        -0.148   LIWC Hear                         -0.747
5   ngram 2 acad        -0.147   LIWC Power                        -0.739
6   ngram 2 mag         -0.145   LIWC Work                         -0.734
7   LIWC Time           -0.133   LIWC Article                      -0.723
8   WordPrevalence      -0.132   LIWC Focus past                   -0.666
9   ngram 1 acad        -0.131   LIWC Space                        -0.545
10  ngram 3 acad        -0.128   Syntactic CoordPhrasesPerClause   -0.509
11  ngram 3 mag         -0.123   NP PreModWords                    -0.493
12  ngram 3 news        -0.122   Lexical Div TTR                   -0.465
13  ngram 1 mag         -0.120   Lexical Div CNDW                  -0.465
14  ngram 1 fic         -0.118   Lexical Div RTTR                  -0.413
15  ngram 1 news        -0.118   Lexical Div CTTR                  -0.403
16  LIWC Number         -0.116   LIWC Money                        -0.314
17  ngram 1 spok        -0.111   Lexical Soph BNC                  -0.309
18  LIWC Space          -0.110   ngram 5 news                      -0.306
19  LIWC Prep           -0.106   LIWC Achieve                      -0.303
20  ngram 2 spok        -0.105   Lexical Density                   -0.302
", "text": "Top-20 Measures Fake News (LIAR and ISOT)", "type_str": "table" }, "TABREF5": { "num": null, "html": null, "content": "
: Confusion matrix of LIAR dataset BRNN model
             pants-fire  false  barely-true  half-true  mostly-true  true
pants-fire            0     43            2         31           16     0
false                 0    107            4         65           73     0
barely-true           0     78            8         61           65     0
half-true             0     83            9         84           89     0
mostly-true           0     44            3         84          110     0
true                  0     60            0         60           88     0
", "text": "", "type_str": "table" }, "TABREF6": { "num": null, "html": null, "content": "
: Confusion matrix of LIAR dataset BRNN model (non-ordinal)
             pants-fire  false  barely-true  half-true  mostly-true  true
pants-fire            0     42            1         29           20     0
false                 0    114            3         57           75     0
barely-true           0     86            2         71           52     1
half-true             0     88            4         97           76     0
mostly-true           0     77            2         51          111     0
true                  0     74            1         46           87     0
", "text": "", "type_str": "table" }, "TABREF7": { "num": null, "html": null, "content": "
: Confusion matrix of LIAR dataset CNN model
             pants-fire  false  barely-true  half-true  mostly-true  true
pants-fire           15     29           22         14           11     1
false                 8     57           65         62           49     8
barely-true           8     38           62         66           33     5
half-true             5     35           72         99           50     4
mostly-true           3     22           50         86           77     3
true                  3     18           51         68           58    10
", "text": "", "type_str": "table" }, "TABREF8": { "num": null, "html": null, "content": "
: Confusion matrix of LIAR dataset BRNN model with meta context
             pants-fire  false  barely-true  half-true  mostly-true  true
pants-fire           20     27           18         14           12     1
false                 8     54           64         74           44     5
barely-true           5     39           59         65           42     2
half-true             2     32           46        111           69     5
mostly-true           1     21           36         85           94     4
true                  3     25           31         64           79     6
", "text": "", "type_str": "table" }, "TABREF9": { "num": null, "html": null, "content": "", "text": "", "type_str": "table" } } } }