|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:59:47.245041Z" |
|
}, |
|
"title": "Automatic Detection of Hungarian Clickbait and Entertaining Fake News", |
|
"authors": [ |
|
{ |
|
"first": "Veronika", |
|
"middle": [], |
|
"last": "Vincze", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "MTA-SZTE Research Group on Artificial Intelligence", |
|
"institution": "", |
|
"location": { |
|
"settlement": "Szeged", |
|
"country": "Hungary" |
|
} |
|
}, |
|
"email": "vinczev@inf.u-szeged.hu" |
|
}, |
|
{ |
|
"first": "Martina", |
|
"middle": [ |
|
"Katalin" |
|
], |
|
"last": "Szab\u00f3", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "martina@inf.u-szeged.hu" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Online news do not always come from reliable sources and they are not always even realistic. The constantly growing number of online textual data has raised the need for detecting deception and bias in texts from different domains recently. In this paper, we identify different types of unrealistic news (clickbait and fake news written for entertainment purposes) written in Hungarian on the basis of a rich feature set and with the help of machine learning methods. Our tool achieves competitive scores: it is able to classify clickbait, fake news written for entertainment purposes and real news with an accuracy of over 80%. It is also highlighted that morphological features perform the best in this classification task.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Online news do not always come from reliable sources and they are not always even realistic. The constantly growing number of online textual data has raised the need for detecting deception and bias in texts from different domains recently. In this paper, we identify different types of unrealistic news (clickbait and fake news written for entertainment purposes) written in Hungarian on the basis of a rich feature set and with the help of machine learning methods. Our tool achieves competitive scores: it is able to classify clickbait, fake news written for entertainment purposes and real news with an accuracy of over 80%. It is also highlighted that morphological features perform the best in this classification task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The growing number of online news and the ability to easily and rapidly distribute information on the internet increasingly stimulate demand for automatic fact checking . As a consequence, linguistic aspects of deception, bias and uncertainty detection have raised worldwide interest recently and have been thoroughly studied in a variety of NLP-applications (Zhou et al., 2004; Mihalcea and Strapparava, 2009; Szarvas et al., 2012; Choi et al., 2012; Girlea et al., 2016; Barr\u00f3n-Cede\u00f1o et al., 2019) . However, determining the trustworthiness of news and separating facts from misinformation is still a challenging and often controversial task (Graves, 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 359, |
|
"end": 378, |
|
"text": "(Zhou et al., 2004;", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 379, |
|
"end": 410, |
|
"text": "Mihalcea and Strapparava, 2009;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 411, |
|
"end": 432, |
|
"text": "Szarvas et al., 2012;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 433, |
|
"end": 451, |
|
"text": "Choi et al., 2012;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 452, |
|
"end": 472, |
|
"text": "Girlea et al., 2016;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 473, |
|
"end": 500, |
|
"text": "Barr\u00f3n-Cede\u00f1o et al., 2019)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 645, |
|
"end": 659, |
|
"text": "(Graves, 2018)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "There may be several motivations for spreading fake news on the internet. Hoax websites, for instance, publish dubious news (clickbait) in order to spread different kinds of misinformation or make money by spreading commercials. To be more specific, fabricated news draw disproportionate attention on social networks most of the time, outperforming conventional news (Graves, 2018) . Consequently, publishing interesting fake news is a great way to spread advertisements on the internet. At the same time, there are pages where the primary purpose is to entertain readers by spreading fake news. In this case the readers are aware of the deceptive nature of the information provided and they read it just for fun.", |
|
"cite_spans": [ |
|
{ |
|
"start": 367, |
|
"end": 381, |
|
"text": "(Graves, 2018)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper we will focus on two types of fake news written in Hungarian, namely clickbait and fake news written for entertainment purposes. Our main goal is to distinguish these two types from real news with machine learning methods. We will also analyse what type of linguistic information can contribute the most to performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The paper is structured as follows: First, we will give a short review of the relevant research work and we detail the importance and benefits of the recent analysis. Then, we will present the corpus analysed, along with its basic statistical data and the methods and tools we used in the experiments. Next, we will introduce our rich feature set consisting of statistical, morphological, syntactic, semantic and pragmatic features applied for statistical significance analysis and machine learning experiments. We will discuss the findings of the significance analysis, and then, we will provide a detailed description of the machine learning experiments, along with the results. Finally, we discuss the results and add some ideas for future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Most of the authors address the issue of automatic fact extraction and binary classification verification task based on machine learning methods and annotated datasets. There are several studies addressing the phenomena of deception and bias in different types of discourse. For instance, most of the authors analyze the phenomena of deception and bias either in speech text (Fetzer, 2008; Fraser, 2010; Scheithauer, 2007; Simon-Vandenbergen et al., 2007) or address the issue of automatic fact extraction and binary classification verification task (Greene and Resnik, 2009; Rubin et al., 2015; Wang, 2017; Graves, 2018) . In addition, uncertainty detectors have mostly been developed in the biological and medical domains (Szarvas et al., 2012) . Also, a few studies seem to address the issue of the systematic analysis of a huge amount of propaganda texts (Propaganda Analysis, 1938; Rashkin et al., 2017; Barr\u00f3n-Cede\u00f1o et al., 2019; Vincze et al., 2019; .", |
|
"cite_spans": [ |
|
{ |
|
"start": 375, |
|
"end": 389, |
|
"text": "(Fetzer, 2008;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 390, |
|
"end": 403, |
|
"text": "Fraser, 2010;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 404, |
|
"end": 422, |
|
"text": "Scheithauer, 2007;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 423, |
|
"end": 455, |
|
"text": "Simon-Vandenbergen et al., 2007)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 550, |
|
"end": 575, |
|
"text": "(Greene and Resnik, 2009;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 576, |
|
"end": 595, |
|
"text": "Rubin et al., 2015;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 596, |
|
"end": 607, |
|
"text": "Wang, 2017;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 608, |
|
"end": 621, |
|
"text": "Graves, 2018)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 724, |
|
"end": 746, |
|
"text": "(Szarvas et al., 2012)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 859, |
|
"end": 886, |
|
"text": "(Propaganda Analysis, 1938;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 887, |
|
"end": 908, |
|
"text": "Rashkin et al., 2017;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 909, |
|
"end": 936, |
|
"text": "Barr\u00f3n-Cede\u00f1o et al., 2019;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 937, |
|
"end": 957, |
|
"text": "Vincze et al., 2019;", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Recently, several databases have been compiled for the English language which contain both fake and real news. For instance, the dataset described in Vlachos and Riedel (2014) consists of a total of 221 statements, along with their veracity value. The Liar corpus contains approximately 13,000 short political statements (Wang, 2017). The Emergent Corpus consists of 300 statements and 2500 news related to the semantic content of the statements, along with their veracity value (Ferreira and Vlachos, 2016 ). Our study is most similar in vein to Rubin et al. (2016) , which compares satirical news to real news, collected from different websites. P\u00e9rez-Rosas et al. (2018) used crowdsourcing to generate fake news on the basis of real news and then carried out machine learning experiments to separate the two types. There are also a few studies that identify clickbaits (see e.g. Biyani et al. (2016) for English and Karadzhov et al. (2017) for Bulgarian).", |
|
"cite_spans": [ |
|
{ |
|
"start": 150, |
|
"end": 175, |
|
"text": "Vlachos and Riedel (2014)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 479, |
|
"end": 506, |
|
"text": "(Ferreira and Vlachos, 2016", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 547, |
|
"end": 566, |
|
"text": "Rubin et al. (2016)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 648, |
|
"end": 673, |
|
"text": "P\u00e9rez-Rosas et al. (2018)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 882, |
|
"end": 902, |
|
"text": "Biyani et al. (2016)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 919, |
|
"end": 942, |
|
"text": "Karadzhov et al. (2017)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The above studies focus on the phenomena of deception and bias almost exclusively in sources written in English, so the reported research findings and models are mostly limited to the English language. The same goes for the currently existing datasets. Santos et al. (2020) forms an exception, which, however, aims at the distinction of Brazilian Portuguese fake and real news. To the best of our knowledge, our recent research work is the first attempt at the automatic detection of Hungarian fake news.", |
|
"cite_spans": [ |
|
{ |
|
"start": 253, |
|
"end": 273, |
|
"text": "Santos et al. (2020)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The main contributions of our paper are the following:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u2022 We present a novel dataset for detecting fake news in Hungarian;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u2022 We define a rich feature set of linguistic parameters for detecting different characteristics of different types of Hungarian news examined;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u2022 We carry out a detailed statistical analysis of linguistic parameters that may distinguish real news, clickbait and entertaining fake news in Hungarian;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u2022 We perform machine learning experiments with the above mentioned feature set for detecting real news, clickbait and fake news written for entertainment purposes;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u2022 We analyze the effect of each set of features and identify the best combination of these features for the task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our study was conducted on a corpus of 180 online news compiled by us. First, the real news come from several national and regional news portals, e.g. www.index.hu, www.origo.hu and www. delmagyar.hu. Second, the fake news were collected from two sources. On the one hand, we selected some special news published on the 1st of April (Fools' Day) on some of the news portals basically spreading real news, as this day is traditionally seen as an occasion for making pranks. These news were intended to play a trick on readers. On the other hand, we downloaded articles from the H\u00edrcs\u00e1rda website 1 . These texts are mostly based on parodies of current political and public events, and their explicit purpose is not to spread real or fake news but to entertain the readers. Third, the clickbait news were downloaded from websites that were collected by a Hungarian real news portal and were claimed to be unreliable 2 . Most of the texts were published during the time period between 2017 and 2019. For each type of text, we collected 60 documents. The news were randomly selected but we made sure that a given topic should not be included in the corpus more than one time. The basic data of our corpus are presented in Table 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1218, |
|
"end": 1225, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The corpus", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In this section, we present our methods for identifying the three types of news with machine learning methods as well as providing statistical significance analysis for the linguistic features defined for the task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "As a first step, we automatically preprocessed the texts with magyarlanc (Zsibrita et al., 2013) , a toolkit written in JAVA for the linguistic processing of Hungarian texts. With this tool, the text was first split into sentences, then tokenised and lemmatised, and finally morphologically and syntactically analyzed. Based on the output of magyarlanc, we extracted a high number of linguistic features. Our feature set consists of the following features:", |
|
"cite_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 96, |
|
"text": "(Zsibrita et al., 2013)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature set", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 Basic statistical features: the number of sentences; the number of words; the number and rate of lemmas; the average sentence length.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature set", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 Morphological features: Part-of-speech (or POS) features: the number and frequency of nouns, proper nouns, verbs, adjectives, (demonstrative) pronouns, numerals, adverbs, conjunctions and unanalyzed words (i.e. those with an \"unknown\" POS tag); the number of punctuation marks; the number and frequency of imperative and conditional verbs; the number and frequency of past and present tense verbs; the number and frequency of first person singular verbs and of first person plural verbs; the number and frequency of frequentative, causative and modal verbs; the number and frequency of comparative and superlative adjectives.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature set", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 Syntactic features: the number and frequency of subjects and objects as Hungarian is a pro-drop language, meaning that pronominal subjects and objects might not be overt in the clause; the number and frequency of adverbials; the number and frequency of coordinations and subordinations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature set", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 Semantic features: the number and frequency of negation words; the number and frequency of content words and function words; the number of frequency of public verbs, private verbs and suasive verbs based on a Hungarian translation of lists found in Quirk et al. (1985) . Uncertainty features: the number and frequency of words belonging to several classes of linguistic uncertainty based on Vincze (2014a). Sentiment features: the number and frequency of positive and negative words based on a list of sentiment phrases. We applied two different Hungarian dictionaries for sentiment analysis: one list was a translation of Liu (2012) , while the other one contained Hungarian slang words (Szab\u00f3, 2015) , respectively. Emotion features: the number and frequency of words belonging to the emotions described in Szab\u00f3 et al. (2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 251, |
|
"end": 270, |
|
"text": "Quirk et al. (1985)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 625, |
|
"end": 635, |
|
"text": "Liu (2012)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 690, |
|
"end": 703, |
|
"text": "(Szab\u00f3, 2015)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 811, |
|
"end": 830, |
|
"text": "Szab\u00f3 et al. (2016)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature set", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "For list-based semantic features we used a simple dictionary-based method. Thus, if a lemma in our corpora matched with any item in our lists, it was counted as a hit.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature set", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 Pragmatic features: the number and frequency of speech act verbs, based on a list manually constructed by us; the number and frequency of literal quotes and citations, detected on the basis of quotation marks and dashes at the beginning of the sentences. The number and frequency of discourse markers. To find discourse markers in the texts we applied a word list based on D\u00e9r and Mark\u00f3 (2007) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 375, |
|
"end": 395, |
|
"text": "D\u00e9r and Mark\u00f3 (2007)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature set", |
|
"sec_num": "4.1" |
|
}, |
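The dictionary-based counting of list-based features mentioned above can be illustrated with a short sketch. This is not the authors' code: the lexicon names and entries below are invented placeholders, and the lemmas are assumed to come from the magyarlanc preprocessing step.

```python
# Minimal sketch of the dictionary-based counting of list-based features.
# Assumption: lemmas are produced by the preprocessing step described above;
# the lexicons used here are toy placeholders, not the authors' resources.
from collections import Counter
from typing import Dict, Iterable, Set


def list_based_features(lemmas: Iterable[str],
                        lexicons: Dict[str, Set[str]]) -> Dict[str, float]:
    """Return the number and frequency (rate per token) of hits for each lexicon."""
    lemmas = list(lemmas)
    n_tokens = max(len(lemmas), 1)
    hits = Counter()
    for lemma in lemmas:
        for name, entries in lexicons.items():
            if lemma in entries:  # a lemma matching any list item counts as a hit
                hits[name] += 1
    features = {}
    for name in lexicons:
        features[f"{name}_count"] = hits[name]
        features[f"{name}_rate"] = hits[name] / n_tokens
    return features


if __name__ == "__main__":
    # Toy lexicons and lemmas purely for illustration (hypothetical entries).
    lexicons = {"negation": {"nem", "sem"}, "positive_sentiment": {"remek", "szuper"}}
    print(list_based_features(["ez", "nem", "szuper", "hir"], lexicons))
```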
|
{ |
|
"text": "In order to measure the effect of each feature in distinguishing real news, clickbait and entertaining fake news, we performed statistical significance tests (pairwise t-tests) for all features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Statistical significance analysis", |
|
"sec_num": "4.2" |
|
}, |
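As an illustration of this step, the sketch below runs pairwise t-tests between the three classes for every feature and reports only the significant differences. The feature matrix here is synthetic; in the paper the values come from the feature extraction described above.

```python
# Illustrative sketch of the pairwise significance tests between the three news
# classes for every feature. The data are random stand-ins for the real features.
from itertools import combinations

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# One (n_documents x n_features) matrix per class; the corpus has 60 documents per class.
features = {label: rng.normal(size=(60, 5)) for label in ("real", "clickbait", "fake")}
feature_names = [f"feature_{i}" for i in range(5)]

for cls_a, cls_b in combinations(features, 2):
    for j, name in enumerate(feature_names):
        t_stat, p_value = ttest_ind(features[cls_a][:, j], features[cls_b][:, j])
        if p_value < 0.05:  # only significant differences are reported, as in Table 2
            print(f"{cls_a} vs {cls_b}: {name} (p = {p_value:.4f})")
```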
|
{ |
|
"text": "Here we assume that the distinction between different types of fake news and real news is primarily a semantic-pragmatic problem. As a consequence, we also presuppose that morphological and syntactic features of the texts in the three subcorpora will not necessarily differ from each other.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Statistical significance analysis", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The results of the statistical significance analysis are shown in Table 2 . For the sake of simplicity, we provide p-values for features only where a statistically significant difference was found. For comparison, we also report the mean values for each feature for all classes in Table 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 73, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 281, |
|
"end": 288, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Statistical significance analysis", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In addition to the statistical significance analysis, we seek to automatically discriminate clickbait and fake news from real news. For this purpose, we made use of the above mentioned rich feature set including statistical, morphological, syntactic, semantic and pragmatic characteristics of Hungarian texts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Machine learning experiments", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In our experiments, we used a Random Forest classifier (Breiman, 2001 ) with ten fold cross validation since it does not easily overfit. . In order to examine which features play the most important role in distinguishing the three groups of news, we divided the features into five main groups based on a linguistic classification, and experimented with all possible combinations of these groups, yielding an extensive ablation analysis. The baseline method was majority classification, which achieved an accuracy of 33.33%.", |
|
"cite_spans": [ |
|
{ |
|
"start": 55, |
|
"end": 69, |
|
"text": "(Breiman, 2001", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Machine learning experiments", |
|
"sec_num": "4.3" |
|
}, |
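The following sketch shows one plausible way to reproduce this setup with scikit-learn: a Random Forest evaluated with tenfold cross-validation over every non-empty combination of the five feature groups. The group names mirror the paper, but the matrices are random stand-ins, so the printed numbers are illustrative only, not the paper's results.

```python
# Sketch of the classification setup: Random Forest, tenfold cross-validation,
# and an ablation over all non-empty combinations of the five feature groups.
from itertools import combinations

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
groups = {name: rng.normal(size=(180, 4)) for name in
          ("statistical", "morphological", "syntactic", "semantic", "pragmatic")}
y = np.repeat(["real", "clickbait", "fake"], 60)  # 60 documents per class

best_acc, best_combo = 0.0, None
for k in range(1, len(groups) + 1):
    for combo in combinations(groups, k):
        X = np.hstack([groups[name] for name in combo])
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy").mean()
        if acc > best_acc:
            best_acc, best_combo = acc, combo

print(f"best combination: {best_combo} (accuracy {best_acc:.2%}, "
      f"majority baseline {1 / 3:.2%})")
```

With three balanced classes of 60 documents each, the majority baseline indeed corresponds to the 33.33% accuracy quoted above.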
|
{ |
|
"text": "In this section, we present the results of both our statistical analysis and machine learning experiments. Table 2 shows the features that exhibit significant differences among three groups of news. Basically, our hypothesis is just partly confirmed: the findings do not show the outstanding role of semantic-pragmatic features because at the same time, morphological characteristics of the tokens proved to be also essential.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 114, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Apart from morphological features, the frequency distribution of certain types of semantic contents also seem to be significantly different in the three subcorpora. For instance, there is a significant difference of the frequency of uncertainty markers such as condition, weasel, peacock and hedge between clickbait news and both of the other corpora. While the first type belongs to semantic uncertainty, the latter three are types of uncertainty at the discourse level (Vincze, 2013) . In the case of semantic uncertainty, the lexical content (meaning) of the uncertainty marker (cue) is responsible for uncertainty, e.g. may, possible, believe etc. (Vincze, 2014b) . In contrast to semantic uncertainty (Szarvas et al., 2012) , in the case of discourse level uncertainty, \"the missing or intentionally omitted information is not related to the propositional content of the utterance but to other factors\", e.g. for cues like some, often, much etc. Bias evoked by discourse-level uncertainty might be viewed as a characteristic feature of clickbait news.", |
|
"cite_spans": [ |
|
{ |
|
"start": 471, |
|
"end": 485, |
|
"text": "(Vincze, 2013)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 652, |
|
"end": 667, |
|
"text": "(Vincze, 2014b)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 706, |
|
"end": 728, |
|
"text": "(Szarvas et al., 2012)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results of statistical significance analysis", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "In the best scenario, our machine learning algorithm achieved an accuracy of 82.78%, which was yielded by combining statistical, morphological, semantic and pragmatic features. This proved to be the best combination of features for identifying fake news (with an F-score of 81.4). The combination of morphological and semantic features seemed to be the most effective for identifying real news, obtaining an F-score of 80.3. As for clickbait news, the combination of statistical, syntactic, semantic and pragmatic features yielded the best result (an F-score of 89.1). More detailed results are shown in Table 4 . Table 4 : Results of machine learning experiments. Acc: accuracy, P: precision, R: recall, F: F-measure. stat: statistical, morph: morphological, synt: syntactic, sem: semantic, prag: pragmatic.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 604, |
|
"end": 611, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 614, |
|
"end": 621, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results of machine learning experiments", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "As can be seen, our algorithm is notably more effective than the baseline method: even the least effective combination of features (i.e. syntactic features on their own) obtained an accuracy of 57.78%. The data also show that the system performs best when it comes to the detection of clickbait news: in the best case scenario, the algorithm properly identified 53 texts and only 2 texts were misclassified as fake news and 5 as real news. The performance was a bit weaker in the case of fake and real news. Overall, the method can be considered effective as it classified only 14 unreal news as real news from the 120 clickbait and fake news.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results of machine learning experiments", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We also examined the efficiency of each feature set in our ablation experiments. As shown in Table 4 , morphological features proved to be most effective, but semantic features also contributed to the success of the automatic detection. The results of the significance tests indicated that syntactic and pragmatic features had a somewhat weaker role in distinguishing the classes, which fact was also confirmed by our machine learning experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 100, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results of machine learning experiments", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Our results showed that our machine learning experiments achieved an accuracy of 82.78% in the classification task of real, clickbait and entertaining fake news. In this case we used all the feature groups with the exception of syntactic features. Here we discuss our results, with special regard to morphological and semantic features. Moreover, we also provide an error analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion of the results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Examining the role of each feature set, our hypothesis is just partly confirmed: the findings do not show the outstanding role of semantic-pragmatic features in our machine learning experiments. Rather, it was morphology that proved to be the most effective.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The role of morphology", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "In addition to the significant differences of frequency distributions of specific morphological features, morphological characteristics of the subcorpora examined played a decisive role in our machine learning experiments. For instance, when we applied morphological features of the subcorpora exclusively, we achieved an F-score of 80.6 that can be considered notably high compared to the best performance of our algorithm (82.8). At the same time, without these features the best performance we achieved was 75.3% (using statistical, semantic and pragmatic features).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The role of morphology", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "In order to further examine the role of morphology, we divided the features into part-of-speech and deeper morphological features and reran the experiments with only parts-of-speech features and deep morphological features, respectively. This analysis showed that as long as the algorithm achieved 78% using part-of-speech features exclusively, the performance was only 58% when we used the deep morphological features, highlighting the outstanding role of POS tags in the task. Therefore, by using only POS-level information (i.e. keeping the feature set very simple), we can achieve an encouraging result for identifying Hungarian fabricated news.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The role of morphology", |
|
"sec_num": "6.1" |
|
}, |
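A rough sketch of this follow-up experiment is shown below: feature columns are selected by a hypothetical naming prefix ("pos_" / "morph_") and each subset is evaluated on its own. The column names and data are illustrative only, not the authors' actual feature names.

```python
# Sketch: evaluate only POS-level features vs. only deeper morphological features,
# selected by a hypothetical column-name prefix. Data and names are stand-ins.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
columns = ["pos_noun_rate", "pos_verb_rate", "morph_imperative_rate", "morph_past_rate"]
X = pd.DataFrame(rng.normal(size=(180, len(columns))), columns=columns)
y = np.repeat(["real", "clickbait", "fake"], 60)

for prefix in ("pos_", "morph_"):
    subset = X.loc[:, X.columns.str.startswith(prefix)]
    acc = cross_val_score(RandomForestClassifier(random_state=0), subset, y, cv=10).mean()
    print(f"{prefix.rstrip('_')} features only: accuracy {acc:.2%}")
```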
|
{ |
|
"text": "To investigate this further, we analyzed the frequency distribution of parts-of-speech in a meticulous way. There is a significant difference of the frequency distribution of verbs and adverbs among all the subcorpora examined: there are significantly more verbs and adverbs in clickbait news. What is more, there is a significant difference of the noun, adjective and pronoun rates between clickbait and entertaining fake news, as well as clickbait and real news. More specifically, contributors of clickbait news use less nouns and adjectives and more pronouns than contributors of real news and entertaining fake news. From these results we can conclude that the sentence structure of the clickbait news is notably different from that of the real news: by using more verbal elements, clickbait news seem to highlight the events and happenings and pay less attention to the participants of the events (i.e. nominal elements). In other words, clickbait news emphasize what happened and the details behind the act (e.g. actors, objects etc.) are less noteworthy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The role of morphology", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "As for other morphological features of the subcorpora, authors of clickbait news use more imperative and less conditional verb forms than the other two text types. While the former feature may be considered as a sign of the need for action, the latter might be viewed as a sign of uncertainty. The results also showed that there is a higher frequency of past tense in real news than present tense compared to clickbait and entertaining fake news. In other words, real and fake news appear to concentrate on past events, i.e. they have a descriptive function, whereas clickbait news focus on the \"here and now\", representing a more \"active\" and \"powerful\" discourse, this enticing readers to click on them. It is also worth mentioning that there is a higher occurrence of first person plural verb forms in clickbait and in entertaining fake news than in real news. This feature may be considered as a linguistic strategy for manipulation since in the unreal news, it may evoke a shared feeling of common ground, attempting to deceive the reader (B\u00e1rth\u00e1zi, 2008) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 1044, |
|
"end": 1060, |
|
"text": "(B\u00e1rth\u00e1zi, 2008)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The role of morphology", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "As for frequency distribution of certain types of semantic contents there is a significantly higher frequency of uncertainty markers such as condition, weasel, peacock and hedge in clickbait news compared to the fake and real news. At the same time, the frequency difference between fake and real news is not significant in this case. The latter feature shows that fake news tend to present information as factual statements, thereby increasing the apparent authenticity and credibility of the content.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The role of semantics", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "With regard to the sentiment and emotional content of the texts, there is a significantly higher frequency of positive sentiment rate in clickbait news. What is more, there is a significantly higher frequency of \"love\" and \"joy\" in clickbait news compared to the other two text types. Data also show that these emotions are more frequent in fake news than in real news as well. Based on these results we may conclude that positive attitude characterizes unreliable news more than real news. These results correlate with a previous research finding about the linguistic features of Hungarian communist propaganda texts (Vincze et al., 2019) , where it was also proved that propaganda texts bound in positive emotions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 618, |
|
"end": 639, |
|
"text": "(Vincze et al., 2019)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The role of semantics", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "However, we also should mention that there are more words representing anxiety and fear in clickbait news than real and fake news as well. This might reflect the emphatic role of emotions in clickbait news, which is probably related to the general purpose of these texts, i.e. the more emotional a text is, the more probably it will generate clicks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The role of semantics", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Below we present some examples of news that are misclassified by the system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error analysis", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "Real news classified as fake news:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error analysis", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "\u2022 Golden State Warriors are reluctant to celebrate their latest title at the White House 3", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error analysis", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "\u2022 180-year old postcard making company wound up 4", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error analysis", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "Fake news classified as real news:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error analysis", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "\u2022 Formula-1 in Szeged, Hungary 5", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error analysis", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "\u2022 Fake Grabovoi numbers appeared 6", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error analysis", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "Clickbait news classified as real news:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error analysis", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "\u2022 Scary: Nostradamus's prophecies for 2019 7", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error analysis", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "\u2022 10 shocking photos without an explanation 8", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error analysis", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "Analyzing the incorrectly classified documents, it was revealed that short real news were often misclassified as real news tend to be longer than fake news. On the other hand, clickbait news that contained a lower rate of imperatives and/or a higher rate of conditionals were more prone to be classified as real news. Finally, fake news with lots of negative words were often misclassified as real news.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error analysis", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "In this work, we reported on the automatic discrimination between Hungarian real news, clickbait news and fake news written for entertaining purposes. Our results confirm that it is possible to successfully detect untrustful news based on a rich -morphological, syntactic, semantic and pragmatic -feature set, and especially morphological features play an important role in the process. Besides morphological features, the added value of semantic features was also apparent, while at the same time, syntactic features did not have a notable effect on our results. Our experiments show the potential for exploiting the morphological, more specifically part-of-speech features for fake news detection.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and future work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "As a next step of the research, on the basis of the findings of the recent analysis, we would like to apply our methods and tools on texts belonging to other domains. We will attempt to further train our algorithm to discriminate texts containing propagandistic features, bias, misinformation or disinformation. Moreover, we would like to compare these results to previous findings concerning the linguistic features of Hungarian Communist propaganda texts (Vincze et al., 2019) . Finally, we would like to compare our results obtained on Hungarian texts with those on English corpora (Barr\u00f3n-Cede\u00f1o et al., 2019) and possibly other languages as well.", |
|
"cite_spans": [ |
|
{ |
|
"start": 457, |
|
"end": 478, |
|
"text": "(Vincze et al., 2019)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and future work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "https://www.hircsarda.hu", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://hvg.hu/tudomany/20150119_atveros_weboldalak", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://index.hu/sport/kosarlabda/2017/09/24/donald_trump_sertodes_golden_state_ warriors_steph_curry_lebron_james_kobe_bryant/ 4 http://www.origo.hu/gazdasag/20170927-180-eves-kepeslap-keszito-csaladivallalkozas-szunik-meg.html 5 https://szegedpanorama.blogspot.hu/2013/04/forma-1-es-verseny-szegeden.html 6 http://hircsarda.hu/2015/02/19/mar_hamisitjak_a_grabovoj-szamokat/ 7 http://eztnezdmeg.com/hatborzongato-nostradamus-szerint-ez-var-rank-2019-ben/ 8 http://www.hirvarazs.info/altalanos/tiz-dobbenetes-foto-amire-sose-talaltakmagyarazatot/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was supported by grantTUDFO/47138-1/2019-ITM of the Ministry for Innovation and Technology, Hungary and by the Hungarian Artificial Intelligence National Laboratory.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Proppy: Organizing the news based on their propagandistic content", |
|
"authors": [ |
|
{ |
|
"first": "Alberto", |
|
"middle": [], |
|
"last": "Barr\u00f3n-Cede\u00f1o", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Israa", |
|
"middle": [], |
|
"last": "Jaradat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Giovanni", |
|
"middle": [], |
|
"last": "Da San", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Preslav", |
|
"middle": [], |
|
"last": "Martino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Nakov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Information Processing Management", |
|
"volume": "56", |
|
"issue": "5", |
|
"pages": "1849--1864", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alberto Barr\u00f3n-Cede\u00f1o, Israa Jaradat, Giovanni Da San Martino, and Preslav Nakov. 2019. Proppy: Organizing the news based on their propagandistic content. Information Processing Management, 56(5):1849-1864.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Manipul\u00e1ci\u00f3, valamint manipul\u00e1ci\u00f3ra alkalmas nyelvhaszn\u00e1lati eszk\u00f6z\u00f6k a sajt\u00f3rekl\u00e1mokban", |
|
"authors": [ |
|
{ |
|
"first": "Eszter", |
|
"middle": [], |
|
"last": "B\u00e1rth\u00e1zi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Magyar Nyelv", |
|
"volume": "104", |
|
"issue": "4", |
|
"pages": "443--463", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eszter B\u00e1rth\u00e1zi. 2008. Manipul\u00e1ci\u00f3, valamint manipul\u00e1ci\u00f3ra alkalmas nyelvhaszn\u00e1lati eszk\u00f6z\u00f6k a sajt\u00f3rekl\u00e1mokban. Magyar Nyelv, 104(4):443-463.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "8 amazing secrets for getting more clicks\": Detecting clickbaits in news streams using article informality", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Biyani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Tsioutsiouliklis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Blackmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "AAAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. Biyani, K. Tsioutsiouliklis, and John Blackmer. 2016. \"8 amazing secrets for getting more clicks\": Detecting clickbaits in news streams using article informality. In AAAI.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Random forests", |
|
"authors": [ |
|
{ |
|
"first": "Leo", |
|
"middle": [], |
|
"last": "Breiman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Machine Learning", |
|
"volume": "45", |
|
"issue": "", |
|
"pages": "5--32", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leo Breiman. 2001. Random forests. Machine Learning, 45(1):5-32, October.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Hedge detection as a lens on framing in the gmo debates: A position paper", |
|
"authors": [ |
|
{ |
|
"first": "Eunsol", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chenhao", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lillian", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cristian", |
|
"middle": [], |
|
"last": "Danescu-Niculescu-Mizil", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennifer", |
|
"middle": [], |
|
"last": "Spindel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "70--79", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eunsol Choi, Chenhao Tan, Lillian Lee, Cristian Danescu-Niculescu-Mizil, and Jennifer Spindel. 2012. Hedge detection as a lens on framing in the gmo debates: A position paper. In Proceedings of the Workshop on Extra- Propositional Aspects of Meaning in Computational Linguistics, pages 70-79. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A magyar diskurzusjel\u00f6l\u0151k szupraszegment\u00e1lis jel\u00f6lts\u00e9ge", |
|
"authors": [ |
|
{ |
|
"first": "Ilona", |
|
"middle": [], |
|
"last": "Csilla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "D\u00e9r", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mark\u00f3", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Nyelvelm\u00e9let-nyelvhaszn\u00e1lat", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "61--67", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Csilla Ilona D\u00e9r and Alexandra Mark\u00f3. 2007. A magyar diskurzusjel\u00f6l\u0151k szupraszegment\u00e1lis jel\u00f6lts\u00e9ge. In Nyelvelm\u00e9let-nyelvhaszn\u00e1lat, pages 61-67. Tinta, Sz\u00e9kesfeh\u00e9rv\u00e1r-Budapest.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Emergent: a novel data-set for stance classification", |
|
"authors": [ |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Ferreira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Vlachos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1163--1168", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William Ferreira and Andreas Vlachos. 2016. Emergent: a novel data-set for stance classification. In Proceed- ings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1163-1168, San Diego, California, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "And I Think That Is a Very Straightforward Way of Dealing With It\"-The Communicative Function of Cognitive Verbs in Political Discourse", |
|
"authors": [ |
|
{ |
|
"first": "Anita", |
|
"middle": [], |
|
"last": "Fetzer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Journal of Language and Social Psychology", |
|
"volume": "27", |
|
"issue": "", |
|
"pages": "384--396", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anita Fetzer. 2008. \"And I Think That Is a Very Straightforward Way of Dealing With It\"-The Communicative Function of Cognitive Verbs in Political Discourse. Journal of Language and Social Psychology, 27:384-396, 12.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Chapter 11. hedging in political discourse", |
|
"authors": [ |
|
{ |
|
"first": "Bruce", |
|
"middle": [], |
|
"last": "Fraser", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Perspectives in Politics and Discourse", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bruce Fraser. 2010. Chapter 11. hedging in political discourse. In Perspectives in Politics and Discourse, pages 201-214. 01.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Psycholinguistic features for deceptive role detection in werewolf", |
|
"authors": [ |
|
{ |
|
"first": "Codruta", |
|
"middle": [], |
|
"last": "Girlea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roxana", |
|
"middle": [], |
|
"last": "Girju", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eyal", |
|
"middle": [], |
|
"last": "Amir", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "417--422", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Codruta Girlea, Roxana Girju, and Eyal Amir. 2016. Psycholinguistic features for deceptive role detection in werewolf. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technologies, pages 417-422.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Understanding the promise and limits of automated fact-checking", |
|
"authors": [ |
|
{ |
|
"first": "Lucas", |
|
"middle": [], |
|
"last": "Graves", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lucas Graves. 2018. Understanding the promise and limits of automated fact-checking. Technical report.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "More than words: Syntactic packaging and implicit sentiment", |
|
"authors": [ |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Greene", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "503--511", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephan Greene and Philip Resnik. 2009. More than words: Syntactic packaging and implicit sentiment. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 503-511, Boulder, Colorado, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "We built a fake news / click bait filter: What happened next will blow your mind! ArXiv", |
|
"authors": [ |
|
{ |
|
"first": "Georgi", |
|
"middle": [], |
|
"last": "Karadzhov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pepa", |
|
"middle": [], |
|
"last": "Gencheva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Preslav", |
|
"middle": [], |
|
"last": "Nakov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Koychev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Georgi Karadzhov, Pepa Gencheva, Preslav Nakov, and Ivan Koychev. 2017. We built a fake news / click bait filter: What happened next will blow your mind! ArXiv, abs/1803.03786.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "P\u00e1rt\u00e9let: A hungarian corpus of propaganda texts from the hungarian socialist era", |
|
"authors": [ |
|
{ |
|
"first": "Zolt\u00e1n", |
|
"middle": [], |
|
"last": "Kmetty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veronika", |
|
"middle": [], |
|
"last": "Vincze", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dorottya", |
|
"middle": [], |
|
"last": "Demszky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orsolya", |
|
"middle": [], |
|
"last": "Ring", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bal\u00e1zs", |
|
"middle": [], |
|
"last": "Nagy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martina", |
|
"middle": [ |
|
"Katalin" |
|
], |
|
"last": "Szab\u00f3", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2381--2388", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zolt\u00e1n Kmetty, Veronika Vincze, Dorottya Demszky, Orsolya Ring, Bal\u00e1zs Nagy, and Martina Katalin Szab\u00f3. 2020. P\u00e1rt\u00e9let: A hungarian corpus of propaganda texts from the hungarian socialist era. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 2381-2388.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Sentiment Analysis and Opinion Mining", |
|
"authors": [ |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bing Liu. 2012. Sentiment Analysis and Opinion Mining. Morgan & Claypool Publishers.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "The lie detector: Explorations in the automatic recognition of deceptive language", |
|
"authors": [ |
|
{ |
|
"first": "Rada", |
|
"middle": [], |
|
"last": "Mihalcea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlo", |
|
"middle": [], |
|
"last": "Strapparava", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the ACL-IJCNLP 2009 Conference Short Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "309--312", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rada Mihalcea and Carlo Strapparava. 2009. The lie detector: Explorations in the automatic recognition of decep- tive language. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 309-312. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Automatic detection of fake news", |
|
"authors": [ |
|
{ |
|
"first": "Ver\u00f3nica", |
|
"middle": [], |
|
"last": "P\u00e9rez-Rosas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bennett", |
|
"middle": [], |
|
"last": "Kleinberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Lefevre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rada", |
|
"middle": [], |
|
"last": "Mihalcea", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3391--3401", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ver\u00f3nica P\u00e9rez-Rosas, Bennett Kleinberg, Alexandra Lefevre, and Rada Mihalcea. 2018. Automatic detection of fake news. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3391- 3401, Santa Fe, New Mexico, USA, August. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Institute for Propaganda Analysis. 1938. How to Detect Propaganda", |
|
"authors": [], |
|
"year": null, |
|
"venue": "Propaganda Analysis. Publications of the Institute for Propaganda Analysis", |
|
"volume": "I", |
|
"issue": "", |
|
"pages": "210--218", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Institute for Propaganda Analysis. 1938. How to Detect Propaganda. In Propaganda Analysis. Publications of the Institute for Propaganda Analysis, volume I, pages 210-218.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "A Comprehensive Grammar of the English Language", |
|
"authors": [ |
|
{ |
|
"first": "Randolph", |
|
"middle": [], |
|
"last": "Quirk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sidney", |
|
"middle": [], |
|
"last": "Greenbaum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1985, |
|
"venue": "Geoffrey Leech", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Randolph Quirk, Sidney Greenbaum, Geoffrey Leech, and Jan Svartvik. 1985. A Comprehensive Grammar of the English Language. Longman, London.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Truth of varying shades: Analyzing language in fake news and political fact-checking", |
|
"authors": [ |
|
{ |
|
"first": "Eunsol", |
|
"middle": [], |
|
"last": "Hannah Rashkin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jin", |
|
"middle": [ |
|
"Yea" |
|
], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Svitlana", |
|
"middle": [], |
|
"last": "Jang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Volkova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2931--2937", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hannah Rashkin, Eunsol Choi, Jin Yea Jang, Svitlana Volkova, and Yejin Choi. 2017. Truth of varying shades: An- alyzing language in fake news and political fact-checking. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2931-2937, Copenhagen, Denmark, September. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Deception detection for news: Three types of fakes", |
|
"authors": [ |
|
{ |
|
"first": "Victoria", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Rubin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yimin", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niall", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Conroy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 78th ASIS&T Annual Meeting: Information Science with Impact: Research in and for the Community, ASIST '15", |
|
"volume": "83", |
|
"issue": "", |
|
"pages": "1--83", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Victoria L. Rubin, Yimin Chen, and Niall J. Conroy. 2015. Deception detection for news: Three types of fakes. In Proceedings of the 78th ASIS&T Annual Meeting: Information Science with Impact: Research in and for the Community, ASIST '15, pages 83:1-83:4, Silver Springs, MD, USA. American Society for Information Science.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Fake news or truth? using satirical cues to detect potentially misleading news", |
|
"authors": [ |
|
{ |
|
"first": "Victoria", |
|
"middle": [], |
|
"last": "Rubin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niall", |
|
"middle": [], |
|
"last": "Conroy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yimin", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sarah", |
|
"middle": [], |
|
"last": "Cornwell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Second Workshop on Computational Approaches to Deception Detection", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7--17", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Victoria Rubin, Niall Conroy, Yimin Chen, and Sarah Cornwell. 2016. Fake news or truth? using satirical cues to detect potentially misleading news. In Proceedings of the Second Workshop on Computational Approaches to Deception Detection, pages 7-17, San Diego, California, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Measuring the impact of readability features in fake news detection", |
|
"authors": [ |
|
{ |
|
"first": "Roney", |
|
"middle": [], |
|
"last": "Santos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gabriela", |
|
"middle": [], |
|
"last": "Pedro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sidney", |
|
"middle": [], |
|
"last": "Leal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oto", |
|
"middle": [], |
|
"last": "Vale", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thiago", |
|
"middle": [], |
|
"last": "Pardo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kalina", |
|
"middle": [], |
|
"last": "Bontcheva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carolina", |
|
"middle": [], |
|
"last": "Scarton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1404--1413", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roney Santos, Gabriela Pedro, Sidney Leal, Oto Vale, Thiago Pardo, Kalina Bontcheva, and Carolina Scarton. 2020. Measuring the impact of readability features in fake news detection. In Proceedings of the 12th Lan- guage Resources and Evaluation Conference, pages 1404-1413, Marseille, France, May. European Language Resources Association.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Metaphors in election night television coverage in Britain, the United States and Germany", |
|
"authors": [], |
|
"year": 2007, |
|
"venue": "Political Discourse in the Media: Cross-cultural perspectives", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rut Scheithauer. 2007. Metaphors in election night television coverage in Britain, the United States and Germany. In Political Discourse in the Media: Cross-cultural perspectives, pages 75-106. 01.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Presupposition and 'taking-for-granted' in mass communicated political argument An illustration from British, Flemish and Swedish political colloquy", |
|
"authors": [ |
|
{ |
|
"first": "Anne-Marie", |
|
"middle": [], |
|
"last": "Simon-Vandenbergen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "White", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karin", |
|
"middle": [], |
|
"last": "Aijmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Political Discourse in the Media", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anne-Marie Simon-Vandenbergen, Peter White, and Karin Aijmer. 2007. Presupposition and 'taking-for-granted' in mass communicated political argument An illustration from British, Flemish and Swedish political colloquy. In Political Discourse in the Media, pages 31-74. 01.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Magyar nyelv\u0171 sz\u00f6vegek em\u00f3ci\u00f3elemz\u00e9s\u00e9nek elm\u00e9leti nyelv\u00e9szeti\u00e9s nyelvtechnol\u00f3giai probl\u00e9m\u00e1i", |
|
"authors": [ |
|
{ |
|
"first": "Martina", |
|
"middle": [ |
|
"Katalin" |
|
], |
|
"last": "Szab\u00f3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veronika", |
|
"middle": [], |
|
"last": "Vincze", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gergely", |
|
"middle": [], |
|
"last": "Morvay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martina Katalin Szab\u00f3, Veronika Vincze, and Gergely Morvay. 2016. Magyar nyelv\u0171 sz\u00f6vegek em\u00f3ci\u00f3elemz\u00e9s\u00e9nek elm\u00e9leti nyelv\u00e9szeti\u00e9s nyelvtechnol\u00f3giai probl\u00e9m\u00e1i. In T\u00e1vlatok a mai magyar alkalmazott nyelv\u00e9szetben. Tinta, Budapest.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Exploring the dynamic changes of key concepts of the hungarian socialist era with natural language processing methods", |
|
"authors": [ |
|
{ |
|
"first": "Martina", |
|
"middle": [ |
|
"Katalin" |
|
], |
|
"last": "Szab\u00f3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orsolya", |
|
"middle": [], |
|
"last": "Ring", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bal\u00e1zs", |
|
"middle": [], |
|
"last": "Nagy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L\u00e1szl\u00f3", |
|
"middle": [], |
|
"last": "Kiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00falia", |
|
"middle": [], |
|
"last": "Koltai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G\u00e1bor", |
|
"middle": [], |
|
"last": "Berend", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L\u00e1szl\u00f3", |
|
"middle": [], |
|
"last": "Vid\u00e1cs", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Attila", |
|
"middle": [], |
|
"last": "Guly\u00e1s", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zolt\u00e1n", |
|
"middle": [], |
|
"last": "Kmetty", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Historical Methods: A Journal of Quantitative and Interdisciplinary History", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--13", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martina Katalin Szab\u00f3, Orsolya Ring, Bal\u00e1zs Nagy, L\u00e1szl\u00f3 Kiss, J\u00falia Koltai, G\u00e1bor Berend, L\u00e1szl\u00f3 Vid\u00e1cs, Attila Guly\u00e1s, and Zolt\u00e1n Kmetty. 2020. Exploring the dynamic changes of key concepts of the hungarian socialist era with natural language processing methods. Historical Methods: A Journal of Quantitative and Interdisciplinary History, pages 1-13.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Egy magyar nyelv\u0171 szentimentlexikon l\u00e9trehoz\u00e1s\u00e1nak tapasztalatai\u00e9s dilemm\u00e1i", |
|
"authors": [ |
|
{ |
|
"first": "Martina", |
|
"middle": [ |
|
"Katalin" |
|
], |
|
"last": "Szab\u00f3", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Seg\u00e9dk\u00f6nyvek a nyelv\u00e9szet tanulm\u00e1nyoz\u00e1s\u00e1hoz 177", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "278--285", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martina Katalin Szab\u00f3. 2015. Egy magyar nyelv\u0171 szentimentlexikon l\u00e9trehoz\u00e1s\u00e1nak tapasztalatai\u00e9s dilemm\u00e1i. In Seg\u00e9dk\u00f6nyvek a nyelv\u00e9szet tanulm\u00e1nyoz\u00e1s\u00e1hoz 177, pages 278-285. Tinta, Budapest.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Cross-Genre and Cross-Domain Detection of Semantic Uncertainty", |
|
"authors": [ |
|
{ |
|
"first": "Gy\u00f6rgy", |
|
"middle": [], |
|
"last": "Szarvas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veronika", |
|
"middle": [], |
|
"last": "Vincze", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rich\u00e1rd", |
|
"middle": [], |
|
"last": "Farkas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gy\u00f6rgy", |
|
"middle": [], |
|
"last": "M\u00f3ra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Computational Linguistics", |
|
"volume": "38", |
|
"issue": "", |
|
"pages": "335--367", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gy\u00f6rgy Szarvas, Veronika Vincze, Rich\u00e1rd Farkas, Gy\u00f6rgy M\u00f3ra, and Iryna Gurevych. 2012. Cross-Genre and Cross-Domain Detection of Semantic Uncertainty. Computational Linguistics, 38:335-367, June.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Automated fact checking: Task formulations, methods and future directions", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Thorne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Vlachos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3346--3359", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Thorne and Andreas Vlachos. 2018. Automated fact checking: Task formulations, methods and future directions. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3346- 3359, Santa Fe, New Mexico, USA, August. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Christos Christodoulopoulos, and Arpit Mittal", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Thorne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Vlachos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Fever: a large-scale dataset for fact extraction and verification", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1803.05355" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. Fever: a large-scale dataset for fact extraction and verification. arXiv preprint arXiv:1803.05355.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Automatic analysis of linguistic features in Communist propaganda texts", |
|
"authors": [ |
|
{ |
|
"first": "Veronika", |
|
"middle": [], |
|
"last": "Vincze", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martina", |
|
"middle": [ |
|
"Katalin" |
|
], |
|
"last": "Szab\u00f3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orsolya", |
|
"middle": [], |
|
"last": "Ring", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Veronika Vincze, Martina Katalin Szab\u00f3, and Orsolya Ring. 2019. Automatic analysis of linguistic fea- tures in Communist propaganda texts. https://propaganda.qcri.org/bias-misinformation- workshop-socinfo19/paper4_final_Vincze_et_al_propaganda.pdf.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Weasels, hedges and peacocks: Discourse-level uncertainty in wikipedia articles", |
|
"authors": [ |
|
{ |
|
"first": "Veronika", |
|
"middle": [], |
|
"last": "Vincze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "383--391", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Veronika Vincze. 2013. Weasels, hedges and peacocks: Discourse-level uncertainty in wikipedia articles. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 383-391.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Uncertainty detection in Hungarian texts", |
|
"authors": [ |
|
{ |
|
"first": "Veronika", |
|
"middle": [], |
|
"last": "Vincze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of Coling", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Veronika Vincze. 2014a. Uncertainty detection in Hungarian texts. In Proceedings of Coling 2014.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Uncertainty Detection in Natural Language Texts", |
|
"authors": [ |
|
{ |
|
"first": "Veronika", |
|
"middle": [], |
|
"last": "Vincze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Veronika Vincze. 2014b. Uncertainty Detection in Natural Language Texts. Ph.D. thesis, University of Szeged, Szeged, Hungary.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Fact checking: Task definition and dataset construction", |
|
"authors": [ |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Vlachos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "18--22", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andreas Vlachos and Sebastian Riedel. 2014. Fact checking: Task definition and dataset construction. In Proceed- ings of the ACL 2014 Workshop on Language Technologies and Computational Social Science, pages 18-22, Baltimore, MD, USA, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Liar, Liar Pants on Fire\": A New Benchmark Dataset for Fake News Detection", |
|
"authors": [ |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"Yang" |
|
], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "422--426", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William Yang Wang. 2017. \"Liar, Liar Pants on Fire\": A New Benchmark Dataset for Fake News Detection. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 422-426, Vancouver, Canada, July. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Automating linguistics-based cues for detecting deception in text-based asynchronous computer-mediated communications. Group decision and negotiation", |
|
"authors": [ |
|
{ |
|
"first": "Lina", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Judee", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Burgoon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jay", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Nunamaker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Doug", |
|
"middle": [], |
|
"last": "Twitchell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "13", |
|
"issue": "", |
|
"pages": "81--106", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lina Zhou, Judee K Burgoon, Jay F Nunamaker, and Doug Twitchell. 2004. Automating linguistics-based cues for detecting deception in text-based asynchronous computer-mediated communications. Group decision and negotiation, 13(1):81-106.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "magyarlanc: A toolkit for morphological and dependency parsing of Hungarian", |
|
"authors": [ |
|
{ |
|
"first": "J\u00e1nos", |
|
"middle": [], |
|
"last": "Zsibrita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veronika", |
|
"middle": [], |
|
"last": "Vincze", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rich\u00e1rd", |
|
"middle": [], |
|
"last": "Farkas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of RANLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "763--771", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J\u00e1nos Zsibrita, Veronika Vincze, and Rich\u00e1rd Farkas. 2013. magyarlanc: A toolkit for morphological and depen- dency parsing of Hungarian. In Proceedings of RANLP, pages 763-771.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF2": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "Statistically significant features. #: number, %: rate.", |
|
"content": "<table><tr><td>Feature</td><td>click</td><td>fake</td><td>real</td><td>Feature</td><td>click</td><td>fake</td><td>real</td></tr><tr><td>token #</td><td colspan=\"4\">232.0833 269.8833 370.0500 uncertain #</td><td>2.8000</td><td>2.2167</td><td>2.8667</td></tr><tr><td>sentence #</td><td>20.6833</td><td>21.8333</td><td>27.8833</td><td>uncertain %</td><td>0.0119</td><td>0.0081</td><td>0.0073</td></tr><tr><td>lemma #</td><td colspan=\"4\">143.0167 188.0333 211.9667 negation #</td><td>4.4333</td><td>3.6167</td><td>4.3500</td></tr><tr><td>lemma %</td><td>0.6441</td><td>0.7334</td><td>0.6595</td><td>negation %</td><td>0.0199</td><td>0.0125</td><td>0.0104</td></tr><tr><td>token %</td><td>13.9728</td><td>14.8909</td><td>15.8212</td><td>epistemic #</td><td>0.7500</td><td>0.6000</td><td>0.8333</td></tr><tr><td>sentence length</td><td>11.6967</td><td>12.5064</td><td>13.3731</td><td>investigation #</td><td>0.0500</td><td>0.2000</td><td>0.4167</td></tr><tr><td>unknown #</td><td>0.0500</td><td>1.3000</td><td>0.2333</td><td>condition #</td><td>2.2333</td><td>1.5167</td><td>1.9833</td></tr><tr><td>unknown %</td><td>0.0003</td><td>0.0052</td><td>0.0005</td><td>weasel #</td><td>7.2667</td><td>7.0000</td><td>7.4667</td></tr><tr><td>verb #</td><td>38.2333</td><td>34.9500</td><td>44.9167</td><td>peacock #</td><td>1.8000</td><td>1.2000</td><td>1.6500</td></tr><tr><td>noun #</td><td>58.0500</td><td>75.7167</td><td colspan=\"2\">107.4667 hedge #</td><td>3.6333</td><td>3.2667</td><td>3.8167</td></tr><tr><td>adjective #</td><td>24.6000</td><td>35.3333</td><td>52.7167</td><td>doxastic #</td><td>2.8000</td><td>2.1333</td><td>3.2667</td></tr><tr><td>pronoun #</td><td>17.5667</td><td>12.7667</td><td>18.0833</td><td>epistemic %</td><td>0.0035</td><td>0.0019</td><td>0.0019</td></tr><tr><td>conjunction #</td><td>6.9333</td><td>6.2333</td><td>8.3500</td><td>investigation %</td><td>0.0002</td><td>0.0007</td><td>0.0016</td></tr><tr><td>numeral #</td><td>6.4167</td><td>7.4000</td><td>15.0833</td><td>condition %</td><td>0.0096</td><td>0.0059</td><td>0.0044</td></tr><tr><td>adverb #</td><td>25.6000</td><td>24.7667</td><td>29.5667</td><td>weasel %</td><td>0.0311</td><td>0.0246</td><td>0.0236</td></tr><tr><td>punct #</td><td>46.1500</td><td>53.1500</td><td>69.8667</td><td>peacock %</td><td>0.0077</td><td>0.0045</td><td>0.0044</td></tr><tr><td>proper noun#</td><td>4.8667</td><td>17.8167</td><td>20.9500</td><td>hedge %</td><td>0.0157</td><td>0.0112</td><td>0.0091</td></tr><tr><td>verb %</td><td>0.1682</td><td>0.1299</td><td>0.1195</td><td>doxastic %</td><td>0.0117</td><td>0.0086</td><td>0.0094</td></tr><tr><td>noun %</td><td>0.2451</td><td>0.2784</td><td>0.2879</td><td>joy #</td><td>4.1000</td><td>2.2833</td><td>2.1333</td></tr><tr><td>adjective %</td><td>0.1034</td><td>0.1269</td><td>0.1347</td><td>fear #</td><td>0.7000</td><td>0.4667</td><td>0.3833</td></tr><tr><td>pronoun %</td><td>0.0748</td><td>0.0454</td><td>0.0465</td><td>anger #</td><td>0.4000</td><td>0.2333</td><td>0.5333</td></tr><tr><td>conjunction %</td><td>0.0304</td><td>0.0232</td><td>0.0216</td><td>sorrow #</td><td>1.1000</td><td>0.3167</td><td>3.0500</td></tr><tr><td>numeral %</td><td>0.0269</td><td>0.0298</td><td>0.0467</td><td>love #</td><td>0.8500</td><td>0.3833</td><td>0.4000</td></tr><tr><td>adverb %</td><td>0.1165</td><td>0.0888</td><td>0.0740</td><td>anxiety #</td><td>0.5500</td><td>0.3333</td><td>0.3500</td></tr><tr><td>proper noun %</td><td>0.0204</td><td>0.0706</td><td>0.0661</td><td>disgust 
#</td><td>0.1833</td><td>0.1833</td><td>0.2167</td></tr><tr><td>superlative #</td><td>0.8333</td><td>0.5500</td><td>0.6667</td><td>surprise #</td><td>0.3833</td><td>0.2500</td><td>0.2167</td></tr><tr><td>comparative #</td><td>0.7167</td><td>0.9167</td><td>1.3167</td><td>joy %</td><td>0.0159</td><td>0.0076</td><td>0.0041</td></tr><tr><td>superlative %</td><td>0.0316</td><td>0.0138</td><td>0.0167</td><td>fear %</td><td>0.0034</td><td>0.0017</td><td>0.0010</td></tr><tr><td>comparative %</td><td>0.0328</td><td>0.0271</td><td>0.0313</td><td>anger %</td><td>0.0015</td><td>0.0007</td><td>0.0017</td></tr><tr><td>Sg1 verb #</td><td>1.4833</td><td>1.1500</td><td>1.2167</td><td>sorrow %</td><td>0.0049</td><td>0.0014</td><td>0.0050</td></tr><tr><td>Pl1 verb #</td><td>2.6500</td><td>2.5000</td><td>2.8333</td><td>love %</td><td>0.0039</td><td>0.0014</td><td>0.0008</td></tr><tr><td>past #</td><td>13.8333</td><td>11.9167</td><td>19.5500</td><td>anxiety %</td><td>0.0022</td><td>0.0009</td><td>0.0007</td></tr><tr><td>present #</td><td>20.1167</td><td>19.6333</td><td>21.5167</td><td>disgust %</td><td>0.0007</td><td>0.0005</td><td>0.0006</td></tr><tr><td>past %</td><td>0.3758</td><td>0.3546</td><td>0.4827</td><td>surprise %</td><td>0.0019</td><td>0.0009</td><td>0.0003</td></tr><tr><td>present %</td><td>0.5167</td><td>0.5471</td><td>0.4364</td><td>positive #</td><td>11.9167</td><td>8.5000</td><td>11.7667</td></tr><tr><td>imperative #</td><td>3.5500</td><td>1.5333</td><td>2.0667</td><td>negative #</td><td>8.4167</td><td>6.9000</td><td>12.4500</td></tr><tr><td>cond. verb #</td><td>1.1333</td><td>1.7333</td><td>1.4833</td><td>positive2 #</td><td>7.4333</td><td>7.2833</td><td>13.1167</td></tr><tr><td>imperative %</td><td>0.0890</td><td>0.0396</td><td>0.0393</td><td>negative2 #</td><td colspan=\"3\">16.5833 14.3000 16.5167</td></tr><tr><td>cond. verb %</td><td>0.0310</td><td>0.0506</td><td>0.0312</td><td>neg. emotive #</td><td>0.4667</td><td>0.3333</td><td>0.6000</td></tr><tr><td>Sg1 verb %</td><td>0.0317</td><td>0.0310</td><td>0.0127</td><td>positive %</td><td>0.0498</td><td>0.0303</td><td>0.0275</td></tr><tr><td>dem. pronoun #</td><td>6.4500</td><td>4.3000</td><td>7.8500</td><td>negative %</td><td>0.0370</td><td>0.0249</td><td>0.0303</td></tr><tr><td>Pl1 verb %</td><td>0.0736</td><td>0.0684</td><td>0.0401</td><td>positive2 %</td><td>0.0700</td><td>0.0516</td><td>0.0395</td></tr><tr><td>dem. pron %</td><td>0.3606</td><td>0.3533</td><td>0.4523</td><td>negative2 %</td><td>0.0328</td><td>0.0269</td><td>0.0327</td></tr><tr><td>noun morph #</td><td>0.8986</td><td>0.9023</td><td>0.9259</td><td>neg.emotive %</td><td>0.0027</td><td>0.0011</td><td>0.0008</td></tr><tr><td>freq. verb #</td><td>0.0833</td><td>0.0667</td><td>0.0333</td><td>content %</td><td>0.6806</td><td>0.7244</td><td>0.7290</td></tr><tr><td>modal verb #</td><td>1.9667</td><td>2.0667</td><td>1.4500</td><td>function %</td><td>0.2566</td><td>0.2145</td><td>0.2224</td></tr><tr><td>caus. verb #</td><td>0.2333</td><td>0.1333</td><td>0.2833</td><td>private verb #</td><td>4.4000</td><td>3.4833</td><td>4.1167</td></tr><tr><td>freq. verb %</td><td>0.0026</td><td>0.0017</td><td>0.0010</td><td>public verb #</td><td>1.4500</td><td>1.7167</td><td>2.8833</td></tr><tr><td>modal verb %</td><td>0.0546</td><td>0.0602</td><td>0.0343</td><td>suasive verb #</td><td>0.6667</td><td>0.8333</td><td>1.0333</td></tr><tr><td>caus. 
verb %</td><td>0.0074</td><td>0.0045</td><td>0.0070</td><td>private verb %</td><td>0.1132</td><td>0.0941</td><td>0.0822</td></tr><tr><td>subject #</td><td>17.6500</td><td>19.9833</td><td>28.2000</td><td>public verb %</td><td>0.0401</td><td>0.0524</td><td>0.0741</td></tr><tr><td>object #</td><td>14.1667</td><td>14.6333</td><td>16.8833</td><td>suasive verb %</td><td>0.0145</td><td>0.0264</td><td>0.0233</td></tr><tr><td>attributive #</td><td>49.3167</td><td>70.3333</td><td colspan=\"2\">105.8333 speech act #</td><td>4.2333</td><td>3.7667</td><td>4.9667</td></tr><tr><td>adverbial #</td><td>15.7167</td><td>15.2167</td><td>19.6333</td><td>quote #</td><td>1.9333</td><td>5.8667</td><td>4.8500</td></tr><tr><td>coordination #</td><td>16.3667</td><td>18.6000</td><td>25.9833</td><td>dash #</td><td>0.0333</td><td>0.5333</td><td>0.3000</td></tr><tr><td>subject %</td><td>0.9167</td><td>0.9507</td><td>0.9927</td><td>speech act %</td><td>0.0188</td><td>0.0147</td><td>0.0143</td></tr><tr><td>object %</td><td>0.7134</td><td>0.6927</td><td>0.6702</td><td>quote %</td><td>0.0084</td><td>0.0214</td><td>0.0109</td></tr><tr><td>attributive %</td><td>2.4684</td><td>3.2076</td><td>3.7817</td><td>dash %</td><td>0.0002</td><td>0.0023</td><td>0.0008</td></tr><tr><td>adverbial %</td><td>0.8044</td><td>0.6866</td><td>0.6747</td><td>discourse marker #</td><td>10.4833</td><td>9.4500</td><td>10.7833</td></tr><tr><td>coordination %</td><td>0.7947</td><td>0.8435</td><td>0.9329</td><td>discourse marker %</td><td>0.0456</td><td>0.0338</td><td>0.0256</td></tr></table>" |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "Mean values for features in each class. #: number, %: rate.", |
|
"content": "<table><tr><td/><td/><td/><td>clickbait</td><td/><td/><td>fake news</td><td/><td/><td>real news</td><td/><td/><td>all</td><td/></tr><tr><td>Feature groups</td><td>Acc</td><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td></tr><tr><td colspan=\"14\">stat+morph+synt+sem+prag 79.44 86.9 88.3 87.6 78.2 71.7 74.8 73.4 78.3 75.8 79.5 79.4 79.4</td></tr><tr><td>stat+morph+synt+sem</td><td colspan=\"5\">80.56 86.9 88.3 87.6 78.9</td><td>75</td><td colspan=\"3\">76.9 75.8 78.3</td><td>77</td><td colspan=\"3\">80.5 80.6 80.5</td></tr><tr><td>stat+morph+synt+prag</td><td>79.44</td><td>85</td><td>85</td><td>85</td><td colspan=\"3\">80.7 76.7 78.6</td><td>73</td><td colspan=\"5\">76.7 74.8 79.6 79.4 79.5</td></tr><tr><td>stat+morph+sem+prag</td><td colspan=\"5\">82.78 88.3 88.3 88.3 82.8</td><td>80</td><td colspan=\"2\">81.4 77.4</td><td>80</td><td colspan=\"4\">78.7 82.8 82.8 82.8</td></tr><tr><td>stat+synt+sem+prag</td><td>75</td><td>83.8</td><td>95</td><td colspan=\"6\">89.1 67.9 63.3 65.5 71.4 66.7</td><td>69</td><td>74.4</td><td>75</td><td>74.5</td></tr><tr><td>morph+synt+sem+prag</td><td colspan=\"3\">81.11 85.2 86.7</td><td>86</td><td>80.4</td><td>75</td><td colspan=\"7\">77.6 77.8 81.7 79.7 81.1 81.1 81.1</td></tr><tr><td>stat+morph+synt</td><td>78.33</td><td>85</td><td>85</td><td>85</td><td>77.6</td><td>75</td><td colspan=\"2\">76.3 72.6</td><td>75</td><td colspan=\"4\">73.8 78.4 78.3 78.3</td></tr><tr><td>stat+morph+sem</td><td colspan=\"12\">81.11 85.5 88.3 86.9 82.7 71.7 76.8 75.8 83.3 79.4 81.3 81.1</td><td>81</td></tr><tr><td>stat+synt+sem</td><td colspan=\"3\">72.78 84.6 91.7</td><td>88</td><td>66</td><td>55</td><td>60</td><td colspan=\"6\">66.2 71.7 68.8 72.3 72.8 72.3</td></tr><tr><td>morph+synt+sem</td><td>80</td><td colspan=\"3\">86.7 86.7 86.7</td><td>80</td><td colspan=\"3\">73.3 76.5 73.8</td><td>80</td><td colspan=\"2\">76.8 80.2</td><td>80</td><td>80</td></tr><tr><td>stat+morph+prag</td><td colspan=\"2\">79.44 83.6</td><td>85</td><td colspan=\"5\">84.3 78.6 73.3 75.9 76.2</td><td>80</td><td>78</td><td>76.2</td><td>80</td><td>78</td></tr><tr><td>stat+synt+prag</td><td>67.22</td><td>77</td><td colspan=\"6\">78.3 77.7 67.9 63.3 65.5 57.1</td><td>60</td><td colspan=\"4\">58.5 67.3 67.2 67.2</td></tr><tr><td>morph+synt+prag</td><td colspan=\"2\">77.78 82.3</td><td>85</td><td colspan=\"5\">83.6 77.2 73.3 75.2 73.8</td><td>75</td><td colspan=\"4\">74.4 77.7 77.8 77.7</td></tr><tr><td>stat+sem+prag</td><td colspan=\"2\">75.56 84.4</td><td>90</td><td colspan=\"5\">87.1 68.5 61.7 64.9 72.6</td><td>75</td><td colspan=\"4\">73.8 75.2 75.6 75.3</td></tr><tr><td>morph+sem+prag</td><td colspan=\"4\">81.11 88.1 86.7 87.4</td><td>80</td><td colspan=\"8\">73.3 76.5 75.8 83.3 79.4 81.3 81.1 81.1</td></tr><tr><td>synt+sem+prag</td><td colspan=\"13\">71.67 81.5 88.3 84.8 62.5 58.3 60.3 69.5 68.3 68.9 71.2 71.7 71.4</td></tr><tr><td>stat+morph</td><td colspan=\"8\">78.89 87.7 83.3 85.5 78.6 73.3 75.9 71.6</td><td>80</td><td colspan=\"3\">75.6 79.3 78.9</td><td>79</td></tr><tr><td>stat+synt</td><td>65.56</td><td>77</td><td colspan=\"2\">78.3 77.7</td><td>60</td><td>55</td><td colspan=\"7\">57.4 59.4 63.3 61.3 65.5 65.6 65.5</td></tr><tr><td>stat+sem</td><td colspan=\"2\">73.33 83.6</td><td>85</td><td colspan=\"2\">84.3 62.1</td><td>60</td><td>61</td><td>73.8</td><td>75</td><td colspan=\"4\">74.4 73.1 73.3 73.2</td></tr><tr><td>stat+prag</td><td colspan=\"13\">69.44 77.2 73.3 75.2 70.5 71.7 71.1 61.3 63.3 62.3 69.7 69.4 
69.5</td></tr><tr><td>morph+synt</td><td>78.89</td><td>82</td><td colspan=\"4\">83.3 82.6 79.7 78.3</td><td>79</td><td>75</td><td>75</td><td>75</td><td colspan=\"3\">78.9 78.9 78.9</td></tr><tr><td>morph+sem</td><td colspan=\"2\">81.11 84.4</td><td>90</td><td colspan=\"5\">87.1 83.7 68.3 75.2 76.1</td><td>85</td><td colspan=\"4\">80.3 81.4 81.1 80.9</td></tr><tr><td>morph+prag</td><td colspan=\"3\">78.89 80.6 83.3</td><td>82</td><td colspan=\"4\">84.6 73.3 78.6 72.7</td><td>80</td><td colspan=\"4\">76.2 79.3 78.9 78.9</td></tr><tr><td>synt+sem</td><td>72.22</td><td>85</td><td>85</td><td>85</td><td>60.9</td><td>65</td><td colspan=\"3\">62.9 71.4 66.7</td><td>69</td><td colspan=\"3\">72.5 72.2 72.3</td></tr><tr><td>synt+prag</td><td colspan=\"2\">66.11 76.3</td><td>75</td><td colspan=\"10\">75.6 59.7 61.7 60.7 62.7 61.7 62.2 66.2 66.1 66.2</td></tr><tr><td>sem+prag</td><td colspan=\"3\">71.11 80.3 81.7</td><td>81</td><td colspan=\"6\">59.3 58.3 58.8 73.3 73.3 73.3</td><td>71</td><td>71.1</td><td>71</td></tr><tr><td>stat</td><td colspan=\"5\">63.33 74.6 78.3 76.4 55.9</td><td>55</td><td colspan=\"7\">55.5 58.6 56.7 57.6 63.1 63.3 63.2</td></tr><tr><td>morph</td><td colspan=\"3\">80.56 86.2 83.3</td><td>84</td><td colspan=\"9\">82.1 76.7 79.3 74.2 81.7 77.8 80.9 80.6 80.6</td></tr><tr><td>synt</td><td colspan=\"8\">57.78 71.7 71.7 71.7 43.1 41.7 42.4 58.1</td><td>60</td><td>59</td><td colspan=\"3\">57.6 57.8 57.7</td></tr><tr><td>sem</td><td colspan=\"5\">65.56 78.3 78.3 78.3 54.1</td><td>55</td><td colspan=\"7\">54.5 64.4 63.3 63.9 65.6 65.6 65.6</td></tr><tr><td>prag</td><td colspan=\"5\">58.89 63.8 61.7 62.7 53.4</td><td>65</td><td colspan=\"2\">58.6 61.2</td><td>50</td><td>55</td><td colspan=\"3\">59.5 58.9 58.8</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |