{ "paper_id": "R11-1019", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:04:57.676907Z" }, "title": "Sentiments and Opinions in Health-related Web Messages", "authors": [ { "first": "Marina", "middle": [], "last": "Sokolova", "suffix": "", "affiliation": { "laboratory": "", "institution": "CHEO Research Institute", "location": {} }, "email": "sokolova@uottawa.ca" }, { "first": "Victoria", "middle": [], "last": "Bobicev", "suffix": "", "affiliation": { "laboratory": "", "institution": "Technical University of Moldova", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this work, we analyze sentiments and opinions expressed in user-written Web messages. The messages discuss healthrelated topics: medications, treatment, illness and cure, etc. Recognition of sentiments and opinions is a challenging task for humans as well as an automated text analysis. In this work, we apply both the approaches. The paper presents the annotation model, discusses characteristics of subjectivity annotations in health-related messages, and reports the results of the annotation agreement. For external evaluation of the labeling results, we apply Machine Learning methods on the annotated data and present the obtained results.", "pdf_parse": { "paper_id": "R11-1019", "_pdf_hash": "", "abstract": [ { "text": "In this work, we analyze sentiments and opinions expressed in user-written Web messages. The messages discuss healthrelated topics: medications, treatment, illness and cure, etc. Recognition of sentiments and opinions is a challenging task for humans as well as an automated text analysis. In this work, we apply both the approaches. The paper presents the annotation model, discusses characteristics of subjectivity annotations in health-related messages, and reports the results of the annotation agreement. For external evaluation of the labeling results, we apply Machine Learning methods on the annotated data and present the obtained results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In recent years, Text Data Mining (TDM) and Natural Language Processing (NLP) intensively studied sentiments and opinions in user-written Web texts (e.g., tweets, blogs, messages). Researchers analyzed sentiments and opinions that appear in consumer-written product reviews, financial blogs, political discussions (Blitzer et al, 2007; Ferguson et al, 2009; Kim and Hovy, 2007) . Health care and medical delivery service is another area where practitioners become interested in what users write in their Web posts. Importance of knowing user opinions had became evident during H1N1 pandemic, the first pandemic when Web discussions influenced the general public (Eysenbach, 2009) ; Figure 1 presents an example. 1 The shift from contrived medical text to less rigorously written and edited user-written texts is a challenge for TDM and NLP methods. 
The current techniques were primarily designed to analyze medical publications in traditional media (e.g., journal articles) and organizational documents (e.g., hospital records), or were task-dependent (e.g., information retrieval related to insurance claims) (Angelova, 2009; Cohen et al, 2010; Konovalov et al, 2010) .", "cite_spans": [ { "start": 314, "end": 335, "text": "(Blitzer et al, 2007;", "ref_id": "BIBREF5" }, { "start": 336, "end": 357, "text": "Ferguson et al, 2009;", "ref_id": "BIBREF12" }, { "start": 358, "end": 377, "text": "Kim and Hovy, 2007)", "ref_id": "BIBREF13" }, { "start": 662, "end": 679, "text": "(Eysenbach, 2009)", "ref_id": null }, { "start": 712, "end": 713, "text": "1", "ref_id": null }, { "start": 1104, "end": 1120, "text": "(Angelova, 2009;", "ref_id": "BIBREF0" }, { "start": 1121, "end": 1139, "text": "Cohen et al, 2010;", "ref_id": "BIBREF9" }, { "start": 1140, "end": 1162, "text": "Konovalov et al, 2010)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 682, "end": 690, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Motivation", "sec_num": "1" }, { "text": "The goal of this work is to study sentiments and opinions in health-related Web messages. We start by building a data set of annotated sentences. We present an opinion and sentiment annotation scheme and its application to tagging sentences harvested from Web messages. We report the evaluation of manual annotation agreement. Finally, machine learning methods are applied to automatically assess the sentence labeling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "1" }, { "text": "We are interested in expressions of a user's private state, which is not open to objective observation or verification (Quirk et al, 1985) . These personal views are revealed through thoughts, perceptions and other subjective expressions that can be found in text (Wiebe, 1994) .", "cite_spans": [ { "start": 118, "end": 137, "text": "(Quirk et al, 1985)", "ref_id": "BIBREF18" }, { "start": 263, "end": 276, "text": "(Wiebe, 1994)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Opinions and Sentiments", "sec_num": "2" }, { "text": "We assume that private states can be revealed by emotional statements (sentiments) and by subjective statements that may not imply emotions (opinions). In this work, statements are considered within sentence bounds; thus, sentences are the units of our language analysis. We agree with Lasersohn (2005) and Kim and Hovy (2007) that an opinion can be expressed about a matter of fact, and should not be treated as identical to a sentimental expression.", "cite_spans": [ { "start": 290, "end": 306, "text": "Lasersohn (2005)", "ref_id": "BIBREF15" }, { "start": 311, "end": 330, "text": "Kim and Hovy (2007)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Opinions and Sentiments", "sec_num": "2" }, { "text": "We further sub-categorize sentiments into positive and negative, and opinions into positive, negative and neutral. 
Sentences that do not bear opinions or sentiments are considered objective by default and are left for future studies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Opinions and Sentiments", "sec_num": "2" }, { "text": "3 Opinion and Sentiment Annotation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Opinions and Sentiments", "sec_num": "2" }, { "text": "Annotation of subjectivity can be centered either around the perception of a reader/annotator (Strapparava and Mihalcea, 2008) or around the author of a text (Balahur and Steinberger, 2009) . Our model is author-centric. Our guidelines for annotators specified that a subjective statement contains information which has not been taken by the author from some external source but rather reflects his/her personal thoughts (as defined in Section 2). We requested that annotators not impose their own sentiments and attitudes towards the information in the text (Balahur and Steinberger, 2009) . Instead, we suggested that annotators imagine the sentiments and attitudes that the author possibly had while writing.", "cite_spans": [ { "start": 90, "end": 123, "text": "(Strapparava and Mihalcea, 2008)", "ref_id": null }, { "start": 148, "end": 179, "text": "(Balahur and Steinberger, 2009)", "ref_id": null }, { "start": 537, "end": 567, "text": "(Balahur and Steinberger, 2009)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Annotation Model", "sec_num": "3.1" }, { "text": "Separating good and bad news from the author's attitude is important in health-related analysis. We know that subjective expressions are highly reflective of the text content and context (Chen, 2008) . Health-related messages are often written about illnesses and medical treatment. Users write about diseases, symptoms, and sick relatives and friends. This information is naturally distressing and may cause a negative attitude in annotators. We asked annotators not to mark descriptions of symptoms and diseases as subjective; only the author's opinion or sentiment should be annotated. For example, \"For a very long time I've had a problem with feeling really awful when I try to get up in the morning\" is a description of symptoms and should not be annotated as subjective. In contrast, \"I don't know if that makes sense, it seems to me that the new drug which stimulates red blood cell production would be a more logical approach, erythropoiten (sp?)\" exposes the author's thoughts and ideas. It should be annotated as an opinion, though one without an emotional attitude. Another example, \"Alas, I didn't record the program, but wish I had\", expresses the author's regret and should be annotated as a negative opinion about the action (i.e., not recording the program).", "cite_spans": [ { "start": 192, "end": 204, "text": "(Chen, 2008)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Annotation Model", "sec_num": "3.1" }, { "text": "We considered it essential to advise annotators not to agonize over the annotation and, if doubtful, to leave the example un-annotated (Balahur and Steinberger, 2009) . 
The rule is especially important for the annotation of user-written texts, where annotators can be distracted and even annoyed by misspellings, simplified grammar, informal style, and unfamiliar terminology specific to an individual user.", "cite_spans": [ { "start": 129, "end": 160, "text": "(Balahur and Steinberger, 2009)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Annotation Model", "sec_num": "3.1" }, { "text": "Our annotation schema is based on the following assumptions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Schema", "sec_num": "3.2" }, { "text": "(a) annotation was performed on the sentence level; one sentence expressed only one assertion; this assumption held in a majority of cases;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Schema", "sec_num": "3.2" }, { "text": "(b) only the author's subjective comments were marked as such; if the author conveyed opinions or sentiments of others, we did not mark them as subjective, as the author was not the holder of these opinions or sentiments;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Schema", "sec_num": "3.2" }, { "text": "(c) we did not differentiate between the objects of comments; the author's attitudes towards a situation, an event, a person or an object were considered equally important.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Schema", "sec_num": "3.2" }, { "text": "Annotators were informed that the annotation was sentence-level, and the examples of annotated texts presented to them also contained annotated sentences. Thus, they tended to annotate whole sentences. If consecutive sentences were subjective, each one was marked. In some cases, only the subjective part of a sentence was tagged, whereas the other part, containing factual information, was not included in the sentiment tag.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Schema", "sec_num": "3.2" }, { "text": "User-written messages usually have an opening, a body, and a closure. The opening can be an email subject or the parameters of the message, the body presents the main content, and the closure can be a signature or a link to a personal web site. We used the markup tags HEADER, FOOTER and BODY ( Figure 2 ). HEADER referred to the parameters of the message; FOOTER marked the closing part, which started with the signature; this part was marked FOOTER regardless of its length and was omitted from processing. BODY marked the message between HEADER and FOOTER.", "cite_spans": [], "ref_spans": [ { "start": 265, "end": 273, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Mode", "sec_num": "3.3" }, { "text": "To comply with our annotation schema, we divided BODY into CITATION and TEXT. CITATION marked the embedding of previous messages in the current one; TEXT marked the text of the message written by the author. In the current study, we are interested in the TEXT part; other parts are left for future work. TEXT was divided into sentences and further analyzed for opinions and sentiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mode", "sec_num": "3.3" }
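, { "text": "To make the markup concrete, the following is a minimal sketch of how a raw message could be segmented into these parts; the heuristics (a blank line closing the header, '>' quoting for citations, a '--' signature delimiter) are illustrative assumptions rather than the exact rules of our markup tool.

import re

def segment_message(raw):
    # HEADER ends at the first blank line (assumed convention)
    header, _, rest = raw.partition('\n\n')
    # FOOTER starts at a conventional '--' signature delimiter, if present
    match = re.search(r'^--+\s*$', rest, flags=re.MULTILINE)
    body = rest[:match.start()] if match else rest
    footer = rest[match.start():] if match else ''
    citation, text = [], []
    for line in body.splitlines():
        # CITATION embeds previous messages; unquoted lines form the author's TEXT
        (citation if line.lstrip().startswith('>') else text).append(line)
    return {'HEADER': header, 'CITATION': '\n'.join(citation),
            'TEXT': '\n'.join(text), 'FOOTER': footer}
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mode", "sec_num": "3.3" }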
, { "text": "4 Empirical Application", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mode", "sec_num": "3.3" }, { "text": "For our empirical part, we used the sci.med texts of 20 Newsgroups 2 . It is a benchmark data set of 20,000 messages, popular in applications of machine learning techniques such as text classification and text clustering. There are 1000 sci.med messages. Most sci.med messages were posted by people who wanted to know something about an illness, drugs or treatment (e.g., questions on tuberculosis, or haldol prescription for the elderly). After a question appeared on the message board, other people could reply and add comments ( Figure 2 ). To group messages by their content, we merged the messages with the same topic. A script automatically placed all messages with the same Subject line in a file with the same title. Thus, we obtained 365 files named \"Arrhythmia\", \"arthritis and diabetes\", \"Athletes Heart\", etc. Essentially, a file stored the whole discussion thread on the title topic. Many files contained only one question and one or two answers. Several topics raised the interest of many list members; such files contained rather heated discussions (e.g., \"Candidayeast Bloom\", \"MSG sensitivity\", \"Homeopathy\"). In contrast, some files contained newsletters, conference announcements, and other announcements that were considered objective (Section 2); these files were excluded from annotation. Finally, 357 files were left for annotation.", "cite_spans": [], "ref_spans": [ { "start": 527, "end": 535, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Data", "sec_num": "4.1" }, { "text": "Ten undergraduate and ten master's students were involved in the process. Each master's student had 30 files to annotate. The results of the annotation were examined; students with better annotations received more files. Each undergraduate student had 10 files to annotate; only students whose annotations were of satisfactory quality were given more files. Finally, the 357 files have been annotated by at least one annotator.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation results", "sec_num": "4.2" }, { "text": "Of these files, 216 have been tagged by two annotators, and 21 have been tagged by three annotators; 120 files have been tagged by only one annotator. A majority of these files did not contain subjective information, e.g., a question and a factual answer. We have divided the final tags into three categories 3 : subjective sentences: both annotators identified them as subjective (sentiment or opinion) and marked either the same polarity or neutral;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation results", "sec_num": "4.2" }, { "text": "weak subjective sentences: only one annotator identified them as subjective;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation results", "sec_num": "4.2" }, { "text": "non-subjective and uncertain sentences: sentences that the annotators did not mark as subjective or marked with opposite polarities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation results", "sec_num": "4.2" }
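, { "text": "For concreteness, the mapping from a pair of annotations to the final category can be sketched as follows; the tag representation is an illustrative assumption, not the format of our annotation tool.

# Sketch: derive a sentence's final category from two annotators' tags.
# A tag is a (type, polarity) pair, e.g. ('sentiment', 'negative') or
# ('opinion', 'neutral'); None means the sentence was left unmarked.
def final_category(tag1, tag2):
    if tag1 and tag2:
        agree = tag1[1] == tag2[1] or 'neutral' in (tag1[1], tag2[1])
        return 'subjective' if agree else 'non-subjective or uncertain'
    if tag1 or tag2:
        return 'weak subjective'  # identified by only one annotator
    return 'non-subjective or uncertain'
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation results", "sec_num": "4.2" }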
, { "text": "Table 1: Annotation results for sentiment and opinion sentences in the sci.med texts. Table 1 lists the results for the three sentence groups.", "cite_spans": [], "ref_spans": [ { "start": 136, "end": 143, "text": "Table 1", "ref_id": null }, { "start": 223, "end": 230, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Annotation results", "sec_num": "4.2" }, { "text": "In total, 6408 sentences were annotated. The majority, 4190 sentences, were considered non-subjective by both annotators. Neutral opinion was the most frequent subjective label: some persons asked questions, and others replied, in many cases expressing their own opinions. 85 sentences were marked as neutral opinion by both annotators. In 655 cases, it was weak subjectivity (i.e., identified by one annotator). The latter set contained ambiguous sentences, without clear indicators of whether the expressed statement was the author's thought or just information taken from some source. Some examples: \"Symptoms can be drastically enhanced by food but not inflammation\", \"The low residue diet is appropriate for you if you still have obstructions\", \"Then they may be able to crowd out garbage genes\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.3" }, { "text": "Negative sentiment was another large source of ambiguous annotation. In Section 3.1, we wrote that the texts were about diseases, so it was natural that annotators sometimes marked descriptions of symptoms or sickness as negative sentiment. Often, negative sentiment was attributed to sentences that were interpreted as subjective only in the message context. For example, \"I said that I PERSONALLY had other people order the EXACT SAME FOOD at TWO DIFFERENT TIMES from the SAME RESTAURANT\" was marked as negative sentiment in the context of a very opinionated discussion. For the annotator, it was clear that the author of the text had been really angry, and the sentence did carry negative emotion even if it did not contain indicative words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.3" }, { "text": "We have found that sarcasm was a strong factor in the polarity disagreement between annotators. \"I'm forever in your debt\" was marked as both positive sentiment and negative sentiment, because it was positive as is but was used in a sarcastic answer to another message; one annotator took the whole context into consideration but the other did not. \"Surprise surprise different people react differently to different things.\" and \"Subject: Scientific Yawn\" (denouncing alternative medicine) are two other illustrations of opposite polarity labeling. Perhaps a more complex set of sentiment annotation tags can help to capture such sentiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.3" }, { "text": "Content-wise, we found that several types of sentences created problems during annotation: advice and suggestions (\"go and see a doctor\"); courtesy (\"thank you in advance\", \"I would greatly appreciate any reply\", \"good luck\"); questions and indirect questions (\"can somebody point me\", \"I am interested in\", \"I would like to find any information\"). An appropriate remedy can be to divide subjective sentences into categories, e.g., reporting, advice, judgment and sentiment (Asher et al, 2009) . Rhetorical relations formed another influential factor. 
However, correct identification of these phenomena requires more proficient annotators.", "cite_spans": [ { "start": 471, "end": 490, "text": "(Asher et al, 2009)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.3" }, { "text": "Additionally, annotators faced challenges intrinsic to user-written text (Section 3.1). Indeed, syntactic rules were not strictly respected, and there were typos and misspellings. Other challenges were the recognition of trademark and proprietary names (\"itraconazole\", \"Oodles of Noodles\"), public health and related services (\"AMA\", \"FDA\", \"State Licensing Board\", \"ABFP\"), and medical and scientific terms (\"Candida\", \"sinusitis\", \"yeast bloom\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.3" }, { "text": "5 Empirical Evaluation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.3" }, { "text": "To assess the quality of the subjective labeling, we computed two types of measures. First, we separately assessed agreement between the annotators' labeling of positive and negative sentiments and opinions. We opted for two measures, positive and negative, because annotators may agree on what constitutes a subjective label and disagree on what does not; e.g., their understanding of positive may be close while their understanding of not positive may be far apart. We find the two-dimensional values to be more informative than a one-dimensional value (Bhowmick et al, 2008; Murakami et al, 2010) . We applied two measures introduced in (Cicchetti and Feinstein, 1990a):", "cite_spans": [ { "start": 548, "end": 570, "text": "(Bhowmick et al, 2008;", "ref_id": "BIBREF4" }, { "start": 571, "end": 592, "text": "Murakami et al, 2010)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Concordance evaluation", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p_{pos} = 2a/(f_1 + g_1)", "eq_num": "(1)" } ], "section": "Concordance evaluation", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p_{neg} = 2d/(N - (a - d))", "eq_num": "(2)" } ], "section": "Concordance evaluation", "sec_num": "5.1" }, { "text": "Next, we computed the commonly used kappa to evaluate the ratio between the chance-corrected observed agreement and the chance-corrected perfect agreement (Cicchetti and Feinstein, 1990a) :", "cite_spans": [ { "start": 151, "end": 183, "text": "(Cicchetti and Feinstein, 1990a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Concordance evaluation", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\kappa = \\frac{\\frac{a+d}{N} - \\frac{f_1 g_1 + f_2 g_2}{N^2}}{1 - \\frac{f_1 g_1 + f_2 g_2}{N^2}}", "eq_num": "(3)" } ], "section": "Concordance evaluation", "sec_num": "5.1" }, { "text": "Notations are presented in Table 2 . We report the assessment results in Table 3 . The reported results show that annotators find common ground on sentences that do not belong to the categories. This mutual understanding holds across all the subjective categories.", "cite_spans": [], "ref_spans": [ { "start": 27, "end": 34, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 73, "end": 80, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Concordance evaluation", "sec_num": "5.1" }
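, { "text": "For concreteness, the three measures can be computed directly from the counts of Table 2; a minimal sketch with hypothetical counts follows.

# Sketch: p_pos, p_neg and kappa from the Table 2 cells a, b, c, d.
def agreement(a, b, c, d):
    n = a + b + c + d
    f1, f2 = a + c, b + d              # 1st observer's YES / NO totals
    g1, g2 = a + b, c + d              # 2nd observer's YES / NO totals
    p_pos = 2 * a / (f1 + g1)                      # equation (1)
    p_neg = 2 * d / (n - (a - d))                  # equation (2); equals 2d/(f2+g2)
    chance = (f1 * g1 + f2 * g2) / n ** 2
    kappa = ((a + d) / n - chance) / (1 - chance)  # equation (3)
    return p_pos, p_neg, kappa

print(agreement(20, 5, 10, 65))  # hypothetical counts, not our data
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concordance evaluation", "sec_num": "5.1" }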
, { "text": "We interpret this as evidence that negative examples can be correctly identified for all the categories. Annotators also agree on what belongs to positive and negative sentiments; for these two categories, we expect correct identification of positive and negative examples. Table 3: Concordance assessment.", "cite_spans": [], "ref_spans": [ { "start": 542, "end": 549, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Concordance evaluation", "sec_num": "5.1" }, { "text": "To analyze the lexical indicators of subjectivity, we built N-gram models (N = 1, 2, 3, 4). The N-gram models estimate the probability of a word sequence w_1 ... w_n through the conditional probability of the word w_n appearing after the sequence of words w_1 ... w_{n-1}:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical language analysis", "sec_num": "5.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(w_n | w_1^{n-1}) \\approx P(w_n | w_{n-N+1}^{n-1})", "eq_num": "(4)" } ], "section": "Statistical language analysis", "sec_num": "5.2" }, { "text": "The models were built for subjective sentences and weak subjective sentences (upper parts of Table 1 ). We analyzed the most frequent words (occurrence \u2265 3) and word combinations output by the models.", "cite_spans": [], "ref_spans": [ { "start": 93, "end": 100, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Statistical language analysis", "sec_num": "5.2" }, { "text": "To make the task feasible, we deleted stop words (i.e., pronouns, prepositions, articles, determiners and auxiliary verbs). Uni- and bi-gram outputs showed that very few emotionally charged words appear among the most frequent words. Examples of such words are \"good\", \"happy\", \"hard\", \"unfortunately\"; \"good\" and \"happy\", however, may indicate courtesy expressions more than sentiments. For instance, their most frequent bi-grams are \"very good\" and \"am happy\". Tri- and quadri-gram outputs were very sparse (i.e., occurrences < 5) and thus not reliable for semantic generalization. It is important to note that words listed in SentiWordNet (Denecke, 2008) and WordNet-Affect (Strapparava and Mihalcea, 2008) do not, as a rule, appear in our data.", "cite_spans": [ { "start": 630, "end": 645, "text": "(Denecke, 2008)", "ref_id": "BIBREF10" }, { "start": 665, "end": 698, "text": "(Strapparava and Mihalcea, 2008)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Statistical language analysis", "sec_num": "5.2" }
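, { "text": "As an illustration of this set-up, frequency profiles of the two sentence sets, and the relative counts behind equation (4), can be built as sketched below; the tokenization and the stop-word list are simplified assumptions.

from collections import Counter

STOP = {'the', 'a', 'an', 'i', 'you', 'it', 'is', 'are', 'of', 'to', 'in'}

def ngram_counts(sentences, n):
    # N-gram frequency profile of a sentence set (stop words removed)
    counts = Counter()
    for s in sentences:
        tokens = [w for w in s.lower().split() if w not in STOP]
        counts.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return counts

def conditional_prob(counts_n, counts_hist, history, word):
    # Equation (4): P(w_n | w_{n-N+1} ... w_{n-1}) from relative counts
    return counts_n[history + (word,)] / max(counts_hist[history], 1)
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical language analysis", "sec_num": "5.2" }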
, { "text": "We computed a significant relative frequency difference (Rayson and Garside, 2000) to find words and word combinations (N = 2, 3, 4) on which the two sets of sentences differ. The difference was computed as follows:", "cite_spans": [ { "start": 56, "end": 82, "text": "(Rayson and Garside, 2000)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Statistical language analysis", "sec_num": "5.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "LL(w) = 2\\left(a \\log \\frac{a(c+d)}{c(a+b)} + b \\log \\frac{b(c+d)}{d(a+b)}\\right)", "eq_num": "(5)" } ], "section": "Statistical language analysis", "sec_num": "5.2" }, { "text": "where w is the word, a and b are the occurrences of w in sets A and B respectively, and c and d are the sizes of A and B in words. We chose LL because the measure allows a two-tailed comparison of w's position in sets A and B. This method, too, output only a few emotionally charged words: \"trouble\", \"hard\", \"problem\", \"expensive\" are content words that differentiate between positive and negative opinions; \"bad\", \"problem\", \"hard\", \"better\" appear among words that differentiate between positive and negative sentiments. The word combinations on which the sets differ do not contain emotionally charged words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical language analysis", "sec_num": "5.2" }, { "text": "Sentiment and opinion classification results are highly susceptible to the classification task, the data characteristics and the selected text features. We wished to assess how well algorithms discriminate between (a) positive and negative sentiment sentences, and (b) positive and negative opinion sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Machine Learning Experiments", "sec_num": "5.3" }, { "text": "Our hypothesis was that if the algorithms achieved competitive accuracy, this would confirm the good quality of the labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Machine Learning Experiments", "sec_num": "5.3" }, { "text": "We used the labeled sentences without any additional pre-processing. As a result, two sentence sets have been built:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5.4" }, { "text": "Sentiments: 62 positive and 179 negative sentences;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5.4" }, { "text": "Opinions: 169 positive and 74 negative sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5.4" }, { "text": "We represented each set through all the words that appear in the set more than twice. Two types of attributes were used in the experiments: a bag of all the words (binary representation) and occurrences of all the words (numeric representation). The two representations provided similar results. We further report the numeric representation results, which were slightly better than the binary ones.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5.4" }, { "text": "We applied Naive Bayes (NB), Decision Trees (DT), K-Nearest Neighbor (KNN) and Support Vector Machines (SVM). Table 4 reports the best results. For positive and negative sentiments, the reported results were obtained with the following parameters: DT: learning coefficient \u03b1 = 0.15; NB: kernel estimates; K-NN: 9 neighbors, Euclidean distance; SVM: complexity parameter C = 0.65, polynomial kernel exponent 0.52. For positive and negative opinions, the reported results were obtained with the following parameters: DT: learning coefficient \u03b1 = 0.40; NB: kernel estimates; K-NN: 1 neighbor, Euclidean distance; SVM: complexity parameter C = 2.75, polynomial kernel exponent 1.0.", "cite_spans": [], "ref_spans": [ { "start": 108, "end": 115, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Learning Results", "sec_num": "5.5" }
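, { "text": "As an illustration, the experiment can be reproduced along the following lines; the scikit-learn classifiers stand in for the toolkit we used, so the parameter mapping is approximate (e.g., MultinomialNB approximates NB with kernel estimates) and the variable names are assumptions.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def evaluate(sentences, labels):
    # occurrences of all words appearing in at least three sentences
    # (numeric representation)
    x = CountVectorizer(min_df=3).fit_transform(sentences)
    classifiers = {
        'DT': DecisionTreeClassifier(),
        'NB': MultinomialNB(),
        'KNN': KNeighborsClassifier(n_neighbors=9),  # Euclidean distance by default
        'SVM': SVC(C=0.65, kernel='poly'),
    }
    return {name: cross_val_score(clf, x, labels, cv=10).mean()
            for name, clf in classifiers.items()}
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Results", "sec_num": "5.5" }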
, { "text": "Our results are competitive with previously obtained results. As reported in (Sokolova and Lapalme, 2011) , opinion-bearing sentences are classified against facts with Precision 80%-90% (Yu and Hatzivassiloglou, 2003) ; for consumer reviews, opinion-bearing text segments are classified into positive and negative categories with Precision 56%-72%; for online debates, posts were classified as positive or negative with F-score 39%-67%, and the F-score increased to 53%-75% when the posts were enriched with Web information. A Balanced Accuracy (ROC) of 90% was obtained in classifying opinion spam reviews versus genuine reviews. For positive and negative review classification, Accuracy is 75.0%-81.8% when data sets are represented through all the uni- and bi-grams.", "cite_spans": [ { "start": 77, "end": 105, "text": "(Sokolova and Lapalme, 2011)", "ref_id": null }, { "start": 187, "end": 218, "text": "(Yu and Hatzivassiloglou, 2003)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Learning Results", "sec_num": "5.5" }, { "text": "Opinion mining and sentiment analysis have become a major research topic for Computational Linguistics. A high demand for knowledge sources prompted the development of semantic resources such as SentiWordNet (Denecke, 2008) , WordNet-Affect (Strapparava and Mihalcea, 2008) and MicroWNOp (Balahur et al, 2010) , as well as lists of affective words or collocations created ad hoc (Whitelaw et al, 2005; Yu and Hatzivassiloglou, 2003) and even of non-affective words (Sokolova and Lapalme, 2011) . Sometimes positive and negative text ratings were available and used in machine-learning experiments (Pang et al, 2002) . At the same time, there are no available sources for sentiment and opinion analysis of user-written health discussions. We work to build such a source.", "cite_spans": [ { "start": 196, "end": 211, "text": "(Denecke, 2008)", "ref_id": "BIBREF10" }, { "start": 229, "end": 262, "text": "(Strapparava and Mihalcea, 2008)", "ref_id": null }, { "start": 275, "end": 296, "text": "(Balahur et al, 2010)", "ref_id": null }, { "start": 366, "end": 388, "text": "(Whitelaw et al, 2005;", "ref_id": "BIBREF22" }, { "start": 389, "end": 419, "text": "Yu and Hatzivassiloglou, 2003)", "ref_id": null }, { "start": 449, "end": 477, "text": "(Sokolova and Lapalme, 2011)", "ref_id": null }, { "start": 579, "end": 597, "text": "(Pang et al, 2002)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Text Mining and Corpora Annotation in the Domain", "sec_num": "6" }, { "text": "Sentiment and opinion analysis has intensively studied consumer-written product reviews (Blitzer et al, 2007) . Somewhat less attention was given to political discussion boards (Kim and Hovy, 2007) . In (Ferguson et al, 2009) , financial blogs were annotated at the document and paragraph levels with their sentiment towards a topic, using a five-point scale (Very Negative, Negative, Neutral, Positive, Very Positive) in addition to the labels mixed (indicating a mixture of positive and negative sentiment) and not relevant. 
It seemed intuitive that paragraph-level annotation should be useful in providing more accurate information that can be leveraged by a machine learning module. However, the results did not show any improvement. To the best of our knowledge, there was only one corpus of blogs with fine-grained annotation of subjectivity (Boldrini et al, 2009) . A multilingual corpus of blog posts on different topics of interest in three languages (Spanish, Italian and English) was annotated using a fine-grained annotation schema in order to capture different subjectivity/objectivity and emotion/opinion/attitude aspects.", "cite_spans": [ { "start": 84, "end": 105, "text": "(Blitzer et al, 2007)", "ref_id": "BIBREF5" }, { "start": 175, "end": 195, "text": "(Kim and Hovy, 2007)", "ref_id": "BIBREF13" }, { "start": 201, "end": 223, "text": "(Ferguson et al, 2009)", "ref_id": "BIBREF12" }, { "start": 857, "end": 879, "text": "(Boldrini et al, 2009)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Text Mining and Corpora Annotation in the Domain", "sec_num": "6" }, { "text": "Unlike the work listed above, we concentrate on discussions of health-related topics. There is little dedicated work on the polarity of health and medical text. In (Niu et al., 2005; Niu et al., 2006) , the authors analyzed textual expressions corresponding to positive, negative and neutral clinical outcomes. In our work, however, clinical outcomes are set apart from user sentiments and opinions.", "cite_spans": [ { "start": 158, "end": 176, "text": "(Niu et al., 2005;", "ref_id": "BIBREF16" }, { "start": 177, "end": 194, "text": "Niu et al., 2006)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Text Mining and Corpora Annotation in the Domain", "sec_num": "6" }, { "text": "So far, experiments in corpora annotation have attracted considerably less attention. In (Wiebe et al, 2005) , the authors annotated articles at the word- and phrase-level using a fine-grained annotation scheme. Another experiment on news annotation was carried out for the SemEval 2007 Affective Text Task (Strapparava and Mihalcea, 2008) . The subjectivity annotation of newspaper articles was discussed in (Balahur and Steinberger, 2009) and (Bhowmick et al, 2008) . In the former, the researchers extracted 1592 quotes (reported speech) from newspaper articles and annotated them for sentiment towards the targets of the quotes; the annotation guidelines allowed the inter-annotator agreement to increase from below 50% to 60%. In the latter, the authors collected 1000 affective sentences and categorized them into direct and indirect affect categories. Our work, instead, is focused on positive and negative sentiments and opinions in user-written Web messages.", "cite_spans": [ { "start": 84, "end": 103, "text": "(Wiebe et al, 2005)", "ref_id": "BIBREF24" }, { "start": 300, "end": 333, "text": "(Strapparava and Mihalcea, 2008)", "ref_id": null }, { "start": 403, "end": 434, "text": "(Balahur and Steinberger, 2009)", "ref_id": null }, { "start": 439, "end": 461, "text": "(Bhowmick et al, 2008)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Text Mining and Corpora Annotation in the Domain", "sec_num": "6" }, { "text": "In this paper, we have presented a study of sentiments and opinions in user-written Web messages. We focused on messages posted on health discussion boards. In those messages, users discussed health and ailments, treatments and drugs, and asked questions about possible cures. 
In the absence of precedents for subjectivity analysis in health discussions, we designed an author-centric annotation model. The model shows how positive and negative sentiments, and positive, negative and neutral opinions, can be identified in health discussions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7" }, { "text": "We applied the annotation model to the sci.med messages of 20 NewsGroups. We have evaluated the concordance of the manual annotation by computing three measures: p_pos, p_neg and kappa. The results show that annotators identify sentiments better than opinions, and agree more strongly on which types of sentences do not belong to the positive or negative subjective categories. Our Machine Learning results are comparable with previous results in the subjectivity domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7" }, { "text": "Our future plans are to continue the annotation; the final aim is to have all texts annotated by at least five persons. We also plan to study objective, factual statements expressed by users in their messages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7" }, { "text": "http://kdd.ics.uci.edu/databases/20newsgroups/20newsgroups.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The labelled sentences are posted on www.ehealthinformation.ca/ap0/opendata.asp", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The first author's work is in part funded by a Discovery grant of the Natural Sciences and Engineering Research Council of Canada. The second author thanks the conference organizers for the RANLP grant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Ontological Approach to Terminology Learning", "authors": [ { "first": "G", "middle": [], "last": "Angelova", "suffix": "" } ], "year": 2009, "venue": "Comptes rendus de l'Academie bulgare des Sciences", "volume": "62", "issue": "10", "pages": "1319--1326", "other_ids": {}, "num": null, "urls": [], "raw_text": "Angelova, G. Ontological Approach to Terminology Learning. Comptes rendus de l'Academie bulgare des Sciences, 62(10), pp. 1319-1326, 2009.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Appraisal of opinion expressions in discourse", "authors": [ { "first": "N", "middle": [], "last": "Asher", "suffix": "" }, { "first": "F", "middle": [], "last": "Benamara", "suffix": "" }, { "first": "Y", "middle": [ "Y" ], "last": "Mathieu", "suffix": "" } ], "year": 2009, "venue": "Lingvisticae Investigationes", "volume": "32", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Asher, N., F. Benamara, Y. Y. Mathieu. 
Appraisal of opinion expressions in discourse. Lingvisticae Investigationes, 32(2), 2009.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Rethinking Sentiment Analysis in the News: from Theory to Practice and back", "authors": [ { "first": "A", "middle": [], "last": "Balahur", "suffix": "" }, { "first": "R", "middle": [], "last": "Steinberger", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 1st Workshop on Opinion Mining and Sentiment Analysis", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Balahur, A., R. Steinberger. Rethinking Sentiment Analysis in the News: from Theory to Practice and back. Proceedings of the 1st Workshop on Opinion Mining and Sentiment Analysis, 2009.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "An Agreement Measure for Determining Inter-Annotator Reliability of Human Judgements on Affective Text", "authors": [ { "first": "P", "middle": [], "last": "Bhowmick", "suffix": "" }, { "first": "P", "middle": [], "last": "Mitra", "suffix": "" }, { "first": "A", "middle": [], "last": "Basu", "suffix": "" } ], "year": 2008, "venue": "Proceedings of Workshop on Human Judgements in Computational Linguistics, COLING", "volume": "", "issue": "", "pages": "58--65", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bhowmick, P., P. Mitra, A. Basu. An Agreement Measure for Determining Inter-Annotator Reliability of Human Judgements on Affective Text. Proceedings of Workshop on Human Judgements in Computational Linguistics, COLING, pp. 58-65, 2008.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification", "authors": [ { "first": "J", "middle": [], "last": "Blitzer", "suffix": "" }, { "first": "M", "middle": [], "last": "Dredze", "suffix": "" }, { "first": "F", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2007, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "440--447", "other_ids": {}, "num": null, "urls": [], "raw_text": "Blitzer, J., M. Dredze, F. Pereira. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. Proceedings of ACL, pp. 440-447, 2007.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "EmotiBlog: a finer-grained and more precise learning of subjectivity expression models", "authors": [ { "first": "E", "middle": [], "last": "Boldrini", "suffix": "" }, { "first": "A", "middle": [], "last": "Balahur", "suffix": "" }, { "first": "P", "middle": [], "last": "Martínez-Barco", "suffix": "" }, { "first": "A", "middle": [], "last": "Montoyo", "suffix": "" } ], "year": 2009, "venue": "Proceedings of LAW IV, ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Boldrini, E., A. Balahur, P. Martínez-Barco, A. Montoyo. EmotiBlog: a finer-grained and more precise learning of subjectivity expression models. In Proceedings of LAW IV, ACL, 2009.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Dimensions of Subjectivity in Natural Language (Short Paper)", "authors": [ { "first": "W", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, W. Dimensions of Subjectivity in Natural Language (Short Paper). 
In Proceedings of ACL-HLT, 2008.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "High Agreement but Low Kappa: The Problems of Two Paradoxes", "authors": [ { "first": "D", "middle": [], "last": "Cicchetti", "suffix": "" }, { "first": "A", "middle": [], "last": "Feinstein", "suffix": "" } ], "year": 1990, "venue": "Journal of Clinical Epidemiology", "volume": "43", "issue": "6", "pages": "551--558", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cicchetti, D., A. Feinstein. High Agreement but Low Kappa: The Problems of Two Paradoxes. Journal of Clinical Epidemiology, 43(6), pp. 543-549 and pp. 551-558, 1990.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Test suite design for biomedical ontology concept recognition systems", "authors": [ { "first": "K", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "C", "middle": [], "last": "Roeder", "suffix": "" }, { "first": "W", "middle": [], "last": "Baumgartner", "suffix": "" }, { "first": "L", "middle": [], "last": "Hunter", "suffix": "" }, { "first": "K", "middle": [], "last": "Verspoor", "suffix": "" } ], "year": 2010, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "441--446", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cohen, K., C. Roeder, W. Baumgartner Jr., L. Hunter, and K. Verspoor. Test suite design for biomedical ontology concept recognition systems. Proceedings of LREC, pp. 441-446, 2010.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Using SentiWordNet for multilingual sentiment analysis", "authors": [ { "first": "K", "middle": [], "last": "Denecke", "suffix": "" } ], "year": 2008, "venue": "Data Engineering Workshop, IEEE 24th International Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Denecke, K. Using SentiWordNet for multilingual sentiment analysis. Data Engineering Workshop, IEEE 24th International Conference, 2008.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Exploring the use of Paragraph-level Annotations for Sentiment Analysis of Financial Blogs", "authors": [ { "first": "P", "middle": [], "last": "Ferguson", "suffix": "" }, { "first": "N", "middle": [], "last": "O'hare", "suffix": "" }, { "first": "M", "middle": [], "last": "Davy", "suffix": "" }, { "first": "A", "middle": [], "last": "Bermingham", "suffix": "" }, { "first": "S", "middle": [], "last": "Tattersall", "suffix": "" }, { "first": "P", "middle": [], "last": "Sheridan", "suffix": "" }, { "first": "C", "middle": [], "last": "Gurrin", "suffix": "" }, { "first": "A", "middle": [], "last": "Smeaton", "suffix": "" } ], "year": 2009, "venue": "WOMAS 2009 -Workshop on Opinion Mining and Sentiment Analysis", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ferguson, P., N. O'Hare, M. Davy, A. Bermingham, S. Tattersall, P. Sheridan, C. Gurrin, A. Smeaton. Exploring the use of Paragraph-level Annotations for Sentiment Analysis of Financial Blogs. WOMAS 2009 - Workshop on Opinion Mining and Sentiment Analysis, 2009.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Crystal: Analyzing predictive opinions on the web", "authors": [ { "first": "S.-M", "middle": [], "last": "Kim", "suffix": "" }, { "first": "E", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 2007 EMNLP-CoNLL", "volume": "", "issue": "", "pages": "1056--1064", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kim, S.-M., E. Hovy. 
Crystal: Analyzing predictive opinions on the web. Proceedings of the 2007 EMNLP-CoNLL, pp. 1056-1064, 2007.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Biomedical Informatics Techniques for Processing and Analyzing Web Blogs of Military Service Members", "authors": [ { "first": "S", "middle": [], "last": "Konovalov", "suffix": "" }, { "first": "M", "middle": [], "last": "Scotch", "suffix": "" }, { "first": "L", "middle": [], "last": "Post", "suffix": "" }, { "first": "C", "middle": [], "last": "Brandt", "suffix": "" } ], "year": 2010, "venue": "Journal of Medical Internet Research", "volume": "12", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Konovalov, S., M. Scotch, L. Post, C. Brandt. Biomedical Informatics Techniques for Processing and Analyzing Web Blogs of Military Service Members. Journal of Medical Internet Research, 12(4), 2010.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Context Dependence, disagreement, and predicates of personal taste", "authors": [ { "first": "P", "middle": [], "last": "Lasersohn", "suffix": "" } ], "year": 2005, "venue": "Linguistics and Philosophy", "volume": "28", "issue": "", "pages": "643--686", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lasersohn, P. Context Dependence, disagreement, and predicates of personal taste. Linguistics and Philosophy, 28, pp. 643-686, 2005.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Automatic Classification of Semantic Relations between Facts and Opinions", "authors": [ { "first": "K", "middle": [], "last": "Murakami", "suffix": "" }, { "first": "E", "middle": [], "last": "Nichols", "suffix": "" }, { "first": "J", "middle": [], "last": "Mizuno", "suffix": "" }, { "first": "Y", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "H", "middle": [], "last": "Goto", "suffix": "" }, { "first": "M", "middle": [], "last": "Ohki", "suffix": "" }, { "first": "S", "middle": [], "last": "Matsuyoshi", "suffix": "" }, { "first": "K", "middle": [], "last": "Inui", "suffix": "" }, { "first": "Y", "middle": [], "last": "Matsumoto", "suffix": "" }, { "first": "Y", "middle": [], "last": "Niu", "suffix": "" }, { "first": "X", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "J", "middle": [], "last": "Li", "suffix": "" }, { "first": "G", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 2005, "venue": "Proceedings of NLP Challenges in the Information Explosion Era, COLING", "volume": "", "issue": "", "pages": "500--574", "other_ids": {}, "num": null, "urls": [], "raw_text": "Murakami, K., E. Nichols, J. Mizuno, Y. Watanabe, H. Goto, M. Ohki, S. Matsuyoshi, K. Inui, Y. Matsumoto. Automatic Classification of Semantic Relations between Facts and Opinions. Proceedings of NLP Challenges in the Information Explosion Era, COLING, pp. 21-31, 2010. Niu, Y., X. Zhu, J. Li, G. Hirst. Analysis of polarity information in medical text. In Proceedings of the AMIA Annual Symposium, pp. 500-574, 2005.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Using outcome polarity in sentence extraction for medical question-answering", "authors": [ { "first": "Y", "middle": [], "last": "Niu", "suffix": "" }, { "first": "X", "middle": [ "D" ], "last": "Zhu", "suffix": "" }, { "first": "G", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the AMIA Annual Symposium", "volume": "", "issue": "", "pages": "599--603", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niu, Y., X. D. Zhu, G. Hirst. 
Using outcome polarity in sentence extraction for medical question-answering. In Proceedings of the AMIA Annual Symposium, pp. 599-603, 2006.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Thumbs up? Sentiment Classification using Machine Learning Techniques", "authors": [ { "first": "B", "middle": [], "last": "Pang", "suffix": "" }, { "first": "L", "middle": [], "last": "Lee", "suffix": "" }, { "first": "S", "middle": [], "last": "Vaithyanathan", "suffix": "" }, { "first": "R", "middle": [], "last": "Quirk", "suffix": "" }, { "first": "S", "middle": [], "last": "Greenbaum", "suffix": "" }, { "first": "G", "middle": [], "last": "Leech", "suffix": "" }, { "first": "J", "middle": [], "last": "Svartvik", "suffix": "" } ], "year": 1985, "venue": "", "volume": "", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pang, B., L. Lee, S. Vaithyanathan. Thumbs up? Sentiment Classification using Machine Learning Techniques. Proceedings of EMNLP-02, pp. 79-86, 2002. Quirk, R., S. Greenbaum, G. Leech, J. Svartvik. A Comprehensive Grammar of the English Language. Longman, 1985.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Comparing corpora using frequency profiling", "authors": [ { "first": "P", "middle": [], "last": "Rayson", "suffix": "" }, { "first": "R", "middle": [], "last": "Garside", "suffix": "" } ], "year": 2000, "venue": "Proceedings of Comparing Corpora Workshop, ACL", "volume": "", "issue": "", "pages": "1--6", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rayson, P., R. Garside. Comparing corpora using frequency profiling. Proceedings of Comparing Corpora Workshop, ACL, pp. 1-6, 2000.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Learning opinions in user-generated Web content", "authors": [ { "first": "M", "middle": [], "last": "Sokolova", "suffix": "" }, { "first": "G", "middle": [], "last": "Lapalme", "suffix": "" } ], "year": null, "venue": "Journal of Natural Language Engineering", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sokolova, M., G. Lapalme. Learning opinions in user-generated Web content. Journal of Natural Language Engineering, to appear.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Learning to Identify Emotions in Text", "authors": [ { "first": "C", "middle": [], "last": "Strapparava", "suffix": "" }, { "first": "R", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 ACM Symposium on Applied Computing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Strapparava, C., R. Mihalcea. Learning to Identify Emotions in Text. Proceedings of the 2008 ACM Symposium on Applied Computing, 2008.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Using Appraisal Groups for Sentiment Analysis", "authors": [ { "first": "C", "middle": [], "last": "Whitelaw", "suffix": "" }, { "first": "N", "middle": [], "last": "Garg", "suffix": "" }, { "first": "S", "middle": [], "last": "Argamon", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 14th ACM International Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "625--631", "other_ids": {}, "num": null, "urls": [], "raw_text": "Whitelaw, C., N. Garg, S. Argamon. Using Appraisal Groups for Sentiment Analysis. Proceedings of the 14th ACM International Conference on Information and Knowledge Management, pp. 
625-631, 2005.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Tracking point of view in narrative", "authors": [ { "first": "J", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 1994, "venue": "Computational Linguistics", "volume": "20", "issue": "", "pages": "233--287", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wiebe, J. Tracking point of view in narrative. Computational Linguistics, 20, pp. 233-287, 1994.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Annotating Expressions of Opinions and Emotions in Language", "authors": [ { "first": "J", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "T", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "C", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2005, "venue": "Language Resources and Evaluation", "volume": "39", "issue": "", "pages": "165--210", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wiebe, J., T. Wilson, C. Cardie. Annotating Expressions of Opinions and Emotions in Language. Language Resources and Evaluation, 39(2-3), pp. 165-210, 2005.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Towards Answering Opinion Questions: Separating Facts from Opinions and Identifying the Polarity of Opinion Sentences", "authors": [ { "first": "H", "middle": [], "last": "Yu", "suffix": "" }, { "first": "V", "middle": [], "last": "Hatzivassiloglou", "suffix": "" } ], "year": 2003, "venue": "Proceedings of EMNLP-03", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu, H., V. Hatzivassiloglou. Towards Answering Opinion Questions: Separating Facts from Opinions and Identifying the Polarity of Opinion Sentences. In Proceedings of EMNLP-03, 2003.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "http://www.gocoldflu.info/archives/, accessed April 25, 2011. Posted by Kristi: I really dont know why everyones freaking out about the H1N1 vaccine. I got it the first day it came out (about a week and a half ago) and so did 4 of my family members. None of us had any problems and were all really glad we got the vaccine.", "uris": null, "num": null }, "FIGREF1": { "type_str": "figure", "text": "A user post about H1N1 vaccination.", "uris": null, "num": null }, "FIGREF2": { "type_str": "figure", "text": "Example of a message; the closing signature reads: \"Skepticism is the chastity of the intellect, and it is shameful to surrender it too soon.\" (geb@cadre.dsl.pitt.edu)", "uris": null, "num": null }, "TABREF2": { "html": null, "content": "
                1st observer
2nd observer    YES    NO     Totals
YES             a      b      g_1
NO              c      d      g_2
Totals          f_1    f_2    N
", "num": null, "text": "Concordance matrix.", "type_str": "table" }, "TABREF5": { "html": null, "content": "
Classification results for positive and negative sentence classification. The values are averaged for the positive and negative classes. Best values are in bold. The baseline is calculated by assigning all sentences to the majority class.
", "num": null, "text": "", "type_str": "table" } } } }