{
"paper_id": "R11-1033",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:04:37.020471Z"
},
"title": "Investigating Advanced Techniques for Document Content Similarity Applied to External Plagiarism Analysis",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Micol",
"suffix": "",
"affiliation": {},
"email": "dmicol@dlsi.ua.es\u00f3"
},
{
"first": "Rafael",
"middle": [],
"last": "Mu\u00f1oz",
"suffix": "",
"affiliation": {},
"email": "rafael@dlsi.ua.es\u00f3"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present an approach to perform external plagiarism analysis by applying several similarity detection techniques, such as lexical measures and a textual entailment recognition system developed by our research group. Some of the least expensive features of this system are applied to all corpus documents to detect those that are likely to be plagiarized. After this is done, the whole system is applied over this subset of documents to extract the exact n-grams that have been plagiarized, given that we now have less data to process and therefore can use a more complex and costly function. Apart from the application of strictly lexical measures, we also experiment with a textual entailment recognition system to detect plagiarisms with a high level of obfuscation. In addition, we experiment with the application of a spell corrector and a machine translation system to handle misspellings and plagiarisms translated into different languages, respectively.",
"pdf_parse": {
"paper_id": "R11-1033",
"_pdf_hash": "",
"abstract": [
{
"text": "We present an approach to perform external plagiarism analysis by applying several similarity detection techniques, such as lexical measures and a textual entailment recognition system developed by our research group. Some of the least expensive features of this system are applied to all corpus documents to detect those that are likely to be plagiarized. After this is done, the whole system is applied over this subset of documents to extract the exact n-grams that have been plagiarized, given that we now have less data to process and therefore can use a more complex and costly function. Apart from the application of strictly lexical measures, we also experiment with a textual entailment recognition system to detect plagiarisms with a high level of obfuscation. In addition, we experiment with the application of a spell corrector and a machine translation system to handle misspellings and plagiarisms translated into different languages, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "We believe there are two main user scenarios where external plagiarism detection tools are applied, sharing both of them the fact that they have a large source documents corpus. The difference, however, is that the first scenario is based on a large number of suspicious documents being processed at the same time, so the detection approach needs to be highly efficient and scalable. An example of this scenario would be the 1st and 2nd International Competitions on Plagiarism Detection (Potthast et al., 2009; Potthast et al., 2010) , where the corpora contain multiple source and suspicious documents. For this first use case we have developed a system to detect external document plagiarism that is highly efficient and scalable. It contains a first phase where a small subset of source documents are selected as possible candidates to be the origin of the plagiarism for a given suspicious document. Given that this phase processes the whole corpora, it uses a simple and lightweight function to select the subset of candidate source documents. After this is done, a more complex function is applied over this subset to extract which documents contain the plagiarism, and the exact position within these documents. This two-step approach is common among research systems, as described in (Potthast et al., 2009) .",
"cite_spans": [
{
"start": 488,
"end": 511,
"text": "(Potthast et al., 2009;",
"ref_id": "BIBREF17"
},
{
"start": 512,
"end": 534,
"text": "Potthast et al., 2010)",
"ref_id": "BIBREF18"
},
{
"start": 1293,
"end": 1316,
"text": "(Potthast et al., 2009)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The second use case assumes that we only have to process one suspicious document at a time. Therefore, we can apply more complex techniques that are less efficient but highly accurate, as there is less data to process. An example of this use case could be an online system to detect if a scientific manuscript that an author wants to submit to a journal or conference is a plagiarism of a previously published paper. For this second use case we have experimented with more complex and accurate techniques, such as the usage of textual entailment recognition methods developed by our research group. In addition, we have also applied a spell corrector and a machine translation system to handle documents with misspellings and written in different languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most of the research approaches on external plagiarism analysis contain a simple and efficient heuristic retrieval to reduce the number of source documents to compare against, and a more complex and costly detailed analysis that attempts to extract the exact position of the plagiarized fragment, if any (Potthast et al., 2009) . The system that we have developed is in line with this archi-tecture.",
"cite_spans": [
{
"start": 304,
"end": 327,
"text": "(Potthast et al., 2009)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "State of the art",
"sec_num": "2"
},
{
"text": "With regards to the heuristic retrieval, (Basile et al., 2008; Grozea et al., 2009) decided to apply a document similarity function that would be used as heuristic to determine if a given suspicious and source documents are similar enough to hold a plagiarism relation. (Kasprzak et al., 2009) create an inverted index of the corpus document's contents in order to be able to retrieve efficiently a set of documents that contain a set of n-grams. (Grozea et al., 2009; Stamatatos, 2009) implement a character-level n-gram comparison and apply a cosine similarity function based on term frequency weights. With this approach they extract the 51 most similar source documents to the suspicious one being analyzed. (Basile et al., 2009; Kasprzak et al., 2009) decided to implement a word-level ngram comparison. Low granularity word n-grams, with a size of 1, have been explored by (Muhr et al., 2009) , applying cosine similarity using frequency weights to extract the two most similar partitions for every sentence in a document, using the source document's sentences as centroid.",
"cite_spans": [
{
"start": 41,
"end": 62,
"text": "(Basile et al., 2008;",
"ref_id": "BIBREF1"
},
{
"start": 63,
"end": 83,
"text": "Grozea et al., 2009)",
"ref_id": "BIBREF7"
},
{
"start": 270,
"end": 293,
"text": "(Kasprzak et al., 2009)",
"ref_id": "BIBREF10"
},
{
"start": 447,
"end": 468,
"text": "(Grozea et al., 2009;",
"ref_id": "BIBREF7"
},
{
"start": 469,
"end": 486,
"text": "Stamatatos, 2009)",
"ref_id": "BIBREF20"
},
{
"start": 712,
"end": 733,
"text": "(Basile et al., 2009;",
"ref_id": "BIBREF2"
},
{
"start": 734,
"end": 756,
"text": "Kasprzak et al., 2009)",
"ref_id": "BIBREF10"
},
{
"start": 879,
"end": 898,
"text": "(Muhr et al., 2009)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "State of the art",
"sec_num": "2"
},
{
"text": "For the detailed analysis, (Basile et al., 2009 ) perform a greedy match merging if the distance of the matches is not too high. A more strict approach has been presented by (Muhr et al., 2009) , requiring exact sentence matches, and afterwards applying a match merging approach by greedily joining consecutive sentences. In this method, gaps are allowed if the respective sentences are similar to the corresponding sentences in the other document. (Grozea et al., 2009 ) perform a computation of the distances of adjacent matches, joining them based on a Monte Carlo optimization. Afterwards, they propose a refinement of the obtained section pairs. (Kasprzak et al., 2009) extract matches of word n-grams of length 5, and apply a Match Merging Heuristic to get larger matches. Then they extract the maximum size that shares at least 20 matches, including the first and the last n-gram of the matching sections, and for which 2 adjacent matches are at most 49 not-matching ngrams apart.",
"cite_spans": [
{
"start": 27,
"end": 47,
"text": "(Basile et al., 2009",
"ref_id": "BIBREF2"
},
{
"start": 174,
"end": 193,
"text": "(Muhr et al., 2009)",
"ref_id": "BIBREF15"
},
{
"start": 449,
"end": 469,
"text": "(Grozea et al., 2009",
"ref_id": "BIBREF7"
},
{
"start": 651,
"end": 674,
"text": "(Kasprzak et al., 2009)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "State of the art",
"sec_num": "2"
},
{
"text": "We will first present a baseline system that is efficient and scalable, and designed to work for the first use case mentioned above. For this purpose, we will use corpora of thousands of suspicious and source documents, where every suspi-cious can contain none, one or more plagiarisms of any source documents. After this, we present certain optimizations built on top of our baseline system that will make it more accurate, although slower, and therefore will be applicable in the second use case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "Our baseline system , developed for our participation in the 2nd International Competition on Plagiarism Detection (Potthast et al., 2010) , has two phases: document selection, using a heuristic retrieval, and passage matching, performing a more detailed analysis.",
"cite_spans": [
{
"start": 115,
"end": 138,
"text": "(Potthast et al., 2010)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline system",
"sec_num": "3.1"
},
{
"text": "The first step is to select a subset of candidate source documents that will later on be compared against a given suspicious document. This should reduce by a large factor the number of document comparisons to perform. To generate this set we will have to loop through all source documents, and given that this set is large, this operation needs to be relatively simple and inexpensive. Our approach to solve this problem is to weight the words in every document and then compare the weights of those terms that appear in both the suspicious and the source documents being compared. Their similarity score will be the sum of the mentioned common term weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline system",
"sec_num": "3.1"
},
{
"text": "Once we have a small subset of source documents to compare against for every suspicious one, we can perform a more accurate and costly comparison between pairs of documents. We try to find the largest common substring between suspicious and source documents, requiring a minimum length which will be the n-gram size. Once the n-grams of the source document being compared against have been extracted, we will iterate through the contents of the suspicious document, extract n-grams starting at every given offset, look them up in the list of n-grams of the aforementioned source document, and seek directly to the positions where the given n-gram appears, avoiding unnecessary comparisons. From these offsets we will try to find the largest common substring to both documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline system",
"sec_num": "3.1"
},
{
"text": "The baseline system that we have detailed before is suitable for low levels of plagiarism obfuscation, given that it is based on lexical comparisons. If the person who performs the appropriation uses equivalent terms instead of the original ones, or swaps the word order considerably, our system will not perform well and won't recognize these plagiarisms. To be able to detect these sorts of appropriations, we add semantic and syntactic techniques, as well as more advanced lexical measures. Concretely, we decided to apply DLSITE (Ferr\u00e1ndez et al., 2007a) , a textual entailment recognition system developed by our research group that analyzes pairs of sentences, being one the text and the other the hypothesis, trying to determine if the hypothesis' meaning can be inferred from the text's. Therefore, with the use of this system, we could detect plagiarisms that are written in different manners, but still share their meaning. DLSITE contains the following modules:",
"cite_spans": [
{
"start": 533,
"end": 558,
"text": "(Ferr\u00e1ndez et al., 2007a)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DLSITE: a textual entailment recognition system",
"sec_num": "3.2"
},
{
"text": "Lexical analysis The lexical module of DLSITE (Ferr\u00e1ndez et al., 2007b) computes the extraction of several lexical feature values for a given texthypothesis pair. These measures are mainly based on word co-occurrences in both the hypothesis and the text, as well as the context where they appear.",
"cite_spans": [
{
"start": 46,
"end": 71,
"text": "(Ferr\u00e1ndez et al., 2007b)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DLSITE: a textual entailment recognition system",
"sec_num": "3.2"
},
{
"text": "The syntactic module of DL-SITE (Micol et al., 2007) compares the meaning of the text and the hypothesis by generating their corresponding syntactic dependency trees, and then analyzing the similarities of these two structures. It is composed of a pipeline of four submodules, which are syntactic dependency tree construction, filtering, embedded subtree search and graph node matching.",
"cite_spans": [
{
"start": 32,
"end": 52,
"text": "(Micol et al., 2007)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic analysis",
"sec_num": null
},
{
"text": "The semantic module of DL-SITE analyzes a text-hypothesis pair from a meaning's perspective, using resources such as Word-Net, VerbOcean and FrameNet. Similar research projects have already developed procedures using standard WordNet-based similarities (Corley and Mihalcea, 2005; Hickl and Bensley, 2007) . However, in our case we also consider string-based similarities for the final similarity score. This allows us to positively consider entities that, while not appearing in WordNet, are very relevant, instead of penalizing their similarity score. We exploit WordNet relations in order to find semantic paths that connect two concepts through the Word-Net taxonomy.",
"cite_spans": [
{
"start": 253,
"end": 280,
"text": "(Corley and Mihalcea, 2005;",
"ref_id": "BIBREF3"
},
{
"start": 281,
"end": 305,
"text": "Hickl and Bensley, 2007)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic analysis",
"sec_num": null
},
{
"text": "Since verbs have a strong contribution to the sentence's final meaning, we want to measure how the hypothesis' verbs are related to the text's. To achieve this, we exploit the VerbNet lexicon (Kipper et al., 2006) , and the VerbOcean and Word-Net relationships, trying to find correlations between the main verbs expressed in the hypothesis with those in the text. The underlying intuition about the VerbNet correspondence is that the verbs wrapped in the same VerbNet class or in one of their subclasses have a strong semantic relation since they share the same thematic roles and restrictions, as well as syntactic and semantic frames. Additionally, VerbOcean's relations are good indicators of semantic correspondence between verbs.",
"cite_spans": [
{
"start": 192,
"end": 213,
"text": "(Kipper et al., 2006)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic analysis",
"sec_num": null
},
{
"text": "Another relevant issue to recognize entailment relations is to analyze the presence and absence of named entities. (Rodrigo et al., 2008) successfully built their system mainly using the knowledge supplied by the recognition of named entities. Other works, such as (Iftene and Moruz, 2009) and our participation in the Text Analysis Conference 2008 (Balahur et al., 2008) , have also proven that knowledge about named entities positively helps in modeling entailments. In our case, rather than constructing the system based on named entity inferences, we study the addition of this knowledge in our textual entailment recognition system.",
"cite_spans": [
{
"start": 115,
"end": 137,
"text": "(Rodrigo et al., 2008)",
"ref_id": "BIBREF19"
},
{
"start": 265,
"end": 289,
"text": "(Iftene and Moruz, 2009)",
"ref_id": "BIBREF9"
},
{
"start": 349,
"end": 371,
"text": "(Balahur et al., 2008)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic analysis",
"sec_num": null
},
{
"text": "Therefore, similarly as we did for verbs, we explored ways to find out entity counterparts between the text and the hypothesis. The first step is to recognize named entities, and for this purpose we use our in-house named entity recognizer, called NERUA (Kozareva et al., 2007) . Afterwards, we use two surface techniques to discover NE relations: partial entity matching and acronym correspondences between the NEs detected in the hypothesis and the ones in the text.",
"cite_spans": [
{
"start": 254,
"end": 277,
"text": "(Kozareva et al., 2007)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic analysis",
"sec_num": null
},
{
"text": "We have identified some scenarios where it would be beneficial to perform additional corpus preprocessing. These are described as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus pre-processing",
"sec_num": "3.3"
},
{
"text": "Handling misspellings Given that our method is heavily based on term frequencies, a misspelling in the processed documents could introduce a high level of noise, since they will have a lower document frequency, and therefore a higher idf . Also, if a misspelling appears in a suspicious and a source document, these will be heavily linked by this term, and their similarity score may not be fair when comparing it with other documents. There-fore, it would be beneficial to apply a spell corrector over the documents in our corpora, such as the one described in (Gao et al., 2010) . To minimize the impact of false positives from the speller system, we would perform a two-pass algorithm. In the first pass we would not apply the spell corrector, and would try to retrieve all the plagiarisms that our system recognizes. In the second pass we would apply the spell corrector and attempt to extract additional appropriations. By doing this we ensure that we don't loose plagiarisms if the spell corrector system introduces some noise into the data. Document translation When plagiarizing a document, an author can choose to translate it into a different language. This is the case, for instance, for some of the plagiarized documents of the PAN corpora, which have been translated into Spanish or German (Potthast et al., 2009) . These appropriations won't be detected by our system unless we translate them into English, as this is the language in which the source documents are written. As a pre-processing step, we propose to apply a language detector over the set of suspicious documents, and if this tool detects that they are not in English, we execute an automatic translator to transform the corresponding document into English. The detection step is performed using the API of a machine translation application. Given that this is a remote live production system and some of the documents in our corpus can be large, sending the whole text doesn't seem to be the best approach. 
For the user case where we have a large amount of suspicious documents to process, we send a fragment composed of the first few hundreds of words from a document in order to get a fast and scalable response. This is not completely accurate, as some times documents contain fragments written in different languages. If we only process one suspicious document, we perform a more complex and accurate process. To do this we first split the document content into sentences based on punctuation symbols. Then, we submit three random sentences from the text to the translation application. If all of them return the same language detected, this will be the one of the document. If this is not the case we take another set of three sentences. Similar to what we previously mentioned, we perform a two-pass algorithm in order to reduce the impact of false positives introduced by the translation software.",
"cite_spans": [
{
"start": 562,
"end": 580,
"text": "(Gao et al., 2010)",
"ref_id": "BIBREF6"
},
{
"start": 1303,
"end": 1326,
"text": "(Potthast et al., 2009)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus pre-processing",
"sec_num": "3.3"
},
{
"text": "As mentioned before, the corpora that we have used to measure and evaluate our system have been provided by the 1st International Competition on Plagiarism Detection. These are composed thousands of source and suspicious documents, some of the latter containing automatically generated plagiarisms with different levels of obfuscation. In addition, some source documents are written in Spanish or German, but the corresponding plagiarized document has been translated into English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimentation and results",
"sec_num": "4"
},
{
"text": "To experiment with our system we used the external plagiarism corpora from the 1st International Competition on Plagiarism Detection. The first aspect we experimented with was trying to determine the optimal number of documents to be selected, given that a larger amount would lead to higher accuracy, but would affect performance negatively. The opposite applies to smaller selected document sets. Table 1 shows the results from this experiment using different set sizes, where column Captured represents the number of plagiarisms that are contained within the set of source documents, and Missed those that are not included in this set. Given the values from Table 1 , we decided to use a number of documents of 10, since we believe it is the best trade-off between amount of texts and recall. After this step, we executed the passage detection, which produced an overall score of 0.3902. As we can see in these results, the strongest aspect of our baseline system is its precision, where it ranks the third among all participants. On the other hand, recall and granularity were not as good, but still within the top half. The reason why recall is lower is in part due to the fact that we chose 10 source documents per suspicious text to evaluate, giving a maximum coverage value of 77.81%. Apart from this, and since our method is purely lexical, we miss plagiarisms that are not written in similar ways. Finally, documents that are translated will also lower our recall. On the other hand, granularity would have been lower if we had been more aggressive at merging matches, although then precision might have suffered.",
"cite_spans": [],
"ref_spans": [
{
"start": 399,
"end": 406,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 661,
"end": 668,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Baseline system",
"sec_num": "4.1"
},
{
"text": "Due to the expensive computational cost of executing a textual entailment recognition system, we used the corpora provided for the Recognizing Textual Entailment challenges. To simulate that the text-hypothesis pairs in these corpora are documents, we combine the texts into a single document and the hypothesis into another one, and then perform a plagiarism detection using both documents. Table 2 shows the results using our baseline system and the textual entailment recognition method previously described. As we can see in this table, our baseline system doesn't recognize the cases where there is an entailment, given that the pairs are written in a very different way. Applying our textual entailment recognition method provides significant gains. ",
"cite_spans": [],
"ref_spans": [
{
"start": 392,
"end": 399,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Applying a textual entailment recognition system",
"sec_num": "4.2"
},
{
"text": "Given the nature of the corpora provided for the 1st International Competition on Plagiarism Detection, we cannot apply them to test a speller sys-tem given that the plagiarisms are automatically generated and therefore they do not contain misspellings (Potthast et al., 2009) . Instead, we evaluate the addition of this module based on the results that spellers achieve in real-world applications. Typically, web spellers have an accuracy of around 90% assuming an 85% of correctly spelled queries and 15% of misspellings, as described in (Gao et al., 2010) . This means that there is clearly a gain of applying these systems as, even though they introduce some noise, in general terms they produce significant benefits. In addition, they are deterministic systems, and given that we apply them to both the source and suspicious document, an incorrect behavior for a given word in a source document would also be applied to the same word in the suspicious, and vice versa. In our system we want to match terms that appear in the same manner, and therefore a false positive or negative produced by the speller system won't hurt the accuracy of our plagiarism detection software.",
"cite_spans": [
{
"start": 253,
"end": 276,
"text": "(Potthast et al., 2009)",
"ref_id": "BIBREF17"
},
{
"start": 540,
"end": 558,
"text": "(Gao et al., 2010)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Handling misspellings",
"sec_num": "4.3"
},
{
"text": "Assuming a highly misspelled document, the application of a speller could produce a net gain of about 5%, which is a very important increase. In addition, speller systems typically return a normalize score value depending on the confidence of a given candidate. Based on this they either produce a suggestion, when there is lower confidence, or an auto-correction, when there is higher. We could tune our system to use a more or less aggressive speller depending on the user's needs as well as the nature of the input corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Handling misspellings",
"sec_num": "4.3"
},
{
"text": "The corpora provided for the 1st International Competition on Plagiarism Detection contains source documents in languages other than English, although the suspicious ones have been translated. Concretely, there are 13, 559 source documents in English, and 870 in other languages. Given that the suspicious texts will be in English, our system won't find the plagiarisms associated to those 870 due to language mismatches. To overcome this issue we applied the translator previously described, using different configurations. The parameter we changed was the number of words from the document to submit to the translator, using the first 200, 500 and 1, 000 words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document translation",
"sec_num": "4.4"
},
{
"text": "The following table shows the results from applying the language detector over the source documents corpus. Table 3 : Results from applying the language detector over the source documents corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 108,
"end": 115,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Document translation",
"sec_num": "4.4"
},
{
"text": "We define positives as the documents that have been translated, and negatives those that have been not. In this table we can see that there is a 5.77% increase in accuracy if we apply a language detector using the first 1, 000 words of a document. However, given that we use a two-pass algorithm, the number of FPs would be 0, which means that the final accuracy after applying a language detection software would be 0.9984, which is a 5.87% higher than the baseline. This means that, assuming a perfect translator and plagiarism detector, our system's score could increase in almost six points, which is a big improvement. The final gain will depend on the user's document translation software choice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "In this paper we have presented a baseline system for external plagiarism analysis mainly based on lexical similarities, and a set of more advanced techniques that could be beneficial to external plagiarism analysis. While the baseline system is very efficient and produces reasonable results, the application of the aforementioned advanced techniques can have a very significant impact, depending on the corpus' nature. However, these latter methods decrease our overall system's performance considerably, so they are not applicable to large corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "5"
},
{
"text": "We have also explained two scenarios where we believe that plagiarism detection tools are applied. In the first of them, where we would have a large suspicious documents corpus, the application of advanced techniques would not be feasible given their low efficiency. Therefore, in this case we would have to use our baseline system which is mainly based on lexical measures. On the other hand, in the second user scenario, where we only have one suspicious document to analyze, the application of the aforementioned advanced techniques is suitable given the smaller amount of data to process. In this case we will be able to achieve higher accuracy rates and support a larger number of obfuscation cases. Therefore, there is a tradeoff between accuracy and response time, which will be in large determined by the size of the corpus to process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "5"
},
{
"text": "As future work we would like to apply a word alignment algorithm to detect plagiarisms, such as the one described in (Och, 2002) . This would be a more flexible and accurate approach, rather than forcing the words to appear in the same position in both documents being analyzed, although its computational cost would also be considerably higher. This should allow our system to recognize higher levels of obfuscation than our current approach. In addition, it would be very beneficial for multilingual plagiarism analysis. This kind of task presents the challenge that words might not appear in the same order, not even after a machine translation tool has been applied. Hence, applying the aforementioned word alignment algorithm would allow us to handle better multilingual plagiarism.",
"cite_spans": [
{
"start": 117,
"end": 128,
"text": "(Och, 2002)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "This research has been partially funded by the Spanish Ministry of Science and Innovation (grant TIN2009-13391-C04-01) and the Conselleria d'Educaci\u00f3 of the Spanish Generalitat Valenciana (grants PROMETEO/2009/119 and ACOMP/2010/286). Furthermore, we would like to thank Dario Bigongiari and Michael Schueppert for their help and support.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The DLSIUAES Team's Participation in the TAC 2008 Tracks",
"authors": [
{
"first": "Alexandra",
"middle": [],
"last": "Balahur",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Lloret",
"suffix": ""
},
{
"first": "\u00d3scar",
"middle": [],
"last": "Ferr\u00e1ndez",
"suffix": ""
},
{
"first": "Andr\u00e9s",
"middle": [],
"last": "Montoyo",
"suffix": ""
},
{
"first": "Manuel",
"middle": [],
"last": "Palomar",
"suffix": ""
},
{
"first": "Rafael",
"middle": [],
"last": "Mu\u00f1oz",
"suffix": ""
}
],
"year": 2008,
"venue": "Notebook Papers of the Text Analysis Conference, TAC 2008 Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandra Balahur, Elena Lloret,\u00d3scar Ferr\u00e1ndez, Andr\u00e9s Montoyo, Manuel Palomar, and Rafael Mu\u00f1oz. 2008. The DLSIUAES Team's Participa- tion in the TAC 2008 Tracks. In Notebook Papers of the Text Analysis Conference, TAC 2008 Workshop, Gaithersburg, Maryland, USA, November.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An example of mathematical authorship attribution",
"authors": [
{
"first": "Chiara",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Benedetto",
"suffix": ""
},
{
"first": "Emanuele",
"middle": [],
"last": "Caglioti",
"suffix": ""
},
{
"first": "Mirko Degli",
"middle": [],
"last": "Esposti",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Mathematical Physics",
"volume": "49",
"issue": "",
"pages": "125211--125230",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chiara Basile, Dario Benedetto, Emanuele Caglioti, and Mirko Degli Esposti. 2008. An example of mathematical authorship attribution. Journal of Mathematical Physics, 49:125211-125230.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Plagiarism Detection Procedure in Three Steps: Selection, Matches and \"Squares",
"authors": [
{
"first": "Chiara",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Benedetto",
"suffix": ""
},
{
"first": "Emanuele",
"middle": [],
"last": "Caglioti",
"suffix": ""
},
{
"first": "Giampaolo",
"middle": [],
"last": "Cristadoro",
"suffix": ""
},
{
"first": "Mirko Degli",
"middle": [],
"last": "Esposti",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the SEPLN'09 Workshop on Uncovering Plagiarism, Authorship and Social Software Misuse",
"volume": "",
"issue": "",
"pages": "19--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chiara Basile, Dario Benedetto, Emanuele Caglioti, Giampaolo Cristadoro, and Mirko Degli Esposti. 2009. A Plagiarism Detection Procedure in Three Steps: Selection, Matches and \"Squares\". In Pro- ceedings of the SEPLN'09 Workshop on Uncovering Plagiarism, Authorship and Social Software Mis- use, pages 19-23, San Sebasti\u00e1n (Donostia), Spain, September.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Measuring the Semantic Similarity of Texts",
"authors": [
{
"first": "Courtney",
"middle": [],
"last": "Corley",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment",
"volume": "",
"issue": "",
"pages": "13--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Courtney Corley and Rada Mihalcea. 2005. Measur- ing the Semantic Similarity of Texts. In Proceed- ings of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment, pages 13-18, Ann Arbor, Michigan, USA, June.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Perspective-Based Approach for Solving Textual Entailment Recognition",
"authors": [
{
"first": "Oscar",
"middle": [],
"last": "Ferr\u00e1ndez",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Micol",
"suffix": ""
},
{
"first": "Rafael",
"middle": [],
"last": "Mu\u00f1oz",
"suffix": ""
},
{
"first": "Manuel",
"middle": [],
"last": "Palomar",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oscar Ferr\u00e1ndez, Daniel Micol, Rafael Mu\u00f1oz, and Manuel Palomar. 2007a. A Perspective-Based Ap- proach for Solving Textual Entailment Recognition. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 66-71, Prague, Czech Republic, June.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "DLSITE-1: Lexical Analysis for Solving Textual Entailment Recognition",
"authors": [
{
"first": "Oscar",
"middle": [],
"last": "Ferr\u00e1ndez",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Micol",
"suffix": ""
},
{
"first": "Rafael",
"middle": [],
"last": "Mu\u00f1oz",
"suffix": ""
},
{
"first": "Manuel",
"middle": [],
"last": "Palomar",
"suffix": ""
}
],
"year": 2007,
"venue": "Natural Language Processing and Information Systems",
"volume": "4592",
"issue": "",
"pages": "284--294",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oscar Ferr\u00e1ndez, Daniel Micol, Rafael Mu\u00f1oz, and Manuel Palomar. 2007b. DLSITE-1: Lexical Anal- ysis for Solving Textual Entailment Recognition. In Natural Language Processing and Information Sys- tems, volume 4592, pages 284-294.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Large Scale Ranker-Based System for Search Query Spelling Correction",
"authors": [
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Xiaolong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Micol",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirck",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "358--366",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianfeng Gao, Xiaolong Li, Daniel Micol, Chris Quirck, and Xu Sun. 2010. A Large Scale Ranker- Based System for Search Query Spelling Correction. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 358-366, Bei- jing, China, August.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "ENCOPLOT: Pairwise Sequence Matching in Linear Time Applied to Plagiarism Detection",
"authors": [
{
"first": "Cristian",
"middle": [],
"last": "Grozea",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Gehl",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Popescu",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the SEPLN'09 Workshop on Uncovering Plagiarism, Authorship and Social Software Misuse",
"volume": "",
"issue": "",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cristian Grozea, Christian Gehl, and Marius Popescu. 2009. ENCOPLOT: Pairwise Sequence Matching in Linear Time Applied to Plagiarism Detection. In Proceedings of the SEPLN'09 Workshop on Un- covering Plagiarism, Authorship and Social Soft- ware Misuse, pages 10-18, San Sebasti\u00e1n (Donos- tia), Spain, September.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Discourse Commitment-Based Framework for Recognizing Textual Entailment",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Hickl",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Bensley",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing",
"volume": "",
"issue": "",
"pages": "171--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Hickl and Jeremy Bensley. 2007. A Dis- course Commitment-Based Framework for Recog- nizing Textual Entailment. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 171-176, Prague, Czech Re- public, June.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "UAIC Participation at RTE5",
"authors": [
{
"first": "Adrian",
"middle": [],
"last": "Iftene",
"suffix": ""
},
{
"first": "Mihai-Alex",
"middle": [],
"last": "Moruz",
"suffix": ""
}
],
"year": 2009,
"venue": "Notebook Papers of the Text Analysis Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adrian Iftene and Mihai-Alex Moruz. 2009. UAIC Participation at RTE5. In Notebook Papers of the Text Analysis Conference, TAC 2009 Workshop, Gaithersburg, Maryland, USA, November.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Finding Plagiarism by Evaluating Document Similarities",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Kasprzak",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Brandejs",
"suffix": ""
},
{
"first": "Miroslav",
"middle": [],
"last": "K\u0159ipa\u010d",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the SEPLN'09 Workshop on Uncovering Plagiarism, Authorship and Social Software Misuse",
"volume": "",
"issue": "",
"pages": "24--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Kasprzak, Michal Brandejs, and Miroslav K\u0159ipa\u010d. 2009. Finding Plagiarism by Evaluating Document Similarities. In Proceedings of the SEPLN'09 Work- shop on Uncovering Plagiarism, Authorship and So- cial Software Misuse, pages 24-28, San Sebasti\u00e1n (Donostia), Spain, September.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Extending Verbnet with Novel Verb Classes",
"authors": [
{
"first": "Karin",
"middle": [],
"last": "Kipper",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "Neville",
"middle": [],
"last": "Ryant",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Fifth International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karin Kipper, Anna Korhonen, Neville Ryant, and Martha Palmer. 2006. Extending Verbnet with Novel Verb Classes. In Proceedings of the Fifth In- ternational Conference on Language Resources and Evaluation (LREC 2006), Genova, Italy, June.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Combining data-driven systems for improving Named Entity Recognition",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Kozareva",
"suffix": ""
},
{
"first": "\u00d3",
"middle": [],
"last": "Ferr\u00e1ndez",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Montoyo",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mu\u00f1oz",
"suffix": ""
}
],
"year": 2007,
"venue": "Data and Knowledge Engineering",
"volume": "61",
"issue": "3",
"pages": "449--466",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Z. Kozareva,\u00d3. Ferr\u00e1ndez, A. Montoyo, and R. Mu\u00f1oz. 2007. Combining data-driven systems for improving Named Entity Recognition. Data and Knowledge Engineering, 61(3):449-466.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "DLSITE-2: Semantic Similarity Based on Syntactic Dependency Trees Applied to Textual Entailment",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Micol",
"suffix": ""
},
{
"first": "\u00d3scar",
"middle": [],
"last": "Ferr\u00e1ndez",
"suffix": ""
},
{
"first": "Rafael",
"middle": [],
"last": "Mu\u00f1oz",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the TextGraphs-2 Workshop",
"volume": "",
"issue": "",
"pages": "73--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Micol,\u00d3scar Ferr\u00e1ndez, and Rafael Mu\u00f1oz. 2007. DLSITE-2: Semantic Similarity Based on Syntactic Dependency Trees Applied to Textual En- tailment. In Proceedings of the TextGraphs-2 Work- shop, pages 73-80, Rochester, New York, USA, April.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A Lexical Similarity Approach for Efficient and Scalable External Plagiarism Detection",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Micol",
"suffix": ""
},
{
"first": "\u00d3scar",
"middle": [],
"last": "Ferr\u00e1ndez",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Llopis",
"suffix": ""
},
{
"first": "Rafael",
"middle": [],
"last": "Mu\u00f1oz",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the SEPLN'10 Workshop on Uncovering Plagiarism, Authorship and Social Software Misuse",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Micol,\u00d3scar Ferr\u00e1ndez, Fernando Llopis, and Rafael Mu\u00f1oz. 2010. A Lexical Similarity Ap- proach for Efficient and Scalable External Plagia- rism Detection. In Proceedings of the SEPLN'10 Workshop on Uncovering Plagiarism, Authorship and Social Software Misuse, Padua, Italy, Septem- ber.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "External and Intrinsic Plagiarism Detection Using Vector Space Models",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Muhr",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Zechner",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Kern",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Granitzer",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the SEPLN'09 Workshop on Uncovering Plagiarism, Authorship and Social Software Misuse",
"volume": "",
"issue": "",
"pages": "47--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Muhr, Mario Zechner, Roman Kern, and Michael Granitzer. 2009. External and Intrinsic Plagiarism Detection Using Vector Space Models. In Proceedings of the SEPLN'09 Workshop on Un- covering Plagiarism, Authorship and Social Soft- ware Misuse, pages 47-55, San Sebasti\u00e1n (Donos- tia), Spain, September.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Statistical machine translation: from single-word models to alignment templates",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och. 2002. Statistical machine trans- lation: from single-word models to alignment tem- plates. Ph.D. thesis, RWTH Aachen.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Overview of the 1st international competition on plagiarism detection",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Eiselt",
"suffix": ""
},
{
"first": "Alberto",
"middle": [
"Barr\u00f3n"
],
"last": "Cede\u00f1o",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the SE-PLN'09 Workshop on Uncovering Plagiarism, Authorship and Social Software Misuse",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Potthast, Benno Stein, Andreas Eiselt, Al- berto Barr\u00f3n Cede\u00f1o, and Paolo Rosso. 2009. Overview of the 1st international competition on plagiarism detection. In Proceedings of the SE- PLN'09 Workshop on Uncovering Plagiarism, Au- thorship and Social Software Misuse, pages 1-9, San Sebasti\u00e1n (Donostia), Spain, September.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Overview of the 2nd international competition on plagiarism detection",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Eiselt",
"suffix": ""
},
{
"first": "Alberto",
"middle": [
"Barr\u00f3n"
],
"last": "Cede\u00f1o",
"suffix": ""
},
{
"first": "Paolo",
"middle": [
"Rosso"
],
"last": "",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the SE-PLN'10 Workshop on Uncovering Plagiarism",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Potthast, Benno Stein, Andreas Eiselt, Al- berto Barr\u00f3n Cede\u00f1o, and Paolo Rosso. 2010. Overview of the 2nd international competition on plagiarism detection. In Proceedings of the SE- PLN'10 Workshop on Uncovering Plagiarism, Au- thorship and Social Software Misuse, Padua, Italy, September.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Towards an Entity-based recognition of Textual Entailment",
"authors": [
{
"first": "Alvaro",
"middle": [],
"last": "Rodrigo",
"suffix": ""
},
{
"first": "Anselmo",
"middle": [],
"last": "Pe\u00f1as",
"suffix": ""
},
{
"first": "Felisa",
"middle": [],
"last": "Verdejo",
"suffix": ""
}
],
"year": 2008,
"venue": "Notebook Papers of the Text Analysis Conference, TAC 2008 Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alvaro Rodrigo, Anselmo Pe\u00f1as, and Felisa Verdejo. 2008. Towards an Entity-based recognition of Textual Entailment. In Notebook Papers of the Text Analysis Conference, TAC 2008 Workshop, Gaithersburg, Maryland, USA, November.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Intrinsic Plagiarism Detection Using Character n-gram Profiles",
"authors": [
{
"first": "Efstathios",
"middle": [],
"last": "Stamatatos",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the SEPLN'09 Workshop on Uncovering Plagiarism, Authorship and Social Software Misuse",
"volume": "",
"issue": "",
"pages": "36--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Efstathios Stamatatos. 2009. Intrinsic Plagiarism De- tection Using Character n-gram Profiles. In Pro- ceedings of the SEPLN'09 Workshop on Uncovering Plagiarism, Authorship and Social Software Mis- use, pages 36-37, San Sebasti\u00e1n (Donostia), Spain, September.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"type_str": "table",
"text": "Metrics using different selected document set sizes.",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF3": {
"type_str": "table",
"text": "Results of our baseline and textual entailment systems using the RTE test corpora.",
"html": null,
"num": null,
"content": "<table/>"
}
}
}
}