{ "paper_id": "R11-1008", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:04:04.918570Z" }, "title": "Actions Speak Louder than Words: Evaluating Parsers in the Context of Natural Language Understanding Systems for Human-Robot Interaction", "authors": [ { "first": "Sandra", "middle": [], "last": "K\u00fcbler", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indiana University", "location": {} }, "email": "skuebler@indiana.edu" }, { "first": "Rachael", "middle": [], "last": "Cantrell", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indiana University", "location": {} }, "email": "rcantrel@indiana.edu" }, { "first": "Matthias", "middle": [], "last": "Scheutz", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indiana University", "location": {} }, "email": "mscheutz@indiana.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The standard ParsEval metrics alone are often not sufficient for evaluating parsers integrated in natural language understanding systems. We propose to augment intrinsic parser evaluations by extrinsic measures in the context of human-robot interaction using a corpus from a human cooperative search task. We compare a constituent with a dependency parser on both intrinsic and extrinsic measures and show that the conversion to semantics is feasible for different syntactic paradigms.", "pdf_parse": { "paper_id": "R11-1008", "_pdf_hash": "", "abstract": [ { "text": "The standard ParsEval metrics alone are often not sufficient for evaluating parsers integrated in natural language understanding systems. We propose to augment intrinsic parser evaluations by extrinsic measures in the context of human-robot interaction using a corpus from a human cooperative search task. We compare a constituent with a dependency parser on both intrinsic and extrinsic measures and show that the conversion to semantics is feasible for different syntactic paradigms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Human-robot interactions (HRI) in natural language pose many challenges for natural language understanding (NLU) systems, for humans expect robots to (1) generate quick responses to their request, which requires all processing to be done in real-time, (2) to rapidly integrate perceptions (e.g., to resolve referents (Brick and Scheutz, 2007) ), and (3) to provide backchannel feedback indicating whether they understood an instruction, often before the end of an utterance. As a result, NLU systems on robots must operate incrementally to allow for the construction of meaning that can lead to robot action before an utterance is completed (e.g., a head-turn of the robot to check for an object referred to by the speaker). Hence, the question arises how one can best evaluate NLU components such as parsers for robotic NLU in the context of HRI.", "cite_spans": [ { "start": 317, "end": 342, "text": "(Brick and Scheutz, 2007)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we argue that intrinsic parser evaluations, which evaluate parsers in isolation, are insufficient for determining their performance in HRI contexts where the ultimate goal of the NLU system is to generate the correct actions for the robot in a timely manner. 
High performance of a parser with respect to intrinsic measures does not imply that the parser will also work well with the other NLU components. A correct but overly complex parse passed to the semantic analysis unit, for example, may not result in the correct meaning interpretation and will thus fail to generate correct actions. Similarly, fragmented input from the speech recognizer may not lead to any parsable sequence of words, again likely resulting in incorrect robot behavior. Hence, we need an extrinsic evaluation to determine the utility and performance of a parser in the context of other NLU components at the level of semantics and action execution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To this end, we introduce an evaluation architecture that can be used for extrinsic evaluations of NLU components and demonstrate its utility for parser evaluation using state-of-the-art parsers for each of the two main parsing paradigms: the Berkeley constituent parser (Petrov and Klein, 2007) and MaltParser (Nivre et al., 2007b) , a dependency parser. The evaluation compares intrinsic and extrinsic measures on the CReST corpus (Eberhard et al., 2010) , which is representative of a broad class of collaborative instruction-based tasks envisioned for future robots (e.g., in search and rescue missions). To our knowledge, no previous extrinsic parser evaluation used conversions to semantic/action representations, which can be performed for different parser types and are thus ideally suited for comparing parsing frameworks. Moreover, no previous work has presented a combined intrinsic-extrinsic evaluation where the extrinsic evaluation uses full-fledged semantic/action representations in an HRI context.", "cite_spans": [ { "start": 271, "end": 295, "text": "(Petrov and Klein, 2007)", "ref_id": "BIBREF17" }, { "start": 311, "end": 332, "text": "(Nivre et al., 2007b)", "ref_id": "BIBREF16" }, { "start": 433, "end": 456, "text": "(Eberhard et al., 2010)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Evaluating different types of parsers is challenging for many reasons. For one, intrinsic evaluation measures are often specific to the type of parser. The ParsEval measures (precision and recall) are the standard for constituent parsers, attachment scores for dependency parsing. Yet, none of these measures is ideal: the ParsEval measures have been widely criticized because they favor flat annotation schemes and harshly punish attachment errors (Carroll et al., 1998) . Additionally, there is no evaluation scheme that can compare the performance of constituent and dependency parsers, or parsers using different underlying grammars. Converting constituents into dependencies (Boyd and Meurers, 2008) evens out differences between underlying grammars. However, it is well known that the conversion into a different format is not straightforward. Clark and Curran (2007) , who convert the CCGBank to Dep-Bank, report an F-score of 68.7 for the conversion on gold data. 
Conversions into dependencies have been evaluated on the treebank side (Rehbein and van Genabith, 2007) , but not on the parser side; yet, the latter is critical since parser errors result in unpredicted structures and thus conversion errors.", "cite_spans": [ { "start": 449, "end": 471, "text": "(Carroll et al., 1998)", "ref_id": "BIBREF5" }, { "start": 680, "end": 704, "text": "(Boyd and Meurers, 2008)", "ref_id": "BIBREF1" }, { "start": 852, "end": 875, "text": "Clark and Curran (2007)", "ref_id": "BIBREF6" }, { "start": 1045, "end": 1077, "text": "(Rehbein and van Genabith, 2007)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "Intrinsic parsing quality has been shown to be insufficient for comparing parsers, and adding extrinsic measures to the evaluation can lead to inconclusive results, as shown in comparisons of two dependency parsers (Moll\u00e1 and Hutchinson, 2003) , three constituent parsers (Preiss, 2002) , and a deep and a partial parser (Grover et al., 2005) .", "cite_spans": [ { "start": 201, "end": 229, "text": "(Moll\u00e1 and Hutchinson, 2003)", "ref_id": "BIBREF14" }, { "start": 258, "end": 272, "text": "(Preiss, 2002)", "ref_id": "BIBREF18" }, { "start": 311, "end": 332, "text": "(Grover et al., 2005)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "We propose to use intrinsic and extrinsic measures together to assess tradeoffs for parsers embedded in NLU systems (e.g., low-intrinsic/high-extrinsic quality is indicative of parsers that work well in challenging systems, while high-intrinsic/low-extrinsic quality is typical of high-performance parsers that are difficult to interface).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "For evaluation, we propose the robotic DIARC architecture, which has been used successfully in many robotic applications. In addition to components for visual perception and action execution, DIARC consists of five NLU components. The first two components, a speech recognizer and a disfluency filter which filters out common vocal distractors (\"uh\", \"um\", etc.) and common fillers (\"well\", \"so\", etc.), will not be used here. The third component optionally performs trigram-based part of speech (POS) tagging. The fourth component is the parser to be evaluated, which produces the constituent tree or dependency graph used by the fifth component, the \u03bb converter, to produce formal semantic representations. If the semantic representation indicates that a command needs to be executed, the command is passed on to an action interpreter (which then retrieves an existing action script indexed by the command or, if no such script is found, forwards the request to a task planner, which will plan a sequence of actions to achieve it (Schermerhorn et al., 2009) ).", "cite_spans": [ { "start": 1029, "end": 1056, "text": "(Schermerhorn et al., 2009)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "An Evaluation Framework for HRI", "sec_num": "3" }, { "text": "The semantic conversion process makes use of combinatory categorial grammar (CCG) tags associated with lexical items, which are essentially part-of-speech tags enriched with information about the word's arguments. Given a word and the appropriate CCG tag, the corresponding semantic representations are retrieved from a semantic lexicon. 
These representations are \u03bb-expressions expressed in a fragment of first-order dynamic logic sufficiently rich to capture the language of (action) instructions from the corpus (cf. e.g., (Goldblatt, 1992) ). Expressions are repeatedly combined using \u03b2-reduction until all words are converted and (preferably) only one \u03bb-free formula is left (Dzifcak et al., 2009) .", "cite_spans": [ { "start": 527, "end": 544, "text": "(Goldblatt, 1992)", "ref_id": "BIBREF9" }, { "start": 681, "end": 703, "text": "(Dzifcak et al., 2009)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "An Evaluation Framework for HRI", "sec_num": "3" }, { "text": "For example, the sentence \"do you see a blue box?\" is translated as check-and-answer(\u2203x.see(self, x) \u2227 box(x) \u2227 blue(x)). check-and-answer is an action that takes a formula as an argument, checks its truth (if possible), and causes the robot to reply with \"yes\" or \"no\" depending on the outcome of the check operation 1 . The conversion from dependency graphs to semantic representations is straightforward: When a dependent is attached to a head, the dependent is added to the CCG tag, resulting in a convenient format for semantic conversion. Then each node is looked up in the dictionary, and the definition is used to convert the node. For the example above, the parse graph indicates that \"a\" and \"blue\" are syntactic arguments of \"box\", \"you\" and \"a blue box\" are arguments of \"see\", and the clause \"you see a blue box\" is an argument of \"do\". Based on the lexical definitions, the phrase \"a blue box\" is combined into the expression (\u2203x.box(x) \u2227 blue(x)). As argument of the verb \"see\", it is then combined into the expression (\u2203x.see(self, x) \u2227 box(x) \u2227 blue(x)), and ultimately check-and-answer(\u2203x.see(self, x) \u2227 box(x) \u2227 blue(x)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Evaluation Framework for HRI", "sec_num": "3" }, { "text": "The conversion for constituent trees is less straightforward since it is more difficult to automatically identify the head of a phrase, and to connect the arguments in the same way. We use a slightly different method: each node in the tree is looked up in the dictionary for a suitable word/CCG tag combination given the words dominated by the node's daughters. The \u03bb conversions are then performed for each sentence after the parser finishes producing a parse tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Evaluation Framework for HRI", "sec_num": "3" }, { "text": "For parser evaluations, we use an HRI scenario where processing speed is critical (often more important even than accuracy) as humans expect timely responses of the robot. 
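As an aside, the semantic composition described in the previous section can be illustrated with a small sketch. The following is not code from the system; it is a minimal, hypothetical Python illustration in which closures stand in for the lambda-expressions of the semantic lexicon (the entries box, blue, a, see, do and their argument order are assumptions made for this example) and plain function application plays the role of beta-reduction for "do you see a blue box?":

```python
# Minimal, hypothetical sketch (not the system's implementation): lexical
# semantic definitions modeled as Python closures; function application plays
# the role of beta-reduction; formulas are built up as plain strings.

box  = lambda x: f"box({x})"                                         # noun: a property of x
blue = lambda P: lambda x: f"{P(x)} \u2227 blue({x})"                  # modifier: extends a property
a    = lambda P: lambda scope: f"\u2203x.{scope('x')} \u2227 {P('x')}"    # determiner: existential closure
see  = lambda subj: lambda obj: obj(lambda x: f"see({subj}, {x})")   # verb: fills the quantifier's scope
do   = lambda formula: f"check-and-answer({formula})"                # question auxiliary: wraps the action

blue_box   = blue(box)              # a property: box(x) and blue(x)
a_blue_box = a(blue_box)            # a quantifier waiting for its verbal scope
print(do(see("self")(a_blue_box)))
# prints: check-and-answer(\u2203x.see(self, x) \u2227 box(x) \u2227 blue(x))
```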
Moreover, a parser's ability to produce fragments of a sentence (instead of failing completely) is highly desirable: in contrast to offline processing tasks, the robot can ask clarification questions (if it knows where the parse failed), and humans are typically willing to help. This is different from parsing a corpus, where no clarification question can be asked. Correctness here is determined by correct semantic interpretations that can be generated in the semantic analysis based on the (partial) parses. While these aspects are often of secondary importance in many NLU systems, they are essential to a robotic NLU architecture. Since we experiment with a new corpus that has not been used in parsing research yet, we also present an intrinsic evaluation to give a reference point to put the parsers' performance into perspective with regard to previous work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "More specifically, we investigate two points: (1) Given that spoken commands to robots are considerably shorter and less complex than newspaper sentences, is it possible to use existing resources, i.e., the Penn Treebank (Marcus et al., 1993) , for training the parsers without a major decrease in accuracy? And (2), are constituent or dependency parsers better suited for the NLU architecture described above, in terms of accuracy and speed?", "cite_spans": [ { "start": 221, "end": 242, "text": "(Marcus et al., 1993)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "To answer these questions, we carried out two experiments: (1) The intrinsic evaluation. This is split into two parts: one that compares constituent and dependency parsers on our test data when both parsers were trained on the Penn Treebank; and one that compares the parsers trained on a small in-domain set. (2) The extrinsic evaluation, which compares the two parsers in the NLU architecture, is also based on in-domain training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "For the first experiment we use standard intrinsic parsing measures: for the constituent parser, we report labeled precision (LP), labeled recall (LR), and labeled F-score (LF); for the dependency parser, the labeled attachment score (LAS). The second experiment uses the accuracy of the logical forms and the correct action interpretation and execution as a measure of quality. For this experiment, we also report the processing time, i.e., how much time the complete system requires for processing the test set from the text input to the output of logical forms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Intrinsic and extrinsic measures:", "sec_num": null }, { "text": "Data sets: For the intrinsic evaluation, we used the Penn Treebank. For the constituent experiments, we used the treebank with grammatical functions since the semantic construction requires this information. The only exception is the experiment using the Berkeley parser with the Penn Treebank: Because of memory restrictions, we could not use grammatical functions. 
For the dependency parser, we used a dependency version of the Penn Treebank created by pennconverter (Johansson and Nugues, 2007) .", "cite_spans": [ { "start": 469, "end": 497, "text": "(Johansson and Nugues, 2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Intrinsic and extrinsic measures:", "sec_num": null }, { "text": "For the in-domain experiments (intrinsic and extrinsic), we used CReST (Eberhard et al., 2010) , a corpus of natural language dialogues obtained from recordings of humans performing a cooperative, remote search task. The multi-modal corpus contains the speech signals and transcriptions of the dialogues, which are additionally annotated for dialogue structure, disfluencies, POS, and syntax. The syntactic annotation covers both constituent annotation based on the Penn Treebank annotation scheme and dependencies based on the dependency version of the Penn Treebank. The corpus consists of 7 dialogues, with 1,977 sentences overall. The sentences are fairly short; average sentence length is 6.7 words. We extracted all commands that our robot can handle (such as \"walk into the next room\") and used those 122 sentences as our test set. We performed a 7-fold cross-validation, in which one fold consists of all test sentences (i.e. commands) from one of the 7 dialogues. All the other folds combined with the declarative sentences from all dialogues served as training data. The number of commands per dialogue varies, so the evaluation was performed on the set of all test sentences rather than averaged over the 7 folds.", "cite_spans": [ { "start": 71, "end": 94, "text": "(Eberhard et al., 2010)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Intrinsic and extrinsic measures:", "sec_num": null }, { "text": "We use state-of-the-art constituent and dependency parsers: As the constituent parser, we chose the Berkeley parser (Petrov and Klein, 2007) , a parser that learns a refined PCFG grammar based on latent variables. We used grammars based on 6 split-merge cycles. For the dependency parser, we used MaltParser (Nivre et al., 2007b) , a pseudo-projective dependency parser, which has reached state-of-the-art results for all languages in the CoNLL 2007 shared task (Nivre et al., 2007a) . We decided to use version 1.1 of MaltParser, which allows the use of memory-based learning (MBL) as implemented in TiMBL (http://ilk.uvt.nl/timbl/). MBL has been shown to work well with small training sets (cf., (Banko and Brill, 2001) ). MaltParser was used with the Nivre algorithm and the feature set that proved optimal for English (Nivre et al., 2007b) . TiMBL parameters were optimized for each experiment in a non-exhaustive search. When trained on the Penn Treebank, the parser performed best using MVDM, 5 nearest neighbors, no feature weighting, and Inverse Distance class weighting. For the experiments on the dialogue corpus, the default settings proved optimal. 
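As a brief aside, the dialogue-wise folds described above can be made concrete with a short sketch. This is not the original experimental code; it is a hypothetical Python illustration that assumes the corpus is available as (dialogue_id, sentence, is_command) records, an assumed data layout rather than the CReST release format:

```python
from collections import defaultdict

def dialogue_folds(corpus):
    """Sketch of the 7-fold setup: one fold per dialogue.
    corpus: iterable of (dialogue_id, sentence, is_command) records (assumed layout).
    The test set of a fold is the commands of the held-out dialogue; the training
    set is the commands of all other dialogues plus the declarative sentences
    from all dialogues."""
    commands = defaultdict(list)
    declaratives = []
    for dialogue_id, sentence, is_command in corpus:
        if is_command:
            commands[dialogue_id].append(sentence)
        else:
            declaratives.append(sentence)

    folds = []
    for held_out in sorted(commands):
        test = commands[held_out]
        train = declaratives + [s for d, sents in commands.items()
                                if d != held_out for s in sents]
        folds.append((train, test))
    return folds

# The evaluation is then computed on the union of all test sets (the 122 commands),
# not averaged over folds, since the number of commands per dialogue varies.
```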
Since MaltParser requires POS-tagged input, we used the Markov model tagger TnT (Brants, 1999) to tag the test sentences for dependency parsing; the Berkeley parser performs POS tagging in the parsing process.", "cite_spans": [ { "start": 117, "end": 141, "text": "(Petrov and Klein, 2007)", "ref_id": "BIBREF17" }, { "start": 309, "end": 330, "text": "(Nivre et al., 2007b)", "ref_id": "BIBREF16" }, { "start": 463, "end": 484, "text": "(Nivre et al., 2007a)", "ref_id": "BIBREF15" }, { "start": 682, "end": 705, "text": "(Banko and Brill, 2001)", "ref_id": "BIBREF0" }, { "start": 806, "end": 827, "text": "(Nivre et al., 2007b)", "ref_id": "BIBREF16" }, { "start": 1223, "end": 1237, "text": "(Brants, 1999)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Parsers:", "sec_num": null }, { "text": "For the experiment based on the complete NLU architecture, we used an incremental reimplementation of the Nivre algorithm called Mink (Cantrell, 2009) as the dependency parser. Mink uses the WEKA implementation of the C4.5 decision tree classifier (Hall et al., 2009) as a guide. The confidence threshold for pruning is 0.25, and the minimum number of instances per leaf is 2.", "cite_spans": [ { "start": 134, "end": 150, "text": "(Cantrell, 2009)", "ref_id": "BIBREF4" }, { "start": 244, "end": 263, "text": "(Hall et al., 2009)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Parsers:", "sec_num": null }, { "text": "The results of the intrinsic parser evaluation are shown in Table 1 . The POS tagging results for TnT (for MaltParser) are unexpected: the small in-domain training set resulted in an increase in accuracy of 4.7 percentage points. The result for the POS tagging accuracy of the Berkeley parser trained on CReST is artificially low because the parser did not parse 9 sentences, which resulted in missing POS tags for those sentences. All of the POS tagging results are lower than the TnT accuracy of 96.7% reported for the Penn Treebank (Brants, 1999) . This is due to either out-of-domain data or the small training set for the training with CReST.", "cite_spans": [ { "start": 559, "end": 573, "text": "(Brants, 1999)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 60, "end": 67, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "When the parsers were trained on the Penn Treebank, the very low results for both parsers (46.0 F-score, 40.6 LAS) show clearly that preexisting resources cannot be used for training. The low results are due to the fact that the test set consists almost exclusively of commands, a sentence type that, to our knowledge, does not occur in the Penn Treebank. A comparison between ParsEval measures and LAS is difficult. We refrained from converting the constituent parse to dependencies for evaluation because it is unclear how reliable the conversion for parser output is.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "The results for the Berkeley parser trained on the dialogue data from CReST are better than the results trained on the Penn Treebank. However, even with training on in-domain data, the F-score of 52.5 is still considerably lower than state-of-the-art results for in-domain parsing of the Penn Treebank. This is partly due to our inclusion of grammatical functions in the parsing process as well as in the evaluation. Thus, the parsing task is more difficult than in other experiments. 
Another possible reason for the low performance is the size of the training set. We must assume that the Berkeley parser requires a larger training set to reach good results. This is corroborated by the fact that this parser did not find any parse for 9 sentences. The dependency parser performs equally badly when trained on the Penn Treebank (40.6 LAS). However, when it is trained on in-domain data, it reaches an LAS of 70.5, which corroborates the assumption that TiMBL performs well with small data sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "An error analysis of the parser output based on the CReST training shows that one frequent type of error results from differing lexical preferences between the Penn Treebank and the CReST domain. The word \"left\", for example, is predominantly used as a verb in the Penn Treebank, but as an adverb or noun in the dialogue corpus, which results in frequent POS tagging errors and subsequent parsing errors. ( (S (VP (VB hold) (PRT (RP on)) (S (VP (VB let) (S (NP (PRP me)) (VP (VB pick) (PRT (RP up)) (NP (DT those) (JJ green) (NNS boxes)))))))) ) Figure 1 : Constituent parse for \"hold on let me pick up those green boxes\".", "cite_spans": [], "ref_spans": [ { "start": 525, "end": 533, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "For the extrinsic evaluation in the context of the NLU system, we report exact match accuracy for the logical forms. Since the semantic conversion fails on unexpected parser output, the quantitative semantic evaluation is based only on syntactically correct sentences, although partially correct parses are instructive examples, and thus are included in the discussion. More parses were almost correct than perfectly so: 27% were perfectly correct for the constituent parser, and 30% for the dependency parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Of these, 90% of dependency graphs were correctly semantically combined, while just 64% of constituent trees were correctly combined. Mink was also faster: Averaged over a range of sentence lengths and complexities, the NLU system using Mink was roughly twice as fast as the one with the Berkeley parser. Averaged over 5 runs of 100 sentences each, Mink required approx. 180 ms per sentence, the Berkeley parser approx. 270 ms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "The most egregious problem area involves a typical phenomenon of spontaneous speech, namely that an utterance does not necessarily correspond to a sentence in the syntactic sense: Many utterances contain multiple, independent phrases or clauses, e.g., \"hold on let me pick up those green boxes\", as a single utterance. The ideal translation for this utterance is: wait(listener); get(speaker, {x|green(x) \u2227 box(x)}) where \";\" is the sequencing operator.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "The constituent parse for the utterance is shown in Figure 1 . This parse is partially correct, but the two commands are not treated as a conjunction of clauses; instead, the second command is treated as subordinate to the first one. This analysis results in the argument structure shown in Table 2 , where each phrase takes its phrasal constituents as arguments. The semantic definitions and CCG tags are shown in Table 3 . 
Some definitions do not have the same number of arguments as the CCG tags, in particular the verb \"pick\" with its raised subject, which will be applied by the semantics of the verb \"let\". The correspondence between the constituent parse and semantics output is shown in Table 4. The dependency parse is shown in Figure 2 . The parse results in the syntactic head and dependent relationships and the semantic head and dependent relationships for the words in the utterance, constructed from the definitions in Table 5 . In the semantic analysis, \"pick\" is similar to the syntactic analysis in that it takes a noun phrase and a particle as its arguments. This results in the following combination: \u03bbx.\u03bby.\u03bbz.pick(up, z, y) (up) (those green boxes) 3 . The first application applies \"up\" to x, resulting in the analysis: \u03bby.\u03bbz.pick(up, z, y) (those green boxes), which in turn is converted into: \u03bbz.pick(up, z, those green boxes).", "cite_spans": [], "ref_spans": [ { "start": 52, "end": 60, "text": "Figure 1", "ref_id": null }, { "start": 291, "end": 298, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 415, "end": 422, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 737, "end": 745, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 932, "end": 939, "text": "Table 5", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Constituency / Semantic (the correspondence listed in Table 4): (1) NP1:boxes (DT=those, JJ=green): {x|green(x) \u2227 boxes(x)}; (2) VP1:pick (PRT=up, NP1): \u03bbz.pick(up, z, {x|green(x) \u2227 box(x)}); (3) S1 (NP2=speaker, VP1): pick(up, speaker, {x|green(x) \u2227 box(x)}); (4) VP2:let (S1): pick(up, speaker, {x|green(x) \u2227 box(x)}); (5) S2 (VP2): pick(up, speaker, {x|green(x) \u2227 box(x)}); (6) VP3:hold (PRT=on, S2): wait(pick(up, speaker, {x|green(x) \u2227 box(x)})) \u21d0 error; (7) S4 (VP3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constituency", "sec_num": null }, { "text": "Here we find a systematic difference between the syntactic analysis and the intended semantic one: While syntactically, the adjective \"green\" is dependent on the head \"boxes\", it is the opposite in the semantic analysis. The definition of \"boxes\" indicates that it is a predicate that takes as an argument an abstract entity \"x\", representing the real-world item that has the property of being a box. This predicate, box(x), is itself then applied to the predicate \"green\", which has the definition \u03bbX.\u03bbx.green(x) \u2227 X(x). The variable X represents the concept that will be applied. This application produces \u03bbx.green(x) \u2227 box(x). Thus a conversion rule reverses dependencies within noun phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constituency", "sec_num": null }, { "text": "The results show that a considerable number of sentences could be parsed but not converted correctly to logical form because of the way certain information is represented in the parses. 
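As an illustration of the application steps and the noun-phrase conversion rule discussed above, the following minimal sketch (hypothetical Python, not the system's converter) replays the derivation for \"pick up those green boxes\" with closures standing in for the lambda-expressions:

```python
# Minimal, hypothetical sketch (not the system's converter): the application
# steps spelled out above, with closures standing in for the lambda-expressions
# and plain strings standing in for formulas.

# Lexical definitions (cf. the Token / Arg. Str. / Semantics listing above);
# the application order for "pick" is particle, then object, then raised subject.
pick  = lambda prt: lambda obj: lambda subj: f"pick({prt}, {subj}, {obj})"
boxes = lambda x: f"box({x})"
green = lambda P: lambda x: f"green({x}) \u2227 {P(x)}"   # modifier applied to the head's predicate
those = lambda P: "{x|" + P("x") + "}"

# The conversion rule reverses the dependency inside the noun phrase: the
# predicate of the head "boxes" is passed as an argument to the modifier "green".
np = those(green(boxes))             # {x|green(x) \u2227 box(x)}

formula = pick("up")(np)("speaker")  # apply "up", then the noun phrase, then the subject
print(formula)                       # pick(up, speaker, {x|green(x) \u2227 box(x)})
```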
Additionally, a small difference in the parsers' behavior, namely MaltParser's ability to provide partial parses, resulted in a large difference in the usability of the parsers' output: partial parses are not only better than parse failures, but may even be the expected outcome in an HRI setting, since they can be successfully translated to logical form.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "While the same parser performed better under both intrinsic and extrinsic evaluation, this may not necessarily always be the case (see section 2). It is possible that one parser provides imperfect parses when evaluated intrinsically but the information is presented in a form that can be used by higher components. This occurred in our experiment in the case of the dependency parser, whose partial parses could be converted into completely correct semantic representations. That is, while the parse may not be completely correct with regard to the gold standard, it may still provide enough information for the higher component so that no information loss ensues.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "One advantage of our extrinsic evaluation is that the conversion to semantics can be performed for a wide range of different syntactic annotations. While previous evaluations stayed within one parsing framework (e.g., dependency parsing), our evaluation included a constituent and a dependency parser (this evaluation can be extended to \"deeper\" parsers such as HPSG parsers). Additionally, the conversion to semantics involves a wide range of syntactic phenomena, thus providing a high granularity compared to extrinsic evaluations in information retrieval, where only specific sentence parts (e.g., noun phrases) are targeted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "We introduced a novel, semantics-based method for comparing the performance of different parsers in an HRI setting and evaluated our method on a test corpus collected in a human coordination task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "The experiments emphasize the importance of performing an extrinsic evaluation of parsers in typical application domains. While extrinsic evaluations may depend on the application domain, it is important to show that parsers cannot be used off-the-shelf based on intrinsic evaluations alone. To estimate the variance of parsers, it is important to establish a scenario of different applications in which parsers can be tested. 
An NLU component in an HRI setting is an obvious candidate since the conversion to semantics is possible for any syntactic paradigm, and the HRI setting requires evaluation metrics, such as the timing behavior or the incrementality of the parser, which are typically not considered.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "self is a deictic referent always denoting the robot.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Here, \"those green boxes\" is a human-convenient shorthand for its full semantic definition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was in part supported by ONR grants #N00014-10-1-0140 and #N00014-07-1-1049.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgment", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Scaling to very very large corpora for natural language disambiguation", "authors": [ { "first": "Michele", "middle": [], "last": "Banko", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Brill", "suffix": "" } ], "year": 2001, "venue": "Proceedings of ACL-EACL'01", "volume": "", "issue": "", "pages": "26--33", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michele Banko and Eric Brill. 2001. Scaling to very very large corpora for natural language disambigua- tion. In Proceedings of ACL-EACL'01, pages 26- 33, Toulouse, France.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Revisiting the impact of different annotation schemes on PCFG parsing: A grammatical dependency evaluation", "authors": [ { "first": "Adriane", "middle": [], "last": "Boyd", "suffix": "" }, { "first": "Detmar", "middle": [], "last": "Meurers", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the ACL Workshop on Parsing German", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adriane Boyd and Detmar Meurers. 2008. Revisiting the impact of different annotation schemes on PCFG parsing: A grammatical dependency evaluation. In Proceedings of the ACL Workshop on Parsing Ger- man, Columbus, OH.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Tagging and Parsing with Cascaded Markov Models. DFKI", "authors": [ { "first": "Thorsten", "middle": [], "last": "Brants", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Brants. 1999. Tagging and Parsing with Cas- caded Markov Models. DFKI, Universit\u00e4t des Saar- landes.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Incremental natural language processing for HRI", "authors": [ { "first": "Timothy", "middle": [], "last": "Brick", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Scheutz", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Second ACM IEEE International Conference on Human-Robot Interaction", "volume": "", "issue": "", "pages": "263--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Brick and Matthias Scheutz. 2007. Incremen- tal natural language processing for HRI. 
In Proceed- ings of the Second ACM IEEE International Confer- ence on Human-Robot Interaction, pages 263-270, Washington D.C.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Mink: An incremental datadriven dependency parser with integrated conversion to semantics", "authors": [ { "first": "Rachael", "middle": [], "last": "Cantrell", "suffix": "" } ], "year": 2009, "venue": "Student Workshop at RANLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rachael Cantrell. 2009. Mink: An incremental data- driven dependency parser with integrated conver- sion to semantics. In Student Workshop at RANLP, Borovets, Bulgaria.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Parser evaluation: a survey and a new proposal", "authors": [ { "first": "John", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Briscoe", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Sanfilippo", "suffix": "" } ], "year": 1998, "venue": "Proceedings of LREC 1998", "volume": "", "issue": "", "pages": "447--454", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Carroll, Ted Briscoe, and Antonio Sanfilippo. 1998. Parser evaluation: a survey and a new pro- posal. In Proceedings of LREC 1998, pages 447- 454, Granada, Spain.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Formalismindependent parser evaluation with CCG and Dep-Bank", "authors": [ { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "James", "middle": [], "last": "Curran", "suffix": "" } ], "year": 2007, "venue": "Proceedings of ACL 2007", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Clark and James Curran. 2007. Formalism- independent parser evaluation with CCG and Dep- Bank. In Proceedings of ACL 2007, Prague, Czech Republic.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "What to do and how to do it: Translating natural language directives into temporal and dynamic logic representation for goal management and action execution", "authors": [ { "first": "Juraj", "middle": [], "last": "Dzifcak", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Scheutz", "suffix": "" }, { "first": "Chitta", "middle": [], "last": "Baral", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'09)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Juraj Dzifcak, Matthias Scheutz, and Chitta Baral. 2009. What to do and how to do it: Translating nat- ural language directives into temporal and dynamic logic representation for goal management and action execution. 
In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'09), Kobe, Japan.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The Indiana \"Cooperative Remote Search Task\" (CReST) Corpus", "authors": [ { "first": "Kathleen", "middle": [], "last": "Eberhard", "suffix": "" }, { "first": "Hannele", "middle": [], "last": "Nicholson", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "K\u00fcbler", "suffix": "" }, { "first": "Susan", "middle": [], "last": "Gunderson", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Scheutz", "suffix": "" } ], "year": 2010, "venue": "Proceedings of LREC-2010", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kathleen Eberhard, Hannele Nicholson, Sandra K\u00fcbler, Susan Gunderson, and Matthias Scheutz. 2010. The Indiana \"Cooperative Remote Search Task\" (CReST) Corpus. In Proceedings of LREC- 2010, Valetta, Malta.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Parallel action: Concurrent dynamic logic with independent modalities", "authors": [ { "first": "Robert", "middle": [], "last": "Goldblatt", "suffix": "" } ], "year": 1992, "venue": "Studia Logica", "volume": "51", "issue": "3/4", "pages": "551--578", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Goldblatt. 1992. Parallel action: Concurrent dynamic logic with independent modalities. Studia Logica, 51(3/4):551-578.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A comparison of parsing technologies for the biomedical domain", "authors": [ { "first": "Claire", "middle": [], "last": "Grover", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Lascarides", "suffix": "" } ], "year": 2005, "venue": "Natural Language Engineering", "volume": "11", "issue": "", "pages": "27--65", "other_ids": {}, "num": null, "urls": [], "raw_text": "Claire Grover, Mirella Lapata, and Alex Lascarides. 2005. A comparison of parsing technologies for the biomedical domain. Natural Language Engineer- ing, 11:27-65.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The WEKA data mining software: An update. SIGKDD Explorations", "authors": [ { "first": "Mark", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Eibe", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Holmes", "suffix": "" }, { "first": "Bernhard", "middle": [], "last": "Pfahringer", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Reutemann", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Witten", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian Witten. 2009. The WEKA data mining software: An update. SIGKDD Explorations, 11(1).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Extended constituent-to-dependency conversion for English", "authors": [ { "first": "Richard", "middle": [], "last": "Johansson", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Nugues", "suffix": "" } ], "year": 2007, "venue": "Proceedings of NODALIDA 2007", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Johansson and Pierre Nugues. 2007. Ex- tended constituent-to-dependency conversion for English. 
In Proceedings of NODALIDA 2007, Tartu, Estonia.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Building a large annotated corpus of English: The Penn Treebank", "authors": [ { "first": "Mitchell", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "Mary", "middle": [ "Ann" ], "last": "Marcinkiewicz", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitchell Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computa- tional Linguistics, 19(2):313-330.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Intrinsic versus extrinsic evaluations of parsing systems", "authors": [ { "first": "Diego", "middle": [], "last": "Moll\u00e1", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Hutchinson", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the EACL 2003 Workshop on Evaluation Initiatives in Natural Language Processing", "volume": "", "issue": "", "pages": "43--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diego Moll\u00e1 and Ben Hutchinson. 2003. Intrinsic ver- sus extrinsic evaluations of parsing systems. In Pro- ceedings of the EACL 2003 Workshop on Evaluation Initiatives in Natural Language Processing, pages 43-50, Budapest, Hungary.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The CoNLL 2007 shared task on dependency parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "K\u00fcbler", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mc-Donald", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Nilsson", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Deniz", "middle": [], "last": "Yuret", "suffix": "" } ], "year": 2007, "venue": "Proceedings of EMNLP-CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre, Johan Hall, Sandra K\u00fcbler, Ryan Mc- Donald, Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007a. The CoNLL 2007 shared task on dependency parsing. In Proceedings of EMNLP- CoNLL 2007, Prague, Czech Republic.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "MaltParser: A language-independent system for data-driven dependency parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Nilsson", "suffix": "" }, { "first": "Atanas", "middle": [], "last": "Chanev", "suffix": "" }, { "first": "G\u00fcl\u015fen", "middle": [], "last": "Eryi\u01e7it", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "K\u00fcbler", "suffix": "" }, { "first": "Svetoslav", "middle": [], "last": "Marinov", "suffix": "" }, { "first": "Erwin", "middle": [], "last": "Marsi", "suffix": "" } ], "year": 2007, "venue": "Natural Language Engineering", "volume": "13", "issue": "2", "pages": "95--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre, Johan Hall, Jens Nilsson, Atanas Chanev, G\u00fcl\u015fen Eryi\u01e7it, Sandra K\u00fcbler, Svetoslav Marinov, and Erwin Marsi. 2007b. 
MaltParser: A language-independent system for data-driven de- pendency parsing. Natural Language Engineering, 13(2):95-135.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Improved inference for unlexicalized parsing", "authors": [ { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2007, "venue": "Proceedings of HLT-NAACL'07", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Proceedings of HLT- NAACL'07, Rochester, NY.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Choosing a parser for anaphora resolution", "authors": [ { "first": "Judita", "middle": [], "last": "Preiss", "suffix": "" } ], "year": 2002, "venue": "Proceedings of DAARC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Judita Preiss. 2002. Choosing a parser for anaphora resolution. In Proceedings of DAARC, Lisbon, Por- tugal.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Treebank annotation schemes and parser evaluation for German", "authors": [ { "first": "Ines", "middle": [], "last": "Rehbein", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "" } ], "year": 2007, "venue": "Proceedings of EMNLP-CoNLL", "volume": "", "issue": "", "pages": "630--639", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ines Rehbein and Josef van Genabith. 2007. Tree- bank annotation schemes and parser evaluation for German. In Proceedings of EMNLP-CoNLL 2007, pages 630-639, Prague, Czech Republic.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Finding and exploiting goal opportunities in realtime during plan execution", "authors": [ { "first": "Paul", "middle": [], "last": "Schermerhorn", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Benton", "suffix": "" }, { "first": "Kartik", "middle": [], "last": "Scheutz", "suffix": "" }, { "first": "Rao", "middle": [], "last": "Talamadupula", "suffix": "" }, { "first": "", "middle": [], "last": "Kambhampati", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Schermerhorn, J Benton, Matthias Scheutz, Kar- tik Talamadupula, and Rao Kambhampati. 2009. Finding and exploiting goal opportunities in real- time during plan execution. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelli- gent Robots and Systems, St. Louis.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "First steps toward natural human-like HRI", "authors": [ { "first": "Matthias", "middle": [], "last": "Scheutz", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Schermerhorn", "suffix": "" }, { "first": "James", "middle": [], "last": "Kramer", "suffix": "" }, { "first": "David", "middle": [], "last": "Anderson", "suffix": "" } ], "year": 2007, "venue": "Autonomous Robots", "volume": "22", "issue": "4", "pages": "411--423", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthias Scheutz, Paul Schermerhorn, James Kramer, and David Anderson. 2007. First steps to- ward natural human-like HRI. Autonomous Robots, 22(4):411-423.", "links": null } }, "ref_entries": { "FIGREF0": { "text": ". 
The two commands are correctly analyzed as in-", "uris": null, "type_str": "figure", "num": null }, "FIGREF1": { "text": "The dependency analysis.", "uris": null, "type_str": "figure", "num": null }, "TABREF0": { "content": "
              | Berkeley parser                  | MaltParser
training data | POS acc. | LP   | LR   | LF     | POS acc. | LAS
Penn          | 86.9     | 47.2 | 44.8 | 46.0   | 88.1     | 40.6
CReST         | 67.8     | 56.7 | 48.9 | 52.5   | 92.8     | 70.5
", "html": null, "text": "The results of the intrinsic evaluation.", "type_str": "table", "num": null }, "TABREF1": { "content": "
: The argument structure based on the constituent parse.
Token | Arg. Str. | Semantics
hold  | S/RP      | \u03bbx.wait(x)
on    | RP        | on
let   | S/NP/S    | \u03bbx.\u03bbX.X(x)
me    | NP        | speaker
pick  | S/RP/NP   | \u03bbx.\u03bby.\u03bbz.pick(x, y, z)
up    | RP        | up
those | NP/NP     | \u03bbX.{x|X(x)}
green | NP/NP     | \u03bbX.\u03bbx.green(x) \u2227 X(x)
boxes | NP        | box
", "html": null, "text": "", "type_str": "table", "num": null }, "TABREF2": { "content": "
Head  | Dependents
HOLD  | on
LET   | me, pick
PICK  | up, boxes
BOXES | those, green
", "html": null, "text": "Semantics for the example sentence.", "type_str": "table", "num": null }, "TABREF3": { "content": "", "html": null, "text": "Syntactic head/dependent relationships.", "type_str": "table", "num": null }, "TABREF4": { "content": "
", "html": null, "text": "Correspondence between the constituent parse and the semantics output.", "type_str": "table", "num": null } } } }