A well-established assumption across theories of sentence processing is that syntactic parsing proceeds incrementally by establishing links between related words in a sentence, yielding a hierarchical structure (Frazier, 1987). For example, in the sentence John ate an ice cream, John has to be linked to ate; similarly, ate has to be linked to an ice cream. These links can be formed by a top-down predictive process in which previous linguistic cues (e.g., ate) predict the upcoming linguistic element (an ice cream). Alternatively, the links can be formed through a bottom-up integrative process at the linguistic element where the link is identified. For example, the link between John and ate is identified and formed at ate.
Bottom-up accounts hold that retrieval of the head/dependent chunk from working memory is necessary to establish syntactic relations; such a process therefore makes sentence comprehension subject to working memory constraints (e.g., Bartek et al., 2011, Grodner and Gibson, 2005). This process is assumed to hold cross-linguistically (Lewis & Vasishth, 2005). Evidence for such bottom-up processing, or backward dependency formation, has also been observed in a verb-final language like Hindi, for argument-verb dependencies (Vasishth, 2003), reflexives (Kush & Phillips, 2014), and verb agreement (Bhatia & Dillon, 2022); see a recent review by Dillon and Keshev (2024) for further evidence of bottom-up processing in other verb-final languages.
Dependency formation can also happen through top-down processing (Marslen-Wilson, 1973), leading to what has been termed a forward dependency. Evidence for top-down processing has been found cross-linguistically and across a variety of syntactic relations (e.g., Altmann and Kamide, 1999, Garnsey et al., 1997, Grüter et al., 2020, Knoeferle et al., 2005, Levy, 2008, Stone et al., 2021, Trueswell et al., 1993); see Dillon and Keshev (2024), Kuperberg and Jaeger (2016), Kutas et al. (2011), and Staub (2015) for recent reviews on the role of prediction during sentence comprehension. Critically, top-down processing has been argued to be robust and more dominant than bottom-up processing in verb-final languages (e.g., Levy et al., 2013, Vasishth et al., 2010). This is because in such languages the heads (e.g., the verb) appear later, and therefore the dependents (e.g., nouns) encountered before the heads can be used to make successful predictions about the heads. For example, in sentence 1, the verb khaaii ‘ate’ appears after its two arguments ‘John’ and ‘ice-cream’. It has been proposed that the properties of the preverbal arguments (e.g., lexical semantics, nominal case-markers, etc.) successfully predict the upcoming clause-final verb.1
Evidence for robust verbal prediction in verb-final languages comes from the so-called anti-locality effect (Konieczny, 2000) as well as from the no-forgetting effect, i.e., the absence of structural forgetting (Vasishth et al., 2010). Take examples 2a, 2b and 2c, for instance. Konieczny (2000) showed that reading times at the main verbs (hingelegt, gelegt) were faster when the relative clause (RC1) intervened between the verb and its argument (Er, die Rose). This was argued to result from better predictability of the verb owing to the increased preverbal material. These results run counter to working-memory accounts (e.g., Gibson, 2000), which would predict faster reading times when the subject-verb dependency is local. The facilitatory effect at the verb is thus termed anti-locality.
Both the anti-locality effect discussed above and the no-forgetting effect have been reported in multiple verb-final languages, such as German, Dutch, Hindi, and Japanese.
As stated earlier, there is evidence that the parser uses both top-down and bottom-up strategies to establish syntactic links (or dependencies). Argument-verb dependency formation in Subject-Object-Verb (SOV) languages provides an interesting test case for investigating the interaction between these two strategies. One way to conceptualize dependency formation (especially for noun-verb relations) through predictive processing in verb-final languages is that there is no backward link formation during parsing. Put differently, once the preverbal arguments have been used to robustly predict the clause-final verb, no further argument retrieval is needed to form the argument-verb link when the verb is eventually observed in the input. While this may be the case, there is also evidence that dependency resolution in verb-final languages is influenced by backward-looking dependency formation at the verb, where previously seen arguments are retrieved to form the necessary links (Häussler and Bader, 2015, Vasishth, 2003). Such a retrieval step would be subject to working-memory constraints such as similarity-based interference or time-based decay.
A key result supporting bottom-up processing in verb-final languages is Vasishth (2003). For Hindi sentences like 3, Vasishth (2003) showed that reading times at the critical non-finite verb khariid-neko ‘buy-non.finite’ were slower in the condition where the preverbal nominals had similar case-markers (3a) compared to the condition where the nouns had unique case-markers (3b). These results can be explained by cue-based retrieval theory, a well-known sentence comprehension proposal based on working memory constraints. The theory assumes that dependency formation between an argument and the verb is driven by a content-addressable search in memory: the argument is searched for in memory using its feature specifications, such as [subject], [plural], and [case], called retrieval cues. A key prediction of cue-based retrieval theory is similarity-based interference: when multiple chunks in memory match a retrieval cue, identifying the target becomes difficult and, consequently, dependency formation slows down. Such a process can explain the case interference effect in Vasishth (2003). For example, in sentences like (3a), multiple nouns in memory match the [ko] case cue, causing difficulty in retrieving the correct noun at the verb. Thus, the theory predicts an interference effect due to syntactic feature similarity: a slowdown in (3a) vs. (3b). This suggests that bottom-up integration processes drive dependency formation in a verb-final language like Hindi.
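The fan-effect logic behind similarity-based interference can be sketched in a few lines of code. The following is an illustrative toy model, not the actual implementation of cue-based retrieval (Lewis & Vasishth, 2005); the chunk features and the scoring scheme are simplified assumptions, chosen only to show why a shared case value dilutes the target's retrieval advantage.

```python
def match_score(chunk, cues, memory):
    """Activation a chunk receives from the retrieval cues; each cue's
    contribution is divided among all chunks matching it (the 'fan')."""
    score = 0.0
    for cue, value in cues.items():
        if chunk.get(cue) == value:
            fan = sum(1 for c in memory if c.get(cue) == value)
            score += 1.0 / fan  # cue strength diluted by competitors
    return score

# (3a)-like configuration: both preverbal nouns carry the -ko marker
memory_similar = [{"id": "N1", "case": "ko"}, {"id": "N2", "case": "ko"}]
# (3b)-like configuration: the nouns carry unique markers
memory_unique = [{"id": "N1", "case": "ko"}, {"id": "N2", "case": "se"}]

cues = {"case": "ko"}  # retrieval cue set deployed at the verb

# The target's match is diluted when a competitor shares the cue value,
# which in latency terms corresponds to slower retrieval in (3a):
assert match_score(memory_similar[0], cues, memory_similar) == 0.5
assert match_score(memory_unique[0], cues, memory_unique) == 1.0
```

In a full model, lower match scores translate into longer retrieval latencies, yielding the predicted slowdown at the verb in (3a) relative to (3b).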
However, one concern regarding a bottom-up interpretation of this important finding by Vasishth (2003) is predictive processing. In particular, given the widespread role of prediction in verb-final languages (e.g., Husain et al., 2014), the observed slowdown due to case similarity could in fact reflect a prediction error cost. For example, Husain et al. (2014) show that a processing slowdown at a critical relative clause (RC) verb can be explained by prediction error owing to encountering an SVO order rather than the canonical SOV order of the RC. In light of these findings, it is quite plausible that the processing slowdown at the verb in the example above arises because the prediction of the critical non-finite verb is worse in the condition with similar case-markers than in the condition with unique case-markers. If the critical slowdown in Vasishth (2003) can be explained in terms of predictive processing, then there is no basis to argue for bottom-up integration (driven by memory constraints) in verb-final languages; the observed data could be explained by predictive processing alone. For this reason, the findings in Vasishth (2003) are important for theories of working memory constraints in sentence comprehension, i.e., for establishing whether working memory constraints matter for verb-final languages.
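One common way to quantify a prediction error cost is surprisal (Levy, 2008): the processing cost of a word scales with the negative log of its conditional probability. The snippet below is a minimal sketch with invented cloze probabilities (the numbers are purely illustrative, not values from any study discussed here); it shows that if case similarity lowers the cloze probability of the critical verb, surprisal alone predicts a slowdown at the verb, with no appeal to retrieval interference.

```python
import math

def surprisal(p):
    """Surprisal in bits: -log2 of a word's conditional probability."""
    return -math.log2(p)

# Hypothetical cloze probabilities for the critical non-finite verb:
p_verb_similar_case = 0.40  # verb predicted less often under case similarity
p_verb_unique_case = 0.80

# Higher surprisal in the similar-case condition predicts a slowdown
# at the verb even in the absence of retrieval-based interference.
assert surprisal(p_verb_similar_case) > surprisal(p_verb_unique_case)
```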
Given the theoretical importance of the results in Vasishth (2003), it is important to investigate whether prediction error can provide an alternative explanation for the observed case interference effect and whether these results can be replicated with a design controlled for prediction-based effects. In this paper, we investigate two related questions using multiple cloze completion and self-paced reading (SPR) studies: (i) can a bottom-up retrieval-based interference effect be observed in a verb-final language like Hindi? and (ii) do working memory constraints influence top-down predictive processing?
Regarding the first question, we could not replicate the slowdown observed in the case interference conditions in Vasishth (2003); that is, we find no effect of similarity-based interference on reading times at the verb. The results therefore provide no evidence in favor of an exclusively bottom-up dependency completion process in a language like Hindi. We argue that the cue-based retrieval account proposed in Vasishth (2003) is insufficient to explain the overall results; it needs to incorporate a top-down prediction mechanism to account for these data.
With regard to the second question, we find evidence for the influence of working memory constraints on the prediction of the clause-final verb. In a cloze completion study, we find that verb prediction is influenced by case interference at the nouns. This result is in line with representation distortion-based accounts of working memory constraints: the feature representations of nouns stored in memory can distort or degrade over time, causing faulty predictions of the upcoming verb. We show that two models that assume feature distortion due to memory constraints, the memory interference model (see Oberauer & Kliegl, 2006) and the lossy memory model (Futrell et al., 2020), can explain why case interference increases incorrect verb predictions. Taken together, the results from our cloze completion and SPR studies provide comprehensive support for the role of working memory constraints and top-down prediction during processing in a verb-final language like Hindi. We argue that dependency completion is driven by a cue-based retrieval process (subject to interference and time-based decay) together with probabilistic pre-activation of the upcoming verb phrase based on a (potentially) distorted representation of the preverbal input.
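The distortion idea can be illustrated with a small Monte Carlo sketch. This is a deliberately simplified toy model, not the memory interference or lossy memory model themselves: the corruption rule and the parameter values are our own assumptions for illustration. Each stored case feature can be overwritten in memory, and overwriting is assumed to be more likely when another stored feature shares its value; a verb prediction counts as correct only if all case features survive intact, so the similar-case configuration yields more prediction failures.

```python
import random

def correct_prediction_rate(cases, base_noise=0.1, trials=10_000, seed=1):
    """Monte Carlo proxy for verb-prediction accuracy: a prediction
    counts as correct only if every stored case feature survives.
    A feature sharing its value with another stored feature is more
    likely to be overwritten (interference-based distortion)."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        intact = True
        for i, case in enumerate(cases):
            n_similar = sum(1 for j, c in enumerate(cases)
                            if j != i and c == case)
            p_corrupt = min(1.0, base_noise * (1 + n_similar))
            if rng.random() < p_corrupt:
                intact = False
                break
        correct += intact
    return correct / trials

# Similar case markers (cf. 3a) vs. unique markers (cf. 3b):
rate_similar = correct_prediction_rate(["ko", "ko"])
rate_unique = correct_prediction_rate(["ko", "se"])

# More prediction failures under case similarity.
assert rate_similar < rate_unique
```

Under these toy assumptions, the expected accuracy is 0.9 squared (about 0.81) with unique markers and 0.8 squared (about 0.64) with similar markers, mirroring the qualitative pattern we report in the cloze data.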
To summarize, in this work we investigate two questions: (a) whether similarity-based interference due to case-markers can be observed in reading times at the clause-final verb, and (b) whether case-based interference can also affect predictive processing. We investigate these questions in Hindi (a verb-final language). The paper is organized as follows. We begin by discussing the predictions of the cue-based retrieval model for SPR studies. We present a direct SPR replication (Experiment 1) of Vasishth (2003). We then address a serious confound in the direct replication and present a large-scale replication (Experiment 2) of Vasishth (2003) with a slightly modified design that removes the confound. We then interpret the SPR results from our large-scale replication in light of a cloze completion study (Experiment 3). Following this, we report another cloze completion study (Experiment 4) investigating the role of case-based interference in verb prediction. A new memory interference model of verb prediction and a unified model of verb prediction and reading are presented to account for the key results. We synthesize all the findings in the General Discussion section, after which we conclude the paper.