Linguistic syntax has often been claimed as uniquely complex due to features like anaphoric relations and distance dependencies. However, visual narratives of sequential images, like those in comics, have been argued to use sequencing mechanisms analogous to those in language. These narrative structures include “refiner” panels that “zoom in” on the contents of another panel. Like anaphora in language, refiners indexically connect inexplicit referential information in one unit (refiner, pronoun) to a more informative “antecedent” elsewhere in the discourse. Also as in language, refiners can follow their antecedents (anaphoric) or precede them (cataphoric), and can have either proximal or distant connections. We here explore the constraints on visual narrative refiners created by modulating these features of order and distance. Experiment 1 examined participants’ preferences for where refiners are placed in a sequence using a forced-choice test, which revealed that refiners are preferred to follow their antecedents and to be proximal to them. Experiment 2 then showed that distance dependencies lead to slower self-paced viewing times. Finally, measurements of event-related brain potentials (ERPs) in Experiment 3 revealed that these patterns evoke brain responses similar to those elicited by referential dependencies in language (i.e., N400, LAN, Nref). Across all three studies, the constraints and (neuro)cognitive responses to refiners parallel those shown for anaphora in language, suggesting domain-general constraints on the sequencing of referential dependencies.