Two formal notions of higher-order invariance detection in humans (A proof of the invariance equivalence principle in Generalized Invariance Structure Theory and ramifications for related computations)

Invariance and symmetry principles have played a fundamental, if not essential, role in the theoretical development of the physical and mathematical sciences. More recently, Generalized Invariance Structure Theory (GIST; Vigo, 2013, 2015; Vigo et al., 2022) has extended this methodological trajectory to the study and formal modeling of human cognition. Indeed, GIST is the first systematic and extensively tested mathematical and computational theory of concept learning and categorization behavior (i.e., human generalization) based on such principles. The theory introduces an original mathematical and computational framework, with novel characterizations, constructs, and measures of invariance and symmetry that are more natural and more appropriate to cognition than existing ones in the mathematical sciences and physics. These have proven effective in predicting and explaining empirically tested behavior in the domains of perception, concept learning, categorization, similarity assessment, aesthetic judgments, and decision making, among others. GIST has its roots in a precursor theory known as Categorical Invariance Theory (CIT; Vigo, 2009). This paper gives a basic introduction to two different notions of human invariance detection proposed by GIST and its precursor CIT: a notion based on a cognitive mechanism of dimensional suppression, rapid attention shifting, and partial similarity assessment, referred to as binding (s-invariance), and a notion based on perturbations of the values of the dimensions on which categories of object stimuli are defined (p-invariance). This is followed by the first simple formal proof of the invariance equivalence principle from GIST, which asserts that the two notions are equivalent under a set of strict conditions on categories. The paper ends with a brief discussion of how GIST, unlike CIT, may be used to model probabilistic process accounts of categorization, and how it applies naturally and directly to the learning of sequential categories and to multiset-based concept learning.
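
As a rough illustration of the perturbation notion described above, the following Python sketch computes, for a small category of stimuli defined on binary dimensions, the proportion of members that remain in the category after each dimension's value is perturbed. The representation (tuples of 0/1 values), the function name, and the specific proportion computed here are illustrative assumptions in the spirit of CIT/GIST, not the paper's formal definitions of p-invariance or s-invariance.

```python
def perturbational_invariance(category):
    """Illustrative sketch (assumed formulation, not the paper's formalism).

    category: a set of equal-length tuples of 0/1 values, one tuple per object.
    Returns, for each dimension, the fraction of members that remain in the
    category after that dimension's value is flipped. Dimensions on which the
    category is highly invariant are, informally, the ones that do not matter
    to membership and could be suppressed ("bound") by a learner.
    """
    category = set(category)
    if not category:
        return []
    num_dims = len(next(iter(category)))
    profile = []
    for d in range(num_dims):
        preserved = 0
        for item in category:
            perturbed = list(item)
            perturbed[d] = 1 - perturbed[d]   # perturb (flip) dimension d
            if tuple(perturbed) in category:
                preserved += 1
        profile.append(preserved / len(category))
    return profile


# {00, 01}: membership depends only on the first dimension, so the category
# is invariant under perturbation of the second dimension but not the first.
print(perturbational_invariance({(0, 0), (0, 1)}))                  # [0.0, 1.0]

# The full space {00, 01, 10, 11} is trivially invariant on both dimensions.
print(perturbational_invariance({(0, 0), (0, 1), (1, 0), (1, 1)}))  # [1.0, 1.0]
```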
