High-level perception – the process of making sense of complex data at an abstract, conceptual level – is fundamental to human cognition. Through high-level perception, chaotic environmental stimuli are organized into mental representations that are used throughout cognitive processing. Much work in traditional artificial intelligence has ignored the process of high-level perception by starting with hand-coded representations. In this paper, we argue that this dismissal of perceptual processes leads to distorted models of human cognition. We examine some existing artificial-intelligence models – notably BACON, a model of scientific discovery, and the Structure-Mapping Engine, a model of analogical thought – and argue that these are flawed precisely because they downplay the role of high-level perception. Further, we argue that perceptual processes cannot be separated from other cognitive processes even in principle, and therefore that traditional artificial-intelligence models cannot be defended by supposing the existence of a ‘representation module’ that supplies representations ready-made. Finally, we describe a model of high-level perception and analogical thought in which perceptual processing is integrated with analogical mapping, leading to the flexible build-up of representations appropriate to a given context.
Publication
Year of publication: 1992
Type:
Journal article
Authors:
Chalmers, D. J.
French, R. M.
Hofstadter, D. R.
Journal title:
Journal of Experimental & Theoretical Artificial Intelligence
Journal volume:
4