Endress and Mehler (2009) reported that when adult subjects are exposed to an unsegmented artificial language composed of trisyllabic words such as ABX, YBC, and AZC, they are unable, in a subsequent test, to distinguish between these words and what the authors termed the ‘‘phantom-word’’ ABC. This suggests that statistical learning generates knowledge about the transitional probabilities (TPs) within each pair of syllables (AB, BC, and AC), which are common to words and phantom-words, but, crucially, does not lead to the extraction of genuine word-like units. This conclusion is directly inconsistent with chunk-based models of word segmentation, as confirmed by simulations run with the MDLChunker (Robinet, Lemaire, & Gordon, 2011) and PARSER (Perruchet & Vinter, 1998), which successfully discover the words without computing TPs. Null results, however, can stem from multiple causes, and notably, in the case of Endress and Mehler, from the reduced intelligibility of their synthesized speech stream. In three experiments, we observed positive results in conditions similar to those of Endress and Mehler after only 5 min of exposure to the language, hence providing strong evidence that statistical information is sufficient to extract word-like units.
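The logic of the phantom-word design can be sketched as follows. In this minimal illustration (single letters stand in for syllables; this is not the authors' actual stimulus set), every within-word syllable pair of the phantom-word ABC already occurs inside one of the real words, so pairwise statistics alone cannot tell ABC apart from the words:

```python
from itertools import combinations

# Hypothetical words of the artificial language, with syllables
# abbreviated to single letters (A, B, C, X, Y, Z).
words = ["ABX", "YBC", "AZC"]
phantom = "ABC"

def pairs(word):
    """All ordered within-word syllable pairs, adjacent and non-adjacent."""
    return {(a, b) for a, b in combinations(word, 2)}

# Syllable pairs attested inside the real words.
attested = set().union(*(pairs(w) for w in words))

# AB comes from ABX, BC from YBC, AC from AZC: the phantom-word's
# pairs are all attested, so it matches the words on pair statistics.
print(pairs(phantom) <= attested)  # True
```

A learner tracking only these pairwise co-occurrences would therefore accept ABC as readily as the real words, whereas a chunk-based learner, which stores whole units, would not.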
Publication
Year of publication: 2012
Type:
Journal article
Authors:
Perruchet, P., & Poulin-Charronnat, B.
Journal:
Journal of Memory and Language
Keywords:
statistical learning, artificial language, word segmentation, chunking, modeling