Serial Position Encoding of Signs (SummerFest 2016)

MICHELE MIOZZO, Columbia University; ANNA PETROVA, Hong Kong University; SIMON FISCHER-BAUM, Rice University; FRANCESCA PERESSOTTI, University of Padua

Reduced short-term memory (STM) capacity has been reported for signs as compared to speech when items have to be recalled in a specific order. This difference has been attributed to more precise and efficient serial position encoding in verbal STM (used for speech) than in visuospatial STM (used for signs). We tested whether the reduced STM capacity for signs stems from signs lacking the type of positional encoding available in verbal STM. Error analyses reported in prior studies have revealed that for verbal material the positions within a sequence are defined by distance from both the start and the end of the sequence (a both-edges positional encoding scheme). For visuospatial material, however, the encoding scheme appears to be anchored only to the start of the sequence. If sign language material is encoded through visuospatial STM, we should expect the same start-anchored encoding for signs.
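The contrast between the two schemes can be made concrete with a minimal sketch. Everything here is illustrative and not from the study itself: the function names, the example sequence, and the tuple representation of position codes are all assumptions chosen to show the difference between start-anchored and both-edges encoding.

```python
# Hypothetical illustration (not the authors' model): two ways to assign
# position codes to items in a to-be-recalled sequence.

def start_only_codes(sequence):
    """Start-anchored scheme: each item's position code is just its
    distance from the start of the sequence."""
    return [(item, i) for i, item in enumerate(sequence)]

def both_edges_codes(sequence):
    """Both-edges scheme: each item's position code is its distance
    from the start AND its distance from the end of the sequence."""
    n = len(sequence)
    return [(item, i, n - 1 - i) for i, item in enumerate(sequence)]

# An arbitrary example list standing in for a sequence of signs.
signs = ["APPLE", "DOG", "TREE", "HOUSE"]
print(start_only_codes(signs))
# [('APPLE', 0), ('DOG', 1), ('TREE', 2), ('HOUSE', 3)]
print(both_edges_codes(signs))
# [('APPLE', 0, 3), ('DOG', 1, 2), ('TREE', 2, 1), ('HOUSE', 3, 0)]
```

Under the both-edges scheme, the last item of any list shares a code component (distance-from-end 0) with the last item of every other list, regardless of list length; under the start-only scheme it does not. This is the kind of regularity that error analyses can detect, since misordered items tend to land in positions that share a code with their original position.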

We analysed the errors made by deaf participants in repeating sequences of signs, and we found that the STM representation of signs is characterized by the both-edges positional encoding scheme. These results indicate that the cause of the STM disadvantage is not the type of positional encoding, but rather difficulties in binding items in visuospatial STM to specific positions in a sequence. The both-edges positional encoding scheme could be specific to sign language material, since it has not been found in visuospatial STM tasks conducted with hearing participants.