In this paper we investigate deaf individuals’ learning from videos and from written verbal material. The literature shows that hearing loss, and the degraded auditory linguistic input received since birth, lead both signing and oral deaf individuals to poorer comprehension of written language. At the same time, the visuo-spatial nature of sign languages, acquired from birth by signing deaf individuals, gives them an enhanced ability to exploit visual information. From this atypical cognitive functioning of deaf individuals, and from the theoretical implications of mental model theory applied to comprehension and deep learning, we derive the following predictions: 1) deaf individuals differ from hearing individuals in learning from written verbal material, but not in learning from videos; 2) signing deaf individuals exploit the visual information conveyed by videos more than oral deaf individuals and hearing individuals do. Our experiment on signing deaf individuals, oral deaf individuals, and a control group of hearing individuals confirms both predictions.