
Face-to-face communication is multimodal; it encompasses spoken words, facial expressions, gaze, and co-speech gestures. In contrast to linguistic symbols (e.g., spoken words or signs in sign language), which rely on mostly explicit conventions, gestures vary in their degree of conventionality. Bodily signs may have a generally accepted or conventionalized meaning (e.g., a head shake) or less so (e.g., self-grooming). We hypothesized that the subjective perception of conventionality in co-speech gestures relies on the classical language network, i.e., the left hemispheric inferior frontal gyrus (IFG, Broca's area) and the posterior superior temporal gyrus (pSTG, Wernicke's area), and studied 36 subjects watching video-recorded story retellings during a behavioral and a functional magnetic resonance imaging (fMRI) experiment. It is well documented that neural correlates of such naturalistic videos emerge as intersubject covariance (ISC) in fMRI even without modeling the stimulus (model-free analysis). The subjects attended either to perceived conventionality or to a control condition (any hand movements or gesture-speech relations). Such tasks modulate ISC in the contributing neural structures, so we examined how ISC in language networks changed with task demands. Indeed, the conventionality task significantly increased the covariance of the button-press time series and neuronal synchronization in the left IFG compared with the other tasks. In the left IFG, synchronous activity was observed during the conventionality task only. In contrast, the left pSTG exhibited correlated activation patterns during all conditions, with an increase in the conventionality task at the trend level only. Conceivably, the left IFG can be considered a core region for the processing of perceived conventionality in co-speech gestures, similar to spoken language. In general, the interpretation of conventionalized signs may rely on neural mechanisms that engage during language comprehension.
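The intersubject correlation measure at the heart of such model-free analyses can be sketched in a few lines. This is a minimal illustration only; the function name and the leave-one-out averaging scheme are assumptions chosen for exposition, not the authors' exact analysis pipeline:

```python
import numpy as np

def leave_one_out_isc(data):
    """Leave-one-out intersubject correlation (ISC) -- illustrative sketch.

    data: array of shape (n_subjects, n_timepoints) holding one voxel's
    (or region's) BOLD time series per subject.
    Returns one ISC value per subject: the Pearson correlation of that
    subject's time series with the mean time series of all the others.
    """
    data = np.asarray(data, dtype=float)
    n_subjects = data.shape[0]
    iscs = np.empty(n_subjects)
    for s in range(n_subjects):
        # Average the time series of everyone except subject s,
        # then correlate subject s against that group mean.
        others_mean = np.delete(data, s, axis=0).mean(axis=0)
        iscs[s] = np.corrcoef(data[s], others_mean)[0, 1]
    return iscs
```

Applied voxel- or region-wise, a higher mean ISC under one task than another would indicate task-driven neuronal synchronization of the kind reported here for the left IFG.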
Embodied image schemas are central to experientialist accounts of meaning-making. Research from several disciplines has evidenced their pervasiveness in motivating form and meaning in both literal and figurative expressions across diverse semiotic systems and art forms (e.g., Gibbs and Colston; Hampe; Johnson; Lakoff; and Mandler). This paper aims to highlight structural similarities between, on the one hand, dynamic image schemas and force schemas and, on the other, hand shapes and gestural movements. Such flexible correspondences between conceptual and gestural schematicity are assumed to partly stem from experiential bases shared by incrementally internalized conceptual structures and the repeated gestural (re-)enacting of bodily actions as well as more abstract semantic primitives (Lakoff). Gestures typically consist of evanescent, metonymically reduced hand configurations, motion onsets, or movement traces that minimally suggest, for instance, a PATH, the idea of CONTAINMENT, an IN-OUT spatial relation, or the momentary loss of emotional BALANCE. So, while physical in nature, gestures often emerge as rather schematic gestalts that, as such, have the capacity to vividly convey essential semantic and pragmatic aspects of high relevance to the speaker. It is further argued that gesturally instantiated image schemas and force dynamics are inherently meaningful structures that typically underlie more complex semantic and pragmatic processes involving, for instance, metonymy, metaphor, and frames. First, I discuss previous work on how image schemas, force gestalts, and mimetic schemas may underpin hand gestures and body postures.
Drawing on Gibbs' dynamic systems account of image schemas, I then introduce an array of tendencies in gestural image schema enactments: body-inherent/self-oriented (body as image-schematic structure; forces acting upon the body); environment-oriented (material culture including spatial structures), and interlocutor-oriented (intersubjective understanding). Adopting a dynamic systems perspective (e.g., Thompson and Varela) thus puts the focus on how image schemas and force gestalts that operate in gesture may function as cognitive-semiotic organizing principles that underpin a) the physical and cognitive self-regulation of speakers; b) how they interact with the (virtual) environment while talking; and c) intersubjective instances of resonance and understanding between interlocutors or between an artwork and its beholder. Examples of these patterns are enriched by video and motion-capture data, showing how numeric kinetic data allow one to measure the temporal and spatial dimensions of gestural articulations and to visualize movement traces.
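The kind of numeric kinetic measures mentioned above (the temporal and spatial dimensions of gestural articulations) can be derived from a marker trajectory with simple geometry. A minimal sketch, assuming a single hand marker sampled at a fixed frame rate; the function name and the particular measures are illustrative assumptions, not the author's actual motion-capture pipeline:

```python
import numpy as np

def kinetic_profile(positions, fps):
    """Simple kinetic measures from one motion-capture marker trace.

    positions: array of shape (n_frames, 3) with x/y/z coordinates of a
    hand marker; fps: capture rate in frames per second.
    Returns the total path length of the movement trace (spatial extent)
    and the per-frame speed series (temporal dynamics, units per second).
    """
    positions = np.asarray(positions, dtype=float)
    # Euclidean distance covered between consecutive frames.
    steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    path_length = steps.sum()
    # Convert per-frame displacement into speed.
    speed = steps * fps
    return path_length, speed
```

Peaks in the speed series mark gesture strokes, while the accumulated path length quantifies the spatial extent of a movement trace for visualization.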
Taking an Emergent Grammar (Hopper 1998) approach to multimodal usage events in face-to-face interaction, this paper suggests that basic scenes of experience tend to motivate entrenched patterns in both language and gesture (Fillmore 1977; Goldberg 1998; Langacker 1987). Manual actions and interactions with the material and social world, such as giving or holding, have been shown to serve as substrate for prototypical ditransitive and transitive constructions in language (Goldberg 1995). It is proposed here that they may also underpin multimodal instantiations of existential constructions in German discourse, namely, instances of the es gibt 'it gives' (there is/are) construction (Newman 1998) that co-occur with schematic gestural enactments of giving or holding something. Analyses show that gestural existential markers tend to combine referential and pragmatic functions. They exhibit a muted degree of indexicality, pointing to the existence of absent or abstract discourse contents that are central to the speaker's subjective expressivity. Furthermore, gestural existential markers show characteristics of grammaticalization processes in spoken and signed languages (Bybee 2013; Givón 1985; Haiman 1994; Hopper and Traugott 2003). A multimodal construction grammar needs to account for how linguistic constructions combine with gestural patterns into commonly used cross-modal clusters in different languages and contexts of use.
Social interactions arise from patterns of communicative signs, whose perception and interpretation require a multitude of cognitive functions. The semiotic framework of Peirce's Universal Categories (UCs) laid the groundwork for a novel cognitive-semiotic typology of social interactions. During functional magnetic resonance imaging (fMRI), 16 volunteers watched a movie narrative encompassing verbal and non-verbal social interactions. Three types of non-verbal interactions were coded ("unresolved," "non-habitual," and "habitual") based on a typology reflecting Peirce's UCs. As expected, the auditory cortex responded to verbal interactions, but non-verbal interactions modulated temporal areas as well. Conceivably, when speech was lacking, ambiguous visual information (unresolved interactions) primed auditory processing in contrast to learned behavioral patterns (habitual interactions). The latter recruited a parahippocampal-occipital network supporting conceptual processing and associative memory retrieval. Because they require semiotic contextualization, non-habitual interactions activated visuo-spatial and contextual rule-learning areas such as the temporo-parietal junction and right lateral prefrontal cortex. In summary, the cognitive-semiotic typology reflected distinct sensory and association networks underlying the interpretation of observed non-verbal social interactions.
This paper aims to evidence the inherently metonymic nature of co-speech gestures. Arguing that motivation in gesture involves iconicity (similarity), indexicality (contiguity), and habit (conventionality) to varying degrees, it demonstrates how a set of metonymic principles may lend a certain systematicity to experientially grounded processes of gestural abstraction and enaction. Introducing visuo-kinetic signs as an umbrella term for co-speech gestures and signed languages, the paper shows how a frame-based approach to gesture may integrate different cognitive/functional linguistic and semiotic accounts of metonymy (e.g., experiential domains, frame metonymy, contiguity, and pragmatic inferencing). The guiding assumption is that gestures metonymically profile deeply embodied, routinized aspects of familiar scenes, that is, the motivating context of frames. The discussion shows how gestures may evoke frame structures exhibiting varying degrees of groundedness, complexity, and schematicity: basic physical action and object frames; more complex frames; and highly abstract, complex frame structures. It thereby provides gestural evidence for the idea that metonymy is more basic and more directly experientially grounded than metaphor and thus often feeds into correlated metaphoric processes. Furthermore, the paper offers some initial insights into how metonymy also seems to induce the emergence of schematic patterns in gesture which may result from action-based and discourse-driven processes of habituation and conventionalization. It exemplifies how these forces may engender grammaticalization of a basic physical action into a gestural marker that shows strong metonymic form reduction, decreased transitivity, and interacting pragmatic functions. Finally, addressing basic metonymic operations in signed lexemes elucidates certain similarities regarding sign constitution in gesture and sign. 
English and German multimodal discourse data as well as German Sign Language (DGS) are drawn upon to illustrate the theoretical points of the paper. Overall, this paper presents a unified account of metonymy's role in underpinning forms, functions, and patterns in visuo-kinetic signs.
Viewpoint has been shown to be a powerful construal mechanism in multimodal spoken and signed discourse, as well as in various other modalities and genres. This paper investigates embodied viewpoint strategies that have been observed when speakers combine speech, gestures, postures, gaze, and simulated action to describe their interaction with spatial artifacts such as gallery buildings, virtual architectural models, and paintings. Simulated artifact immersion is introduced as a multimodal viewpoint strategy whereby speakers submerge into their mental representation of an artifact by perceiving and experiencing it from an internal vantage point. It is argued that this viewpoint strategy tends to be employed when there is no narrative structure for the speakers to fall back on. The paper's aim is twofold: (a) to show that when speakers talk about their own experiences with spatial artifacts, distinguishing between immersed and non-immersed experiential viewpoint strategies may be more fitting than distinguishing between character and observer viewpoint; and (b) to discuss how considering the interaction of iconic, indexical, and metonymic principles in gesture may elucidate viewpoint phenomena in general.
This chapter starts from the observation that metaphoric understandings expressed monomodally through gesture tend to rely on "primary metaphors" (Grady 1997a). Asserting that gestures draw on basic, experientially motivated, embodied construal operations, we detail how primary scenes and subscenes (Grady & Johnson 2002), image and force schemas, metonymy, and frames (Fillmore 1982) interact in situated meaning-making. We propose that by shifting the focus from object-oriented schemas, source domains, and mappings to what we call "source actions" and "embodied action frames," we can account for the pragmatically minded nature and specific mediality of communicative gestural acts integrated in natural multimodal discourse. We argue that coverbal gestures recruit frame structures metonymically, singling out elements of "scenes" (Fillmore 1977), especially those underpinning correlated metaphoric meanings. We back up our theoretical claims with evidence from neuroscientific studies and outline a frame-based approach that helps trace avenues for further research into embodied cognition and multimodal discourse processes.
In "Two heads are better than one," "head" stands for people and focuses the message on the intelligence of people. This is an example of figurative language through metonymy, where substituting a whole entity by one of its parts focuses attention on a specific aspect of the entity. Whereas metaphors, another figurative language device, are substitutions based on similarity, metonymy involves substitutions based on associations. Both are figures of speech but are also expressed in coverbal gestures during multimodal communication. The closest neuropsychological analogues of metonymy in gestures come from studies of nonlinguistic tool use, illustrated by the classic apraxic problem of body-part-as-object (BPO, equivalent to an internal metonymy representation of the tool) vs. pantomimed action (external metonymy representation of the absent object/tool). Combining these research domains with concepts in cognitive linguistic research on gestures, we conducted an fMRI study to investigate metonymy resolution in coverbal gestures. Given the greater difficulty observed in developmental and apraxia studies, perhaps explained by the more complex semantic inferencing involved for external metonymy than for internal metonymy representations, we hypothesized that external metonymy resolution requires greater processing demands and that the neural resources supporting metonymy resolution would modulate regions involved in semantic processing. We found that there are indeed greater activations for external than for internal metonymy resolution in the temporoparietal junction (TPJ). This area is posterior to the lateral temporal regions recruited by metaphor processing. Effective connectivity analysis confirmed our hypothesis that metonymy resolution modulates areas implicated in semantic processing. We interpret our results in an interdisciplinary view of what metonymy in action can reveal about abstract cognition.
This paper explores how naïve observers recognize and interpret transitive actions (actions involving manipulation of objects) without accompanying speech, in order to derive guidelines for the design of gesture interpretation systems. Semi-structured interviews with 11 observers, interpreting 106 video clips of transitive actions elicited unstaged from 16 participants, reveal that people are generally able to interpret the transitive action as well as characteristics of the object manipulated despite individual variations in how people naturally gesture. In particular, people focus primarily on hand movement and hand shape to correctly interpret object characteristics, and on manner of movement of arms and/or final location of hands to interpret the goal of the transitive action (e.g., arrange objects vs. clear objects). These findings provide insights on aspects of gestures one can focus on to inform and guide the design of gesture interpretation models for interfaces that allow for individual variations in natural gesture production.