Irene Mittelberg
RWTH Aachen University, Human Technology Centre (HumTec), Faculty Member
- Ph.D., Cornell University, Linguistics and Cognitive Studies
- M.A., Cornell University, Linguistics
- M.A., Hamburg University, French Linguistics and Art History
Face-to-face communication is multimodal; it encompasses spoken words, facial expressions, gaze, and co-speech gestures. In contrast to linguistic symbols (e.g., spoken words or signs in sign language), which rely on largely explicit conventions, gestures vary in their degree of conventionality. Bodily signs may carry a generally accepted or conventionalized meaning (e.g., a head shake) or less so (e.g., self-grooming). We hypothesized that the subjective perception of conventionality in co-speech gestures relies on the classical language network, i.e., the left-hemispheric inferior frontal gyrus (IFG, Broca's area) and the posterior superior temporal gyrus (pSTG, Wernicke's area), and studied 36 subjects watching video-recorded story retellings during a behavioral and a functional magnetic resonance imaging (fMRI) experiment. It is well documented that neural correlates of such naturalistic videos emerge as intersubject covariance (ISC) in fMRI even without a model of the stimulus (model-free analysis). The subjects attended either to perceived conventionality or to a control condition (any hand movements or gesture-speech relations). Such tasks modulate ISC in the contributing neural structures, and we therefore examined how ISC in language networks changed with task demands. Indeed, the conventionality task significantly increased both the covariance of the button-press time series and neuronal synchronization in the left IFG relative to the other tasks. In the left IFG, synchronous activity was observed during the conventionality task only. In contrast, the left pSTG exhibited correlated activation patterns during all conditions, with an increase in the conventionality task at trend level only. Conceivably, the left IFG can be considered a core region for the processing of perceived conventionality in co-speech gestures, similar to spoken language. In general, the interpretation of conventionalized signs may rely on neural mechanisms that engage during language comprehension.
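The model-free ISC analysis summarized above can be illustrated with a minimal sketch: ISC is typically quantified as the mean pairwise correlation of subjects' regional time courses, so shared stimulus-driven activity emerges without any explicit stimulus model. The function name and toy data below are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

def intersubject_correlation(timeseries):
    """Mean pairwise Pearson correlation across subjects.

    timeseries: array of shape (n_subjects, n_timepoints), e.g. the BOLD
    signal averaged over one region of interest (such as the left IFG).
    """
    n_subjects = timeseries.shape[0]
    # z-score each subject's time course
    z = (timeseries - timeseries.mean(axis=1, keepdims=True)) \
        / timeseries.std(axis=1, keepdims=True)
    # correlate each subject with every other subject
    corrs = []
    for i in range(n_subjects):
        for j in range(i + 1, n_subjects):
            corrs.append(np.mean(z[i] * z[j]))  # Pearson r of z-scored series
    return float(np.mean(corrs))

# toy data: 4 "subjects" sharing a common driving signal plus private noise
rng = np.random.default_rng(0)
common = rng.standard_normal(200)
data = np.stack([common + 0.5 * rng.standard_normal(200) for _ in range(4)])
isc = intersubject_correlation(data)
```

With a strong shared signal, `isc` approaches 1; task demands that synchronize a region across subjects raise this value, which is the quantity compared between the conventionality and control conditions.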
Embodied image schemas are central to experientialist accounts of meaning-making. Research from several disciplines has evidenced their pervasiveness in motivating form and meaning in both literal and figurative expressions across diverse semiotic systems and art forms (e.g., Gibbs and Colston; Hampe; Johnson; Lakoff; and Mandler). This paper aims to highlight structural similarities between, on the one hand, dynamic image schemas and force schemas and, on the other, hand shapes and gestural movements. Such flexible correspondences between conceptual and gestural schematicity are assumed to partly stem from experiential bases shared by incrementally internalized conceptual structures and the repeated gestural (re-) enacting of bodily actions as well as more abstract semantic primitives (Lakoff). Gestures typically consist of evanescent, metonymically reduced hand configurations, motion onsets, or movement traces that minimally suggest, for instance, a PATH, the idea of CONTAINMENT, an IN-OUT spatial relation, or the momentary loss of emotional BALANCE. So, while physical in nature, gestures often emerge as rather schematic gestalts that, as such, have the capacity to vividly convey essential semantic and pragmatic aspects of high relevance to the speaker. It is further argued that gesturally instantiated image schemas and force dynamics are inherently meaningful structures that typically underlie more complex semantic and pragmatic processes involving, for instance, metonymy, metaphor, and frames. First, I discuss previous work on how image schemas, force gestalts, and mimetic schemas may underpin hand gestures and body postures. 
Drawing on Gibbs' dynamic systems account of image schemas, I then introduce an array of tendencies in gestural image schema enactments: body-inherent/self-oriented (body as image-schematic structure; forces acting upon the body); environment-oriented (material culture including spatial structures); and interlocutor-oriented (intersubjective understanding). Adopting a dynamic systems perspective (e.g., Thompson and Varela) thus puts the focus on how image schemas and force gestalts that operate in gesture may function as cognitive-semiotic organizing principles that underpin a) the physical and cognitive self-regulation of speakers; b) how they interact with the (virtual) environment while talking; and c) intersubjective instances of resonance and understanding between interlocutors or between an artwork and its beholder. Examples of these patterns are enriched by video and motion-capture data, showing how numeric kinetic data allow one to measure the temporal and spatial dimensions of gestural articulations and to visualize movement traces.
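The kind of measurement mentioned above — deriving temporal and spatial dimensions of a gestural articulation from numeric motion-capture data — can be sketched as follows. The function, marker setup, and toy trajectory are illustrative assumptions, not the actual analysis used in this work.

```python
import numpy as np

def gesture_kinematics(positions, dt):
    """Path length and peak speed of a gestural movement trace.

    positions: (n_frames, 3) array of 3D coordinates for one marker
    (e.g. a hand marker from an optical motion-capture system),
    sampled every dt seconds.  Units here are assumed to be meters.
    """
    steps = np.diff(positions, axis=0)            # frame-to-frame displacement
    step_lengths = np.linalg.norm(steps, axis=1)  # Euclidean distance per frame
    path_length = step_lengths.sum()              # spatial extent of the trace
    peak_speed = step_lengths.max() / dt          # fastest instantaneous movement
    return path_length, peak_speed

# toy trace: a hand moving 0.5 m along a straight PATH in 1 s at 100 Hz
t = np.linspace(0.0, 1.0, 101)
trace = np.stack([0.5 * t, np.zeros_like(t), np.zeros_like(t)], axis=1)
length, peak = gesture_kinematics(trace, dt=0.01)
```

Summing per-frame displacements gives the spatial dimension of the articulation (the movement trace), while dividing by the sampling interval recovers its temporal dynamics; plotting the raw positions visualizes the trace itself.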
Taking an Emergent Grammar (Hopper 1998) approach to multimodal usage events in face-to-face interaction, this paper suggests that basic scenes of experience tend to motivate entrenched patterns in both language and gesture (Fillmore 1977; Goldberg 1998; Langacker 1987). Manual actions and interactions with the material and social world, such as giving or holding, have been shown to serve as substrate for prototypical ditransitive and transitive constructions in language (Goldberg 1995). It is proposed here that they may also underpin multimodal instantiations of existential constructions in German discourse, namely, instances of the es gibt 'it gives' (there is/are) construction (Newman 1998) that co-occur with schematic gestural enactments of giving or holding something. Analyses show that gestural existential markers tend to combine referential and pragmatic functions. They exhibit a muted degree of indexicality, pointing to the existence of absent or abstract discourse contents that are central to the speaker's subjective expressivity. Furthermore, gestural existential markers show characteristics of grammaticalization processes in spoken and signed languages (Bybee 2013; Givón 1985; Haiman 1994; Hopper and Traugott 2003). A multimodal construction grammar needs to account for how linguistic constructions combine with gestural patterns into commonly used cross-modal clusters in different languages and contexts of use.
Social interactions arise from patterns of communicative signs, whose perception and interpretation require a multitude of cognitive functions. The semiotic framework of Peirce's Universal Categories (UCs) laid the groundwork for a novel cognitive-semiotic typology of social interactions. During functional magnetic resonance imaging (fMRI), 16 volunteers watched a movie narrative encompassing verbal and non-verbal social interactions. Three types of non-verbal interactions were coded ("unresolved," "non-habitual," and "habitual") based on a typology reflecting Peirce's UCs. As expected, the auditory cortex responded to verbal interactions, but non-verbal interactions modulated temporal areas as well. Conceivably, when speech was lacking, ambiguous visual information (unresolved interactions) primed auditory processing, in contrast to learned behavioral patterns (habitual interactions). The latter recruited a parahippocampal-occipital network supporting conceptual processing and associative memory retrieval. Requiring semiotic contextualization, non-habitual interactions activated visuo-spatial and contextual rule-learning areas such as the temporo-parietal junction and the right lateral prefrontal cortex. In summary, the cognitive-semiotic typology reflected distinct sensory and association networks underlying the interpretation of observed non-verbal social interactions.
This paper aims to evidence the inherently metonymic nature of co-speech gestures. Arguing that motivation in gesture involves iconicity (similarity), indexicality (contiguity), and habit (conventionality) to varying degrees, it demonstrates how a set of metonymic principles may lend a certain systematicity to experientially grounded processes of gestural abstraction and enaction. Introducing visuo-kinetic signs as an umbrella term for co-speech gestures and signed languages, the paper shows how a frame-based approach to gesture may integrate different cognitive/functional linguistic and semiotic accounts of metonymy (e.g., experiential domains, frame metonymy, contiguity, and pragmatic inferencing). The guiding assumption is that gestures metonymically profile deeply embodied, routinized aspects of familiar scenes, that is, the motivating context of frames. The discussion shows how gestures may evoke frame structures exhibiting varying degrees of groundedness, complexity, and schematicity: basic physical action and object frames; more complex frames; and highly abstract, complex frame structures. It thereby provides gestural evidence for the idea that metonymy is more basic and more directly experientially grounded than metaphor and thus often feeds into correlated metaphoric processes. Furthermore, the paper offers some initial insights into how metonymy also seems to induce the emergence of schematic patterns in gesture which may result from action-based and discourse-driven processes of habituation and conventionalization. It exemplifies how these forces may engender grammaticalization of a basic physical action into a gestural marker that shows strong metonymic form reduction, decreased transitivity, and interacting pragmatic functions. Finally, addressing basic metonymic operations in signed lexemes elucidates certain similarities regarding sign constitution in gesture and sign. 
English and German multimodal discourse data as well as German Sign Language (DGS) are drawn upon to illustrate the theoretical points of the paper. Overall, this paper presents a unified account of metonymy's role in underpinning forms, functions, and patterns in visuo-kinetic signs.
Viewpoint has been shown to be a powerful construal mechanism in multimodal spoken and signed discourse, as well as in various other modalities and genres. This paper investigates embodied viewpoint strategies that have been observed when speakers combine speech, gestures, postures, gaze, and simulated action to describe their interaction with spatial artifacts such as gallery buildings, virtual architectural models, and paintings. Simulated artifact immersion is introduced as a multimodal viewpoint strategy whereby speakers submerge into their mental representation of an artifact by perceiving and experiencing it from an internal vantage point. It is argued that this viewpoint strategy tends to be employed when there is no narrative structure for the speakers to fall back on. The paper's aim is twofold: (a) to show that when speakers talk about their own experiences with spatial artifacts, distinguishing between immersed and non-immersed experiential viewpoint strategies may be more fitting than distinguishing between character and observer viewpoint; and (b) to discuss how considering the interaction of iconic, indexical, and metonymic principles in gesture may elucidate viewpoint phenomena in general.
