
Unspoken science: exploring the significance of body language in science and academia


Mansi Patil, Vishal Patil, Unisha Katre, Unspoken science: exploring the significance of body language in science and academia, European Heart Journal, Volume 45, Issue 4, 21 January 2024, Pages 250–252, https://doi.org/10.1093/eurheartj/ehad598


Science and academia are domains deeply rooted in the pursuit of knowledge and the exchange of ideas, and they rely heavily on effective communication to share that knowledge and foster collaboration. Scientific presentations, in particular, serve as a platform for researchers to share their work and engage with their peers. While the focus is often on the content of research papers, lectures, and presentations, another form of communication plays a significant role in these fields: body language. Non-verbal cues, such as facial expressions, gestures, posture, and eye contact, can convey a wealth of information, often subtly influencing interpersonal dynamics and the perception of scientific work. In this article, we delve into the unspoken science of body language, emphasizing its importance in scientific and academic settings and highlighting its impact on presentations, interactions, interviews, and collaborations. We also explore cultural considerations and their implications for cross-cultural communication. By understanding the unspoken science of body language, researchers and academics can enhance their communication skills and promote a more inclusive and productive scientific community.

Communication is a multi-faceted process, and words are only one aspect of it. Research suggests that non-verbal communication constitutes a substantial portion of human interaction, often conveying information that words alone cannot. Body language has a direct impact on how people perceive and interpret scientific ideas and findings.1 For example, a presenter who maintains confident eye contact, uses purposeful gestures, and exhibits an open posture is likely to be seen as more credible and persuasive compared with someone who fidgets, avoids eye contact, and displays closed-off body language (Figure 1).

Figure 1 Types of non-verbal communication.2 Non-verbal communication comprises haptics, gestures, proxemics, facial expressions, paralinguistics, body language, appearance, eye contact, and artefacts.

In academia, body language plays a crucial role in various contexts. During lectures, professors who use engaging body language, such as animated gestures and expressive facial expressions, can captivate their students and enhance the learning experience. Similarly, students who exhibit attentive and respectful body language, such as maintaining eye contact and nodding, signal their interest and engagement in the subject matter. 3

Body language also influences interactions between colleagues and supervisors. For instance, in a laboratory setting, researchers who display confident and open body language are more likely to be perceived as competent and reliable by their peers. Conversely, individuals who exhibit closed-off or defensive body language may inadvertently create an environment that inhibits collaboration and knowledge sharing. The impact of haptics in research collaboration and networking lies in its potential to enhance interpersonal connections and convey emotions, thereby fostering a deeper sense of empathy and engagement among participants.

Interviews and evaluations are critical moments in academic and scientific careers. Body language can significantly impact the outcomes of these processes. Candidates who display confident body language, including good posture, firm handshakes, and appropriate gestures, are more likely to make positive impressions on interviewers or evaluators. Conversely, individuals who exhibit nervousness or closed-off body language may unwittingly convey a lack of confidence or competence, even if their qualifications are strong. Recognizing the power of body language in these situations allows individuals to present themselves more effectively and positively.

Non-verbal cues play a pivotal role during interviews and conferences, where researchers and academics showcase their work. When attending conferences or presenting research, scientists must be aware of their body language to effectively convey their expertise and credibility. Confident body language can inspire confidence in others, making it easier to establish professional connections, garner support for research projects, and secure collaborations.

Similarly, during job interviews, body language can significantly affect the outcome. An interviewee's facial non-verbal cues can have a great effect on their chances of being hired. The face as a whole, the eyes, and the mouth are the features an interviewer observes when judging a candidate's likely effectiveness at work. Applicants who smile genuinely, and whose eyes convey the same non-verbal message as their mouth, are more likely to be hired than those who do not. Because a first impression can be formed in only milliseconds, it is crucial for an applicant to pass that first test; it sets the tone for the rest of the interview process.4

While body language is a universal form of communication, its interpretation can vary across cultures. Different cultures have distinct norms and expectations regarding body language, and what may be seen as confident in one culture may be interpreted differently in another.5 Awareness of these cultural nuances is crucial for fostering effective cross-cultural communication and understanding. Scientists and academics engaged in international collaborations or interactions should familiarize themselves with cultural differences to avoid misunderstandings and promote respectful and inclusive communication.

Collaboration lies at the heart of scientific progress and academic success. Body language plays a significant role in building trust and establishing effective collaboration among researchers and academics. Open and inviting body language, along with active listening skills, can foster an environment where ideas can be freely exchanged, leading to innovative breakthroughs. In research collaboration and networking, proxemics can significantly affect the level of trust and rapport between researchers. Respecting each other’s personal space and maintaining appropriate distances during interactions can foster a more positive and productive working relationship, leading to better communication and idea exchange ( Figure 2 ). Furthermore, being aware of cultural variations in proxemics can help researchers navigate diverse networking contexts, promoting cross-cultural understanding and enabling more fruitful international collaborations.

Figure 2 Overcoming barriers to communication. The following factors are important for overcoming barriers in communication: using culturally appropriate language, being observant, assuming positive intentions, avoiding being judgemental, identifying and controlling bias, slowing down responses, emphasizing relationships, seeking help from interpreters, being eager to learn and adapt, and being empathetic.

On the other hand, negative body language, such as crossed arms, lack of eye contact, or dismissive gestures, can signal disinterest or disagreement, hindering collaboration and stifling the flow of ideas. Recognizing and addressing such non-verbal cues can help create a more inclusive and productive scientific community.

Effective communication is paramount in science and academia, where the exchange of ideas and knowledge fuels progress. While much attention is given to verbal communication, the significance of non-verbal cues, specifically body language, cannot be overlooked, and it is crucial not to send conflicting verbal and non-verbal signals. Body language encompasses facial expressions, gestures, posture, eye contact, and other non-verbal behaviours that convey information beyond words.

Disclosure of Interest

The authors declare no conflicts of interest.

References

1. Baugh AD, Vanderbilt AA, Baugh RF. Communication training is inadequate: the role of deception, non-verbal communication, and cultural proficiency. Med Educ Online 2020;25:1820228. https://doi.org/10.1080/10872981.2020.1820228

2. Aralia. 8 Nonverbal Tips for Public Speaking. Aralia Education Technology. https://www.aralia.com/helpful-information/nonverbal-tips-public-speaking/ (22 July 2023, date last accessed)

3. Danesi M. Nonverbal communication. In: Understanding Nonverbal Communication. Bloomsbury Academic, 2022;121–162. https://doi.org/10.5040/9781350152670.ch-001

4. Cortez R, Marshall D, Yang C, Luong L. First impressions, cultural assimilation, and hireability in job interviews: examining body language and facial expressions' impact on employer's perceptions of applicants. Concordia J Commun Res 2017;4. https://doi.org/10.54416/dgjn3336

5. Pozzer-Ardenghi L. Nonverbal aspects of communication and interaction and their role in teaching and learning science. In: The World of Science Education. Netherlands: Brill, 2009;259–271. https://doi.org/10.1163/9789087907471_019

Towards the neurobiology of emotional body language

Beatrice de Gelder

Nature Reviews Neuroscience, volume 7, pages 242–249 (2006). https://doi.org/10.1038/nrn1872

People's faces show fear in many different circumstances. However, when people are terrified, as well as showing emotion, they run for cover. When we see a bodily expression of emotion, we immediately know what specific action is associated with a particular emotion, leaving little need for interpretation of the signal, as is the case for facial expressions. Research on emotional body language is rapidly emerging as a new field in cognitive and affective neuroscience. This article reviews how whole-body signals are automatically perceived and understood, and their role in emotional communication and decision-making.


Understanding Body Language and Facial Expressions

Body language refers to the nonverbal signals that we use to communicate. These nonverbal signals make up a huge part of daily communication. In fact, body language may account for between 60% and 65% of all communication.

Examples of body language include facial expressions, eye gaze, gestures, posture, and body movements. In many cases, the things we  don't  say can convey volumes of information.

So, why is body language important? Body language can help us understand others and ourselves. It provides us with information about how people may be feeling in a given situation. We can also use body language to express emotions or intentions.

Facial expressions, gestures, and eye gaze are often identified as the three major types of body language, but other aspects such as posture and personal distance can also be used to convey information. Understanding body language is important, but it is also essential to pay attention to other cues such as context. In many cases, you should look at signals as a group rather than focus on a single action.

This article discusses the roles played by body language in communication, as well as body language examples and the meaning behind them—so you know what to look for when you're trying to interpret nonverbal actions.


Facial Expressions

Think for a moment about how much a person is able to convey with just a facial expression. A smile can indicate approval or happiness . A frown can signal disapproval or unhappiness.

In some cases, our facial expressions may reveal our true feelings about a particular situation. While you say that you are feeling fine, the look on your face may tell people otherwise.

Just a few examples of emotions that can be expressed via facial expressions include happiness, sadness, anger, surprise, disgust, and fear.

The expression on a person's face can even help determine if we trust or believe what the individual is saying.

There are many interesting findings about body language in psychology research. One study found that the most trustworthy facial expression involved a slight raise of the eyebrows and a slight smile. This expression, the researchers suggested, conveys both friendliness and confidence .

Facial expressions are also among the most universal forms of body language. The expressions used to convey fear, anger, sadness, and happiness are similar throughout the world.

Researcher Paul Ekman has found support for the universality of a variety of facial expressions tied to particular emotions including joy, anger, fear, surprise, and sadness.

Research even suggests that we make judgments about people's intelligence based upon their faces and expressions.

One study found that individuals who had narrower faces and more prominent noses were more likely to be perceived as intelligent. People with smiling, joyful expressions were also judged as being more intelligent than those with angry expressions.

The Eyes

The eyes are frequently referred to as the "windows to the soul" since they are capable of revealing a great deal about what a person is feeling or thinking.

As you engage in conversation with another person, taking note of eye movements is a natural and important part of the communication process.

Some common things you may notice include whether people are making direct eye contact or averting their gaze, how much they are blinking, or if their pupils are dilated.

The best way to read someone's body language is to pay attention. Look out for any of the following eye signals.

When a person looks directly into your eyes while having a conversation, it indicates that they are interested and paying attention . However, prolonged eye contact can feel threatening.

On the other hand, breaking eye contact and frequently looking away might indicate that the person is distracted, uncomfortable, or trying to conceal his or her real feelings.

Blinking is natural, but you should also pay attention to whether a person is blinking too much or too little.

People often blink more rapidly when they are feeling distressed or uncomfortable. Infrequent blinking may indicate that a person is intentionally trying to control his or her eye movements.  

For example, a poker player might blink less frequently because he is purposely trying to appear unexcited about the hand he was dealt.

Pupil size can be a very subtle nonverbal communication signal. While light levels in the environment control pupil dilation, sometimes emotions can also cause small changes in pupil size.

For example, you may have heard the phrase "bedroom eyes" used to describe the look someone gives when they are attracted to another person. Highly dilated pupils can indicate that a person is interested or even aroused.

The Mouth

Mouth expressions and movements can also be essential in reading body language. For example, chewing on the bottom lip may indicate that the individual is experiencing feelings of worry, fear, or insecurity.

Covering the mouth may be an effort to be polite if the person is yawning or coughing, but it may also be an attempt to cover up a frown of disapproval.

Smiling is perhaps one of the greatest body language signals, but smiles can also be interpreted in many ways.

A smile may be genuine, or it may be used to express false happiness, sarcasm, or even cynicism.

When evaluating body language, pay attention to the following mouth and lip signals:

  • Pursed lips. Tightening the lips might be an indicator of distaste, disapproval, or distrust.
  • Lip biting. People sometimes bite their lips when they are worried, anxious, or stressed.
  • Covering the mouth. When people want to hide an emotional reaction, they might cover their mouths in order to avoid displaying smiles or smirks.
  • Turned up or down. Slight changes in the mouth can also be subtle indicators of what a person is feeling. When the mouth is slightly turned up, it might mean that the person is feeling happy or optimistic . On the other hand, a slightly down-turned mouth can be an indicator of sadness, disapproval, or even an outright grimace.

Gestures

Gestures can be some of the most direct and obvious body language signals. Waving, pointing, and using the fingers to indicate numerical amounts are all very common and easy to understand gestures.

Some gestures may be cultural , however, so giving a thumbs-up or a peace sign in another country might have a completely different meaning than it does in the United States.

The following examples are just a few common gestures and their possible meanings:

  • A clenched fist  can indicate anger in some situations or solidarity in others.
  • A thumbs up and thumbs down  are often used as gestures of approval and disapproval.  
  • The "okay" gesture , made by touching together the thumb and index finger in a circle while extending the other three fingers, can be used to mean "okay" or "all right." In some parts of Europe, however, the same signal is used to imply you are nothing. In some South American countries, the symbol is actually a vulgar gesture.
  • The V sign , created by lifting the index and middle finger and separating them to create a V-shape, means peace or victory in some countries. In the United Kingdom and Australia, the symbol takes on an offensive meaning when the back of the hand is facing outward.

The Arms and Legs

The arms and legs can also be useful in conveying nonverbal information. Crossing the arms can indicate defensiveness. Crossing legs away from another person may indicate dislike or discomfort with that individual.

Other subtle signals such as expanding the arms widely may be an attempt to seem larger or more commanding, while keeping the arms close to the body may be an effort to minimize oneself or withdraw from attention.

When you are evaluating body language, pay attention to some of the following signals that the arms and legs may convey:

  • Crossed arms  might indicate that a person feels defensive, self-protective, or closed-off.
  • Standing with hands placed on the hips  can be an indication that a person is ready and in control, or it can also possibly be a sign of aggressiveness .
  • Clasping the hands behind the back  might indicate that a person is feeling bored, anxious, or even angry.
  • Rapidly tapping fingers or fidgeting  can be a sign that a person is bored, impatient, or frustrated.
  • Crossed legs  can indicate that a person is feeling closed-off or in need of privacy. 

Posture

How we hold our bodies can also serve as an important part of body language.

The term posture refers to how we hold our bodies as well as the overall physical form of an individual.

Posture can convey a wealth of information about how a person is feeling as well as hints about personality characteristics, such as whether a person is confident, open, or submissive.

Sitting up straight, for example, may indicate that a person is focused and paying attention to what's going on. Sitting with the body hunched forward, on the other hand, can imply that the person is bored or indifferent.

When you are trying to read body language, try to notice some of the signals that a person's posture can send.

  • Open posture  involves keeping the trunk of the body open and exposed. This type of posture indicates friendliness, openness, and willingness.
  • Closed posture  involves hiding the trunk of the body often by hunching forward and keeping the arms and legs crossed. This type of posture can be an indicator of hostility, unfriendliness, and anxiety .

Personal Space

Have you ever heard someone refer to their need for personal space? Have you ever started to feel uncomfortable when someone stands just a little too close to you?

The term proxemics , coined by anthropologist Edward T. Hall, refers to the distance between people as they interact. Just as body movements and facial expressions can communicate a great deal of nonverbal information, so can the physical space between individuals.

Hall  described four levels  of social distance that occur in different situations.

Intimate Distance: 6 to 18 inches 

This level of physical distance often indicates a closer relationship or greater comfort between individuals. It usually occurs during intimate contact such as hugging, whispering, or touching.

Personal Distance: 1.5 to 4 feet

Physical distance at this level usually occurs between people who are family members or close friends. How closely people can comfortably stand while interacting can be an indicator of the level of intimacy in their relationship.

Social Distance: 4 to 12 feet

This level of physical distance is often used with individuals who are acquaintances.

With someone you know fairly well, such as a co-worker you see several times a week, you might feel more comfortable interacting at a closer distance.

In cases where you do not know the other person well, such as a postal delivery driver you only see once a month, a distance of 10 to 12 feet may feel more comfortable.

Public Distance: 12 to 25 feet

Physical distance at this level is often used in public speaking situations. Talking in front of a class full of students or giving a presentation at work are good examples of such situations.

It is also important to note that the level of personal distance that individuals need to feel comfortable can vary from culture to culture.

One oft-cited example is the difference between people from Latin cultures and those from North America. People from Latin countries tend to feel more comfortable standing closer to one another as they interact, while those from North America need more personal distance.
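Because Hall's zones are defined by simple numeric ranges, they can be expressed compactly in code. The following minimal Python sketch maps an interpersonal distance in feet to the corresponding zone; the boundaries follow the figures quoted above, while the function name and the handling of distances beyond 25 feet are illustrative assumptions rather than anything prescribed by Hall.

```python
# Illustrative sketch only: Hall's four social distance zones, using the
# approximate upper boundaries described above (distances in feet).
HALL_ZONES = [
    (1.5, "intimate"),   # roughly 6 to 18 inches
    (4.0, "personal"),   # about 1.5 to 4 feet
    (12.0, "social"),    # about 4 to 12 feet
    (25.0, "public"),    # about 12 to 25 feet
]

def classify_distance(feet: float) -> str:
    """Return the name of the zone a given interpersonal distance falls into."""
    for upper_bound, zone in HALL_ZONES:
        if feet <= upper_bound:
            return zone
    return "beyond public"  # farther than the typical public-speaking range

if __name__ == "__main__":
    for distance in (1.0, 3.0, 8.0, 20.0, 40.0):
        print(f"{distance:4.1f} ft -> {classify_distance(distance)}")
```

As the cultural comparison above makes clear, any such fixed boundaries are only rough guides; comfortable distances shift with culture, relationship, and context.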

Roles of Nonverbal Communication

Body language plays many roles in social interactions. It can help facilitate the following:

  • Earning trust : Engaging in eye contact, nodding your head while listening, and even unconsciously mirroring another person's body language are all signals that you and someone else are bonding.
  • Emphasizing a point : The tone of voice you use and the way you engage listeners with your hand and arm gestures, or by how you take up space, are all ways that affect how your message comes across.
  • Revealing truths : When someone's body language doesn't match what they're saying, we might intuitively pick up on the fact that they are withholding information, or perhaps not being honest about how they feel.
  • Tuning in to your own needs : Our own body language can reveal a lot about how we're feeling. For instance, are you in a slumped posture, clenching your jaw and/or pursing your lips? This may be a signal that the environment you're currently in is triggering you in some way. Your body might be telling you that you're feeling unsafe, stressed, or any number of emotions.

Remember, though, that your assumptions about what someone else's body language means may not always be accurate.

What does body language tell you about a person?

Body language can tell you when someone feels anxious, angry, excited, or any other emotion. It may also suggest personality traits (e.g., whether someone is shy or outgoing). But body language can be misleading: it is subject to a person's mood, energy level, and circumstances.

A lack of eye contact, for instance, can sometimes indicate untrustworthiness, but it doesn't mean you automatically can't trust someone who isn't looking you in the eyes. They may simply be distracted and thinking about something else. Or, again, a cultural difference could be at play.

How to Improve Your Nonverbal Communication

The first step in improving your nonverbal communication is to pay attention. Try to see if you can pick up on other people's physical cues as well as your own.

Maybe when someone is telling you a story, you tend to look at the floor. In order to show them you're paying attention, you might try making eye contact instead, and even showing a slight smile, to show you're open and engaged.

What is good body language?

Good body language, also known as positive body language, should convey interest and enthusiasm. Some ways to do this include maintaining an upright and open posture, keeping good eye contact, smiling, and nodding while listening.

Using body language with intention is all about finding balance. For instance, when shaking someone's hand before a job interview, holding it somewhat firmly can signal professionalism. But, gripping it too aggressively might cause the other person pain or discomfort. Be sure to consider how other people might feel.

In addition, continue to develop emotional intelligence . The more in touch you are with how you feel, the easier it often is to sense how others are receiving you. You'll be able to tell when someone is open and receptive, or, on the other hand, if they are closed-off and need some space.

If we want to feel a certain way, we can use our body language to our advantage. For example, research found that people who maintained an upright seated posture while dealing with stress had higher levels of self-esteem and more positive moods compared to people who had slumped posture.

Of course, it's verbal and nonverbal communication—as well as the context of a situation—that often paints a full picture.

There isn't always a one-size-fits-all solution for what nonverbal cues are appropriate. However, by staying present and being respectful, you'll be well on your way to understanding how to use body language effectively.

A Word From Verywell

Understanding body language can go a long way toward helping you better communicate with others and interpreting what others might be trying to convey. While it may be tempting to pick apart signals one by one, it's important to look at these nonverbal signals in relation to verbal communication, other nonverbal signals, and the situation.

You can also learn more about how to improve your nonverbal communication to become better at letting people know what you are feeling—without even saying a word.

Foley GN, Gentile JP. Nonverbal communication in psychotherapy . Psychiatry (Edgmont) . 2010;7(6):38-44.

Tipper CM, Signorini G, Grafton ST. Body language in the brain: constructing meaning from expressive movement . Front Hum Neurosci . 2015;9:450. doi:10.3389/fnhum.2015.00450

Todorov A, Baron SG, Oosterhof NN. Evaluating face trustworthiness: a model based approach. Soc Cogn Affect Neurosci. 2008;3(2):119-27. doi:10.1093/scan/nsn009

Ekman P. Darwin's contributions to our understanding of emotional expressions. Philos Trans R Soc Lond, B, Biol Sci. 2009;364(1535):3449-51. doi:10.1098/rstb.2009.0189

Kleisner K, Chvátalová V, Flegr J. Perceived intelligence is associated with measured intelligence in men but not women. PLoS ONE. 2014;9(3):e81237. doi:10.1371/journal.pone.0081237

D'agostino TA, Bylund CL. Nonverbal accommodation in health care communication. Health Commun . 2014;29(6):563-73. doi:10.1080/10410236.2013.783773

Marchak FM. Detecting false intent using eye blink measures. Front Psychol. 2013;4:736. doi:10.3389/fpsyg.2013.00736

Jiang J, Borowiak K, Tudge L, Otto C, Von kriegstein K. Neural mechanisms of eye contact when listening to another person talking. Soc Cogn Affect Neurosci. 2017;12(2):319-328. doi:10.1093/scan/nsw127

Roter DL, Frankel RM, Hall JA, Sluyter D. The expression of emotion through nonverbal behavior in medical visits. Mechanisms and outcomes . J Gen Intern Med. 2006;21 Suppl 1:S28-34. doi:10.1111/j.1525-1497.2006.00306.x

Montgomery KJ, Isenberg N, Haxby JV. Communicative hand gestures and object-directed hand movements activated the mirror neuron system. Soc Cogn Affect Neurosci. 2007;2(2):114-22. doi:10.1093/scan/nsm004

Vacharkulksemsuk T, Reit E, Khambatta P, Eastwick PW, Finkel EJ, Carney DR. Dominant, open nonverbal displays are attractive at zero-acquaintance . Proc Natl Acad Sci USA. 2016;113(15):4009-14. doi:10.1073/pnas.1508932113

Hall ET. A system for the notation of proxemic behavior . American Anthropologist. October 1963;65(5):1003-1026. doi:10.1525/aa.1963.65.5.02a00020.

Hughes H, Hockey J, Berry G. Power play: the use of space to control and signify power in the workplace . Culture and Organization. 2019;26(4):298-314. doi:10.1080/14759551.2019.1601722

Chemelo VDS, Né YGS, Frazão DR, et al. Is there association between stress and bruxism? A systematic review and meta-analysis.  Front Neurol . 2020;11:590779. doi:10.3389/fneur.2020.590779

Jarick M, Bencic R.  Eye contact is a two-way street: arousal is elicited by the sending and receiving of eye gaze information.   Front Psychol . 2019;10:1262. doi:10.3389/fpsyg.2019.01262

Fred HL. Banning the handshake from healthcare settings is not the solution to poor hand hygiene .  Tex Heart Inst J . 2015;42(6):510-511. doi:10.14503/THIJ-15-5254

Nair S, Sagar M, Sollers J 3rd, Consedine N, Broadbent E. Do slumped and upright postures affect stress responses? A randomized trial .  Health Psychol . 2015;34(6):632-641. doi:10.1037/hea0000146

Hehman, E, Flake, JK and Freeman, JB. Static and dynamic facial cues differentially affect the consistency of social evaluations .  Personality and Social Psychology Bulletin . 2015; 41(8): 1123-34. doi:10.1177/0146167215591495.

Pillai D, Sheppard E, Mitchell P. Can people guess what happened to others from their reactions? Gilbert S, ed. PLoS ONE . 2012;7(11):e49859. doi:10.1371/journal.pone.0049859.

  • Ekman P. Emotions Revealed: Recognizing Faces and Feelings to Improve Communication and Emotional Life. 2nd ed. New York: Holt; 2007.
  • Pease A, Pease B. The Definitive Book of Body Language. Orion Publishing Group; 2017.

By Kendra Cherry, MSEd. Kendra Cherry is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


Body Language and Behavior Interpretation: A Comprehensive Guide to Nonverbal Communication

A silent dance of gestures, expressions, and movements lies at the heart of human interaction, weaving an intricate tapestry of meaning that often speaks louder than words. This unspoken language, known as body language , forms the foundation of our daily interactions, influencing our perceptions, relationships, and even our success in various aspects of life.

Have you ever wondered why you instantly clicked with someone or felt uneasy around another person without exchanging a single word? The answer lies in the subtle yet powerful world of nonverbal communication. It’s a realm where a raised eyebrow can convey skepticism, a slight lean forward can indicate interest, and a crossed arm can signal defensiveness.

As social creatures, we’re hardwired to pick up on these cues, often subconsciously. But what if we could consciously harness this knowledge to better understand ourselves and others? That’s where the fascinating field of body language and behavior interpretation comes into play.

Decoding the Silent Language: An Introduction to Body Language

Body language, in its simplest form, refers to the nonverbal signals we send and receive through our physical behaviors. It encompasses everything from facial expressions and eye movements to posture, gestures, and even the way we position ourselves in relation to others. This silent vocabulary is so rich and nuanced that it’s estimated that up to 93% of our communication is nonverbal!

The study of body language, formally known as kinesics , has evolved from a niche area of research to a critical component of fields as diverse as psychology, law enforcement, business, and even politics. It’s a testament to the universal importance of understanding these unspoken messages that surround us every day.

But why is this understanding so crucial? Well, imagine walking into a job interview armed not just with your resume, but with the ability to read your interviewer’s subtle cues. Or picture yourself on a first date, capable of discerning your companion’s level of interest beyond their polite small talk. That’s the power of body language interpretation – it provides a window into thoughts and feelings that words alone might never reveal.

The Science Behind the Signals: Unraveling Body Language and Behavior

The scientific study of body language has its roots in the mid-20th century, with pioneers like Ray Birdwhistell and Edward T. Hall laying the groundwork for what would become a rich field of research. Birdwhistell, an anthropologist, coined the term “kinesics” in the 1950s, viewing nonverbal communication as a language with its own grammar and vocabulary.

Around the same time, Hall introduced the concept of proxemics, focusing on how people use space in interpersonal communication. His work on personal space and cultural differences in spatial preferences remains influential today.

But it was perhaps Paul Ekman who truly catapulted the study of body language into the mainstream. His groundbreaking research on facial expressions in the 1960s and 70s revealed that certain expressions are universal across cultures, challenging the prevailing belief that emotional expressions were entirely learned.

These pioneers paved the way for a deeper understanding of the intricate dance between verbal and nonverbal communication. While words convey explicit messages, body language often reveals implicit ones – the emotions, attitudes, and intentions that lurk beneath the surface of our spoken words.

Interestingly, when verbal and nonverbal cues conflict, we tend to trust the nonverbal. It’s why a forced smile rarely convinces us of someone’s happiness, or why a person’s fidgeting might betray their nervousness despite their calm words. This phenomenon, known as the “leakage effect,” highlights the often uncontrollable nature of our body language.

The Building Blocks: Fundamental Elements of Body Language

To truly grasp the nuances of body language, we need to break it down into its core components. Let’s explore these fundamental elements that form the vocabulary of our nonverbal communication.

1. Facial Expressions and Micro-expressions

Our faces are incredibly expressive, capable of conveying a vast array of emotions. From the obvious (a wide grin signaling joy) to the subtle (a slight furrow of the brow indicating concern), facial behavior is a rich source of nonverbal information.

Micro-expressions are particularly fascinating. These fleeting facial movements, lasting only a fraction of a second, often reveal emotions we’re trying to conceal. Spotting these can be like catching a glimpse of someone’s true feelings before they manage to mask them.

2. Eye Contact and Gaze Patterns

They say the eyes are the windows to the soul, and there’s truth in that cliché. Eye behavior can indicate attention, interest, emotion, and even dominance or submission. Maintaining eye contact, for instance, can signal confidence and engagement, while frequent glancing away might suggest discomfort – though research on deception consistently finds that gaze aversion is an unreliable sign of lying.

But it’s not just about whether someone meets your gaze. The direction of a person’s gaze can be telling too. Looking up is often said to indicate that someone is accessing visual memories, while looking down and to the side is said to suggest internal dialogue – though these gaze-direction rules, popularized by neuro-linguistic programming, have little empirical support and should be treated with caution.

3. Posture and Body Positioning

How we hold ourselves speaks volumes. An open, relaxed posture typically conveys confidence and approachability, while a closed, hunched posture might signal defensiveness or insecurity. Even subtle shifts can be meaningful – leaning towards someone often indicates interest, while leaning away might suggest discomfort or disagreement.

4. Gestures and Hand Movements

Our hands are incredibly expressive tools. We use them to emphasize points, illustrate concepts, and even unconsciously reveal our emotional states. Open palms often signal honesty and openness, while clenched fists might indicate tension or anger. Even seemingly innocuous behaviors like touching one’s face or fiddling with objects can provide clues about a person’s internal state.

5. Proxemics: Personal Space and Distance

The distance we maintain from others is a form of nonverbal communication in itself. Anthropologist Edward T. Hall identified four main distance zones: intimate (0-18 inches), personal (18 inches to 4 feet), social (4-12 feet), and public (more than 12 feet). How close we stand to someone can indicate the nature of our relationship and our comfort level with them.
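
To make these zones concrete, here is a minimal illustrative sketch in Python that maps an interpersonal distance onto Hall's four zones using the thresholds above; the function name and units are ours, added purely for illustration.

```python
def proxemic_zone(distance_inches: float) -> str:
    """Map an interpersonal distance (in inches) onto Hall's four zones.

    Thresholds follow the commonly cited values: intimate (0-18 in),
    personal (18 in to 4 ft), social (4-12 ft), public (beyond 12 ft).
    """
    if distance_inches < 0:
        raise ValueError("distance cannot be negative")
    if distance_inches <= 18:
        return "intimate"
    if distance_inches <= 48:      # 4 feet
        return "personal"
    if distance_inches <= 144:     # 12 feet
        return "social"
    return "public"

# Example: a conversation at about 3 feet falls in the personal zone.
print(proxemic_zone(36))  # -> "personal"
```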

Understanding these elements is like learning the alphabet of body language. But just as knowing the alphabet doesn’t make you fluent in a language, recognizing these cues is only the first step. The real skill lies in interpreting them accurately in different contexts.

Reading the Room: Interpreting Body Language in Various Settings

Context is king when it comes to interpreting body language. A behavior that means one thing in a casual social setting might convey something entirely different in a professional environment. Let’s explore how body language manifests and is interpreted in different contexts.

1. Professional Settings: Workplace and Business Interactions

In the corporate world, body language can make or break deals, influence hiring decisions, and shape workplace dynamics. A firm handshake and good eye contact in a job interview can convey confidence and competence. In a negotiation, leaning back with hands behind the head might be seen as a power play, signaling confidence or even arrogance.

However, it’s crucial to remember that cultural differences can significantly impact these interpretations. What’s considered assertive in one culture might be seen as aggressive in another.

2. Social Situations: Dating, Friendships, and Casual Encounters

In social settings, body language often becomes more relaxed and expressive. On a date, for instance, signs of attraction might include mirroring the other person’s posture, frequent smiling, and leaning in during conversation. Among friends, playful touches and open postures typically indicate comfort and camaraderie.

3. Cross-cultural Differences in Body Language Interpretation

It’s vital to recognize that body language isn’t universal. While some expressions (like smiles) are generally understood across cultures, many gestures and behaviors can have vastly different meanings in different parts of the world. The “thumbs up” gesture, for example, is positive in many Western countries but can be highly offensive in some Middle Eastern cultures.

4. Body Language in Public Speaking and Presentations

For public speakers, mastering body language is crucial. Confident posture, deliberate gestures, and maintaining eye contact with the audience can significantly enhance the impact of a speech. Conversely, nervous habits like fidgeting or avoiding eye contact can undermine even the most well-prepared presentation.

5. Nonverbal Cues in Virtual Communication

In our increasingly digital world, interpreting body language through a screen presents new challenges. While we lose some cues in virtual interactions, others become more prominent. Paraverbal behavior – aspects of speech like tone, pitch, and pace – takes on added importance. Even in video calls, paying attention to facial expressions and upper body language can provide valuable insights.

Mastering the Art: Techniques for Accurately Interpreting Body Language

Interpreting body language isn’t just about recognizing individual cues – it’s about putting them together to form a coherent picture. Here are some techniques to help you become more adept at reading nonverbal signals:

1. Baseline Behavior Assessment

Before jumping to conclusions, it’s crucial to establish a person’s baseline behavior. Everyone has their own quirks and habits, so what might be a sign of nervousness in one person could be perfectly normal for another. Observe how someone behaves in a relaxed state to better identify deviations that might indicate changes in their emotional or mental state.

2. Cluster Reading: Interpreting Multiple Cues Simultaneously

Nonverbal behavior rarely occurs in isolation. Instead of focusing on a single gesture or expression, look for clusters of behaviors that reinforce each other. For instance, a genuine smile typically involves not just the mouth, but also the eyes (the famous “Duchenne smile”).

3. Context Consideration and Environmental Factors

Always consider the context in which behavior occurs. A person crossing their arms might be feeling defensive – or they might just be cold! Environmental factors like temperature, noise levels, and the presence of other people can all influence body language.

4. Recognizing and Avoiding Common Misinterpretations

It’s easy to fall into the trap of over-interpreting or misreading body language. Be wary of confirmation bias – the tendency to interpret cues in a way that confirms your preexisting beliefs. Also, remember that while body language can provide valuable insights, it’s not mind-reading. Always combine your observations with other forms of communication for a more accurate understanding.

5. Practice Exercises for Improving Interpretation Skills

Like any skill, reading body language improves with practice. Try people-watching in public places, observing interactions in TV shows with the sound muted, or analyzing photographs of people interacting. You can also practice with friends, taking turns expressing emotions nonverbally and guessing what they’re conveying.

Beyond Theory: Practical Applications of Body Language Interpretation

The ability to accurately interpret body language has far-reaching applications across various fields:

1. Law Enforcement and Criminal Investigations

In law enforcement, reading nonverbal cues can be crucial in interrogations, detecting deception, and assessing potential threats. While body language alone isn’t enough to determine guilt or innocence, it can provide valuable leads and help investigators ask more targeted questions.

2. Psychology and Counseling

Mental health professionals often rely on nonverbal cues to gain deeper insights into their clients’ emotional states. Body language can reveal feelings or thoughts that a client might be unwilling or unable to express verbally.

3. Sales and Negotiation Tactics

In the world of sales and negotiation, being able to read and respond to a client’s body language can be the difference between closing a deal and losing it. Recognizing signs of interest, hesitation, or disagreement allows salespeople to adapt their approach in real-time.

4. Leadership and Team Management

Leaders who are attuned to their team members’ nonverbal cues can better gauge morale, identify potential conflicts, and create a more harmonious work environment. This skill is particularly valuable in managing diverse teams where cultural differences might impact communication styles.

5. Personal Relationship Enhancement

In our personal lives, understanding body language can help us navigate social situations more effectively, improve our romantic relationships, and even strengthen family bonds. It allows us to be more empathetic and responsive to others’ needs, even when they’re not explicitly stated.

The Road Ahead: Conclusion and Future Perspectives

As we’ve explored, body language is an integral part of human communication, often conveying messages more powerfully than words ever could. By honing our skills in interpreting these nonverbal cues, we open ourselves up to a deeper understanding of those around us and ourselves.

However, with great power comes great responsibility. As we become more adept at reading body language, it’s crucial to use this knowledge ethically. Respect for privacy, avoiding manipulation, and recognizing the limitations of our interpretations are all important considerations.

Looking to the future, the field of nonverbal communication studies continues to evolve. Advances in technology, such as AI-powered facial recognition and emotion detection systems, are opening up new avenues for research and application. At the same time, our increasingly digital world presents new challenges in interpreting body language through screens and virtual interactions.

Ultimately, the study of body language reminds us of a fundamental truth: all behavior is a form of communication . By paying attention to these silent signals, we can enhance our understanding of others, improve our relationships, and navigate the complex world of human interaction with greater ease and empathy.

So, the next time you’re in a conversation, remember that there’s much more being said than just the words you hear. Look beyond the verbal, tune into the nonverbal, and you might just discover a whole new dimension of communication.

As you continue on your journey of understanding body language, remember that it’s a lifelong learning process. Each interaction is an opportunity to observe, interpret, and refine your skills. So keep your eyes open, your mind curious, and your body language positive – you never know what fascinating insights you might uncover in the silent language that surrounds us all.

References:

1. Ekman, P. (1993). Facial expression and emotion. American Psychologist, 48(4), 384-392.

2. Hall, E. T. (1966). The Hidden Dimension. Doubleday, New York.

3. Knapp, M. L., & Hall, J. A. (2013). Nonverbal Communication in Human Interaction. Wadsworth, Cengage Learning.

4. Matsumoto, D., Frank, M. G., & Hwang, H. S. (2013). Nonverbal Communication: Science and Applications. SAGE Publications.

5. Mehrabian, A. (1981). Silent Messages: Implicit Communication of Emotions and Attitudes. Wadsworth, Belmont, CA.

6. Navarro, J. (2008). What Every BODY is Saying: An Ex-FBI Agent’s Guide to Speed-Reading People. William Morrow Paperbacks.

7. Pease, A., & Pease, B. (2004). The Definitive Book of Body Language. Bantam Books.

8. Vrij, A. (2008). Detecting Lies and Deceit: Pitfalls and Opportunities. Wiley-Interscience.

9. Wainwright, G. R. (2003). Body Language. Teach Yourself.

10. Zuckerman, M., DePaulo, B. M., & Rosenthal, R. (1981). Verbal and nonverbal communication of deception. Advances in experimental social psychology, 14, 1-59.


ORIGINAL RESEARCH article

Body language in the brain: constructing meaning from expressive movement.

Christine M. Tipper*

  • 1 Department of Psychiatry, University of British Columbia, Vancouver, BC, Canada
  • 2 Mental Health and Integrated Neurobehavioral Development Research Core, Child and Family Research Institute, Vancouver, BC, Canada
  • 3 Psychiatric Epidemiology and Evaluation Unit, Saint John of God Clinical Research Center, Brescia, Italy
  • 4 Department of Psychological and Brain Sciences, University of California, Santa Barbara, CA, USA

This fMRI study investigated neural systems that interpret body language—the meaningful emotive expressions conveyed by body movement. Participants watched videos of performers engaged in modern dance or pantomime that conveyed specific themes such as hope, agony, lust, or exhaustion. We tested whether the meaning of an affectively laden performance was decoded in localized brain substrates as a distinct property of action separable from other superficial features, such as choreography, kinematics, performer, and low-level visual stimuli. A repetition suppression (RS) procedure was used to identify brain regions that decoded the meaningful affective state of a performer, as evidenced by decreased activity when emotive themes were repeated in successive performances. Because the theme was the only feature repeated across video clips that were otherwise entirely different, the occurrence of RS identified brain substrates that differentially coded the specific meaning of expressive performances. RS was observed bilaterally, extending anteriorly along middle and superior temporal gyri into temporal pole, medially into insula, rostrally into inferior orbitofrontal cortex, and caudally into hippocampus and amygdala. Behavioral data on a separate task indicated that interpreting themes from modern dance was more difficult than interpreting pantomime; a result that was also reflected in the fMRI data. There was greater RS in left hemisphere, suggesting that the more abstract metaphors used to express themes in dance compared to pantomime posed a greater challenge to brain substrates directly involved in decoding those themes. We propose that the meaning-sensitive temporal-orbitofrontal regions observed here comprise a superordinate functional module of a known hierarchical action observation network (AON), which is critical to the construction of meaning from expressive movement. The findings are discussed with respect to a predictive coding model of action understanding.

Introduction

Body language is a powerful form of non-verbal communication providing important clues about the intentions, emotions, and motivations of others. In the course of our everyday lives, we pick up information about what people are thinking and feeling through their body posture, mannerisms, gestures, and the prosody of their movements. This intuitive social awareness is an impressive feat of neural integration; the cumulative result of activity in distributed brain systems specialized for coding a wide range of social information. Reading body language is more than just a matter of perception. It entails not only recognizing and coding socially relevant visual information, but also ascribing meaning to those representations.

We know a great deal about brain systems involved in the perception of facial expressions, eye movements, body movement, hand gestures, and goal directed actions, as well as those mediating affective, decision, and motor responses to social stimuli. What is still missing is an understanding of how the brain “reads” body language. Beyond the decoding of body motion, what are the brain substrates directly involved in extracting meaning from affectively laden body expressions? The brain has several functionally specialized structures and systems for processing socially relevant perceptual information. A subcortical pulvinar-superior colliculus-amygdala-striatal circuit mediates reflex-like perception of emotion from body posture, particularly fear, and activates commensurate reflexive motor responses ( Dean et al., 1989 ; Cardinal et al., 2002 ; Sah et al., 2003 ; de Gelder and Hadjikhani, 2006 ). A region of the occipital cortex known as the extrastriate body area (EBA) is sensitive to bodily form ( Bonda et al., 1996 ; Hadjikhani and de Gelder, 2003 ; Astafiev et al., 2004 ; Peelen and Downing, 2005 ; Urgesi et al., 2006 ). The fusiform gyrus of the ventral occipital and temporal lobes has a critical role in processing faces and facial expressions ( McCarthy et al., 1997 ; Hoffman and Haxby, 2000 ; Haxby et al., 2002 ). Posterior superior temporal sulcus is involved in perceiving the motion of biological forms in particular ( Allison et al., 2000 ; Pelphrey et al., 2005 ). Somatosensory, ventromedial prefrontal, premotor, and insular cortex contribute to one's own embodied awareness of perceived emotional states ( Adolphs et al., 2000 ; Damasio et al., 2000 ). Visuomotor processing in a functional brain network known as the action observation network (AON) codes observed action in distinct functional modules that together link the perception of action and emotional body language with ongoing behavioral goals and the formation of adaptive reflexes, decisions, and motor behaviors ( Grafton et al., 1996 ; Rizzolatti et al., 1996b , 2001 ; Hari et al., 1998 ; Fadiga et al., 2000 ; Buccino et al., 2001 ; Grézes et al., 2001 ; Grèzes et al., 2001 ; Ferrari et al., 2003 ; Zentgraf et al., 2005 ; Bertenthal et al., 2006 ; de Gelder, 2006 ; Frey and Gerry, 2006 ; Ulloa and Pineda, 2007 ). Given all we know about how bodies, faces, emotions, and actions are perceived, one might expect a clear consensus on how meaning is derived from these percepts. Perhaps surprisingly, while we know these systems are crucial to integrating perceptual information with affective and motor responses, how the brain deciphers meaning based on body movement remains unknown. The focus of this investigation was to identify brain substrates that decode meaning from body movement, as evidenced by meaning-specific neural processing that differentiates body movements conveying distinct expressions.

To identify brain substrates sensitive to the meaningful emotive state of an actor conveyed through body movement, we used repetition suppression (RS) fMRI. This technique identifies regions of the brain that code for a particular stimulus dimension (e.g., shape) by revealing substrates that have different patterns of neural activity in response to different attributes of that dimension (e.g., circle, square, triangle; Grill-Spector et al., 2006 ). When a particular attribute is repeated, synaptic activity and the associated blood oxygen level-dependent (BOLD) response decreases in voxels containing neuronal assemblies that code that attribute ( Wiggs and Martin, 1998 ; Grill-Spector and Malach, 2001 ). We have used this method previously to show that various properties of an action such as movement kinematics, object goal, outcome, and context-appropriateness of action mechanics are uniquely coded by different neural substrates within a parietal-frontal action observation network (AON; Hamilton and Grafton, 2006 , 2007 , 2008 ; Ortigue et al., 2010 ). Here, we applied RS-fMRI to identify brain areas in which activity decreased when the meaningful emotive theme of an expressive performance was repeated between trials. The results demonstrate a novel coding function of the AON—decoding meaning from body language.
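
The logic of the RS contrast can be illustrated with a toy simulation (Python; all values and variable names are illustrative and not drawn from the present data): a voxel whose neural population codes the repeated attribute responds less on repeat trials, so its novel-minus-repeated difference is positive, whereas a voxel insensitive to that attribute shows no such difference.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 48

# Toy trial labels: True where the theme repeats the previous trial's theme.
repeated = rng.random(n_trials) < 0.5

# A hypothetical "theme-coding" voxel: response of 1.0 on novel themes,
# suppressed (here by 20%) when the theme repeats, plus measurement noise.
theme_voxel = np.where(repeated, 0.8, 1.0) + rng.normal(0, 0.05, n_trials)

# A control voxel that responds to movement but not to theme identity:
# no suppression, same noise level.
control_voxel = 1.0 + rng.normal(0, 0.05, n_trials)

def rs_effect(responses, repeated):
    """Repetition-suppression effect: mean(novel) - mean(repeated)."""
    return responses[~repeated].mean() - responses[repeated].mean()

print(f"theme-coding voxel RS: {rs_effect(theme_voxel, repeated):.3f}")   # clearly > 0
print(f"control voxel RS:      {rs_effect(control_voxel, repeated):.3f}") # ~ 0
```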

Working with a group of professional dancers, we produced a set of video clips in which performers intentionally expressed a particular meaningful theme either through dance or pantomime. Typical themes consisted of expressions of hope, agony, lust, or exhaustion. The experimental manipulation of theme was studied independently of choreography, performer, or camera viewpoint, which allowed us to repeat the meaning of a movement sequence from one trial to another while varying physical movement characteristics and perceptual features. With this RS-fMRI design, a decrease in BOLD activity for repeated relative to novel themes (RS) could not be attributed to specific movements, characteristics of the performer, “low-level” visual features, or the general process of attending to body expressions. Rather, RS revealed brain areas in which specific voxel-wise neural population codes differentiated meaningful expressions based on body movement (Figure 1 ).


Figure 1. Manipulating trial sequence to induce RS in brain regions that decode body language. The order of video presentation was controlled such that themes depicted in consecutive videos were either novel or repeated. Each consecutive video clip was unique; repeated themes were always portrayed by different dancers, different camera angles, or both. Thus, RS for repeated themes was not the result of low-level visual features, but rather identified brain areas that were sensitive to the specific meaningful theme conveyed by a performance. In brain regions showing RS, a particular affective theme—hope, for example—will evoke a particular pattern of neural activity. A novel theme on the subsequent trial—illness, for instance—will trigger a different but equally strong pattern of neural activity in distinct cell assemblies, resulting in an equivalent BOLD response. In contrast, a repetition of the hopefulness theme on the subsequent trial will trigger activity in the same neural assemblies as the first trial, but to a lesser extent, resulting in a reduced BOLD response for repeated themes. In this way, the RS contrast reveals regions that support distinct patterns of neural activity in response to different themes.

Participants were scanned using fMRI while viewing a series of 10-s video clips depicting modern dance or pantomime performances that conveyed specific meaningful themes. Because each performer had a unique artistic style, the same theme could be portrayed using completely different physical movements. This allowed the repetition of meaning while all other aspects of the physical stimuli varied from trial to trial. We predicted that specific regions of the AON engaged by observing expressive whole body movement would show suppressed BOLD activation for repeated relative to novel themes (RS). Brain regions showing RS would reveal brain substrates directly involved in decoding meaning based on body movement.

The dance and pantomime performances used here conveyed expressive themes through movement, but did not rely on typified, canonical facial expressions to invoke particular affective responses. Rather, meaningful themes were expressed with unique artistic choreography while facial expressions were concealed with a classic white mime's mask. The result was a subtle stimulus set that promoted thoughtful, interpretive viewing that could not elicit reflex-like responses based on prototypical facial expressions. In so doing, the present study shifted the focus away from automatic affective resonance toward a more deliberate ascertainment of meaning from movement.

While dance and pantomime both expressed meaningful emotive themes, the quality of movement and the types of gestures used were different. Pantomime sequences used fairly mundane gestures and natural, everyday movements. Dance sequences used stylized gestures and interpretive, prosodic movements. The critical distinction between these two types of expressive movement is in the degree of abstraction in the metaphors that link movement with meaning (see Morris, 2002 for a detailed discussion of movement metaphors). Pantomime by definition uses gesture to mimic everyday objects, situations, and behavior, and thus relies on relatively concrete movement metaphors. In contrast, dance relies on more abstract movement metaphors that draw on indirect associations between qualities of movement and the emotions and thoughts it evokes in a viewer. We predicted that since dance expresses meaning more abstractly than pantomime, dance sequences would be more difficult to interpret than pantomimed sequences, and would likewise pose a greater challenge to brain processes involved in decoding meaning from movement. Thus, we predicted greater involvement of thematic decoding areas for danced than for pantomimed movement expressions. Greater RS for dance than pantomime could result from dance triggering greater activity upon a first presentation, a greater reduction in activity with a repeated presentation, or some combination of both. Given our prediction that greater RS for dance would be linked to interpretive difficulty, we hypothesized it would be manifested as an increased processing demand resulting in greater initial BOLD activity for novel danced themes.

Participants

Forty-six neurologically healthy, right-handed individuals (30 women, mean age = 24.22 years, range = 19–55 years) provided written informed consent and were paid for their participation. Performers also agreed in writing to allow the use of their images and videos for scientific purposes. The protocol was approved by the Office of Research Human Subjects Committee at the University of California Santa Barbara (UCSB).

Eight themes were depicted, including four danced themes (happy, hopeful, fearful, and in agony) and four pantomimed themes (in love, relaxed, ill, and exhausted). Performance sequences were choreographed and performed by four professional dancers recruited from the SonneBlauma Danscz Theatre Company (Santa Barbara, California; now called ArtBark International, http://www.artbark.org/ ). Performers wore expressionless white masks so body language was conveyed through gestural whole-body movement as opposed to facial expressions. To express each theme, performers adopted an affective stance and improvised a short sequence of modern dance choreography (two themes per performer) or pantomime gestures (two themes per performer). Each of the eight themes was performed by two different dancers and recorded from two different camera angles, resulting in four distinct videos representing each theme (32 distinct videos in total; clips available in Supplementary Materials online).

Behavioral Procedure

In a separate session outside the scanner either before or after fMRI data collection, an interpretation task measured observers' ability to discern the intended meaning of a performance (Figure 2 ). The interpretation task was carried out in a separate session to avoid confounding movement observation in the scanner with explicit decision-making and overt motor responses. Participants were asked to view each video clip and choose from a list of four options the theme that best corresponded with the movement sequence they had just watched. Responses were made by pressing one of four corresponding buttons on a keyboard. Two behavioral measures were collected to assess how well participants interpreted the intended meaning of expressive performances. Consistency scores reflected the proportion of observers' interpretations that matched the performer's intended expression. Response times indicated the time taken to make interpretive judgments. In order to encourage subjects to use their initial impressions and to avoid over-deliberating, the four response options were previewed briefly immediately prior to video presentation.
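
To illustrate how these two measures can be derived from raw trial records, the following sketch (Python; column names and values are hypothetical, not taken from the study's data files) computes the consistency score and mean response time per movement type for one participant.

```python
import pandas as pd

# Hypothetical per-trial records for one participant; column names are
# illustrative only.
trials = pd.DataFrame({
    "movement_type": ["dance", "dance", "pantomime", "pantomime"],
    "intended_theme": ["hopeful", "agony", "ill", "relaxed"],
    "chosen_theme":   ["hopeful", "fearful", "ill", "relaxed"],
    "rt_ms":          [2100, 3450, 1500, 1720],
})

# Consistency score: proportion of interpretations matching the performer's
# intended theme, computed separately for each movement type.
trials["consistent"] = trials["chosen_theme"] == trials["intended_theme"]
summary = trials.groupby("movement_type").agg(
    consistency=("consistent", "mean"),
    mean_rt_ms=("rt_ms", "mean"),
)
print(summary)
```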


Figure 2. Experimental testing procedure . Participants completed a thematic interpretation task outside the scanner, either before or after the imaging session. Performance on this task allowed us to test whether there was a difference in how readily observers interpreted the intended meaning conveyed through dance or pantomime. Any performance differences on this explicit theme judgment task could help interpret the functional significance of observed differences in brain activity associated with passively viewing the two types of movement in the scanner.

For the interpretation task collected outside the scanner, videos were presented and responses collected on a Mac Powerbook G4 laptop programmed using the Psychtoolbox (v. 3.0.8) extension ( Brainard, 1997 ; Pelli and Brainard, 1997 ) for Mac OSX running under Matlab 7.5 R2007b (the MathWorks, Natick, MA). Each trial began with the visual presentation of a list of four theme options corresponding to four button press responses (“u,” “i,” “o,” or “p” keyboard buttons). This list remained on the screen for 3 s, the screen blanked for 750 ms, and then the movie played for 10 s. Following the presentation of the movie, the four response options were presented again, and remained on the screen until a response was made. Each unique video was presented twice, resulting in 64 trials total. Video order was randomized for each participant, and the response options for each trial included the intended theme and three randomly selected alternatives.

Neuroimaging Procedure

fMRI data were collected with a Siemens 3.0 T Magnetom Tim Trio system using a 12-channel phased array head coil. Functional images were acquired with a T2* weighted single shot gradient echo, echo-planar sequence sensitive to Blood Oxygen Level Dependent (BOLD) contrast (TR = 2 s; TE = 30 ms; FA = 90°; FOV = 19.2 cm). Each volume consisted of 37 slices acquired parallel to the AC–PC plane (interleaved acquisition; 3 mm thick with 0.5 mm gap; 3 × 3 mm in-plane resolution; 64 × 64 matrix).

Each participant completed four functional scanning runs lasting approximately 7.5 min while viewing danced or acted expressive movement sequences. While there were a total of eight themes in the stimulus set for the study, each scanning run depicted only two of those eight themes. Over the course of all four scanning runs, all eight themes were depicted. Trial sequences were arranged such that the theme of a movement sequence was either novel or repeated with respect to the previous trial. This allowed for the analysis of BOLD response RS for repeated vs. novel themes. Each run presented 24 video clips (3 presentations of 8 unique videos depicting 2 themes × 2 dancers × 2 camera angles). Novel and repeated themes were intermixed within each scanning run, with no more than three sequential repetitions of the same theme. Two scanning runs depicted dance and two runs depicted pantomime performances. The order of runs was randomized for each participant. The experiment was controlled using Presentation software (version 13.0, Neurobehavioral Systems Inc, CA). Participants were instructed to focus on the movement performance while viewing the videos. No specific information about the themes portrayed or types of movement used was provided, and no motor responses were required.
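
The following sketch illustrates one way such a constrained presentation order could be generated (Python; this is an illustrative reconstruction of the design logic, not the randomization code actually used with the Presentation software).

```python
import random

def make_run_order(videos, max_same_theme_run=3, seed=None):
    """Shuffle a run's videos until no theme occurs more than
    `max_same_theme_run` times in a row. `videos` is a list of
    (video_id, theme) tuples."""
    rng = random.Random(seed)
    order = videos[:]
    while True:
        rng.shuffle(order)
        run_len, prev_theme, ok = 0, None, True
        for _, theme in order:
            run_len = run_len + 1 if theme == prev_theme else 1
            prev_theme = theme
            if run_len > max_same_theme_run:
                ok = False
                break
        if ok:
            return order

# One run: 2 themes x 2 dancers x 2 camera angles, each clip shown 3 times
# (24 trials in total).
clips = [(f"theme{t}_dancer{d}_cam{c}", f"theme{t}")
         for t in (1, 2) for d in (1, 2) for c in (1, 2)] * 3
order = make_run_order(clips, seed=42)

# Label each trial as novel or repeated relative to the preceding trial.
labels = ["novel"] + ["repeat" if order[i][1] == order[i - 1][1] else "novel"
                      for i in range(1, len(order))]
print(list(zip([v for v, _ in order], labels))[:6])
```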

For the behavioral data collected outside the scanner, mean consistency scores and mean response time (RT; ms) were computed for each participant. Consistency and RT were each submitted to an ANOVA with Movement Type (dance vs. pantomime) as a within-subjects factor using Stata/IC 10.0 for Macintosh.
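
Because Movement Type has only two levels, this within-subjects ANOVA is statistically equivalent to a paired t-test on the per-participant means (F = t²). A minimal sketch of that equivalence, using hypothetical consistency scores rather than the actual data:

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant mean consistency scores (one value per
# participant per movement type); numbers are illustrative only.
rng = np.random.default_rng(1)
n = 43
pantomime = np.clip(rng.normal(0.85, 0.08, n), 0, 1)
dance     = np.clip(rng.normal(0.70, 0.10, n), 0, 1)

# With a single two-level within-subjects factor, the repeated-measures
# ANOVA F statistic equals the squared paired t statistic.
t, p = stats.ttest_rel(pantomime, dance)
print(f"paired t({n - 1}) = {t:.2f}, p = {p:.2g}, equivalent F = {t**2:.2f}")
```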

Statistical analysis of the neuroimaging data was organized to identify: (1) brain areas responsive to the observation of expressive movement sequences, defined by BOLD activity relative to an implicit baseline, (2) brain areas directly involved in decoding meaning from movement, defined by RS for repeated themes, (3) brain areas in which processes for decoding thematic meaning varied as a function of abstractness, defined by greater RS for danced than pantomimed themes, and (4) the specific pattern of BOLD activity differences for novel and repeated themes as a function of danced or pantomimed movements in regions showing greater RS for dance.

The fMRI data were analyzed using Statistical Parametric Mapping software (SPM5, Wellcome Department of Imaging Neuroscience, London; www.fil.ion.ucl.ac.uk/spm ) implemented in Matlab 7.5 R2007b (The MathWorks, Natick, MA). Individual scans were realigned, slice-time corrected and spatially normalized to the Montreal Neurological Institute (MNI) template in SPM5 with a resampled resolution of 3 × 3 × 3 mm. A smoothing kernel of 8 mm was applied to the functional images. A general linear model was created for each participant using SPM5. Parameter estimates of event-related BOLD activity were computed for novel and repeated themes depicted by danced and pantomimed movements, separately for each scanning run, for each participant.

Because the intended theme of each movement sequence was not expressed at a discrete time point but rather throughout the duration of the 10 s video clip, the most appropriate hemodynamic response function (HRF) with which to model the BOLD response at the individual level was determined empirically prior to parameter estimation. Of interest was whether the shape of the BOLD response to these relatively long video clips differed from the canonical HRF typically implemented in SPM. The shape of the BOLD response was estimated for each participant by modeling a finite impulse response function ( Ollinger et al., 2001 ). Each trial was represented by a sequence of 12 consecutive TRs, beginning at the onset of each video clip. Based on this deconvolution, a set of beta weights describing the shape of the response over a 24 s interval was obtained for both novel and repeated themes depicted by both danced and pantomimed movement sequences. To determine whether adjustments should be made to the canonical HRF implemented in SPM, the BOLD responses of a set of 45 brain regions within a known AON were evaluated (see Table 1 for a complete list). To find the most representative shape of the BOLD response within the AON, deconvolved beta weights for each condition were averaged across sessions and collapsed by singular value decomposition analysis ( Golub and Reinsch, 1970 ). This resulted in a characteristic signal shape that maximally described the actual BOLD response in AON regions for both novel and repeated themes, for both danced and pantomimed sequences. This examination of the BOLD response revealed that its time-to-peak was delayed 4 s compared to the canonical HRF response curve typically implemented in SPM. That is, the peak of the BOLD response was reached at 8–10 s following stimulus onset instead of the canonical 4–6 s. Given this result, parameter estimation for conditions of interest in our main analysis was based on a convolution of the design matrix for each participant with a custom HRF that accounted for the observed 4 s delay. Time-to-peak of the HRF was adjusted from 6 to 10 s while keeping the same overall width and height of the canonical function implemented in SPM. Using this custom HRF, the 10 s video duration was modeled as usual in SPM by convolving the HRF with a 10 s boxcar function.
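
The following sketch illustrates the general idea of shifting the HRF's time-to-peak and convolving it with a 10 s boxcar (Python/NumPy; the double-gamma parameterization here is a generic textbook form and only loosely approximates the SPM canonical function and the authors' width- and height-preserving adjustment).

```python
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(t, peak=6.0, undershoot_peak=16.0, ratio=1/6.0):
    """Double-gamma HRF whose positive lobe peaks near `peak` seconds.

    A gamma pdf with shape a (scale 1) has its mode at a - 1, so we pass
    peak + 1 as the shape parameter.
    """
    h = gamma.pdf(t, peak + 1.0) - ratio * gamma.pdf(t, undershoot_peak + 1.0)
    return h / h.max()

tr = 2.0                                  # seconds per volume
t = np.arange(0, 32, tr)

canonical = double_gamma_hrf(t, peak=6.0)   # peaks near 6 s
delayed   = double_gamma_hrf(t, peak=10.0)  # time-to-peak shifted to ~10 s

# Model a 10 s video as a boxcar (5 TRs) convolved with the delayed HRF.
boxcar = np.zeros(60)
boxcar[0:5] = 1.0
predicted_bold = np.convolve(boxcar, delayed)[:len(boxcar)]

print("canonical peak at", t[np.argmax(canonical)], "s")
print("delayed   peak at", t[np.argmax(delayed)], "s")
```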


Table 1. The action observation network, as defined by previous investigations .

Second-level whole-brain analysis was conducted with SPM8 using a 2 × 2 random effects model with Movement Type and Repetition as within-subject factors using the weighted parameter estimates (contrast images) obtained at the individual level as data. A gray matter mask was applied to whole-brain contrast images prior to second-level analysis to remove white matter voxels from the analysis. Six second-level contrasts were computed, including (1) expressive movement observation (BOLD relative to baseline), (2) dance observation effect (danced sequences > pantomimed sequences), (3) pantomime observation effect (pantomimed sequences > danced sequences), (4) RS (novel themes > repeated themes), (5) dance × repetition interaction (RS for dance > RS for pantomime), and (6) pantomime x repetition interaction (RS for pantomime > RS for dance). Following the creation of T-map images in SPM8, FSL was used to create Z-map images (Version 4.1.1; Analysis Group, FMRIB, Oxford, UK; Smith et al., 2004 ; Jenkinson et al., 2012 ). The results were thresholded at p < 0.05, cluster-corrected using FSL subroutines based on Gaussian random field theory ( Poldrack et al., 2011 ; Nichols, 2012 ). To examine the nature of the differences in RS between dance and pantomime, a mask image was created based on the corresponding cluster-thresholded Z-map of regions showing greater RS for dance, and the mean BOLD activity (contrast image values) was computed for novel and repeated dance and pantomime contrasts from each participant's first-level analysis. Mean BOLD activity measures were submitted to a 2 × 2 ANOVA with Movement Type (dance vs. pantomime) and Repetition (novel vs. repeat) as within-subjects factors using Stata/IC 10.0 for Macintosh.
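
For the region-of-interest follow-up described here, the 2 × 2 within-subjects interaction reduces to a paired comparison of each participant's RS effect (novel minus repeated) for dance versus pantomime. A minimal sketch with hypothetical per-participant contrast values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 43  # participants

# Hypothetical per-participant mean contrast values (arbitrary units) within
# the dance-RS mask, one value per condition; numbers are illustrative only.
novel_dance  = rng.normal(1.00, 0.30, n)
repeat_dance = rng.normal(0.70, 0.30, n)
novel_pant   = rng.normal(0.75, 0.30, n)
repeat_pant  = rng.normal(0.72, 0.30, n)

# RS effect per participant and movement type: novel minus repeated.
rs_dance = novel_dance - repeat_dance
rs_pant  = novel_pant - repeat_pant

# A 2 x 2 within-subjects interaction is a paired test on the RS difference.
t, p = stats.ttest_rel(rs_dance, rs_pant)
print(f"interaction: t({n - 1}) = {t:.2f} (F = {t**2:.2f}), p = {p:.3g}")
```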

In order to ensure that observed RS effects for repeated themes were not due to low-level kinematic effects, a motion tracking analysis of all 32 videos was performed using Tracker 4.87 software for Mac (written by Douglas Brown, distributed on the Open Source Physics platform, www.opensourcephysics.org ). A variety of motion parameters, including velocity, acceleration, momentum, and kinetic energy, were computed within the Tracker software based on semi-automated/supervised motion tracking of the top of the head, one hand, and one foot of each performer. The key question relevant to our results was whether there was a difference in motion between videos depicting novel and repeated themes. One-factor ANOVAs for each motion parameter revealed no significant differences in coarse kinematic profiles between “novel” and “repeated” theme trials (all p's > 0.05). This was not particularly surprising given that all videos were used for both novel and repeated themes, which were defined entirely based on trial sequence. In contrast, the comparison between danced and pantomimed themes did reveal significant differences in kinematic profiles. A 2 × 3 ANOVA with Movement Type (Dance, Pantomime) and Body Point (Hand, Head, Foot) as factors was conducted for each motion parameter (velocity, acceleration, momentum, and kinetic energy), and revealed greater motion energy on all parameters for the danced themes compared to the pantomimed themes (all p's < 0.05). Any differences in RS between danced and pantomimed themes may therefore be attributed to differences in kinematic properties of body movement. Importantly, however, because there were no systematic differences in motion kinematics between novel and repeated themes, any RS effects for repeated themes could not be attributed to the effect of motion kinematics.
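
The kind of kinematic check described here can be sketched as follows (Python; the tracked trajectories are simulated, and the mean-speed summary is a coarse stand-in for the parameters computed in Tracker).

```python
import numpy as np
from scipy import stats

def mean_speed(xy, fps=30.0):
    """Mean speed (pixels/s) of one tracked point from an (n_frames, 2) array."""
    dxy = np.diff(xy, axis=0)
    return float(np.linalg.norm(dxy, axis=1).mean() * fps)

rng = np.random.default_rng(3)
# Hypothetical tracked hand trajectories for 16 "novel" and 16 "repeated"
# theme videos (in this design the same clips serve both roles, so no
# difference is expected).
novel_speeds    = [mean_speed(rng.normal(0, 5, (300, 2)).cumsum(axis=0)) for _ in range(16)]
repeated_speeds = [mean_speed(rng.normal(0, 5, (300, 2)).cumsum(axis=0)) for _ in range(16)]

F, p = stats.f_oneway(novel_speeds, repeated_speeds)
print(f"novel vs repeated mean speed: F(1, 30) = {F:.2f}, p = {p:.2f}")
```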

Figure 3 illustrates the behavioral results of the interpretation task completed outside the scanner. Participants had higher consistency scores for pantomimed movements than danced movements [ F (1, 42) = 42.06, p < 0.0001], indicating better transmission of the intended expressive meaning from performer to viewer. Pantomimed sequences were also interpreted more quickly than danced sequences [ F (1, 42) = 27.28, p < 0.0001], suggesting an overall performance advantage for pantomimed sequences.


Figure 3. Behavioral performance on the theme judgment task . Participants more readily interpreted pantomime than dance. This was evidenced by both greater consistency between the meaningful theme intended to be expressed by the performer and the interpretive judgments made by the observer (left), and faster response times (right). This pattern of results suggests that dance was more difficult to interpret than pantomime, perhaps owing to the use of more abstract metaphors to link movement with meaning. Pantomime, on the other hand, relied on more concrete, mundane sorts of movements that were more likely to carry meaningful associations based on observers' prior everyday experience. SEM, standard error of the mean.

Expressive Whole-body Movements Engage the Action Observation Network

Brain activity associated with the observation of expressive movement sequences was revealed by significant BOLD responses to observing both dance and pantomime movement sequences, relative to the inter-trial resting baseline. Figure 4 depicts significant activation ( p < 0.05, cluster corrected in FSL) rendered on an inflated cortical surface of the Human PALS-B12 Atlas ( Van Essen, 2005 ) using Caret (Version 5. 61; http://www.nitrc.org/projects/caret ; Van Essen et al., 2001 ). Table 2 presents the MNI coordinates for selected voxels within clusters active during movement observation, as labeled in Figure 4 . Region names were obtained from the Harvard-Oxford Cortical and Subcortical Structural Atlases ( Frazier et al., 2005 ; Desikan et al., 2006 ; Makris et al., 2006 ; Goldstein et al., 2007 ; Harvard Center for Morphometric Analysis; www.partners.org/researchcores/imaging/morphology_MGH.asp ), and Brodmann Area labels were obtained from the Juelich Histological Atlas ( Eickhoff et al., 2005 , 2006 , 2007 ), as implemented in FSL. Observation of body movement was associated with robust BOLD activation encompassing cortex typically associated with the AON, including fronto-parietal regions linked to the representation of action kinematics, goals, and outcomes ( Hamilton and Grafton, 2006 , 2007 ), as well as temporal, occipital, and insular cortex and subcortical regions including amygdala and hippocampus—regions typically associated with language comprehension ( Kirchhoff et al., 2000 ; Ni et al., 2000 ; Friederici et al., 2003 ) and socio-affective information processing and decision-making ( Anderson et al., 1999 ; Adolphs et al., 2003 ; Bechara et al., 2003 ; Bechara and Damasio, 2005 ).


Figure 4. Expressive performances engage the action observation network . Viewing expressive whole-body movement sequences engaged a distributed cortical action observation network ( p < 0.05, FWE corrected). Large areas of parietal, temporal, frontal, and insular cortex included somatosensory, motor, and premotor regions that have been considered previously to comprise a human “mirror neuron” system, as well as non-motor areas linked to comprehension, social perception, and affective decision-making. Number labels correspond to those listed in Table 2 , which provides anatomical names and voxel coordinates for areas of peak activation. Dotted line for regions 17/18 indicates medial temporal position not visible on the cortical surface.


Table 2. Brain regions showing a significant BOLD response while participants viewed expressive whole-body movement sequences .

The Action Observation Network “Reads” Body Language

To isolate brain areas that decipher meaning conveyed by expressive body movement, regions showing RS (reduced BOLD activity for repeated compared to novel themes) were identified. Since theme was the only stimulus dimension repeated systematically across trials for this comparison, decreased activation for repeated themes could not be attributed to physical features of the stimulus such as particular movements, performers, or camera viewpoints. Figure 5 illustrates brain areas showing significant suppression for repeated themes ( p < 0.05, cluster corrected in FSL). Table 3 presents the MNI coordinates for selected voxels within significant clusters. RS was found bilaterally on the rostral bank of the middle temporal gyrus extending into temporal pole and orbitofrontal cortex. There was also significant suppression in bilateral amygdala and insular cortex.


Figure 5. BOLD suppression (RS) reveals brain substrates for “reading” body language. Regions involved in decoding meaning in body language were isolated by testing for BOLD suppression when the intended theme of an expressive performance was repeated across trials. To identify regions showing RS, BOLD activity associated with novel themes was contrasted with BOLD activity associated with repeated themes ( p < 0.05, cluster corrected in FSL). Significantly greater activity for novel relative to repeated themes was evidence of RS. Given that the intended theme of a performance was the only element that was repeated between trials, regions showing RS revealed brain substrates that were sensitive to the specific meaning infused into a movement sequence by a performer. Number labels correspond to those listed in Table 3 , which provides anatomical names and voxel coordinates for key clusters showing significant RS. Blue shaded area indicates vertical extent of axial slices shown.


Table 3. Brain regions showing significant BOLD suppression for repeated themes ( p < 0.05, cluster corrected in FSL) .

Movement Abstractness Challenges Brain Substrates that Decode Meaning

The behavioral analysis indicated that interpreting danced themes was more difficult than interpreting pantomimed themes, as evidenced by lower consistency scores and greater RTs. Previous research indicates that greater difficulty discriminating a particular stimulus dimension is associated with greater BOLD suppression upon repetition of that dimension's attributes ( Hasson et al., 2006 ). To test whether greater difficulty decoding meaning from dance than pantomime would also be associated with greater RS in the present data, the magnitude of BOLD response suppression was compared between movement types. This was done with the Dance × Repetition interaction contrast in the second-level whole brain analysis, which revealed regions that had greater RS for dance than for pantomime. Figure 6 illustrates brain regions showing greater RS for themes portrayed through dance than pantomime ( p < 0.05, cluster corrected in FSL). Significant differences were entirely left-lateralized in superior and middle temporal gyri, extending into temporal pole and orbitofrontal cortex, and also present in laterobasal amygdala and the cornu ammonis of the hippocampus. Table 4 presents the MNI coordinates for selected voxels within significant clusters. The reverse Pantomime × Repetition interaction was also tested, but did not reveal any regions showing greater RS for pantomime than dance ( p > 0.05, cluster corrected in FSL).


Figure 6. Regions showing greater RS for dance than pantomime . RS effects were compared between movement types. This was implemented as an interaction contrast within our Movement Type × Repetition ANOVA design [(Novel Dance > Repeated Dance) > (Novel Pantomime > Repeated Pantomime)]. Greater RS for dance was lateralized to left hemisphere meaning-sensitive regions. The brain areas shown here have been linked previously to the comprehension of meaning in verbal language, suggesting the possibility they represent shared brain substrates for building meaning from both language and action. Number labels correspond to those listed in Table 4 , which provides anatomical names and voxel coordinates for key clusters showing significantly greater RS for dance. Blue shaded area indicates vertical extent of axial slices shown.


Table 4. Brain regions showing significantly greater RS for themes expressed through dance relative to themes expressed through pantomime ( p < 0.05, cluster corrected in FSL) .

In regions showing greater RS for dance than pantomime, mean BOLD responses for novel and repeated dance and pantomime conditions were computed across voxels for each participant based on their first-level contrast images. This was done to test whether the greater RS for dance was due to greater activity in the novel condition, lower activity in the repeated condition, or some combination of both. Figure 7 illustrates the pattern of BOLD activity across conditions, demonstrating that the greater RS for dance was the result of greater initial BOLD activation in response to novel themes. The ANOVA results showed a significant Movement Type × Repetition interaction [ F (1, 42) = 7.83, p < 0.01], indicating that BOLD activity in response to novel danced themes was greater than BOLD activity for all other conditions in these regions.


Figure 7. Novel danced themes challenge brain substrates that decode meaning from movement . To determine the specific pattern of BOLD activity that resulted in greater RS for dance, average BOLD activity in these areas was computed for each condition separately. Greater RS for dance was driven by a larger BOLD response to novel danced themes. Considered together with behavioral findings indicating that dance was more difficult to interpret, greater RS for dance seems to result from a greater processing “challenge” to brain substrates involved in decoding meaning from movement. SEM, standard error of the mean.

This study was designed to reveal brain regions involved in reading body language—the meaningful information we pick up about the affective states and intentions of others based on their body movement. Brain regions that decoded meaning from body movement were identified with a whole brain analysis of RS that compared BOLD activity for novel and repeated themes expressed through modern dance or pantomime. Significant RS for repeated themes was observed bilaterally, extending anteriorly along middle and superior temporal gyri into temporal pole, medially into insula, rostrally into inferior orbitofrontal cortex, and caudally into hippocampus and amygdala. Together, these brain substrates comprise a functional system within the larger AON. This strongly suggests that decoding meaning from expressive body movement constitutes a dimension of action representation not previously isolated in studies of action understanding. In what follows, we argue that this embedding within the AON is consistent with its hierarchical organization.

Body Language as Superordinate in a Hierarchical Action Observation Network

Previous investigations of action understanding have identified the AON as a key cognitive system for the organization of action in general, highlighting the fact that both performing and observing action rely on many of the same brain substrates ( Grafton, 2009 ; Ortigue et al., 2010 ; Kilner, 2011 ; Ogawa and Inui, 2011 ; Uithol et al., 2011 ; Grafton and Tipper, 2012 ). Shared brain substrates for controlling one's own action and understanding the actions of others are often taken as evidence of a “mirror neuron system” (MNS), following from physiological studies showing that cells in area F5 of the macaque monkey premotor cortex fired in response to both performing and observing goal-directed actions ( Pellegrino et al., 1992 ; Gallese et al., 1996 ; Rizzolatti et al., 1996a ). Since these initial observations in monkeys, there has been a tremendous effort to characterize a human analog of the MNS, and incorporate it into theories of not only action understanding, but also social cognition, language development, empathy, and neuropsychiatric disorders in which these faculties are compromised ( Gallese and Goldman, 1998 ; Rizzolatti and Arbib, 1998 ; Rizzolatti et al., 2001 ; Gallese, 2003 ; Gallese et al., 2004 ; Rizzolatti and Craighero, 2004 ; Iacoboni et al., 2005 ; Tettamanti et al., 2005 ; Dapretto et al., 2006 ; Iacoboni and Dapretto, 2006 ; Shapiro, 2008 ; Decety and Ickes, 2011 ). A fundamental assumption common to all such theories is that mirror neurons provide a direct neural mechanism for action understanding through “motor resonance,” or the simulation of one's own motor programs for an observed action ( Jacob, 2008 ; Oosterhof et al., 2013 ). One proposed mechanism for action understanding through motor resonance is the embodiment of sensorimotor associations between action goals and specific motor behaviors ( Mitz et al., 1991 ; Niedenthal et al., 2005 ; McCall et al., 2012 ). While the involvement of the motor system in a range of social, cognitive and affective domains is certainly worthy of focused investigation, and mirror neurons may well play an important role in supporting such “embodied cognition,” this by no means implies that mirror neurons alone can account for the ability to garner meaning from observed body movement.

Since the AON is a distributed cortical network that extends beyond motor-related brain substrates engaged during action observation, it is best characterized not as a homogeneous “mirroring” mechanism, but rather as a collection of functionally specific but interconnected modules that represent distinct properties of observed actions ( Grafton, 2009 ; Grafton and Tipper, 2012 ). The present results build on this functional-hierarchical model of the AON by incorporating meaningful expression as an inherent aspect of body movement that is decoded in distinct regions of the AON. In other words, the bilateral temporal-orbitofrontal regions that showed RS for repeated themes comprise a distinct functional module of the AON that supports an additional level of the action representation hierarchy. Such an interpretation is consistent with the idea that action representation is inherently nested, carried out within a hierarchy of part-whole processes for which higher levels depend on lower levels ( Cooper and Shallice, 2006 ; Botvinick, 2008 ; Grafton and Tipper, 2012 ). We propose that the meaning infused into the body movement of a person having a particular affective stance is decoded superordinately to more concrete properties of action, such as kinematics and object goals. Under this view, while decoding these representationally subordinate properties of action may involve motor-related brain substrates, decoding “body language” engages non-motor regions of the AON that link movement and meaning, relying on inputs from lower levels of the action representation hierarchy that provide information about movement kinematics, prosodic nuances, and dynamic inflections.

While the present results suggest that the temporal-orbitofrontal regions identified here as decoding meaning from emotive body movement constitute a distinct functional module within a hierarchically organized AON, it is important to note that these regions have not previously been included in anatomical descriptions of the AON. The present study, however, isolated a property of action representation that had not been previously investigated; so identifying regions of the AON not previously included in its functional-anatomic definition is perhaps not surprising. This underscores the important point that the AON is functionally defined, such that its apparent anatomical extent in a given experimental context depends upon the particular aspects of action representation that are engaged and isolable. Previous studies of another abstract property of action representation, namely intention understanding, also illustrate this point. Inferring the intentions of an actor engages medial prefrontal cortex, bilateral posterior superior temporal sulcus, and left temporo-parietal junction—non-motor regions of the brain typically associated with “mentalizing,” or thinking about the mental states of another agent ( Ansuini et al., 2015 ; Ciaramidaro et al., 2014 ). A key finding of this research is that intention understanding depends fundamentally on the integration of motor-related (“mirroring”) brain regions and non-motor (“mentalizing”) brain regions ( Becchio et al., 2012 ). The present results parallel this finding, and point to the idea that in the context of action representation, motor and non-motor brain areas are not two separate brain networks, but rather one integrated functional system.

Predictive Coding and the Construction of Meaning in the Action Observation Network

A critical question raised by the idea that the temporal-orbitofrontal brain regions in which RS was observed here constitute a superordinate, meaning-sensitive functional module of the AON is how activity in subordinate AON modules is integrated at this higher level to produce differential neural firing patterns in response to different meaningful body expressions. That is, what are the neural mechanisms underlying the observed sensitivity to meaning in body language, and furthermore, why are these mechanisms subject to adaptation through repetition (RS)? While the present results do not provide direct evidence to answer these questions, we propose that a “predictive coding” interpretation provides a coherent model of action representation ( Brass et al., 2007 ; Kilner and Frith, 2008 ; Brown and Brüne, 2012 ) that yields useful predictions about the neural processes by which meaning is decoded and that would account for the observed RS effect. The primary mechanism invoked by a predictive coding framework of action understanding is recurrent feed-forward and feedback processing across the various levels of the AON, which supports a Bayesian system of predictive neural coding, feedback processes, and prediction error reduction at each level of action representation ( Friston et al., 2011 ). According to this model, each level of the action observation hierarchy generates predictions to anticipate neural activity at lower levels of the hierarchy. Predictions in the form of neural codes are sent to lower levels through feedback connections, and compared with actual subordinate neural representations. Any discrepancy between neural predictions and actual representations is coded as prediction error. Information regarding prediction error is sent through recurrent feed-forward projections to superordinate regions, and used to update predictive priors such that subsequent prediction error is minimized. Together, these Bayes-optimal neural ensemble operations converge on the most probable inference for representation at the superordinate level ( Friston et al., 2011 ) and, ultimately, action understanding based on the integration of representations at each level of the action observation hierarchy ( Chambon et al., 2011 ; Kilner, 2011 ).
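
A minimal computational sketch may help fix ideas about this recurrent loop. The code below is not the authors' model: the two-level hierarchy, the linear generative mapping from candidate meanings to a "kinematic" input vector, and the Gaussian-style error term are simplifying assumptions chosen only to illustrate how feedback predictions and feed-forward prediction errors could converge on a superordinate inference.

```python
# Toy predictive-coding loop (illustrative only; all names and mappings are assumptions).
# A superordinate level holds beliefs over candidate meanings; each belief generates a
# feedback prediction about lower-level "kinematic" input, the mismatch is fed forward
# as prediction error, and the beliefs are updated to reduce that error.

import numpy as np

rng = np.random.default_rng(0)

# Subordinate input: a vector standing in for movement kinematics / prosodic features.
observed_kinematics = rng.normal(size=8)

# Hypothetical generative model: each candidate meaning predicts a lower-level pattern.
candidate_meanings = ["joy", "fear", "relief"]
generative_weights = {m: rng.normal(size=8) for m in candidate_meanings}

# Uniform prior over meanings.
log_prior = {m: np.log(1.0 / len(candidate_meanings)) for m in candidate_meanings}


def prediction_error(meaning, belief_strength):
    """Feedback prediction scaled by the current belief, compared with the input."""
    prediction = belief_strength * generative_weights[meaning]
    return observed_kinematics - prediction


# Recurrent loop: feed-forward errors update the posterior over meanings, which in
# turn sharpens the feedback predictions (error minimization).
log_posterior = dict(log_prior)
for _ in range(20):
    errors = {m: prediction_error(m, np.exp(log_posterior[m])) for m in candidate_meanings}
    for m in candidate_meanings:
        # Lower squared error -> higher (unnormalized) log evidence for that meaning.
        log_posterior[m] = log_prior[m] - 0.5 * np.sum(errors[m] ** 2)
    # Normalize to a proper posterior.
    z = np.logaddexp.reduce(list(log_posterior.values()))
    log_posterior = {m: lp - z for m, lp in log_posterior.items()}

inferred = max(log_posterior, key=log_posterior.get)
print("Inferred meaning:", inferred)
```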

A predictive coding account of the present results would suggest that initial feed-forward inputs from subordinate levels of the AON provided the superordinate temporal-orbitofrontal module with information regarding movement kinematics, prosody, gestural elements, and dynamic inflections, which, when integrated with other inputs based on prior experience, would provide a basis for an initial prediction about potential meanings of a body expression. This prediction would yield a generative neural model about the movement dynamics that would be expected given the predicted meaning of the observed body expression, which would be fed back to lower levels of the network that coded movement dynamics and sensorimotor associations. Predictive activity would be contrasted with actual representations as movement information was accrued throughout the performance, and the resulting prediction error would be utilized via feed-forward projections to temporal-orbitofrontal regions to update predictive codes regarding meaning and minimize subsequent prediction error. In this way, the meaningful affective theme being expressed by the performer would be converged upon through recurrent Bayes-optimal neural ensemble operations. Thus, meaning expressed through body language would be accrued iteratively in temporal-orbitofrontal regions by integrating neural representations of various facets of action decoded throughout the AON. Interestingly, and consistent with a model in which an iterative process accrued information over time, we observed that BOLD responses in AON regions peaked more slowly than expected based on SPM's canonical HRF as the videos were viewed over an extended (10 s) duration. Under an iterative predictive coding model, RS for repeated themes could be accounted for by reduced initial generative activity in temporal-orbitofrontal regions due to better constrained predictions about potential meanings conveyed by observed movement, more efficient convergence on an inference due to faster minimization of prediction error, or some combination of both of these mechanisms. The present results provide indirect evidence for the former account, in that more abstract, less constrained movement metaphors relied upon by expressive dance resulted in greater RS due to larger BOLD responses for novel themes relative to the more concrete, better-constrained associations conveyed by pantomime.
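
The following toy simulation (again illustrative, not the study's analysis) shows how the RS pattern described here could fall out of such an account. The "response" is simply the prediction error accumulated while an estimate converges on its target, and the prior widths assigned to novel versus repeated themes, and to dance versus pantomime, are assumptions chosen to mirror the verbal argument (broad priors for abstract, novel material; narrow priors once a theme has been repeated).

```python
# Toy illustration of repetition suppression under a predictive-coding account.
# The simulated "neural response" is the summed squared prediction error accrued
# while an estimate converges; wider priors (less constrained predictions) yield
# larger responses. Condition prior widths are assumptions, not fitted values.

import numpy as np

rng = np.random.default_rng(1)


def simulated_response(prior_sd, n_steps=10, noise_sd=0.5):
    """Accumulated prediction error while an estimate converges on a true value."""
    true_value = 1.0
    estimate = rng.normal(0.0, prior_sd)          # initial guess drawn from the prior
    total_error = 0.0
    for _ in range(n_steps):
        observation = true_value + rng.normal(0.0, noise_sd)
        error = observation - estimate            # feed-forward prediction error
        total_error += error ** 2
        estimate += 0.5 * error                   # feedback update toward the data
    return total_error


conditions = {
    ("dance", "novel"): 3.0,       # abstract movement metaphors: broad prior
    ("dance", "repeated"): 1.0,    # prior constrained by the preceding repetition
    ("pantomime", "novel"): 1.5,   # concrete associations: already fairly constrained
    ("pantomime", "repeated"): 1.0,
}

for (style, novelty), prior_sd in conditions.items():
    mean_resp = np.mean([simulated_response(prior_sd) for _ in range(2000)])
    print(f"{style:9s} {novelty:8s} simulated response: {mean_resp:6.2f}")

# Expected pattern: novel > repeated in both styles (RS), with a larger
# novel-minus-repeated difference for dance than for pantomime.
```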

Shared Brain Substrates for Meaning in Action and Language

The middle temporal gyrus and superior temporal sulcus regions identified here as part of a functional module of the AON that “reads” body language have been linked previously to a variety of high-level linguistic domains related to understanding meaning. Among these are conceptual knowledge ( Lambon Ralph et al., 2009 ), language comprehension ( Hasson et al., 2006 ; Noppeney and Penny, 2006 ; Price, 2010 ), sensitivity to the congruency between intentions and actions, both verbal/conceptual ( Deen and McCarthy, 2010 ), and perceptual/implicit ( Wyk et al., 2009 ), as well as understanding abstract language and metaphorical descriptions of action ( Desai et al., 2011 ). While together these studies demonstrate that high-level linguistic processing involves bilateral superior and middle temporal regions, there is evidence for a general predominance of the left hemisphere in comprehending semantics ( Price, 2010 ), and a predominance of the right hemisphere in incorporating socio-emotional information and affective context ( Wyk et al., 2009 ). For example, brain atrophy associated with a primary progressive aphasia characterized by profound disturbances in semantic comprehension occurs bilaterally in anterior middle temporal regions, but is more pronounced in the left hemisphere ( Gorno-Tempini et al., 2004 ). In contrast, neural degeneration in right hemisphere orbitofrontal, insula, and anterior middle temporal regions is associated not only with semantic dementia but also deficits in socio-emotional sensitivity and regulation ( Rosen et al., 2005 ).

This hemispheric asymmetry in brain substrates associated with interpreting meaning in verbal language is paralleled in the present results, which not only link the same bilateral temporal-orbitofrontal brain substrates to comprehending meaning from affectively expressive body language, but also demonstrate a predominance of the left hemisphere in deciphering the particularly abstract movement metaphors conveyed by dance. This asymmetry was evident as greater RS for repeated themes for dance relative to pantomime, which was driven by a greater initial activation for novel themes, suggesting that these left-hemisphere regions were engaged more vigorously when decoding more abstract movement metaphors. Together, these results illustrate a striking overlap in the brain substrates involved in processing meaning in verbal language and decoding meaning from expressive body movement. This overlap suggests that a long-hypothesized evolutionary link between gestural body movement and language ( Hewes et al., 1973 ; Harnad et al., 1976 ; Rizzolatti and Arbib, 1998 ; Corballis, 2003 ) may be instantiated by a network of shared brain substrates for representing semiotic structure, which constitutes the informational scaffolding for building meaning in both language and gesture ( Lemke, 1987 ; Freeman, 1997 ; McNeill, 2012 ; Lhommet and Marsella, 2013 ). While speculative, under this view the temporal-orbitofrontal AON module for coding meaning observed here may provide a neural basis for semiosis (the construction of meaning), which would lend support to the intriguing philosophical argument that meaning is fundamentally grounded in processes of the body, brain, and the social environment within which they are immersed ( Thibault, 2004 ).

Summary and Conclusions

The present results identify a system of temporal, orbitofrontal, insula, and amygdala brain regions that supports the meaningful interpretation of expressive body language. We propose that these areas reveal a previously undefined superordinate functional module within a known, stratified hierarchical brain network for action representation. The findings are consistent with a predictive coding model of action understanding, wherein the meaning that is imbued into expressive body movements through subtle kinematics and prosodic nuances is decoded as a distinct property of action via feed-forward and feedback processing across the levels of a hierarchical AON. Under this view, recurrent processing loops integrate lower-level representations of movement dynamics and socio-affective perceptual information to generate, evaluate, and update predictive inferences about expressive content that are mediated in a superordinate temporal-orbitofrontal module of the AON. Thus, while lower-level action representation in motor-related brain areas (sometimes referred to as a human “mirror neuron system”) may be a key step in the construction of meaning from movement, it is not these motor areas that code the specific meaning of an expressive body movement. Rather, we have demonstrated an additional level of the cortical action representation hierarchy in non-motor regions of the AON. The results highlight an important link between action representation and language, and point to the possibility of shared brain substrates for constructing meaning in both domains.

Author Contributions

CT, GS, and SG designed the experiment. CT and GS created stimuli, which included recruiting professional dancers and filming expressive movement sequences. GS carried out video editing. CT completed computer programming for experimental control and data analysis. GS and CT recruited participants and conducted behavioral and fMRI testing. CT and SG designed the data analysis and CT and GS carried it out. GS conducted a literature review, and CT wrote the paper with reviews and edits from SG.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

Research supported by the James S. McDonnell Foundation.

Supplementary Material

The Supplementary Material for this article can be found online at: http://dx.doi.org/10.6084/m9.figshare.1508616

Adolphs, R., Damasio, H., Tranel, D., Cooper, G., and Damasio, A. R. (2000). A role for somatosensory cortices in the visual recognition of emotion as revealed by three-dimensional lesion mapping. J. Neurosci. 20, 2683–2690.

Adolphs, R., Tranel, D., and Damasio, A. R. (2003). Dissociable neural systems for recognizing emotions. Brain Cogn. 52, 61–69. doi: 10.1016/S0278-2626(03)00009-5

Allison, T., Puce, A., and McCarthy, G. (2000). Social perception from visual cues: role of the STS region. Trends Cogn. Sci. 4, 267–278. doi: 10.1016/S1364-6613(00)01501-1

Anderson, S. W., Bechara, A., Damasio, H., Tranel, D., and Damasio, A. R. (1999). Impairment of social and moral behavior related to early damage in human prefrontal cortex. Nat. Neurosci. 2, 1032–1037. doi: 10.1038/14833

Ansuini, C., Cavallo, A., Bertone, C., and Becchio, C. (2015). Intentions in the brain: the unveiling of Mister Hyde. Neuroscientist 21, 126–135. doi: 10.1177/1073858414533827

Astafiev, S. V., Stanley, C. M., Shulman, G. L., and Corbetta, M. (2004). Extrastriate body area in human occipital cortex responds to the performance of motor actions. Nat. Neurosci. 7, 542–548. doi: 10.1038/nn1241

Becchio, C., Cavallo, A., Begliomini, C., Sartori, L., Feltrin, G., and Castiello, U. (2012). Social grasping: from mirroring to mentalizing. Neuroimage 61, 240–248. doi: 10.1016/j.neuroimage.2012.03.013

Bechara, A., and Damasio, A. R. (2005). The somatic marker hypothesis: a neural theory of economic decision. Games Econ. Behav. 52, 336–372. doi: 10.1016/j.geb.2004.06.010

Bechara, A., Damasio, H., and Damasio, A. R. (2003). Role of the amygdala in decision making. Ann. N.Y. Acad. Sci. 985, 356–369. doi: 10.1111/j.1749-6632.2003.tb07094.x

Bertenthal, B. I., Longo, M. R., and Kosobud, A. (2006). Imitative response tendencies following observation of intransitive actions. J. Exp. Psychol. 32, 210–225. doi: 10.1037/0096-1523.32.2.210

Bonda, E., Petrides, M., Ostry, D., and Evans, A. (1996). Specific involvement of human parietal systems and the amygdala in the perception of biological motion. J. Neurosci. 16, 3737–3744.

Botvinick, M. M. (2008). Hierarchical models of behavior and prefrontal function. Trends Cogn. Sci. 12, 201–208. doi: 10.1016/j.tics.2008.02.009

Brainard, D. H. (1997). The psychophysics toolbox. Spat. Vis. 10, 433–436. doi: 10.1163/156856897X00357

Brass, M., Schmitt, R. M., Spengler, S., and Gergely, G. (2007). Investigating action understanding: inferential processes versus action simulation. Curr. Biol. 17, 2117–2121. doi: 10.1016/j.cub.2007.11.057

Brown, E. C., and Brüne, M. (2012). The role of prediction in social neuroscience. Front. Hum. Neurosci . 6:147. doi: 10.3389/fnhum.2012.00147

Buccino, G., Binkofski, F., Fink, G. R., Fadiga, L., Fogassi, L., Gallese, V., et al. (2001). Action observation activates premotor and parietal areas in a somatotopic manner: an fMRI study. Eur. J. Neurosci. 13, 400–404. doi: 10.1046/j.1460-9568.2001.01385.x

Calvo-Merino, B., Glaser, D. E., Grèzes, J., Passingham, R. E., and Haggard, P. (2005). Action observation and acquired motor skills: an FMRI study with expert dancers. Cereb. Cortex 15, 1243. doi: 10.1093/cercor/bhi007

Cardinal, R. N., Parkinson, J. A., Hall, J., and Everitt, B. J. (2002). Emotion and motivation: the role of the amygdala, ventral striatum, and prefrontal cortex. Neurosci. Biobehav. Rev. 26, 321–352. doi: 10.1016/S0149-7634(02)00007-6

Chambon, V., Domenech, P., Pacherie, E., Koechlin, E., Baraduc, P., and Farrer, C. (2011). What are they up to? The role of sensory evidence and prior knowledge in action understanding. PLoS ONE 6:e17133. doi: 10.1371/journal.pone.0017133

Ciaramidaro, A., Becchio, C., Colle, L., Bara, B. G., and Walter, H. (2014). Do you mean me? Communicative intentions recruit the mirror and the mentalizing system. Soc. Cogn. Affect. Neurosci . 9, 909–916. doi: 10.1093/scan/nst062

Cooper, R. P., and Shallice, T. (2006). Hierarchical schemas and goals in the control of sequential behavior. Psychol. Rev. 113, 887–916. discussion 917–931. doi: 10.1037/0033-295x.113.4.887

Corballis, M. C. (2003). “From hand to mouth: the gestural origins of language,” in Language Evolution: The States of the Art , eds M. H. Christiansen and S. Kirby (Oxford University Press). Available online at: http://groups.lis.illinois.edu/amag/langev/paper/corballis03fromHandToMouth.html

Cross, E. S., Hamilton, A. F. C., and Grafton, S. T. (2006). Building a motor simulation de novo : observation of dance by dancers. Neuroimage 31, 1257–1267. doi: 10.1016/j.neuroimage.2006.01.033

Cross, E. S., Kraemer, D. J. M., Hamilton, A. F. D. C., Kelley, W. M., and Grafton, S. T. (2009). Sensitivity of the action observation network to physical and observational learning. Cereb. Cortex 19, 315. doi: 10.1093/cercor/bhn083

Damasio, A. R., Grabowski, T. J., Bechara, A., Damasio, H., Ponto, L. L., Parvizi, J., et al. (2000). Subcortical and cortical brain activity during the feeling of self-generated emotions. Nat. Neurosci. 3, 1049–1056. doi: 10.1038/79871

Dapretto, M., Davies, M. S., Pfeifer, J. H., Scott, A. A., Sigman, M., Bookheimer, S. Y., et al. (2006). Understanding emotions in others: mirror neuron dysfunction in children with autism spectrum disorders. Nat. Neurosci. 9, 28–30. doi: 10.1038/nn1611

Dean, P., Redgrave, P., and Westby, G. W. M. (1989). Event or emergency? Two response systems in the mammalian superior colliculus. Trends Neurosci . 12, 137–147. doi: 10.1016/0166-2236(89)90052-0

Decety, J., and Ickes, W. (2011). The Social Neuroscience of Empathy . Cambridge, MA: MIT Press.

Deen, B., and McCarthy, G. (2010). Reading about the actions of others: biological motion imagery and action congruency influence brain activity. Neuropsychologia 48, 1607–1615. doi: 10.1016/j.neuropsychologia.2010.01.028

de Gelder, B. (2006). Towards the neurobiology of emotional body language. Nat. Rev. Neurosci. 7, 242–249. doi: 10.1038/nrn1872

de Gelder, B., and Hadjikhani, N. (2006). Non-conscious recognition of emotional body language. Neuroreport 17, 583. doi: 10.1097/00001756-200604240-00006

Desai, R. H., Binder, J. R., Conant, L. L., Mano, Q. R., and Seidenberg, M. S. (2011). The neural career of sensory-motor metaphors. J. Cogn. Neurosci. 23, 2376–2386. doi: 10.1162/jocn.2010.21596

Desikan, R. S., Ségonne, F., Fischl, B., Quinn, B. T., Dickerson, B. C., Blacker, D., et al. (2006). An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest. Neuroimage 31, 968–980. doi: 10.1016/j.neuroimage.2006.01.021

Eickhoff, S. B., Heim, S., Zilles, K., and Amunts, K. (2006). Testing anatomically specified hypotheses in functional imaging using cytoarchitectonic maps. Neuroimage 32, 570–582. doi: 10.1016/j.neuroimage.2006.04.204

Eickhoff, S. B., Paus, T., Caspers, S., Grosbras, M. H., Evans, A. C., Zilles, K., et al. (2007). Assignment of functional activations to probabilistic cytoarchitectonic areas revisited. Neuroimage 36, 511–521. doi: 10.1016/j.neuroimage.2007.03.060

Eickhoff, S. B., Stephan, K. E., Mohlberg, H., Grefkes, C., Fink, G. R., Amunts, K., et al. (2005). A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data. Neuroimage 25, 1325–1335. doi: 10.1016/j.neuroimage.2004.12.034

Fadiga, L., Fogassi, L., Gallese, V., and Rizzolatti, G. (2000). Visuomotor neurons: ambiguity of the discharge or motor perception? Int. J. Psychophysiol. 35, 165–177. doi: 10.1016/S0167-8760(99)00051-3

Ferrari, P. F., Gallese, V., Rizzolatti, G., and Fogassi, L. (2003). Mirror neurons responding to the observation of ingestive and communicative mouth actions in the monkey ventral premotor cortex. Eur. J. Neurosci. 17, 1703–1714. doi: 10.1046/j.1460-9568.2003.02601.x

Frazier, J. A., Chiu, S., Breeze, J. L., Makris, N., Lange, N., Kennedy, D. N., et al. (2005). Structural brain magnetic resonance imaging of limbic and thalamic volumes in pediatric bipolar disorder. Am. J. Psychiatry 162, 1256–1265. doi: 10.1176/appi.ajp.162.7.1256

Freeman, W. J. (1997). A neurobiological interpretation of semiotics: meaning vs. representation. IEEE Int. Conf. Syst. Man Cybern. Comput. Cybern. Simul. 2, 93–102. doi: 10.1109/ICSMC.1997.638197

Frey, S. H., and Gerry, V. E. (2006). Modulation of neural activity during observational learning of actions and their sequential orders. J. Neurosci. 26, 13194–13201. doi: 10.1523/JNEUROSCI.3914-06.2006

Friederici, A. D., Rüschemeyer, S.-A., Hahne, A., and Fiebach, C. J. (2003). The role of left inferior frontal and superior temporal cortex in sentence comprehension: localizing syntactic and semantic processes. Cereb. Cortex 13, 170–177. doi: 10.1093/cercor/13.2.170

Friston, K., Mattout, J., and Kilner, J. (2011). Action understanding and active inference. Biol. Cybern. 104, 137–160. doi: 10.1007/s00422-011-0424-z

Gallese, V. (2003). The roots of empathy: the shared manifold hypothesis and the neural basis of intersubjectivity. Psychopathology 36, 171–180. doi: 10.1159/000072786

Gallese, V., Fadiga, L., Fogassi, L., and Rizzolatti, G. (1996). Action recognition in the premotor cortex. Brain 119, 593. doi: 10.1093/brain/119.2.593

Gallese, V., and Goldman, A. (1998). Mirror neurons and the simulation theory of mind-reading. Trends Cogn. Sci. 2, 493–501. doi: 10.1016/S1364-6613(98)01262-5

Gallese, V., Keysers, C., and Rizzolatti, G. (2004). A unifying view of the basis of social cognition. Trends Cogn. Sci. 8, 396–403. doi: 10.1016/j.tics.2004.07.002

Goldstein, J. M., Seidman, L. J., Makris, N., Ahern, T., O'Brien, L. M., Caviness, V. S., et al. (2007). Hypothalamic abnormalities in Schizophrenia: sex effects and genetic vulnerability. Biol. Psychiatry 61, 935–945. doi: 10.1016/j.biopsych.2006.06.027

Golub, G. H., and Reinsch, C. (1970). Singular value decomposition and least squares solutions. Numer. Math. 14, 403–420. doi: 10.1007/BF02163027

Gorno-Tempini, M. L., Dronkers, N. F., Rankin, K. P., Ogar, J. M., Phengrasamy, L., Rosen, H. J., et al. (2004). Cognition and anatomy in three variants of primary progressive aphasia. Ann. Neurol. 55, 335–346. doi: 10.1002/ana.10825

Grafton, S. T. (2009). Embodied cognition and the simulation of action to understand others. Ann. N.Y. Acad. Sci. 1156, 97–117. doi: 10.1111/j.1749-6632.2009.04425.x

Grafton, S. T., Arbib, M. A., Fadiga, L., and Rizzolatti, G. (1996). Localization of grasp representations in humans by positron emission tomography. Exp. Brain Res. 112, 103–111. doi: 10.1007/BF00227183

Grafton, S. T., and Tipper, C. M. (2012). Decoding intention: a neuroergonomic perspective. Neuroimage 59, 14–24. doi: 10.1016/j.neuroimage.2011.05.064

Grèzes, J., and Decety, J. (2001). Functional anatomy of execution, mental simulation, observation, and verb generation of actions: a meta-analysis. Hum. Brain Mapp. 12, 1–19. doi: 10.1002/1097-0193(200101)12:1<1::AID-HBM10>3.0.CO;2-V

Grezes, J., Fonlupt, P., Bertenthal, B., Delon-Martin, C., Segebarth, C., Decety, J., et al. (2001). Does perception of biological motion rely on specific brain regions? Neuroimage 13, 775–785. doi: 10.1006/nimg.2000.0740

Grill-Spector, K., Henson, R., and Martin, A. (2006). Repetition and the brain: neural models of stimulus-specific effects. Trends Cogn. Sci. 10, 14–23. doi: 10.1016/j.tics.2005.11.006

Grill-Spector, K., and Malach, R. (2001). fMR-adaptation: a tool for studying the functional properties of human cortical neurons. Acta Psychol. 107, 293–321. doi: 10.1016/S0001-6918(01)00019-1

Hadjikhani, N., and de Gelder, B. (2003). Seeing fearful body expressions activates the fusiform cortex and amygdala. Curr. Biol. 13, 2201–2205. doi: 10.1016/j.cub.2003.11.049

Hamilton, A. F. C., and Grafton, S. T. (2006). Goal representation in human anterior intraparietal sulcus. J. Neurosci. 26, 1133. doi: 10.1523/JNEUROSCI.4551-05.2006

Hamilton, A. F. D. C., and Grafton, S. T. (2008). Action outcomes are represented in human inferior frontoparietal cortex. Cereb. Cortex 18, 1160–1168. doi: 10.1093/cercor/bhm150

Hamilton, A. F., and Grafton, S. T. (2007). “The motor hierarchy: from kinematics to goals and intentions,” in Sensorimotor Foundations of Higher Cognition: Attention and Performance , Vol. 22, eds P. Haggard, Y. Rossetti, and M. Kawato (Oxford: Oxford University Press), 381–402.

Hari, R., Forss, N., Avikainen, S., Kirveskari, E., Salenius, S., and Rizzolatti, G. (1998). Activation of human primary motor cortex during action observation: a neuromagnetic study. Proc. Natl. Acad. Sci. U.S.A. 95, 15061–15065. doi: 10.1073/pnas.95.25.15061

Harnad, S. R., Steklis, H. D., and Lancaster, J. (eds.). (1976). “Origins and evolution of language and speech,” in Annals of the New York Academy of Sciences (New York, NY: New York Academy of Sciences), 280.

Hasson, U., Nusbaum, H. C., and Small, S. L. (2006). Repetition suppression for spoken sentences and the effect of task demands. J. Cogn. Neurosci. 18, 2013–2029. doi: 10.1162/jocn.2006.18.12.2013

Haxby, J. V., Hoffman, E. A., and Gobbini, M. I. (2002). Human neural systems for face recognition and social communication. Biol. Psychiatry 51, 59–67. doi: 10.1016/S0006-3223(01)01330-0

Hewes, G. W., Andrew, R. J., Carini, L., Choe, H., Gardner, R. A., Kortlandt, A., et al. (1973). Primate communication and the gestural origin of language [and comments and reply]. Curr. Anthropol. 14, 5–24. doi: 10.1086/201401

Hoffman, E. A., and Haxby, J. V. (2000). Distinct representations of eye gaze and identity in the distributed human neural system for face perception. Nat. Neurosci. 3, 80–84. doi: 10.1038/71152

Iacoboni, M., and Dapretto, M. (2006). The mirror neuron system and the consequences of its dysfunction. Nat. Rev. Neurosci. 7, 942–51. doi: 10.1038/nrn2024

Iacoboni, M., Molnar-Szakacs, I., Gallese, V., Buccino, G., Mazziotta, J. C., and Rizzolatti, G. (2005). Grasping the intentions of others with one's own mirror neuron system. PLoS Biol. 3:e79. doi: 10.1371/journal.pbio.0030079

Jacob, P. (2008). What do mirror neurons contribute to human social cognition? Mind Lang. 23, 190–223. doi: 10.1111/j.1468-0017.2007.00337.x

Jenkinson, M., Beckmann, C. F., Behrens, T. E. J., Woolrich, M. W., and Smith, S. M. (2012). FSL. Neuroimage 62, 782–790. doi: 10.1016/j.neuroimage.2011.09.015

Kilner, J. M. (2011). More than one pathway to action understanding. Trends Cogn. Sci. 15, 352–357. doi: 10.1016/j.tics.2011.06.005

Kilner, J. M., and Frith, C. D. (2008). Action observation: inferring intentions without mirror neurons. Curr. Biol. 18, R32–R33. doi: 10.1016/j.cub.2007.11.008

Kirchhoff, B. A., Wagner, A. D., Maril, A., and Stern, C. E. (2000). Prefrontal-temporal circuitry for episodic encoding and subsequent memory. J. Neurosci. 20, 6173–6180.

Lambon Ralph, M. A., Pobric, G., and Jefferies, E. (2009). Conceptual knowledge is underpinned by the temporal pole bilaterally: convergent evidence from rTMS. Cereb. Cortex 19, 832–838. doi: 10.1093/cercor/bhn131

Lemke, J. L. (1987). “Strategic deployment of speech and action: a sociosemiotic analysis,” in Semiotics 1983: Proceedings of the Semiotic Society of America ‘Snowbird’ Conference , eds J. Evans and J. Deely (Lanham, MD: University Press of America), 67–79.

Lhommet, M., and Marsella, S. C. (2013). “Gesture with meaning,” in Intelligent Virtual Agents , eds Y. Nakano, M. Neff, A. Paiva, and M. Walker (Berlin; Heidelberg: Springer), 303–312. doi: 10.1007/978-3-642-40415-3_27

Makris, N., Goldstein, J. M., Kennedy, D., Hodge, S. M., Caviness, V. S., Faraone, S. V., et al. (2006). Decreased volume of left and total anterior insular lobule in schizophrenia. Schizophr. Res. 83, 155–171. doi: 10.1016/j.schres.2005.11.020

McCall, C., Tipper, C. M., Blascovich, J., and Grafton, S. T. (2012). Attitudes trigger motor behavior through conditioned associations: neural and behavioral evidence. Soc. Cogn. Affect. Neurosci. 7, 841–889. doi: 10.1093/scan/nsr057

McCarthy, G., Puce, A., Gore, J. C., and Allison, T. (1997). Face-specific processing in the human fusiform gyrus. J. Cogn. Neurosci. 9, 605–610. doi: 10.1162/jocn.1997.9.5.605

McNeill, D. (2012). How Language Began: Gesture and Speech in Human Evolution . Cambridge: Cambridge University Press. Available online at: https://scholar.google.ca/scholar?q=How+Language+Began+Gesture+and+Speech+in+Human+Evolution&hl=en&as_sdt=0&as_vis=1&oi=scholart&sa=X&ei=-ezxVISFIdCboQS1q4KACQ&ved=0CBsQgQMwAA

Morris, D. (2002). Peoplewatching: The Desmond Morris Guide to Body Language . New York, NY: Vintage Books. Available online at: http://www.amazon.ca/Peoplewatching-Desmond-Morris-Guide-Language/dp/0099429780 (Accessed March 10, 2014).

Ni, W., Constable, R. T., Mencl, W. E., Pugh, K. R., Fulbright, R. K., Shaywitz, S. E., et al. (2000). An event-related neuroimaging study distinguishing form and content in sentence processing. J. Cogn. Neurosci. 12, 120–133. doi: 10.1162/08989290051137648

Nichols, T. E. (2012). Multiple testing corrections, nonparametric methods, and random field theory. Neuroimage 62, 811–815. doi: 10.1016/j.neuroimage.2012.04.014

Niedenthal, P. M., Barsalou, L. W., Winkielman, P., Krauth-Gruber, S., and Ric, F. (2005). Embodiment in attitudes, social perception, and emotion. Personal. Soc. Psychol. Rev. 9, 184–211. doi: 10.1207/s15327957pspr0903_1

Noppeney, U., and Penny, W. D. (2006). Two approaches to repetition suppression. Hum. Brain Mapp. 27, 411–416. doi: 10.1002/hbm.20242

Ogawa, K., and Inui, T. (2011). Neural representation of observed actions in the parietal and premotor cortex. Neuroimage 56, 728–35. doi: 10.1016/j.neuroimage.2010.10.043

Ollinger, J. M., Shulman, G. L., and Corbetta, M. (2001). Separating processes within a trial in event-related functional MRI: II. Analysis. Neuroimage 13, 218–229. doi: 10.1006/nimg.2000.0711

Oosterhof, N. N., Tipper, S. P., and Downing, P. E. (2013). Crossmodal and action-specific: neuroimaging the human mirror neuron system. Trends Cogn. Sci. 17, 311–338. doi: 10.1016/j.tics.2013.04.012

Ortigue, S., Sinigaglia, C., Rizzolatti, G., Grafton, S. T., and Rochelle, E. T. (2010). Understanding actions of others: the electrodynamics of the left and right hemispheres. A high-density EEG neuroimaging study. PLoS ONE 5:e12160. doi: 10.1371/journal.pone.0012160

Peelen, M. V., and Downing, P. E. (2005). Selectivity for the human body in the fusiform gyrus. J. Neurophysiol. 93, 603–608. doi: 10.1152/jn.00513.2004

Pellegrino, G., Fadiga, L., Fogassi, L., Gallese, V., and Rizzolatti, G. (1992). Understanding motor events: a neurophysiological study. Exp. Brain Res. 91, 176–180. doi: 10.1007/BF00230027

Pelli, D. G., and Brainard, D. H. (1997). The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spat. Vis. 10, 433–436. doi: 10.1163/156856897X00366

Pelphrey, K. A., Morris, J. P., Michelich, C. R., Allison, T., and McCarthy, G. (2005). Functional anatomy of biological motion perception in posterior temporal cortex: an fMRI study of eye, mouth, and hand movements. Cereb. Cortex 15, 1866–1876. doi: 10.1093/cercor/bhi064

Poldrack, R. A., Mumford, J. A., and Nichols, T. E. (2011). Handbook of Functional MRI Data Analysis . New York, NY: Cambridge University Press. doi: 10.1017/cbo9780511895029

Price, C. J. (2010). The anatomy of language: a review of 100 fMRI studies published in 2009. Ann. N.Y. Acad. Sci. 1191, 62–88. doi: 10.1111/j.1749-6632.2010.05444.x

Mitz, A. R., Godschalk, M., and Wise, S. P. (1991). Learning-dependent neuronal activity in the premotor cortex: activity during the acquisition of conditional motor associations. J. Neurosci. 11, 1855–1872.

Rizzolatti, G., and Arbib, M. A. (1998). Language within our grasp. Trends Neurosci. 21, 188–194. doi: 10.1016/S0166-2236(98)01260-0

Rizzolatti, G., and Craighero, L. (2004). The mirror-neuron system. Annu. Rev. Neurosci. 27, 169–192. doi: 10.1146/annurev.neuro.27.070203.144230

Rizzolatti, G., Fadiga, L., Gallese, V., and Fogassi, L. (1996a). Premotor cortex and the recognition of motor actions. Cogn. Brain Res. 3, 131–141. doi: 10.1016/0926-6410(95)00038-0

Rizzolatti, G., Fadiga, L., Matelli, M., Bettinardi, V., Paulesu, E., Perani, D., et al. (1996b). Localization of grasp representations in humans by PET: 1. Observation versus execution. Exp. Brain Res. 111, 246–252. doi: 10.1007/BF00227301

Rizzolatti, G., Fogassi, L., and Gallese, V. (2001). Neurophysiological mechanisms underlying the understanding and imitation of action. Nat. Rev. Neurosci. 2, 661–670. doi: 10.1038/35090060

Rosen, H. J., Allison, S. C., Schauer, G. F., Gorno-Tempini, M. L., Weiner, M. W., and Miller, B. L. (2005). Neuroanatomical correlates of behavioural disorders in dementia. Brain 128, 2612–2625. doi: 10.1093/brain/awh628

Sah, P., Faber, E. S. L., De Armentia, M. L., and Power, J. (2003). The amygdaloid complex: anatomy and physiology. Physiol. Rev. 83, 803–834. doi: 10.1152/physrev.00002.2003

Shapiro, L. (2008). Making sense of mirror neurons. Synthese 167, 439–456. doi: 10.1007/s11229-008-9385-8

Smith, S. M., Jenkinson, M., Woolrich, M. W., Beckmann, C. F., Behrens, T. E. J., Johansen-Berg, H., et al. (2004). Advances in functional and structural MR image analysis and implementation as FSL. Neuroimage 23(Suppl. 1), S208–S219. doi: 10.1016/j.neuroimage.2004.07.051

Tettamanti, M., Buccino, G., Saccuman, M. C., Gallese, V., Danna, M., Scifo, P., et al. (2005). Listening to action-related sentences activates fronto-parietal motor circuits. J. Cogn. Neurosci. 17, 273–281. doi: 10.1162/0898929053124965

Thibault, P. (2004). Brain, Mind and the Signifying Body: An Ecosocial Semiotic Theory . London: A&C Black. Available online at: https://scholar.google.ca/scholar?q=Brain,+Mind+and+the+Signifying+Body:+An+Ecosocial+Semiotic+Theory&hl=en&as_sdt=0&as_vis=1&oi=scholart&sa=X&ei=Lf3xVOayBMK0ogSniYLwCA&ved=0CB0QgQMwAA

Tunik, E., Rice, N. J., Hamilton, A. F., and Grafton, S. T. (2007). Beyond grasping: representation of action in human anterior intraparietal sulcus. Neuroimage 36, T77–T86. doi: 10.1016/j.neuroimage.2007.03.026

Uithol, S., van Rooij, I., Bekkering, H., and Haselager, P. (2011). Understanding motor resonance. Soc. Neurosci. 6, 388–397. doi: 10.1080/17470919.2011.559129

Ulloa, E. R., and Pineda, J. A. (2007). Recognition of point-light biological motion: Mu rhythms and mirror neuron activity. Behav. Brain Res. 183, 188–194. doi: 10.1016/j.bbr.2007.06.007

Urgesi, C., Candidi, M., Ionta, S., and Aglioti, S. M. (2006). Representation of body identity and body actions in extrastriate body area and ventral premotor cortex. Nat. Neurosci. 10, 30–31. doi: 10.1038/nn1815

Van Essen, D. C. (2005). A Population-Average, Landmark- and Surface-based (PALS) atlas of human cerebral cortex. Neuroimage 28, 635–662. doi: 10.1016/j.neuroimage.2005.06.058

Van Essen, D. C., Drury, H. A., Dickson, J., Harwell, J., Hanlon, D., and Anderson, C. H. (2001). An integrated software suite for surface-based analyses of cerebral cortex. J. Am. Med. Inform. Assoc. 8, 443–459. doi: 10.1136/jamia.2001.0080443

Wiggs, C. L., and Martin, A. (1998). Properties and mechanisms of perceptual priming. Curr. Opin. Neurobiol. 8, 227–233. doi: 10.1016/S0959-4388(98)80144-X

Wyk, B. C. V., Hudac, C. M., Carter, E. J., Sobel, D. M., and Pelphrey, K. A. (2009). Action understanding in the superior temporal sulcus region. Psychol. Sci. 20, 771. doi: 10.1111/j.1467-9280.2009.02359.x

Zentgraf, K., Stark, R., Reiser, M., Künzell, S., Schienle, A., Kirsch, P., et al. (2005). Differential activation of pre-SMA and SMA proper during action observation: effects of instructions. Neuroimage 26, 662–672. doi: 10.1016/j.neuroimage.2005.02.015

Keywords: action observation, dance, social neuroscience, fMRI, repetition suppression, predictive coding

Citation: Tipper CM, Signorini G and Grafton ST (2015) Body language in the brain: constructing meaning from expressive movement. Front. Hum. Neurosci . 9:450. doi: 10.3389/fnhum.2015.00450

Received: 28 March 2015; Accepted: 28 July 2015; Published: 21 August 2015.

Copyright © 2015 Tipper, Signorini and Grafton. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Christine M. Tipper, Mental Health and Integrated Neurobehavioral Development Research Core, Child and Family Research Institute, 3rd Floor - 938 West 28th Avenue, Vancouver, BC V5Z 4H4, Canada, [email protected]

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.

The truth about reading body language

By Ramin Skibba/Undark

Posted on Oct 8, 2020 8:00 PM EDT

Ramin Skibba is an astrophysicist turned science writer and freelance journalist who is based in San Diego. This story originally featured on Undark .

Last week, tens of millions of people tuned into the first debate between President Donald J. Trump and former Vice President Joe Biden. Similar viewership is expected for the next two contests—assuming they go ahead following Trump’s COVID-19 diagnosis last week—as well as for Wednesday’s vice presidential debate in Salt Lake City. Along with listening to the candidates’ words, many viewers of the closely watched political spectacles will also pay attention to the debaters’ demeanor, posture, tics, and gestures.

Body language can exude confidence or awkwardness, charisma or anxiety. In recent years, it has also become the subject of a small cottage industry premised on the idea that nonverbal cues can reveal important truths about people in high-stakes situations. News outlets like The Washington Post and Politico interview consultants and bring them on as columnists to analyze speakers’ body language after debates and diplomatic meetings between world leaders. On YouTube, self-appointed experts claiming to read public figures’ expressions sometimes garner millions of views.

Some of this analysis explores how body language can influence audiences. Other times, pundits try to explain what public figures are thinking or feeling based on subtle cues. After Trump and Biden’s first debate, for example, one analyst told The Independent, a British newspaper, that when Biden looked down at his lectern as Trump spoke, it “could be interpreted as submission to the attack” or a sign of self-control.

This work has a more consequential side: Many police departments and federal agencies use body language analysis as a forensics technique, claiming that these tools can help assess people’s intentions or truthfulness. Body language consultants, an Intercept investigation reported in August, have trained federal and local “law enforcement across the country.”

Psychologists and other researchers agree that body language can convey certain emotional states. But many bold claims haven’t been backed by scientific evidence. For instance, there is little support for claims that a single gesture reliably indicates what a person thinks or desires—that maintaining eye contact for too long means a person is lying, that a smile without crinkles around the eyes isn’t a genuine one, or that a pointed finger with a closed hand is a display of dominance.

“Nonverbal communication in politics is extremely important because it creates impressions among the public, and this can influence whether people trust a politician,” says Vincent Denault, a communication researcher at the University of Montreal.

But when it comes to pundits commenting about body language in the media, “what you see is often more entertainment than science,” he says. “It can contribute to misinformation.”

Modern research on body language—often called nonverbal behavior—began in the 1960s and ’70s with studies that aimed to demonstrate the universality of facial expressions of emotion. That work was inspired, in part, by Charles Darwin’s neglected study from a century earlier, “The Expression of the Emotions in Man and Animals,” according to David Matsumoto, a San Francisco State University psychologist and director of Humintell, a company that provides body language trainings and does research for companies and government agencies.

Since then, researchers have examined how parts of the brain seemingly react to particular facial expressions, and how infants begin to imitate facial and hand gestures. But scientists have also mapped the complexities and subtleties of body language, which can sometimes be challenging to decipher despite its ubiquity.

For researchers like Denault, the scope of nonverbal communication has expanded to include anything beyond a person’s spoken words. A speaker might make an impression by shrugging their shoulders, scratching their nose, tapping their foot, rolling their eyes, or wiping sweat off their face, as Richard Nixon famously did in one of his 1960 presidential election debates against John F. Kennedy. A person’s clothes, their Zoom background, and their tone, pauses, and “uhs” and “ums” while speaking all count as nonverbal cues that can shape a viewer’s perceptions.

While many experts caution that body language is complex and context-dependent, for years a small class of consultants and specialists have been applying body language research in myriad scenarios, including career coaching, work presentations, and airport screenings.

“I help people influence and persuade others around how trustworthy and credible their message is by helping them with their specific nonverbal communication,” says Mark Bowden, a body language consultant and author of the book Winning Body Language, a guide for corporate and political clients. He focuses on where a person faces their body and how much space they take up, as well as their gestures.

Some analysts also claim to be able to use those signals to interpret hidden motivations and emotions. For example, some news stories feature analysts explaining that the positioning of Donald Trump’s hands during speeches indicates that he believes in what he’s saying, or that when people touch their faces it’s a clear sign of nervousness.

But, Denault said, “associating ‘states of mind’ to specific gestures, or concluding that this gesture will have this effect on the public, without any nuance, is dubious.”

Still, analysts like Bowden and Joe Navarro, a former FBI agent and the author of What Every Body is Saying, a book about interpreting nonverbal behavior, have made careers in part out of those kinds of insights.

Navarro, who has analyzed politicians’ body language for Politico and written for CNBC about how to read the body language of someone wearing a protective mask during the COVID-19 pandemic, says that he has a particular method for assessing speakers like the presidential candidates. “I record it and then watch it with the sound off,” he said. “I look for behavior that stands out: these discomfort displays, the furrowing of the forehead and the glabella, the area between the eyes, or the pursing of the lips or the ventilating by pulling their shirt collar.” As an example, he argues that it’s easy to spot Donald Trump’s lip movements when he reacts to a question he apparently doesn’t like.

While the work of Navarro and other analysts can attract large audiences, many experts are unsure whether their methods are as reliable as claimed.

“Our facial expressions convey certain types of emotional states,” Matsumoto says. So do some motions, like a shrug. “But there’s a lot of noise, too,” he says. “People do all kinds of things with their bodies.” For example, a person’s raised eyebrow could express disbelief—but it might also signal discomfort or surprise. The same hand gesture could mean different things in different cultures.

Denault and Matsumoto are both skeptical of those making strong conclusions based on body language observations. Because of all the ambiguities, even perceptive observers can’t infer a person’s thoughts or intentions based on their nonverbal behavior alone, Denault argues.

Dawn Sweet, a University of Idaho communication researcher, agrees. “There’s not likely to be a single behavior diagnostic ever to be found” for someone lying or acting aggressively, she says.

Sweet and her fellow researchers often look at a person’s body language and spoken words together, since they’re usually communicating the same things. The researchers also examine the context of a person’s behavior and learn more about the speaker, since it matters if the behavior is typical for them or a deviation.

Sweet cites an earlier analysis of dozens of studies involving more than 1,300 estimates of 158 possible signs of deception. These studies focused on body language cues that people sometimes associate with lying, like fidgeting or avoiding eye contact. The studies found that cues like these have either no links or only weak links to lying. No one has a giveaway like Pinocchio and his nose.

For that reason, some researchers, like California State University, Fullerton psychologist Iris Blandón-Gitlin, simply avoid looking at such nonverbal cues altogether. “My research is focused mostly on understanding what people are saying,” she says. In general, she finds that lying takes effort, and liars tend to tell more simplistic stories, with fewer details.

Asked about these kinds of concerns, Navarro defends his methods. “Nonverbals are quicker to observe, and they’re authentic and very accurate,” he says. He points to the role of body language in understanding what a baby is feeling before it’s able to talk, and in whether one feels safe in the presence of potentially threatening behavior. People even pick mates based on nonverbal cues, he says. But he agrees that some kinds of behavior can be more reliably interpreted than others and that nonverbal behavior is not effective for conclusively detecting deception.

Despite these expert reservations, body language analysis has also been used in criminal cases, with police, federal agents, and prosecutors using the techniques to try to determine whether a suspect is telling the truth, or whether someone convicted of a crime feels remorse.

But, like many other kinds of forensic science, body language analysis has been shown to be unreliable. The technique could unjustly sway judges and jurors in trials, says Denault, who describes some of these judgments as pseudoscience. Unsupported claims about body language, he says, may seem to offer simple solutions to the complex challenge of evaluating testimony, but evidence-based research doesn’t really provide easy answers.

That said, if security and justice professionals and other officials focus on vetted findings that have scientific consensus, Denault argues that research on nonverbal behavior could still benefit them, for example, by helping police officers behave in a way that puts suspects at ease and helps build rapport.

Whether assessing the behavior of a politician or a suspect, Sweet cautions that people easily jump to conclusions that merely confirm their preconceptions. A person might look uncomfortable, nervous, or fearful at a given moment, but observers rarely know why. An observer might think they’re noticing a telling gesture that reveals information about what another person is thinking, when they’re really just finding a reason to justify an initial belief that the person is lying or aggressive.

Matsumoto warns people not to trust every media analyst they see or read who invokes body language. “There’s a lot of great information a person can get from nonverbals,” he says. “But you have to be careful.”

Speaking of Psychology: Nonverbal communication speaks volumes, with David Matsumoto, PhD

If you think reading people is not a science, think again. Understanding expressions that only appear on someone’s face for tenths of a second can mean a lot to those who know what to look for. In this episode, psychologist and nonverbal communication expert David Matsumoto, PhD, talks about why nonverbal communication is so important in everything from police investigations to intercultural exchanges.

About the expert: David Matsumoto, PhD

Matsumoto is also the head instructor of the East Bay Judo Institute in El Cerrito, California. He holds a 7th degree black belt and has won countless awards, including the U.S. Olympic Committee’s Coach of the Year Award in 2003. Matsumoto served as the head coach of the 1996 Atlanta Olympic Judo Team and was the team leader for the 2000 Sydney Olympic Judo Team.

Audrey Hamilton: A fleeting change in someone’s face or body language can signal a lot of different emotions. Why do people’s faces change when they’re angry or sad? In this episode, we speak with a psychologist and expert in facial expression, gestures and other nonverbal behavior about how not speaking can speak volumes. I’m Audrey Hamilton and this is “Speaking of Psychology.”

David Matsumoto is a professor of psychology and director of the Culture and Emotion Research Laboratory at San Francisco State University. An expert on facial expressions, nonverbal behavior and deception, he is director of Humintell, a company that conducts research and training for organizations such as the Transportation Security Administration, the FBI and the U.S. Marshals Service. Welcome, Dr. Matsumoto.

David Matsumoto: Thank you for having me. 

Audrey Hamilton: We’re probably all familiar with the universal facial expressions of our emotions – you know, anger, joy, sadness – you know, those are some of them. Can you give examples of some of the less obvious facial expressions? I think you call them microexpressions, you know where someone is maybe attempting to conceal his or her emotions. These are much harder to detect. Is that right? 

David Matsumoto: Microexpressions are unconscious, extremely quick, sometimes full-face expressions of an emotion. And sometimes they’re partial and very subtle expressions of emotion. But because they’re extremely quick and because they’re unconscious, when they occur, they occur oftentimes in less than half a second – sometimes as fast as one-tenth of a second or even one-fifteenth of a second. Most people don’t even see them. Some people do see them but they don’t know what they’re seeing. They see something that has changed on the face, but they don’t know exactly what it was that changed.

Audrey Hamilton: It’s fleeting? 

David Matsumoto: It’s very fleeting, but if you take a freeze frame on it on a video, you’ll see that a lot of times there’s a big facial expression that is very clear about what the person’s mental state is. 

Audrey Hamilton: It all sounds very interesting, but how is this useful in the real world? You work with numerous organizations like I mentioned – the FBI, the TSA – to help train interrogators and business people in the skill of reading people. Tell us about your applied work in training programs. 

David Matsumoto: Well, learning to read microexpressions and nonverbal behaviors in general can be very valuable for anyone whose job it is to understand other people’s true feelings, their thoughts, their motivations, their personalities or their intentions. So obviously, there’s an application for people who are doing interviews or interrogations. That would be people in the criminal justice system, law enforcement, national security, intelligence – those are the kinds of people that we primarily work with because their job is to try to find out whether a person is concealing facts or concealing knowledge or concealing something or has some information that would be useful for solving a crime or getting some other kinds of information. And so, when one wants to be able to do that it’s very useful to be able to read these microexpressions.

But again, the application is very clear for anybody whose job it is to be able to get that kind of additional insight – what I call data superiority – for the individual who’s observing others. So it could be for sales people. It could be for the legal profession. It could be for healthcare professionals or psychotherapists. Medical doctors. Sales person, I think I mentioned sales person. Anybody whose job it is to gain some additional insight about the person that you’re talking with so that you can leverage that information for a particular outcome. 

Audrey Hamilton: I imagine these skills are particularly important in intercultural exchanges. Are facial expressions and gestures different in other cultures and can you give us some examples? 

David Matsumoto: Well, facial expressions of emotion are universal in the sense that everybody around the world regardless of race, culture, nationality, sex, gender, etc., whatever the demographic variable is, we all show the same facial muscle expressions on our faces when we have the same emotions. 

Now, of course, the question is context will moderate all of that and what kinds of things bring about different emotions in different cultures. So, of course, there are cultural differences and large individual differences in when people express emotions and how they express them when they feel the emotions. But if there’s no reason to change anything when people are feeling extremely strong emotions and they can express it freely, they will express those emotions on their faces in exactly the same ways. 

Gestures are very different. There are many different types of gestures and so the two types of gestures that we generally work with are called speech illustrators and emblems. Speech illustrators are these gestures that accompany speech that when you see a person using their hands when they’re talking to illustrate a point; they’re like animation. They’re like how we use our voice. They’re functionally universal in the sense that everybody around the world uses hand gestures as speech illustrators. But people around the world differ in the amount that they do them and in the form. So if you can picture people waving around. Some people in some cultures wave around their hands in a certain way. Some people point when they talk. Some people are doing various different types of things with their hands when they talk. So the form in which the illustrator occurs is different, but the function is the same across different cultures. 

Emblems is another type of gesture. These are generally culturally specific. These are gestures that refer to specific words or phrases. So, if you can imagine, the listeners can imagine the thumbs up, which has a meaning around the world, which is like “OK” or “good.” These things are culture specific, so every culture, just as every culture has a verbal vocabulary – different verbal vocabulary – every culture creates a vocabulary of emblematic gestures that correspond to certain types of phrases that they think are important to have in a gesture. 

So those are very culture specific. Now what’s really interesting about that is that some of our most recent research published a couple of years ago has shown that some gestures are beginning to be universally recognized around the world, like head nods for yes and head shakes for no. Of course, there’s places around the world that still do them in different ways. But they are increasingly being recognized universally around the world, probably because of a lot of shared mass media and because of the Internet or movies and things like that. So, in summary, with nonverbal behaviors, there’s some aspects of it that are very universal and some aspects of it that are culturally specific. 

Audrey Hamilton: Some of your research has involved the study of blind athletes. I thought this was interesting. Can you tell us how that research has furthered your understanding of human emotions? 

David Matsumoto: Yeah, well to tell you the truth, one of the pervasive questions about facial expressions of emotion in the past has been whether they’re universal or not and I think there’s very conclusive evidence about the universality of facial expressions of emotion. 

Then, the next question becomes where do they come from? Because it could be that we are all born with some kind of innate skill that is an evolutionarily based kind of adaptation that we share with non-human primates and other animals. Or it could be that humans have just all around the world learned, regardless of where they are, from the time that they’re infants. So it could be something that is learned or something that is biologically innate. 

Now studying blind individuals, and especially congenitally blind individuals, is a particularly great thing to do to address this particular research question, because when you study blind individuals and you study their expressions, you know that, as long as they were congenitally blind, there was no way they could possibly have learned to see those expressions and put them on their faces, because they’ve been blind from birth. And so when you study a population like that, it helps you address a certain research question. And so in the studies that we’ve done, we’ve actually studied the spontaneous facial expressions of blind individuals from around the world, from many different cultures, and we show that in the same emotionally evocative situations, blind individuals produce on their faces exactly the same facial muscle configurations for the same emotions as sighted individuals do. And again, because these are individuals who are blind from birth, there’s no way that they could have possibly learned to do that by seeing others do it.

And so it leads me to think and many others to believe that the ability to have facial expressions of emotion is something that is biologically innate and that we are all born with. 

I’ve done judo for 48 years of my life and I’ve been fortunate enough to be part of our Olympic movement in judo. I was the Olympic coach for the 1996 and 2000 Olympic Games for the United States. We studied the expressions of the athletes in the regular Olympic Games – these are all sighted individuals – and we studied their expressions right at the moment they won or lost their medal match. And we were taking photographs. These are high-speed photographs – eight shots per second with a very expensive camera – so we could track the expressions at second-by-second or fraction-of-a-second resolution right at the time of winning or losing the match. And we also could see the expressions of the same athletes on the podium 30 minutes later in a social context. So we could do that comparison.

Two weeks after every Olympic Games, the Paralympics rolls into town using exactly the same venue. So my guy was still there, and every sport has a different disability. For judo, it’s blindness. So all of the athletes in the judo Paralympic Games are blind. Some of them are congenitally blind, and some acquired blindness through some kind of disease or accident (there are no differences between them, by the way). But anyway, we were able to do the same kind of study with the blind judo athletes in the Paralympic Games.

When you compare the expressions of the blind athletes in the Paralympic Games to the sighted athletes in the regular Olympic Games, what you find is that winners and losers all do the same thing. We measured the exact facial muscle movements that were occurring right at the time of winning or losing that match. The correlation between the facial muscle movements is something like 0.9 – some incredibly high number that you never see in research nowadays – so the correspondence is amazingly high between the blind and the sighted athletes.

What’s really interesting about blind athletes is this – or sighted – if we asked our listeners to show on their faces what do you do, what do you show, what do you think you do on your face when you express anger? Everybody can give you something and it will be pretty much accurate. And the reason is because all of us have seen it. We’ve seen it in ourselves if we’ve seen ourselves angry in the mirror. Or we see it in others when they’re angry. So we see it. We know what it looks like. We’ve seen ourselves do it. We know what it feels like. A blind athlete has never seen it. So if you ask a blind person, “Hey, show me what you look like when you’re angry or when you’re sad,” you’ll get something that’s close but you don’t get the exact facial muscle movements that occur when those emotions occur spontaneously. However, when it occurs spontaneously, the exact facial muscle movements are exactly the same. So blind individuals produce them spontaneously but don’t produce exactly the same thing when you ask them to pose whereas sighted people do. 

Audrey Hamilton: Interesting. 

David Matsumoto: And so this to me is another example of how there are differences between the blind and the sighted, and why: because this is a biologically innate thing. They can do it when it’s spontaneous.

Audrey Hamilton: Well, thank you Dr. Matsumoto for joining us today. It’s been very interesting. 

David Matsumoto: My pleasure. 

Audrey Hamilton: For more information on Dr. Matsumoto’s work and to hear more episodes, please go to our website. With the American Psychological Association’s “Speaking of Psychology,” I’m Audrey Hamilton.


Episode 34:  Nonverbal communication speaks volumes



23 Essential Body Language Examples and Their Meanings

Body language is the science of nonverbal signals. I’ve studied body language for over 10 years—here are my top body language cues you can use today.


Learning to decode body language is powerful and one of the most important nonverbal communication skills.

This guide is your key to reading people AND having confident body language.


In this article, we’re going to cover the essential must-knows to mastering your body language skills.


What is Body Language?

Body language is the science of nonverbal signals such as gestures, facial expressions, and eye gaze that communicate a person’s emotions and intentions. In total, there are 11 types of body language that we use to communicate. Unlike words, body language is often done subconsciously and constitutes a large part of our communication.

Our founder at Science of People has identified 97 cues you should know. Get started with the 23 in this article.



Why is Body Language So Important?

Body language is a key part of how we communicate with each other. It helps show our feelings and attitudes, even when our words say something different. Being good at understanding body language can make conversations better and help people get along well.

People who are good at reading body language typically excel in their careers, have great relationships, and get “freebies” in life.

If you want to learn more about the importance of body language, I recommend checking out my article: 5 Powerful Reasons Why Body Language is Important.

Body language can be broken down into 2 major categories—positive or open body language and negative or closed body language.

And just like how they sound, these 2 broad categories of cues signal just how open (or closed) someone is to their external environment. Whether you’re at a networking event talking to a random stranger you’ve just met, giving a presentation or speech, or on a first date, knowing how to read these cues is key to knowing how receptive others are to you or the situation.

Reading body language is as close to mind reading as we can get.

Open Body Language Examples

The Eyebrow Flash


When someone does an eyebrow flash, you’ll typically see their eyebrows raise slightly for less than ⅕ of a second.

What it Means: The eyebrow raise is a great sign of interest. People tend to use the eyebrow flash in 3 main ways:

  • The eyebrow flash can show interest professionally, as when giving approval, agreeing to something, thanking someone, or seeking confirmation. It’s used as a nonverbal “yes” during conversation.
  • The eyebrow flash can also show interest romantically.
  • Or the eyebrow flash can show interest socially, as when 2 people recognize each other. It signals to the other person that you are happy to see them.

Whenever we use the eyebrow flash, we call attention to our face. Teachers and speakers often use it as a way to say, “Listen to this!” or “Look at me!”

Interestingly, some cultures like the Japanese find this cue indecent and avoid it 1 https://www.amazon.in/NCHI-Science-Technology-Medicine-Colonialindia/dp/0521055822 .

The Science: According to researchers 2 https://www.researchgate.net/publication/243768681_Human_facial_expressions_as_adaptations_Evolutionary_questions_in_facial_expression_research at the University of Pittsburgh, the eyebrow flash is a universally recognized form of greeting and can be found all over the world, suggesting that this gesture is common among all cultures.

This gesture is even used by monkeys and apes 3 https://www.amazon.com/Definitive-Book-Body-Language-attitudes/dp/1409168506 !

How to Use it: There are so many ways to use the eyebrow flash. Here are a few:

  • To Show Liking: When you see someone you like or who you want to like you, give them a quick eyebrow flash followed by a warm smile.
  • To Increase Engagement: If you want someone to listen to something you are about to say, raise your eyebrows right before you deliver.
  • To Show Interest: Are you curious? Your eyebrows are the best way to show it!

The Equal Handshake


An equal handshake has these 7 elements:

  • good eye contact
  • a warm, genuine smile
  • an extended arm with a slight bend at the elbow
  • fingers pointing downward while approaching the other person’s hand
  • this one’s the big one —EQUAL pressure during the hand clasp
  • slight forward lean toward the other person
  • a slow release after 1–2 seconds

What it Means: This handshake is a breath of fresh air and signals mutual respect for both parties.

An equal handshake signals confidence, openness, and power during an interaction and leaves both participants feeling warm and fuzzy inside.

How to Use it: Before shaking hands, consider the context. Salespeople learned early on that an uninvited or surprise handshake from nowhere was damaging to their sales—the buyers obviously didn’t welcome them, and they felt forced to shake hands.

Handshakes also aren’t universal—some cultures commonly bow as a greeting, as they do in Japan, and people in other cultures give a kiss on the cheek, as they do in Italy or Spain.

A good rule of thumb is to only shake hands when you know the other person will warmly reciprocate it. Otherwise, a head nod is a good option—or wait for the other person to initiate the handshake.

On another important note, older people require less pressure, so avoid crushing an older person’s hand with your firm grip. When shaking hands with a higher-status individual, allow them to set the length and pressure of the handshake first, and follow up with an equal exchange for maximum bonding.

Authentic Mirroring


Displaying similar body language to other participants during a social situation.

What it Means: Mirroring is a highly rapport-building cue that signals a desire to connect with someone else. People tend to mirror only whom they like, and seeing someone else mirror our own body language creates a feeling of similarity and likeness.

The Science: Mirroring is powerful. Studies have shown that mirroring leads to the following:

  • Greater compliance 4 https://pubmed.ncbi.nlm.nih.gov/21375122/ with requests. So mirror if you want to persuade someone.
  • Higher sales numbers 5 https://www.researchgate.net/publication/251630934_Retail_salespeople%27s_mimicry_of_customers_Effects_on_consumer_behavior . So be sure to mirror if you are in sales.
  • Positive evaluations. So mirror your manager to build rapport.
  • Even larger tips 6 http://j.b.legal.free.fr/Blog/share/M1/Articles%20INC/Mimicry/Mimicry%20for%20money.pdf from customers!

Mirroring others is literally hardwired into our brains. Professor Joseph Henrich 7 https://henrich.fas.harvard.edu/files/henrich/files/henrichcv2017_oct.pdf of Harvard University explains that mirroring others helps us cooperate—which leads to more food, better health, and economic growth for communities.

How to Use it: Make sure to mirror subtly. If someone nods their head vigorously in agreement, and you do the same, you may come off as too obvious—this can lead to suspicion or decreased rapport.

You can also avoid mirroring someone entirely if you’re disinterested in them or want to create boundaries.

If the other person is displaying negative body language cues, try displaying open positive language cues yourself to get them to open up, instead of copying their closed gestures.

Mutual Gazing


Eye contact that is mutual—neither lacking eye contact nor being a little too interested.

What it Means: Longer eye contact, especially from people who are high-status, makes us feel favored. This is especially true when receiving eye contact from celebrities or movie stars 8 https://www.amazon.com/What-Every-Body-Saying-Speed-Reading/dp/0061438294 .

Increased eye contact can also indicate that the other person is curious: when people are more attentive to their surroundings, their blink rate will generally decrease 1 https://www.amazon.in/NCHI-Science-Technology-Medicine-Colonialindia/dp/0521055822 .

Warning: Do not make 100% eye contact! That is actually a territorial signal and shows aggression. People often do it before a fight.

You want to do mutual gazing. Eye contact when you agree, when you are listening, when you are exchanging ideas, or when staring at your amazing self in the mirror!

The Science: Making eye contact just 30% of the time has been shown 9 https://pubmed.ncbi.nlm.nih.gov/16081035/ to significantly increase what people remember you say.

You can also give a boost to your perceived persuasiveness, truthfulness, sincerity, and credibility just by mutual eye gazing 1 https://www.amazon.in/NCHI-Science-Technology-Medicine-Colonialindia/dp/0521055822 .

Interestingly, certain personality traits were found to relate to more mutual gazing—namely, extraversion, agreeableness, and openness 1 https://www.amazon.in/NCHI-Science-Technology-Medicine-Colonialindia/dp/0521055822 .

How to Use it: Increase your eye gaze to bond. However, make sure to glance away occasionally, since too much eye contact can be seen as threatening and make people feel uncomfortable.


Lack of Barriers


Keeping objects (like phones, bags, or glasses) out of the way when talking signals that you are fully present and open to the interaction.

What it Means: Removing physical barriers between you and the other person indicates that you’re giving them your full attention.

Objects—anything from your notebook, coffee mug, or even a desk—can act as distractions or shields, so keeping the space clear demonstrates your interest in a meaningful exchange.

Even having your smartphone nearby can reduce your cognitive function 10 https://www.journals.uchicago.edu/doi/10.1086/691462 !

How to Use it: When you’re in a conversation, be mindful of any objects you may be holding or actions you might be performing that could create a barrier. Put your phone down or away, keep bags or other items to the side, and make sure your hands are free to gesture naturally. This will not only make you appear more open but will also encourage the other person to do the same.

Duchenne Smile


The Duchenne smile is a smile that signals true happiness and is characterized by the “crow’s feet” wrinkles around the corners of the eyes along with upturned corners of the mouth.

The opposite is a fake smile, which lacks the characteristic “crow’s feet” wrinkles around the corners of the eyes. Avoid it at all costs!

What it Means: When you see a Duchenne smile, this likely indicates genuine happiness.

It is difficult, but not impossible, to fake a real smile. In most cases, we smile dozens of times in normal conversation, but many of these smiles are given out of politeness or formality.

The Science: Research shows that babies several weeks old will already use the Duchenne smile for their mothers only while using a more polite, social smile for others 8 https://www.amazon.com/What-Every-Body-Saying-Speed-Reading/dp/0061438294 .

People also tend to smile more with others than when alone—in fact, when we see a smiling face, endorphins are released into our system 3 https://www.amazon.com/Definitive-Book-Body-Language-attitudes/dp/1409168506 .

Studies show that athletes will smile noticeably differently depending on whether they finish in first, second, or third place. This distinction was the same even in congenitally blind athletes who had never even seen a smile before 3 https://www.amazon.com/Definitive-Book-Body-Language-attitudes/dp/1409168506 .

How to Use it: When smiling, remember to “smile with your eyes” instead of just your mouth. It also helps to smile widely enough to bring the cheeks up, helping activate the muscles around your eyes. Remember to maintain the smile even after an encounter—in fake happiness encounters, you may often see an “on-off” smile that flashes and then vanishes quickly after the 2 people in the interaction go their separate ways (Peoplewatching).

Example: George W. Bush flashes a childish Duchenne smile (“Oops, I got caught!”) when he tries to open a door, but fails.


Shared Laughter


Simultaneous laughter shared between individuals in response to a joke or funny observation.

What it Means: When you crack a joke and the other person shares a laugh with you, this is a good sign that they are open to connecting with you. Laughter is meant to establish potential relationships 11 http://www.mysmu.edu/faculty/normanli/Lietal2009.pdf or maintain existing ones, especially if the joke wasn’t particularly funny.

Laughter is also an indication that someone is relaxed, since stiff and nervous people usually do not laugh genuinely or instead may give a tense laugh if they feel nervous.

The Science: Neurologist Henri Rubenstein found that just one minute of laughter provides up to 45 minutes of subsequent relaxation 3 https://www.amazon.com/Definitive-Book-Body-Language-attitudes/dp/1409168506 ! The relaxation boost you get certainly justifies watching your favorite comedians on TV.

As we age, we usually laugh less. Adults laugh an average of only 15 times per day, while preschoolers laugh 400 times daily 3 https://www.amazon.com/Definitive-Book-Body-Language-attitudes/dp/1409168506 .

A great way to boost your laughter is to get more social! Robert Provine found that laughter is more than 30x more likely to occur in social situations than when a person is alone. In his study, participants were videotaped watching a funny video clip in 3 different situations:

  • alone,
  • with a same-sex stranger, and
  • with a same-sex friend.

Those who watched alone had significantly less laughter than those who watched with a stranger or friend.

How to Use it: Try incorporating humor into your conversations such as giving the opposite answer to a yes/no question.

Example: If people are expecting you to say yes, say no; if people are expecting you to say no, say yes instead. It’s simple but effective.

This is Jennifer Lawrence’s go-to strategy.


The World’s Funniest Joke

In 2001, Richard Wiseman set out to find the world’s funniest joke. In his experiment, Wiseman set up a website named LaughLab 12 laughlab.co.uk , in which users could input their favorite joke, and participants could rate them. By the end of the project, which garnered 40,000 jokes and over 350,000 participants from 70 countries, one joke was found to stand out above the rest:

Two hunters are out in the woods when one of them collapses. He doesn’t seem to be breathing, and his eyes are glazed. The other guy whips out his phone and calls the emergency services. He gasps, “My friend is dead! What can I do?” The operator says, “Calm down. I can help. First, let’s make sure he’s dead.” There is a silence, then a shot is heard. Back on the phone, the guy says, “OK, now what?”

Open Palms

When using hand gestures, make sure you display your palms and don’t hide them from others. Pockets, hands behind back, and closed fists can all act as barriers against open palms.

What it Means: People who display open palms are seen as honest and sincere. It can also be used as a questioning gesture.

Have you ever been in a situation where you met someone, and they seem nice, but something inside you felt a bit… off? It might have been that their palms weren’t showing.

Evolutionarily, when we see closed palms, our brains receive signals that we might be in danger—after all, the other person could be brandishing a weapon or hiding something dangerous.

How to Use it: When gesturing with your hands, make sure your hands are open most of the time and that people can see your open palms. It is also a good idea to keep the palms facing upward most of the time rather than facing downward.

Leaning In

Leaning slightly toward the person you are communicating with shows that you are engaged and interested.

What it Means: Leaning in while talking to someone usually signals that you are fully present and interested in the conversation. This action draws you physically closer to the other person, building a sense of intimacy and focus. It can be a strong indicator of attentiveness and a desire to understand or connect with the other person.

The Science: Studies 13 https://www.researchgate.net/publication/259128505_Inclined_to_better_understanding-The_coordination_of_talk_and_’leaning_forward’_in_doing_repair have shown that leaning in can actually facilitate better understanding and communication. It creates what psychologists call “proximity,” or closeness, that encourages more open sharing of information.

How to Use it: Leaning in should be a natural and subtle move, not an exaggerated lunge! Use this body language cue when you truly want to engage with someone—whether you’re trying to understand what they’re saying or show that you agree with them.

However, it’s crucial to gauge the other person’s comfort level; leaning in too aggressively or when the other person is leaning away can create major discomfort.

Warm Touch

Appropriate touches like a gentle pat on the back or arm can convey openness and empathy.

What it Means: Using a warm touch, such as a pat on the back or a light touch on the arm, often signals that you’re emotionally present and attuned to the other person’s needs or feelings. This gesture can create an immediate bond, break tension, or offer comfort.

The Science: Touch triggers the release of oxytocin, often referred to as the “love hormone” or “bonding hormone,” which plays a significant role in social bonding and attachment. This can also depend on the context (some people may not like to be touched), but oxytocin-increasing effects can even last after a conversation 14 https://www.sciencedaily.com/releases/2023/05/230509122117.htm .

Research 15 https://journals.sagepub.com/doi/10.1177/1088868316650307 has shown that appropriate touch can reduce stress hormones, lower heart rate, and increase feelings of trust and security.

How to Use it: Warm touch can be a powerful way to connect, but it’s essential to be aware of the other person’s comfort zone and cultural norms. A well-timed pat on the back can enhance a friendly conversation or provide consolation in a more serious moment. Use warm touch judiciously, always being aware of cues that indicate whether the other person is receptive to this level of contact.

Closed Body Language Examples

Crossed Ankles


The feet are crossed, and one ankle lies on top of the other. This can be done whether sitting or standing—or even with the feet on the table.

What it Means: A person crossing their ankles might feel uncomfortable and closed-off, although there is an exception (I’ll talk about that below). The tighter their ankles are locked, the more anxiety or stress the person may be experiencing.

Women often sit with their ankles locked 8 https://www.amazon.com/What-Every-Body-Saying-Speed-Reading/dp/0061438294 , especially if they are wearing a skirt. However, it is unnatural to sit like this for a prolonged period of time and should be considered strange, especially if done by males.

When taken a step further, people may lock their feet around the legs of a chair under high-stress situations. I call this the “ejection seat” position because it’s something many people would do if they were about to be launched out of their seat.

The big exception to this rule is if you see the ankles crossed while legs are outstretched on the floor. This can be a relaxed posture with the legs taking up space.

The Science: In a study of 319 dental patients by the Peases 3 https://www.amazon.com/Definitive-Book-Body-Language-attitudes/dp/1409168506 , ankle locking was a common body language cue done by most patients: 68% of patients getting a checkup locked their ankles, 89% of patients locked their ankles as soon as they sat in their chair to get some dental work done, and a whopping 98% of them ankle-locked when they received an injection.

It’s safe to say that these patients felt de-feeted during this situation!

Hand Clasping


When we don’t have someone else to hold onto, we might choose to hold our own hand. Sometimes we interlace our fingers, and other times we hug one hand on top of the other.

Here’s an interesting fact: every time we interlock our fingers, one thumb is on top, and this is our dominant thumb (Peoplewatching). For most people, it feels super weird if we switch thumbs and put our dominant one underneath!

What it Means: Interlaced fingers are a form of “self-hug.” Essentially, people who perform this gesture are comforting themselves with their hands, and it acts as a nostalgic reminder of the security we felt when holding hands with our parents as kids.

As adults we do this when we’re insecure—you’ll find this during overly formal events or when meeting a nervous client at work.

How to Use it: Use this gesture if you want to conclude a meeting or end an interaction with someone. If you want to appear confident, you can even use this cue but with your thumbs stuck out—this signals confidence instead of stress.

If you see someone with interlaced fingers and want to open them up, try humor. Once they start laughing, you’ll see their body language start opening up!

Blading

Have you ever seen a fencing bout before? These guys are on their feet, constantly moving back and forth in a game of who-can-stab-the-other-guy-first. It’s basically chess but with swords.

But the way that fencers use their stance is exactly what people do when closing off. When blading, the torso is turned away, maximizing reach, while minimizing damage to the oh-so-vulnerable frontal parts in the event of contact.

Since up to 90% 16 https://www.livescience.com/what-causes-left-handedness.html of people are right-handed, when you see blading, the left foot (which is also non-dominant in most cases) is usually the one that steps forward, or the right foot may step backward.

What it Means: Blading can commonly be seen right before a fight begins. You can see it before a bar fight breaks loose, during a boxing match, or after you’ve made a statement your conversation partner doesn’t agree with.

If you’re talking to a buddy in a front-to-front situation, and you see him blade all of a sudden, he might be feeling a bit defensive or threatened.

An exception to blading is when both people are observing an event and square up shoulder-to-shoulder such as sitting on the couch and watching TV together.

Thumbs Hidden


The thumbs are hidden away from view such as inside pockets or even wrapped around the other fingers.

What it Means: Usually a display of lower self-confidence, hiding thumbs usually signals concern, insecurity, or feelings of threat. High-status people have been observed to do this sometimes when relaxing 8 https://www.amazon.com/What-Every-Body-Saying-Speed-Reading/dp/0061438294 but never when they’re “on.”

Dogs also perform a similar cue by hiding their ears during times of stress. They do this in order to streamline themselves in case they need to make a mad dash… like if they manage to bite a hole through their $50 doggy bed while you were out dining with your partner (oddly specific?).

How to Use it: Around close friends and trusted others it’s totally fine to relax your hands in your pockets once in a while. But if you want to make the other person feel a bit insecure for whatever reason, sticking your hands deep in your pockets is a surefire way to do it!

Thumbs sticking out of pockets signal confidence. Even though the hands are inside the pockets, the big difference here is that the thumbs are sticking out. Thumbs are also the most powerful digits of your hand. When they are displayed confidently, this can often indicate confidence or power in a given situation.

Neck Rubbing


When people rub their necks they’ll usually do it on the side or back of the neck. In more extreme cases, you’ll see the suprasternal notch, which is the part where your neck meets your clavicle, being touched (usually more in women).

What it Means: People usually rub their neck when feeling insecure or stressed. For some people, this is their go-to method to relieve stress.

Those who habitually rub the neck also have a tendency to be more negative or critical 3 https://www.amazon.com/Definitive-Book-Body-Language-attitudes/dp/1409168506 than others.

The Science: When the vagus nerve, which runs along the side of the neck, is massaged, it releases acetylcholine, a neurotransmitter that signals the heart to slow its rate.

A Deadly Example: Warning: This example contains graphic content.

In her formal police interview, a Canadian-born Chinese-Vietnamese woman named Jennifer Pan told detectives that her parents were murdered in her house by 3 unknown thugs.

However, the interview officially turned into an interrogation when the detectives became suspicious. They noticed her story didn’t line up, and the nonverbal cues she displayed weren’t quite normal for her situation. It turns out that she actually staged the murder herself, and she was faking her story the entire time!

One nonverbal cue she consistently displayed that signaled high stress was touching her suprasternal notch (timestamp 36:47 of the interrogation footage).


Physical Retreat


Stepping back or leaning away from someone suggests you may be disinterested or uncomfortable.

What it Means: If you find yourself stepping back or leaning away during a conversation, it usually indicates a desire for more personal space , which could stem from discomfort, disinterest, or even distrust. This physical retreat serves as a subtle cue that you’re not fully engaged in the interaction.

The Science: A physical retreat often triggers psychological mechanisms related to the fight-or-flight response, such as increasing adrenaline 17 https://www.sciencedirect.com/science/article/abs/pii/S0924977X20302546 , signaling to others that you are in a defensive or guarded state, or even want to run away.

How to Use it: Being aware of your own tendencies to step back or lean away can help you better understand your feelings in a given situation. If you notice yourself retreating, it might be worth asking yourself why you feel the need to create more physical distance. On the flip side, if you notice someone else retreating, it could be a signal for you to reassess the situation and perhaps change your approach.


Hunched Shoulders


How many times have you heard “shoulders back, head straight!”

Believe it or not, hunched shoulders are becoming even more common nowadays, as you’ll see people slumped over, looking at their cellphones. Over time this might even become the norm as people develop chronically-hunched shoulders from staring at smartphones and hunched over laptops all day.

All kidding aside, people who are super submissive in social situations like those with clinical depression or self-proclaimed “social failures” may also walk with a permanent stoop and with shoulders rounded and their neck hunched forward.

What it Means: This is a naturally defensive posture. Forward shoulders may indicate that someone is trying to hide something or feeling vulnerable, since this posture closes off your vulnerable neck and chest areas.

You’ll also rarely see this in fashion shows and magazines, as it instantly drops your attraction value. This cue literally reminds me of a turtle withdrawing into its shell.

Perhaps a better name for this cue would be “turtling!”

Rubbing Eyes


People who rub their eyes usually use their index finger, middle finger, or thumb to get in on that eyelid action. It can range from a gentle, split-second touch to more obvious rubbing.

What it Means: Rubbing the eyelids really helps people calm down as it acts like a “visual reset.” Essentially what you’re saying when you rub your eyes is this: “Look, please go away. I wish everything in front of me would just vanish!” You’ll typically see this gesture with high-stakes poker players as soon as they lose a hand or during an argument between an angry and frustrated couple.

Of course, people naturally do this to get those nasty eye boogies out so always take into account how tired someone is before placing a negative label on them.

The Science: Rubbing the eyelids stimulates a reflex connected to the vagus nerve 18 https://www.livescience.com/vagus-nerve.html , which helps slow down heart and breathing rates.

You can also see people rub their eyelids during conversations and interrogations when they are asked a difficult or stress-inducing question. They want to cut off eye contact to reduce their own stress or anxiety.

You may often see this gesture more in men than women because women might be conditioned to avoid rubbing their eyes, especially if they wear eye makeup.

How to Use It: Having a hard day at work? Try closing your eyes in a safe space and gently rubbing your eyelids while taking a breath. I’ve found just 30 seconds of this helps immensely and gives a sense of calm during a stressful day.

Fidgeting with Objects


Fidgeting involves playing with nearby objects, such as keys, coins, a pen, a ring, or a necklace. And yes, even with the infamous fidget spinner.

What it Means: Fidgeting typically signals boredom. Bored of talking, bored of sitting down, and yes—even bored of you ( ouch!) .

People who fidget may be subconsciously desiring sensory reassurance 19 https://books.google.com/books?id=7xzhVIwIqSMC&lpg=PA180&ots=BWWFrFNWBP&dq=desmond%20morris%20putting%20objects%20mouth&pg=PA180#v=onepage&q=desmond%20morris%20putting%20objects%20mouth&f=false . This is similar to how babies hold onto their favorite toy. Other times, it may mean that people are anxious or short on time—and in some cases, even disappointed.

The Science: Observations at railway stations and airports revealed that there are 10x as many displacement activities in flying situations as in ordinary circumstances. In other words, people fidget a lot when they’re about to fly. These behaviors include the following:

  • checking tickets
  • taking out passports and putting them away
  • rearranging hand baggage
  • making sure their wallet is in place
  • dropping things and picking them up

In contrast, only 8% of people boarding a train showed signs of fidgeting, compared to 80% of people at the check-in desk of a jumbo-jet flight across the Atlantic (Desmond Morris, Peoplewatching).

How to Use it: If you want an easy out to a conversation just start jangling your keys or coins in your pocket or hands. It might be a bit rude, but if you’ve really gotta go, this is a great way to end a conversation .

Historic Example: In 1969, when Elvis Presley made his first public stage appearance in 9 years, he displayed visible signs of fidgeting.

Touching Ears


The ear is rubbed, pulled, scratched, touched, picked at, or rubbed vigorously.

What it Means: OK, you might have noticed a trend by now—touching yourself basically means anxiety. Not in all cases, but unless you’ve just got an itch that won’t go away, repetitive self-touch in all forms is a way to ease tension throughout your body.

People generally scratch behind their ears, says Dutch biologist Nikolaas Tinbergen 20 https://psycnet.apa.org/record/2004-16480-000 , as a way to ease tension during stressful situations—such as when you’ve made a public speaking blunder in front of thousands of people.

Effectively, people who touch their ears may be trying to “block” information that they’ve just heard—whether it’s a prodding question, or even if they’ve been accused.

Example: You may be familiar with the American actress Carol Burnett, who was famous for tugging on her left ear. She did this at the end of each show to let her grandmother know she was doing well and loved her. After her grandmother’s passing, she continued tugging her ear as a tradition and in memory of her beloved grandmother.

Pocketed Hands


Keeping hands in pockets may indicate disinterest or discomfort in revealing one’s thoughts and feelings.

What it Means: Having your hands in your pockets during a conversation generally signals a reserved or closed-off attitude. It might mean you’re uncomfortable, disinterested, or unwilling to engage fully with the other person. This gesture often hampers open communication and can make you appear unapproachable.

The Science: Psychological research 21 https://www.researchgate.net/publication/304151618_Body-Language-Communication_194_Aproprioception_The_IW_case suggests that hand gestures contribute significantly to communication. Therefore, pocketed hands limit this expressive capability, often leading to misinterpretation or a lack of connection during interactions.

How to Use it: If you notice yourself resorting to this stance, it may be helpful to ask yourself: “Am I nervous, uncomfortable, or disengaged?” Likewise, if you observe someone else with pocketed hands, it might be a sign to approach the situation with greater sensitivity.

Example: In many crime dramas, like “Law & Order,” suspects or witnesses often put their hands in their pockets when being questioned, which immediately makes them appear more guarded and less trustworthy to the detectives.

What Are the 11 Types of Body Language?

Besides open and closed, body language can be further broken down into 11 different channels, including facial expressions, body proxemics, and ornaments.


Facial Expressions

Researcher Dr. Paul Ekman discovered 7 universal microexpressions which are short facial gestures every human makes when they feel an intense emotion. We are very drawn to looking at and observing the face to understand someone’s hidden emotions.

Body Proxemics

Proxemics is a term for how our body moves in space. We are constantly looking at how someone is moving—are they gesturing? Leaning? Moving toward or away from us? Body movements tell us a lot about preferences and feelings.

Gestures

The most common gestures are hand gestures. We often use our hands to express our emotions, tell a story, or comfort ourselves. My team even did an experiment on TED talks and found the most popular speakers also used the most hand gestures.

Ornaments

Clothes, jewelry, sunglasses, and hairstyles are all extensions of our body language. Not only do certain colors and styles send signals to others, how we interact with our ornaments is also telling. Is someone a fidgeter with their watch or ring?

Interest

Interest cues can be signs of attraction or general interest that usually don’t involve touch. From obvious cues like winking and smiling, to more subtle ones like a flick of the hair or displaying the wrist, knowing which cues to give and recognizing them is key to building rapport.

Eye Gaze

Eye movements and changes tell us a lot about others’ intentions. During an interaction, we can often see changes such as longer eye gaze, sideways glances, and blocked eyes. These cues can indicate emotions like attraction, skepticism, or stress.

Pacifying

Pacifying behaviors consist of a wide range of self-soothing behaviors that serve to calm us down after experiencing something unpleasant. This can be seen with fidgeting, bouncing feet, and arm rubbing. As a general rule of thumb, any repetitive behavior is likely pacifying.

Haptics

Haptics refers to body language cues that involve touch. These include handshakes, touching another’s arm, hugs, a pat on the shoulder, and kissing. Since we interact with the world through touch, we can observe how others touch us to get an insight on their preferences.

Blocking

Blocking cues are performed to magically “vanish” the cause of people’s stress or anxiety. Like the three wise monkeys—“see no evil, hear no evil, speak no evil”—these cues consist of barriers like touching the mouth or crossing the arms to block out the environment.

Paralanguage

Paralanguage is the nonverbal communications of your voice, such as pitch, tone, and cadence. Often, we can hear how confident or anxious one feels by simply listening to their voice. By learning paralanguage, we can even master our own voices and give power to our words.

Emblems

Emblems, or symbolic cues, represent messages that are consciously understood by others and are often used in place of words. There are over 800 emblems, from the “OK” sign to the “thumbs up,” and they are heavily dependent on a person’s culture and geographic location.

Understanding & Interpreting Body Language

Body language isn’t just about seeing a body language cue. It’s also about interpretation and knowing what to look for.

In the world of body language, there are 2 camps: Absolutists believe that whenever a body language cue appears, it 100% has the interpreted meaning. For example, if a person crosses their arms, it means they are feeling blocked off in all cases. Contextualists believe that body language depends on the situation. If a person crosses their arms, it could mean that they’re cold, or it’s simply more comfortable for them.

The key to understanding body language is to be a contextualist, not an absolutist. Learning about body language cues without knowing how to apply them may skew your opinions about others for the worse, rather than improving them for the better.

Body Language Mini FAQ

Here are some other questions I’ve been asked about body language, which I’ve compiled into a mini FAQ: 

Is body language scientifically proven?

Yes! Body language cues and their consistency have been scientifically proven time and time again by researchers such as Paul Ekman, Joe Navarro, Barbara and Allan Pease, Desmond Morris, and Carol Kinsey Goman. However, it’s important to note that everyone has their individual quirks that may be different from the norm.

Is body language universal across cultures?

No. While many cues are universal, such as the eyebrow flash and 7 facial microexpressions, many body language cues are specific to a culture or geographic location. For example, many Western cultures prefer a handshake as a greeting; however, some Spanish or Latin cultures may kiss, Thai culture often employs the “wai” greeting, and the Japanese may prefer to bow.

What is a nonverbal cue?

A nonverbal cue is anything that is done nonverbally during an interaction, such as a hand gesture or bodily movement. Many body language cues can be interpreted to reveal a person’s intentions or feelings during a situation.

Should you trust a person’s words or their body language?

When there is a mismatch between a person’s words and body language, it is generally preferred to rely on their body language for an accurate interpretation of their true feelings. Most people make a conscious effort to choose their words carefully; however, body language is much harder to consciously control and therefore more reliable in most cases.

What is the difference between nonverbal communication and body language?

Nonverbal communication is the broad term used to describe all types of communication without using words. Body language is a category of nonverbal communication that focuses on all parts of the body, such as facial expressions and gestures.

Can body language be misread?

Absolutely! Many people, especially those who are new to reading body language, will make the mistake of attempting to read body language but get it wrong. They may read a certain body language cue and forget to take into consideration the context or environment. They may also read a cue but miss out on other, more important cues that signal the opposite of their interpretation.

What body language cues indicate lying?

Common body language cues that indicate lying are touching the nose, increased eye contact, licking the lips, uncertain vocal tonality, and a frozen posture. There are many lying cues that may indicate deception. However, there is no single cue that definitively means a person is lying.

How long does it take to learn to read body language?

It depends. Some people are naturally gifted at reading body language and can pick up on it readily. For others, it may take months to get a basic grasp of body language. The amount of time spent observing cues, a person’s perceptiveness, and the amount of training and research one does all affect a person’s body-language-reading abilities.

I hope this article has been useful to you! And if you have any other questions about body language, please leave a comment so I can potentially add it to the mini FAQ!

To your success,



Side Note: As much as possible we tried to use academic research or expert opinion for this master body language guide. Occasionally, when we could not find research we include anecdotes that are helpful. As more research comes out on nonverbal behavior we will be sure to add it!



Jeff Thompson Ph.D.

Body Language

The Science of Body Language & the Debates

Research shows that yes, body language does matter.

Posted October 21, 2012


Body language evaluation has come under increasing scrutiny as each debate passes, and expect that to continue with the final debate between President Obama and Governor Romney. Media pundits, nonverbal communication experts and researchers, politicians, and the general audience have all been more than willing to share their thoughts and interpretations.

With such a large amount of people weighing in, differences are certain to appear.


You don't have to always point to make a point.

EVERYONE IS AN “EXPERT”- WHO TO BELIEVE?

The question becomes who do you believe, and what are their comments and analysis based on? How do you determine the difference between opinions and facts?

The issue that often arises with nonverbal communication is that it is both a science and an art. For example, eye contact is a sign of building rapport (the science), yet how one applies it (the “art”) can have an adverse effect (think of a “cold,” “hard” stare).

This article offers an overview of relevant research in nonverbal communication and offers tips on how you can apply it to the debates. This will allow you to make a more informed interpretation of the nonverbal communication used by the candidates and more accurately evaluate the numerous body language evaluations that will be offered post debate in the media.

WHAT THE RESEARCH SAYS*

1. (Seiter et al., 1998) The facial expressions of a speaker’s opponent during a debate affected judgements of the speaker. When the opponent displayed disagreement by rolling his eyes, shaking his head, and so on, viewers had more positive attitudes toward the speaker, rating him higher on competence, character, composure, and sociability.

2. (Seiter, 2001) When a nonspeaking debater expressed nearly continuous disbelief by frowning, head shaking, mouthing “no” or “what?” audience members regarded him as deceptive and the speaker as truthful. However, moderate signs of disbelief lowered the ratings of truthfulness for both speakers.

3. (Smith, 2000) People speculated how much Gore’s “rude” behavior of occasional sighs of exasperation that were easy to hear may have damaged his performance.

4. (Luntz Research Companies, 2004) President Bush’s smirks, grimaces, and contorted facial expressions during his debate with John Kerry were so pronounced that they actually undermined his support among many undecided voters.

5. (Burgoon et al., 1996) Persuasiveness of a speaker includes the following nonverbal behaviors: eye contact, forceful gestures, open body positions, head nodding, close distances, touch, facial pleasantness, fluent speech, moderately loud vocal tones, moderately fast speech, and pitch variation.

6. (O’Keefe, 1990) The more a listener is focused on the issue, the less likely they are influenced by the speaker’s nonverbal cues.

7. (Exline, 1985) In regard to the Carter–Ford 1976 presidential debate, power and credibility correlate with greater relaxation and poise. Observers rated more favorably the segments that showed less tension and fewer tension-related adaptors (lip licking, postural sway, shifting gaze, eye blinks, and speech non-fluencies).

8. (Ritter & Henry, 1990) Jimmy Carter’s loss to Ronald Reagan in the 1980 debate was attributed to Carter’s visible tension and his inability to coordinate his nonverbal behavior with his verbal message.

9. (Guerrero & Floyd, 2006) People who communicate in a dynamic fashion have a dramatic, memorable, and attention-grabbing communication style that is immediate, expressive, and energetic.

10. (Cherulnik et al., 2001) In regard to the Bush–Clinton debate, coaching a speaker to act charismatic may not yield the desired effect if the speaker’s nonverbal communication seems deliberate rather than spontaneous.


Notice the difference in Gov. Romney’s stance—which looks more confident?

Check Your Biases At The Door

Remember, when watching the debates, you also possess biases. We all have them; notable biases include in-group bias: giving preferential treatment to those in your own group (the same political party). Another is the fundamental attribution error: seeing one person’s actions as a result of their disposition while seeing another’s actions as a result of the situation. Acknowledging your biases allows you to “check” them and conduct a more neutral analysis.


CPR: Charisma, Professionalism, Rapport

Elsewhere I mention the CPR model (charisma, professionalism, and rapport) as a way to analyze each debater. Research states:

(Pentland, 2008) Charismatic people are unusually expressive and sensitive and have strong internal control.

(Fox Cabane, 2012) Power and warmth are needed in order to be charismatic . Someone who is powerful but not warm can be impressive but isn’t necessarily perceived as charismatic and can come across as arrogant, cold, or standoffish. Someone who possesses warmth without power can be likeable, but is not necessarily perceived as charismatic and can come across as overeager, subservient, or desperate to please.

(Andersen, 2008) Immediacy behaviors are persuasive for several reasons; they may increase perceptions of trustworthiness, and dynamism (charisma), which are both key components underlying credibility. Immediacy behaviors also command attention and reflect interpersonal warmth and liking.

WHAT IS SEMIOTICS?

Semiotics is the study of signs. In this case, the signs are the nonverbal communication cues and elements. Semiotic analysis is simple (really!), and applying it to nonverbal communication helps reduce one’s biases. The three steps are:

1) Semantics: identify the nonverbal cue (i.e., the gesture, facial expression, or posture),

2) Syntactics: look at it in respect to what else is going on (other cues and/or words spoken), and

3) Pragmatics: analyze it based on the context and previous research.

Now that you are aware of the research and have “checked” your biases, enjoy the next debate. This article has offered insight into the research that contributes to my analysis, a model to view the effectiveness of the nonverbal communication being used and a method to analyze it. With this information, you can conduct your own nonverbal analysis and also compare it to the reviews of others.

Like this article? If yes, you will probably like my tweets too: @NonverbalPhD


Pointing, used sparingly and strategically along with congruent gestures, can be effective.

*This collection of research is primarily from the texts Nonverbal Communication in Human Interaction; Nonverbal Communication in Everyday Life; and Nonverbal Communication.

Jeff Thompson Ph.D.

Jeff Thompson, Ph.D., is an adjunct associate research scientist in the Department of Psychiatry at Columbia University Medical Center and the New York State Psychiatric Institute.


Body Language in Autism

  • September 24, 2024

Understanding body language is a vital part of effective communication. For individuals on the autism spectrum, nonverbal cues can sometimes be challenging to interpret or express. As a parent or caregiver, learning about the unique ways autistic individuals use body language can help you better connect and communicate with them.

Recognition by Typically Developing Children

Studies have shown that children with autism spectrum disorder (ASD) aged 5 to 12 years perform similarly to typically developing peers when it comes to recognizing emotions through static body postures. This finding indicates that children with ASD have the ability to interpret emotions conveyed through body language.

Recognition by Children with Autism

Research suggests that children with autism can accurately interpret emotions from body posture, even surpassing typically developing children in certain scenarios. This ability is correlated with their theory of mind skills, demonstrating a deeper understanding of others’ perspectives.

While both groups of children find it easier to read emotions from body language than from the eyes, individuals with autism may benefit from the physical distance often maintained during interactions. This suggests that leveraging body language can be a valuable communication tool for autistic individuals.

However, challenges may arise in relating these cues to the underlying emotions and thoughts of others. To support individuals with autism in developing comprehensive social interaction skills, it’s essential to incorporate strategies that help bridge this gap between reading body language and comprehending the associated emotions.


Factors Affecting Emotion Recognition

Exploring the factors that influence the ability to recognize emotions through body language is essential in understanding individuals with autism. Two significant factors that play a crucial role in emotion recognition are the link to the Theory of Mind (ToM) and the correlation with verbal intelligence.

Link to Theory of Mind (ToM)

Theory of Mind (ToM) refers to the ability to understand and interpret the mental states of oneself and others. Typically developing children have been shown to outperform individuals with ASD in Theory of Mind tasks. For instance, in standard tests of emotion recognition using photos of eyes, typically developing children excel when compared to children with ASD.

Children with autism often struggle to put themselves in another person’s place to comprehend their feelings accurately. This difficulty in understanding the perspective of others can impact their ability to interpret body language signals effectively. Recognizing emotions from body posture is closely linked to the Theory of Mind, especially for individuals with ASD.

Correlation with Verbal Intelligence

Another factor that influences emotion recognition through body language in individuals with autism is the correlation with verbal intelligence. While individuals with autism can interpret body language cues effectively, they may face challenges in linking these non-verbal cues to understand the emotions of others.

It has been observed that individuals with autism tend to focus on the small, local details of body movement rather than processing the motion of the entire body as a whole. This hyper-focus on specific details may explain their difficulty in grasping implicit emotional meanings from certain movements or postures.


Brain Regions Involved in Perception

Exploring the neuroscience behind emotion perception in individuals with autism sheds light on specific brain regions responsible for processing body language cues. Two key areas implicated in this process are the superior temporal sulcus function and the mechanism of combining motion information.

Superior Temporal Sulcus Function

Research studies have highlighted the importance of the superior temporal sulcus, located in the temporal lobe, in perceiving others’ movements and deciphering their mental states. This brain region plays a significant role in interpreting social cues embedded in body language, such as emotions and intentions. Understanding how the superior temporal sulcus functions provides valuable insights into the neural mechanisms involved in processing social information in individuals with autism.

Combining Motion Information

The study on body language perception in autism also emphasizes the challenge of integrating motion information across different spatial locations, particularly in individuals with high-functioning autism. This difficulty in combining motion details poses obstacles in accurately interpreting social cues conveyed through body language. Further investigation is warranted to unravel how the brain processes social content within the context of others’ movements, especially in individuals with autism.

Implications for Autism Therapy

When it comes to therapy for individuals with autism, understanding and addressing the challenges related to interpreting body language can play a crucial role in improving social interactions and communication skills. This section focuses on two key aspects of implications for autism therapy: developing social skills and future research directions.


Developing Social Skills

While individuals with autism may be able to recognize body language cues, they may struggle to connect these cues with the underlying emotions. Anomalies in processing spatial frequencies could contribute to these difficulties. Research indicates that individuals with autism may rely more on high spatial frequencies, potentially missing important visual details that aid in emotional understanding.

Therapeutic approaches that target social skills development in individuals with autism can include structured programs that teach explicit strategies for interpreting and responding to body language. By focusing on enhancing the recognition and understanding of emotions conveyed through nonverbal communication, individuals with autism can improve their social interactions and relationships.

Future Research Directions

As research continues to uncover the complexities of body language perception in autism, future studies are needed to delve deeper into how individuals with autism process social cues and movements. It has been observed that individuals with autism often exhibit lower accuracy in interpreting emotions conveyed through body language compared to neurotypical individuals.

Further exploration into how the brain processes social information from others’ movements can provide valuable insights for tailored interventions. By expanding research efforts and incorporating the findings into clinical practice, therapists and caregivers can better support individuals with autism in navigating the complexities of interpersonal communication.


https://www.thetransmitter.org/spectrum/autism-impedes-ability-to-read-body-language/

https://www.newscientist.com/article/dn27960-interpreting-body-language-is-no-problem-for-kids-with-autism/

https://pubmed.ncbi.nlm.nih.gov/26079273/


Healthcare (Basel)

An Analysis of Body Language of Patients Using Artificial Intelligence

Rawad Abdulghafor

1 Department of Computer Science, Faculty of Information and Communication Technology, International Islamic University Malaysia, Kuala Lumpur 53100, Malaysia

Abdelrahman Abdelmohsen

Sherzod Turaev

2 Department of Computer Science and Software Engineering, College of Information Technology, United Arab Emirates University, Al Ain 15551, United Arab Emirates

Mohammed A. H. Ali

3 Department of Mechanical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur 50603, Malaysia

Sharyar Wani

Associated Data

Not Applicable.

In recent decades, epidemic and pandemic illnesses have grown prevalent and are a regular source of concern throughout the world. The extent to which the globe has been affected by the COVID-19 epidemic is well documented. Smart technology is now widely used in medical applications, with the automated detection of status and feelings becoming a significant study area. As a result, a variety of studies have begun to focus on the automated detection of symptoms in individuals infected with a pandemic or epidemic disease by studying their body language. The recognition and interpretation of arm and leg motions, facial recognition, and body postures is still a developing field, and there is a dearth of comprehensive studies that might aid in illness diagnosis utilizing artificial intelligence techniques and technologies. This literature review is a meta-review of past papers that utilized AI for body language classification through full-body tracking or facial expression detection for various tasks such as fall detection and COVID-19 detection; it examines the different methods proposed by each paper, their significance, and their results.

1. Introduction

One of the languages of communication is body language. Languages are divided into two categories: verbal and nonverbal. Body language is a type of nonverbal communication in which the body’s movements and actions are utilized instead of words to communicate and transmit information. According to [ 1 , 2 ], nonverbal cues such as gestures, body posture, eye movement, facial expressions, touch, and personal space utilization are all examples of body language.

Body language analysis is also necessary to avoid misunderstandings about the meanings and objectives of a single movement that has several meanings. Gaze direction; pupil dilation; hand and leg position; manner of sitting, walking, standing, or lying; body posture; and movement are all examples of how a person’s inner state is portrayed. Hands are arguably the richest wellspring of body language information after the face [ 3 ]. For example, one may tell if a person is honest (by turning the hands inwards, towards the interlocutor) or disingenuous (by turning the hands away from the interlocutor or hiding them behind the back). During a conversation, using open-handed gestures might convey the image of a more trustworthy individual, a tactic that is frequently employed in discussions and political conversations. It has been demonstrated that persons who make open-handed gestures are better liked [ 4 ]. The posture of one’s head may also indicate a lot about one’s emotional state: people are more likely to keep talking when the listener supports them by nodding, and the rate of nodding might indicate patience or impatience. In a neutral stance, the head is held still, facing the speaker. When a person’s chin is elevated, it might indicate dominance or even arrogance. Revealing the neck could be interpreted as a gesture of surrender.

In the last few years, automatic body language analysis has gained popularity. This is due in part to the large number of application domains for this technology, which range from any type of human–computer interaction scenario (e.g., affective robotics [ 5 ]), to security (e.g., video surveillance [ 6 ]), to e-Health (e.g., therapy [ 7 ] or automated diagnosis [ 8 ]), to language and communication (e.g., sign language recognition [ 9 ]), and to amusement (e.g., interactive gaming [ 10 ]). As a result, we can find research papers on a variety of topics related to human behavior analysis, such as action/gesture recognition [ 11 , 12 ], social interaction modeling [ 13 , 14 ], facial emotion analysis [ 15 ], and personality trait identification [ 16 ], to name a few. Ray Birdwhistell conducted research on using body language for emotional identification and discovered that the final message of a speech is shaped only 35 percent by the actual words and 65 percent by nonverbal signals [ 17 ]. In addition, according to psychological studies, facial expression conveys 55 percent of the total information in communication and intonation conveys 38 percent [ 4 ].

We provide a new, thorough survey in this study to help develop research in this area. First, we provide a description and explanation of the many sorts of gestures, as well as an argument for the necessity of instinctive body language detection in determining people’s moods and sentiments. Then we look at broad studies in the realm of body language processing. After that, we concentrate on health care body language analysis studies. We then define the automated recognition frame for numerous body language characteristics using artificial intelligence. Finally, we describe an automated gesture recognition model that aids in the better identification of the external signs of epidemic and pandemic illnesses.

2. Body Language Analysis

2.1. Overview of Body Language Analysis

Body language interpretations fluctuate from nation to country and from culture to culture. There is substantial debate about whether body language can be considered a universal language for all humans. Some academics believe that most of the interpersonal communication is based on physical symbols or gestures, because the interplay of body language enhances rapid information transfer and comprehension [ 18 ].

Body language analysis is also necessary to avoid misunderstandings about the meanings and objectives of a single movement that has several meanings. A person’s expressive movement, for example, may be caused by a physical limitation or a compulsive movement rather than being deliberate. Furthermore, one person’s bodily movement may not signify the same thing to another. For example, a person may rub her eyes because of itching rather than weariness. Because of their societal peculiarities, other cultures also require thorough examination. There are certain common body language motions, but there are also movements unique to each culture; this varies depending on the nation, area, and even social category. In this section of the study, we discuss the various aspects of body language analysis, as explained below.

2.2. Body Language Analysis in Communication

In research from [ 19 ], body language is a kind of nonverbal communication. Humans nearly exclusively transmit and interpret such messages subconsciously. Body language may provide information about a person’s mood or mental condition. Aggression, concentration, boredom, relaxed mood, joy, amusement, and drunkenness are just a few of the messages it might convey. Body language is a science that influences all aspects of our lives. Body language is a technique through which a person may not only learn about other people by observing their body motions but also improve himself properly and become a successful person. Body language is a form of art that allows a person to acquire a new level of fame.

If language is a way of social connection, then body language is unquestionably a reflection of personality development. It can allow for reading other people’s minds, allowing a person to effortlessly mold himself to fit the thinking of others and make decisions for effective and impactful planning. The person’s mental mood, physical fitness, and physical ability are all expressed through body language. It allows you to have a deeper knowledge of individuals and their motives. It builds a stronger bond than a lengthy discussion or dispute. Reading body language is crucial for appropriate social interaction and nonverbal communication.

In human social contact, nonverbal communication is very significant. Every speaking act we perform is accompanied by our body language, and even if we do not talk, our nonverbal behavior continually communicates information that might be relevant. As a result, the following research [ 20 ] seeks to provide a summary of many nonverbal communication components. Nonverbal communication is usually characterized as the opposite of verbal communication: any occurrences with a communicative value that are not part of verbal communication are grouped under the umbrella term nonverbal communication, as well as auditory factors such as speaking styles and speech quality. On the one hand, paralinguistic (i.e., vocal) phenomena such as individual voice characteristics, speech melody, temporal features, articulation forms, and side noise can be found.

Nonvocal phenomena in conversation, on the other hand, include a speaker’s exterior traits, bodily reactions, and a variety of kinesics phenomena that can be split into macro-kinesics and micro-kinesics phenomena. Figure 1 depicts a comprehensive review of the many types of nonverbal communication.

Figure 1. Overview of the main forms of nonverbal communication (figure taken from [ 20 ]).

In this study from [ 21 ], nonverbal conduct encompasses all forms of communication other than speaking. The term “communication” refers to the act of sending and receiving messages. Even though language use is a uniquely human trait, differing perspectives revolve around nonverbal behaviors and the current context. We employ body language without realizing it, as well as see and understand the body language of others. Nonverbal conduct is divided into three categories: verbal–vocal, nonverbal vocal, and nonverbal nonvocal. The link between verbal and nonverbal conduct is demonstrated through several gestures. Nonverbal events have a crucial role in the structure and occurrence of interpersonal communication, as well as the interaction’s moment-to-moment control. Nonverbal cues contribute to governing the system by signaling hierarchy and priority among communicators, regulating the flow of interaction, and providing meta-communication and feedback.

As shown in [ 22 ], body language is one of the most crucial aspects of communication; communication that is not supported by body language remains incomplete. Our physical appearance also has a significant impact on how well we deliver our message. Our thoughts, expressions, postures, and gestures all have a significant impact on the weight of meaning and emotion carried by our phrases and words. Understanding and conveying emotions and thoughts rely heavily on body language. It is important for the proper expression and comprehension of messages during the communication process. It also supports oral communication and establishes communication integrity. Body language accounts for 55% of how we impress people when speaking, words account for 7%, and tone of voice accounts for 38%.

It is critical to concentrate on this distinction if you want to be an effective speaker. Because body language is swiftly registered in the subconscious, the audience focuses on it. Even if the audience does not comprehend the spoken language, the audience may grasp the message through body language.

2.3. Body Language in Public Speaking

Although our face is the indicator of our thinking, we cannot deny that words are also quite powerful. We may look to the French and Russian Revolutions for instances of great speeches delivered by leaders. However, we cannot afford to overlook the reality that actions speak louder than words, i.e., body language is more potent than words. We use words to disguise our emotions and sentiments much of the time, but our body language makes them quite evident. Our formal and professional lives are completely reliant on nonverbal communication, which we engage in through our behaviors and body language. People in the office do not speak much yet transmit everything through their body language. Whenever they communicate, they consciously or unconsciously use their body language more than their words. In any conversation, body language is important. The image of Lord Krishna speaking to Arjuna on the fields of Kurukshetra will be read, described, and analyzed in this research study [ 23 ]. It will discuss the significance of body language when speaking in public.

2.4. Body Language Analysis in Teaching

The main goal of the study [ 24 ] was to assess the influence of teachers’ nonverbal communication on teaching success, based on research into the link between teaching quality and teachers’ nonverbal communication. The results demonstrated a substantial link between the quality, quantity, and technique of the nonverbal communication instructors used while teaching and their teaching success. According to the research evaluated, the more teachers used verbal and nonverbal communication, the more effective their instruction and the academic achievement of their pupils were.

According to other research, why do certain teachers exude a mystical charisma and charm that sets them apart from their colleagues? The Classroom X-Factor investigates the idea of possessing what the public has come to refer to as the “X-Factor” from the perspective of the teacher, providing unique insights into the use of nonverbal communication in the classroom. This study shows how both trainee and practicing teachers may find their own X-Factor to help shift their perspectives and perceptions of themselves during the live act of teaching, using examples drawn from classrooms and curricula. It also shows how instructors may change the way they engage with their students while simultaneously providing them with significant and powerful learning opportunities. Teachers may generate their own X-Factor by adopting easy strategies derived from psychology and cognitive science, and therefore boost their satisfaction and efficacy as professionals. Facial and vocal expression, gesture and body language, eye contact and smiling, teacher apparel, color and the utilization of space, nonverbal communication, and educational approaches are among the tactics outlined. Furthermore, the study includes a part with fictitious anecdotes that serve to contextualize the facts presented throughout the text [ 25 ].

2.5. Body Language Analysis in Sport

The literature reviewed shows that nonverbal behavior (NVB) changes as a result of situational variables, because a person either shows a nonverbal response to internally or externally provoking circumstances (as is theorized for some basic emotions conveyed in the face) or intentionally desires to convey certain information through nonverbal cues to observers in a given situation. Certain NVBs have been demonstrated to have a range of consequences on later interpersonal results, including cognition, emotion, and behavior, when they are displayed and seen (e.g., [ 26 ] for reviews).

2.6. Body Language Analysis in Leadership

The authors of [ 27 ] examined the possibility of gender disparities in leaders’ nonverbal actions, as well as the impact these differences may have on their relative effectiveness. Nonverbal communication may reveal a leader’s emotions and increase followers’ involvement. Once the leader is aware of his or her gestures and body motions, he or she may compare them to those of more effective leaders. On certain levels, gender inequalities in nonverbal behavior occur. Women are linked to transformative traits such as compassion, love, and concern for others. Men, on the other hand, relate to traits such as aggressiveness, dominance, and mastery.

This demonstrated that productive women do not always exhibit the same nonverbal behaviors as effective males. Nonverbal hesitations, tag questions, hedges, and intensifiers are more likely to be used by fluent speakers. This suggests that leaders who shake their heads are more likely to exhibit higher counts of speaker fluency behaviors. It is also not tied to gender in any way. Another intriguing finding is that the head movement of nodding is linked to the behaviors of upper grin, broad smile, and leaning forward. This demonstrates that these affirming, good behaviors are linked in a major way. Furthermore, the observed leaders’ speech fluency is substantially connected with their head movement shaking.

2.7. Body Language Analysis in Culture

In [ 28 ], the authors discussed a range of body language used in many cultures throughout the world. The meanings that may be conveyed through body language are numerous. For example, people from all cultures use the same kinds of body language, such as gaze and eye contact, facial expressions, gestures, and body movements, to communicate shared meanings. Distinct cultures have different ways of communicating non-verbally, and different people have different ways of expressing themselves via gestures. Nonverbal communication, much like traffic, has a purpose and follows a set of norms to ensure that it flows smoothly among people from many diverse cultures.

On the other hand, cultures can use the same body language to communicate diverse meanings. There are three aspects to this:

  • Eye contact differs by culture.
  • Other nonverbal signals vary by culture.
  • The appropriate distance between two individuals reveals the distinct attitudes of different cultures.

Our culture is as much about body language as it is about verbal discourse. Learning the various basic norms of body language in other cultures might help us better understand one other. People from many cultures are able to converse with one another. However, cultural exchanges and cultural shocks caused by our body language are becoming increasingly harsh and unavoidable.

As a result, while communicating in a certain language, it is best to utilize the nonverbal behavior that corresponds to that language. When a person is fully bilingual, he changes his body language at the same time as he changes his language. This facilitates and improves communication.

Lingua franca is a linguistic bridge that connects two persons who speak different native languages.

In this regard, it has been determined in [ 29 ] that while we communicate with our vocal organs, the body language of our bodies can serve as a lingua franca for multilingual interlocutors.

The findings indicate that the listener was attempting to comprehend the speaker’s gestures. Because the speaker could not speak English fluently, he had difficulty achieving precise diction, but he ultimately succeeded in expressing his views with gestures towards the end of the video. Furthermore, the interlocutors were involved in the delivery and reception of implied meaning via gestures and body language. Even though a lingua franca (e.g., English) already existed, body language added significance to the message.

Furthermore, according to the data collected in this study, the Korean model and a client had a tumultuous history while shooting certain photoshoots. The customer was not pleased with the model’s attitude, which he felt insulted him. Nonetheless, the Korean model apologized in a traditional Korean manner by kneeling to the customer and the judges.

The judges and the client were both impressed by her formal and courteous demeanor. To complete the analysis, this research employed multimodal transcription analysis with Jefferson and Mondada transcript notation, as well as YouTube data clips. Some ambiguities may remain, which could be an excellent starting point for additional study in the fields of lingua franca and body language to gain a more comprehensive understanding.

2.8. Body Language in Body Motions

Both cognitive-based social interactions and emotion-based nonverbal communication rely heavily on body movements and words. Nodding, head position, hand gestures, eye movements, facial expressions, and upper/lower-body posture, as well as speaking, are recognized to communicate emotion and purpose in human communication.

2.8.1. Facial Expressions

According to new research, facial expressions are changes in the appearance of the face caused by the movement of facial muscles; they are a nonverbal communication route. Emotional facial expressions are both symptoms and communication cues of an underlying emotional state. People seldom convey their feelings by using characteristic expressions connected with certain emotions that are also widely recognized across countries and settings. Furthermore, environmental circumstances have a significant impact on both the expression and detection of emotional responses by observers [ 30 ].

In recent years, as [ 31 ] notes, there has been a surge in interest in both emotions and their regulation, notably in the neurosciences and, more specifically, in psychiatry. Researchers have attempted to uncover patterns of expression in experimental investigations analyzing facial expressions. There is a large amount of data accessible; some of it has been validated, while other data have been refuted, depending on the emotion studied and the method employed to assess it. A key issue is interpreting data that have not always been completely proven and are based on Paul Ekman’s hypothesis of six main types of expression (happiness, anger, disgust, fear, sadness, and surprise).

The sense of happiness, with its expressive element of the “smile,” is the only one of Ekman’s “basic emotions” that is observably linked to the underlying physiological and facial pattern of expression. Regarding Ekman’s other basic patterns of expression, there is much scholarly debate. A better understanding of how emotions are regulated and how the dynamics of emotional facial expression may be described could lead to more basic research in a social situation. Even more crucially, it has the potential to increase knowledge of the interaction and social repercussions of emotional expression deficiencies in people with mental illness, as well as a therapeutic intervention. Innovative study in the realm of emotional facial expression might give thorough solutions to unanswered issues in the field of emotion research.

2.8.2. Gestures

Gestures are generally hand movements (though they can also include head and facial movements) that serve two purposes: to illustrate speech and to transmit verbal meaning. Gestures are fascinating because they offer a window into cognition; that is, they are motions that express an idea or a mental process [ 32 ].

Whenever a person is pondering what to say, gesturing relieves the cognitive burden. When people are given a memory task while also explaining how to solve a math problem, for example, they recall more items if they gesture while describing the arithmetic. When counting objects, being able to point allows for higher precision and speed; when people are not permitted to point, even nodding allows for greater precision [ 33 ]. Gestures aid in the smoothing of interactions and the facilitation of some components of memory. As a result, gestures can provide valuable insight into speakers’ states of mind and mental representations. Gestures may be divided into two types: those that occur in conjunction with speech and those that exist independently of speech [ 26 ].

3. Body Language Analysis and AI

3.1. Overview

In face-to-face conversations, humans have demonstrated a remarkable capacity to infer emotions, and much of this inference is based on body language. Touching one’s nose conveys incredulity, whereas holding one’s head in the hands expresses upset among individuals of comparable cultures. Understanding the meaning of body language appears to be a natural talent for humans. In [ 34 ], the authors presented a two-stage system that forecasts emotions related to body language from ordinary RGB video inputs, to help robots develop comparable skills. In the first stage, the system predicted body language from the input videos based on estimated human poses. The predicted body language was then passed to the second stage, which inferred emotions.
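To make the two-stage structure concrete, here is a minimal sketch in PyTorch, assuming pose keypoints have already been extracted by some external estimator; the module names, layer sizes, and class counts are illustrative and are not the architecture of [ 34 ].

```python
# A minimal two-stage sketch (pose -> body-language logits -> emotion logits).
# All dimensions below are illustrative placeholders, not values from [34].
import torch
import torch.nn as nn

class BodyLanguageClassifier(nn.Module):
    """Stage 1: map a sequence of 2D pose keypoints to body-language logits."""
    def __init__(self, n_keypoints=17, n_body_language=10, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=n_keypoints * 2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_body_language)

    def forward(self, keypoints):              # keypoints: (batch, frames, n_keypoints*2)
        _, h = self.rnn(keypoints)
        return self.head(h[-1])                # (batch, n_body_language)

class EmotionClassifier(nn.Module):
    """Stage 2: map predicted body-language scores to emotion logits."""
    def __init__(self, n_body_language=10, n_emotions=7):
        super().__init__()
        self.fc = nn.Linear(n_body_language, n_emotions)

    def forward(self, body_language_logits):
        return self.fc(torch.softmax(body_language_logits, dim=-1))

stage1, stage2 = BodyLanguageClassifier(), EmotionClassifier()
poses = torch.randn(4, 30, 17 * 2)             # 4 clips, 30 frames, 17 (x, y) keypoints each
emotion_logits = stage2(stage1(poses))         # (4, 7)
```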

Automated emotion identification based on body language is beneficial in a variety of applications, including health care, internet chatting, and computer-mediated communications [ 35 ]. Even though automated body language and emotions identification algorithms are used in a variety of applications, the body language and emotions of interest vary. Online chatting systems, for example, are focused on detecting people’s emotions, i.e., if they are happy or unhappy, whereas health care applications are concerned with spotting possible indicators of mental diseases such as depression or severe anxiety. Because a certain emotion can only be expressed through the associated body language, many applications necessitate the annotation of various body language and emotions.

3.2. Recognition of Facial Expressions

Facial expressions (FE) are important affect signaling systems that provide information about a person’s emotional state. They form a basic communication mechanism between people in social circumstances, along with voice, language, hands, and body position. AFER (automated FE recognition) is a multidisciplinary field that straddles behavioral science, neuroscience, and artificial intelligence.

Face recognition is a prominent and well-established topic in computer vision. Deep face recognition has advanced significantly in recent years, thanks to the rapid development of machine learning models and large-scale datasets, and it is now widely employed in a variety of real-world applications. Given a natural picture or video frame as input, an end-to-end deep face recognition system produces the face features used for recognition [ 36 ]. Face detection, feature extraction, and face recognition (seen in Figure 2 ) are the three main phases in developing a strong face recognition system [ 37 , 38 ]. The face detection stage is used to recognize and locate the human face in the image. The feature extraction stage is used to extract feature vectors for every human face found in the previous step. Finally, the face recognition stage compares the retrieved characteristics of the human face with all template face databases to determine the face’s identity.

Figure 2. Face recognition structure (figure taken from [ 39 ]).

3.2.1. Face Detection

The face recognition system starts with the identification of human faces in each picture. The goal of this phase is to see if there are any human faces in the supplied image. Face detection might be hampered by fluctuations in lighting and facial expression. Pre-processing activities are carried out to enable the creation of a more robust face recognition system. Many approaches, such as in [ 40 ] and the histogram of oriented gradient (HOG) [ 41 ], are utilized to identify and locate the human face picture. Face detection may also be utilized for video and picture categorization, among other things.
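As a concrete illustration of this stage, the sketch below uses OpenCV’s bundled Haar cascade, a classical detector; it is not necessarily the method of [ 40 ] or the HOG detector of [ 41 ], and the input file name is hypothetical.

```python
# A minimal face-detection sketch with OpenCV's pre-trained frontal-face Haar cascade.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("photo.jpg")                      # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.equalizeHist(gray)                        # simple pre-processing against lighting changes

faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:                           # each detection is a face bounding box
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
```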

3.2.2. Feature Extraction

The major purpose of this phase is to extract the characteristics of the face photos that were discovered in the detection stage. This stage defines a face using a “signature,” which is a set of characteristic vectors that characterize the major aspects of the face picture, such as the mouth, nose, and eyes, as well as their geometric distribution [ 42 ]. Each face has a unique structure, size, and form that allows it to be recognized. To recognize the face using size and distance, some methods involve extracting the contour of the lips, eyes, or nose [ 37 ]. To extract facial characteristics, approaches such as HOG [ 43 ], independent component analysis (ICA), linear discriminant analysis (LDA) [ 44 ], scale-invariant feature transform (SIFT) [ 38 ], and local binary pattern (LBP) [ 42 ] are commonly utilized.
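The following sketch computes two of the hand-crafted descriptors named above, HOG and LBP, with scikit-image; the parameters are illustrative rather than taken from the cited works.

```python
# A minimal "signature" built from a HOG descriptor and an LBP histogram.
import numpy as np
from skimage.feature import hog, local_binary_pattern

def face_signature(face_gray: np.ndarray) -> np.ndarray:
    """Concatenate HOG and LBP features for one cropped, grayscale face image."""
    hog_vec = hog(face_gray, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    lbp = local_binary_pattern(face_gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=np.arange(11), density=True)  # 10 uniform-LBP bins
    return np.concatenate([hog_vec, lbp_hist])

signature = face_signature(np.random.rand(64, 64))   # random array stands in for a face crop
```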

3.2.3. Face Recognition

This phase compares the features obtained in the feature extraction stage to the known faces recorded in a database. Face recognition may be used for two different purposes: identification and verification. During identification, a test face is compared with a set of faces to discover the most likely match. During verification, a test face is compared with a known face in the database to reach an acceptance or rejection decision [ 45 ]. This challenge has been successfully addressed by correlation filters (CFs) [ 46 ], convolutional neural networks (CNNs) [ 47 ], and k-nearest neighbors (k-NN) [ 48 ].
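As an illustration of the matching step, here is a minimal k-NN sketch with scikit-learn; it assumes the gallery and probe faces have already been converted to feature vectors (the random arrays below are placeholders), and the distance threshold used for the accept/reject decision is purely illustrative.

```python
# A minimal 1-NN identification sketch; the distance threshold doubles as a crude verification rule.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

gallery_features = np.random.rand(20, 128)           # 20 enrolled face vectors (placeholder data)
gallery_labels = np.repeat(np.arange(5), 4)          # 5 identities, 4 images each

knn = KNeighborsClassifier(n_neighbors=1, metric="euclidean")
knn.fit(gallery_features, gallery_labels)

probe = np.random.rand(1, 128)                       # feature vector of the test face
distance, _ = knn.kneighbors(probe, n_neighbors=1)
identity = knn.predict(probe)[0]                     # most likely identity (identification)
accepted = distance[0, 0] < 4.0                      # illustrative threshold (verification)
```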

3.3. Face Recognition Techniques

Considering the data that have been reported thus far, these authors [ 44 ] believed that three techniques stand out as particularly promising for future development in this area: (i) the development of 3D face recognition, (ii) the use of multimodal fusion techniques of complementary data types, particularly those based on visible and near-infrared images, and (iii) the use of deep learning methods.

3.3.1. 3D Facial Recognition

Due to the 3D structure of the face, some characteristics are lost in 2D image-based approaches. Two key unsolved issues in 2D face recognition are lighting and position variability. The scientific community has recently focused on 3D facial recognition to tackle unsolved challenges in 2D face recognition and obtain considerably greater accuracy by assessing the geometry of hard features on the face. As a result, various contemporary techniques based on 3D datasets in [ 49 , 50 ] have been created.

3.3.2. Multimodal Facial Recognition

Sensors with the demonstrated capacity to capture not only 2D texture information but also face shape, that is, three-dimensional information, have been created in recent years. As a result, several recent studies have combined the two forms of 2D and 3D information to take advantage of each and create a hybrid system that improves recognition compared with a single modality [ 51 ].

3.3.3. Deep Learning Facial Recognition

DL is a wide notion with no precise definition; nonetheless, researchers [ 52 , 53 ] have agreed that DL refers to a collection of algorithms that aim to model high-level abstractions by modeling several processing levels. This field of study, which dates to the 1980s, is a branch of autonomous learning in which algorithms are employed to create deep neural networks (DNN) that are more accurate than traditional procedures. Recently, progress has been made to the point that DL outperforms humans in several tasks, such as object recognition in photos.

3.4. Recognition of Gestures

We reviewed contemporary deep-learning-based algorithms for gesture identification in videos in this part, which are primarily driven by the fields of human–computer, machine–human, and robot interaction.

3.4.1. Convolutional Neural Networks in 2D

Applying 2D CNNs to individual frames and afterwards averaging the results for categorization is the first approach that springs to mind for identifying a sequence of pictures. In [ 54 ], a CNN framework for human posture estimation is described that constructs a spatial component aiming to make joint predictions by considering the locations of related joints. The authors train numerous convnets to perform binary body-part categorization independently (i.e., presence or absence of that body part). These nets are applied to overlapping portions of the input as sliding windows, resulting in smaller networks with greater performance. In contrast, a CNN-based mega model for human posture estimation has been presented in [ 55 ]: the authors extract characteristics from the input picture using a CNN, and these characteristics are subsequently fed into joint point regression and body part identification tasks. For gesture identification (finger spelling of ASL) using depth pictures, Kang et al. (2015) use a CNN and extract features from its fully connected layers. Moreover, a deep learning model for estimating hand posture that uses both unlabeled and synthetically created data is offered in [ 56 ]. The key to the developed framework is that instead of embedding structure in the model architecture, the authors incorporate information about the structure into the training approach by segmenting hands into portions. For identifying 24 American Sign Language (ASL) hand movements, a CNN and a stacked de-noising autoencoder (SDAE) were employed in [ 57 ]. A multiview system for point-cloud hand posture identification has been shown in [ 58 ]: view images are created by projecting the hand point cloud onto several view planes, and features are then extracted from these views using a CNN. A CNN that uses a GMM skin detector to recognize hands and then align them to their major axes has been presented in [ 59 ]; the authors then used a CNN with pooling and sampling layers, as well as a typical feed-forward NN as a classifier.

Meanwhile, a CNN that retrieves 3D joints based on synthetic training examples for hand position prediction has been presented in [ 60 ]. The network converts the output of its convolution layers into heat maps (one for each joint) on top of the final layer, representing the likelihood of each joint’s location; an optimization problem is then used to recover poses from the series of heatmaps.
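The sketch below illustrates the frame-wise idea that opens this subsection: a small 2D CNN classifies each frame and the per-frame scores are averaged over the clip. Layer sizes and the number of gesture classes are illustrative and are not taken from the cited papers.

```python
# A minimal frame-wise 2D CNN: classify each frame, then average the scores over the clip.
import torch
import torch.nn as nn

class FrameCNN(nn.Module):
    def __init__(self, n_classes=24):                # e.g., 24 ASL hand shapes (illustrative)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, frames):                       # frames: (n_frames, 3, H, W)
        return self.classifier(self.features(frames).flatten(1))

clip = torch.randn(30, 3, 64, 64)                    # 30 RGB frames of one gesture clip
per_frame_logits = FrameCNN()(clip)                  # (30, n_classes)
clip_score = per_frame_logits.mean(dim=0)            # averaged result used for the clip label
```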

3.4.2. Features That Are Dependent on Motion

Gesture recognition has been widely performed using neural networks and CNNs based on body posture and hand estimation as well as motion data. To achieve better results, temporal information must be incorporated into the models rather than spatial data alone. Two-stream (spatiotemporal) CNNs that learn from a set of training gestures for gesture style detection in biometrics have been studied in [ 61 ]: the spatial network is fed with raw depth data, while the temporal network is fed with optical flow. Color and motion information were used to estimate articulated human pose in videos in [ 62 ]: with an RGB picture and a collection of motion characteristics as input data, the authors present a convolutional network (ConvNet) framework for predicting the 2D position of human joints in the video. The perspective projections of the 3D velocity of moving surfaces are one of the motion characteristics employed in this technique. For gesture identification from depth data, three representations, the dynamic depth image (DDI), dynamic depth normal image (DDNI), and dynamic depth motion normal image (DDMNI), were employed as the input data of 2D networks in [ 54 ]. The authors used bidirectional rank pooling to create these dynamic pictures from a series of depth photos; these representations are capable of successfully capturing spatiotemporal information. A comparable concept for gesture recognition in continuous depth video is proposed in [ 41 ]: an improved depth motion map (IDMM) is built by calculating the absolute depth difference between the current frame and the starting frame of each gesture segment, and this motion characteristic serves as the input to a deep learning network.
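A minimal sketch of a motion feature in this spirit is shown below: absolute frame-to-frame depth differences are accumulated into a single 2D motion map that a 2D network could consume. It illustrates the general idea only and is not the IDMM definition of [ 41 ].

```python
# Accumulate absolute depth differences over a gesture segment into one 2D "motion map".
import numpy as np

def depth_motion_map(depth_frames: np.ndarray) -> np.ndarray:
    """depth_frames: (T, H, W) depth sequence for one gesture segment."""
    diffs = np.abs(np.diff(depth_frames, axis=0))    # |frame_t - frame_{t-1}| per pixel
    motion_map = diffs.sum(axis=0)                   # accumulate motion energy over time
    return motion_map / (motion_map.max() + 1e-8)    # normalize to [0, 1]

segment = np.random.rand(40, 120, 160)               # placeholder 40-frame depth clip
dmm = depth_motion_map(segment)                      # (120, 160) image usable as 2D-CNN input
```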

3.4.3. Convolutional Neural Networks in 3D

Many 3D CNNs for gesture recognition have been presented [ 3 , 49 , 63 ]. A three-dimensional convolutional neural network (CNN) for recognizing driver hand gestures based on depth and intensity data has been presented in [ 3 ]; for the final prediction, the authors combine information from several spatial scales and use spatiotemporal data augmentation for more effective training and to avoid overfitting. A recurrent mechanism has been added to the 3D CNN in [ 55 ] to recognize and classify dynamic hand movements: a 3D CNN extracts spatiotemporal features, a recurrent layer performs global temporal modeling, and a SoftMax layer forecasts class-conditional gesture probabilities. A 3D CNN for sign language identification that extracts discriminative spatiotemporal characteristics from a raw video stream has been presented in [ 63 ]. In [ 64 ], multimodal streaming video (RGB-D and skeleton data), containing color information, depth cues, and body joint locations, is used as input to a 3D CNN to improve performance, merging depth and RGB video in a 3D CNN model for large-scale gesture detection. In a similar vein, an end-to-end 3D CNN based on the model of [ 65 ] has been applied to large-scale gesture detection in [ 66 ]. The wide range of use cases of CNNs for various gesture recognition tasks over the years proves their effectiveness for such tasks. The presence of an extra dimension makes 3D CNNs unique in that the third dimension can be mapped to a time dimension to process videos, or to a depth dimension to acquire more useful data for a task, as seen in [ 67 ]. Previous literature supports this by indicating that combining 3D CNNs with temporal models such as an RNN yields desirable results and allows the usage of continuous streams such as videos. Currently, CNNs are widely utilized for 2D- and 3D-based image and gesture recognition and detection tasks.
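Below is a minimal 3D CNN sketch in which the extra dimension of the convolution is used as time, so each kernel sees short spatiotemporal blocks of the clip; the layer sizes and class count are illustrative.

```python
# A minimal 3D CNN over (channels, time, height, width) clips.
import torch
import torch.nn as nn

class Gesture3DCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=(3, 3, 3), padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),                 # pool space only, keep the temporal length
            nn.Conv3d(16, 32, kernel_size=(3, 3, 3), padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, clip):                         # clip: (batch, 3, T, H, W)
        return self.classifier(self.features(clip).flatten(1))

logits = Gesture3DCNN()(torch.randn(2, 3, 16, 64, 64))   # 2 clips of 16 RGB frames each
```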

3.4.4. RNN and LSTM Models for Temporal Deep Learning

Interestingly, despite being a promising study area, recurrent deep learning models have still not been frequently employed for gesture identification. In [ 68 ], a multimodal (depth, skeleton, and voice) gesture recognition system based on an RNN has been offered. Each modality is initially processed in small spatiotemporal blocks, wherein discriminative data-specific characteristics are either manually retrieved or learned. After that, an RNN is used to model large-scale temporal relationships, data fusion, and gesture categorization. Furthermore, a multi-stream RNN for large-scale gesture detection has been studied in [ 69 ]. [ 70 ] proposes a convolutional long short-term memory recurrent neural network (CNNLSTM) capable of learning gestures of various lengths and complexity. Faced with the same challenge, [ 71 ] suggests the MRNN, a multi-stream model that combines RNN capabilities with LSTM cells to help handle variable-length gestures. In [ 51 ], a sequentially supervised long short-term memory (SS-LSTM) has been suggested, wherein auxiliary information is employed as sequential supervision at each time step instead of providing a class label at the output layer of the RNN. To identify sample frames from the video sequence and categorize the gesture, the authors in [ 49 ] employed a deep learning architecture: a tiled image formed by sampling the whole video serves as the input to a ConvNet, and the trained long-term recurrent convolutional network then receives these representative frames as input. An EM-based approach for weak supervision that integrates CNNs with hidden Markov models (HMMs) has also been presented in [ 71 ].
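The following sketch shows the general CNN-plus-recurrence pattern discussed here: a small 2D CNN embeds each frame, an LSTM models the sequence, and the last hidden state is classified. It is an illustration of the pattern only, not the CNNLSTM of [ 70 ] or the MRNN of [ 71 ].

```python
# A minimal CNN + LSTM gesture classifier over a sequence of frames.
import torch
import torch.nn as nn

class CNNLSTMGesture(nn.Module):
    def __init__(self, n_classes=10, embed_dim=32, hidden=64):
        super().__init__()
        self.frame_encoder = nn.Sequential(          # embeds one frame into a small vector
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, embed_dim),
        )
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clips):                        # clips: (batch, T, 3, H, W)
        b, t = clips.shape[:2]
        embeddings = self.frame_encoder(clips.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.lstm(embeddings)            # temporal modeling over frame embeddings
        return self.head(h[-1])                      # classify from the last hidden state

logits = CNNLSTMGesture()(torch.randn(2, 20, 3, 64, 64))  # 2 clips of 20 frames each
```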

4. Body Language Analysis of Patients and AI

4.1. Overview

Different artificial intelligence (AI) methods and techniques have been used to analyze the body language of patients. Machine learning methods have shown a high level of flexibility across a variety of pathological conditions. We briefly discuss some studies conducted so far in this area.

4.2. Facial Recognition

More specifically, focusing on facial recognition, a simple system called the facial action coding system (FACS) was introduced in [ 71 ] to analyze facial muscles and thus identify different emotions. The proposed system automatically tracks faces in video and extracts geometric shapes for facial features. The study was conducted on eight patients with schizophrenia, and it collected dynamic information on facial muscle movements by going through the specifics of the automated FACS system and how it may be used for video analysis. There are three steps to it. The first stage (image processing) explains how face photos are processed for feature extraction automatically. Next is action unit detection, which explains how action unit classes are trained and evaluated. The process finishes (application to video analysis) by demonstrating how to utilize classifiers to analyze videos to gather qualitative and quantitative data on affective problems in neuropsychiatric patients. This study showed the possibility of deriving engineering measurements for individual faces and determining their exact differences for recognition purposes. According to the automated evaluation, controls 3, 2, and 4 were quite expressive, but patients 1, 2, and 4 were relatively flat. Control 1 and patient 3 were both in the middle of the spectrum. Patients 4 and 3 had the highest levels of inappropriate expressiveness, whereas patient 1 and controls 1–4 had moderate levels.

Three methods were used in [ 31 ] to measure facial expression to determine emotions and identify persons with mental illness. The study’s proposed facial action coding system enabled the interpretation of emotional facial expressions and thus contributed to the knowledge of therapeutic intervention for patients with mental illnesses. This can range from observing a person engaging in a group in real life to filmed encounters in which facial expressions are recorded under laboratory conditions after the emotion is elicited experimentally. Using the picture of a filmed face for image processing and capturing precise expression changes (called action units), this technology permits the detection of fundamental emotions over time. By utilizing surface electrodes, an electromyography (EMG) approach was created to distinguish the activation of facial muscles as correctly and clearly as feasible. This advancement in technology enabled the detection and independent recording of the actions of even small, barely visible facial muscles. Regarding automatic face recognition, the quality of commercially available systems has significantly increased. The SHORE™ technology, described as the world’s premier face detection system, is the result of years of research and development in the field of intelligent systems and led to a high-performance real-time C++ software library. A significant percentage of people suffer from a nervous system imbalance, which causes paralysis of movement and unexpected falls. A better understanding of how emotions are regulated and how the dynamics of emotional facial expression can be explained could lead to a better understanding of the interactive and social consequences of emotional expression deficits in people with mental illness, as well as to therapeutic interventions.

Most patients with any neurological condition have ambulatory disruption at any stage of the disease, which can lead to falls without warning signs, and each patient is unique. As a result, a technique to identify shaky motion is required.

4.3. Fall Detection

A thesis topic in [ 72 ] concerns assessing the real-time gait of a Parkinson’s disease patient in order to actively respond to unstable motions. The authors devised a real-time gait analysis algorithm, with SHIMMER wireless sensors worn on the waist, chest, and hip, using real-world data to determine which placement is best suited to identifying any gait deviation. This approach is efficient, sensitive enough to identify minor deviations, and user-configurable, allowing the user to adjust the sampling rate and threshold settings for motion analysis. Researchers can utilize this technique in their own work without having to develop it themselves. The initial sampling rate is set to 100 Hz, and the algorithm operates with precalculated threshold values. Accelerometers worn on the chest reveal excessive acceleration during falls, and thus it is best to wear them on the waist. Additionally, as illustrated in the aware-gait condition, if a patient takes steps with vigor, his or her gait may become steadier; however, the patient may still have postural instability and falls following DBS treatment. As a result, even after surgery, such people may have impaired cognition. Another discovery is that people with this condition may tilt left or right when turning.

Because the suggested approach is sensitive to detecting falls, it may be used objectively to estimate fall risk. The same algorithm, with small tweaks, may be used to identify seizures in different conditions, primarily epileptic seizures, and inform health care personnel in an emergency.
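A minimal threshold sketch of this kind of detection is shown below: samples where the acceleration magnitude from a waist-worn sensor deviates sharply from rest (about 1 g) are flagged. The sampling rate and threshold are illustrative, user-configurable values in the spirit of the study above, not its actual settings.

```python
# Flag candidate falls by thresholding the accelerometer magnitude.
import numpy as np

FS = 100                                             # assumed sampling rate in Hz (illustrative)
FALL_THRESHOLD_G = 2.5                               # illustrative impact threshold in g

def detect_fall(acc_xyz: np.ndarray) -> np.ndarray:
    """acc_xyz: (n_samples, 3) accelerometer data in g; returns indices of candidate falls."""
    magnitude = np.linalg.norm(acc_xyz, axis=1)
    return np.flatnonzero(magnitude > FALL_THRESHOLD_G)

samples = np.random.normal(loc=[0, 0, 1], scale=0.05, size=(10 * FS, 3))  # 10 s of quiet standing
samples[500] = [0.5, 0.3, 3.2]                       # inject a synthetic impact spike
print(detect_fall(samples) / FS)                     # time (s) of detected candidate falls
```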

In the medical field, fall detection is a big issue. Elderly folks are more likely than others to fall. People over the age of 65 account for more than half of all injury-related hospitalizations. Commercial fall detection devices are costly and need a monthly subscription to operate. For retirement homes and clinics to establish a smart city powered by AI and IoT, a more inexpensive and customizable solution is required. A reliable fall-detection system would detect a fall and notify the necessary authorities.

In [ 73 ], an edge-computing architecture was used to monitor real-time patient behavior and detect falls with an LSTM fall detection model. To track human activity, the authors employed MbientLab’s MetaMotionR wireless wearable sensor devices, which relayed real-time streaming data to an edge device. To analyze the streaming sensor data, they used a laptop as the edge device and built a data analysis pipeline utilizing custom APIs from Apache Flink, TensorFlow, and MbientLab. The model was trained on the publicly released “MobiAct” dataset. The models were shown to be efficient and may be used to analyze appropriate sampling rates, sensor locations, and multistream data correction by training them on already public datasets and then improving them. Experiments demonstrated that the architecture properly identified falls 95.8% of the time using real-time sensor data. The authors found that the optimal location for the sensors is at the waist and that the best data-gathering frequency is 50 Hz. They also showed that combining several sensors to collect multistream data improves performance.
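A minimal sketch of an LSTM fall detector over windows of wearable-sensor data, in the spirit of the pipeline described above, might look as follows; it is not the model or the APIs used in [ 73 ], and the channel counts and window length are illustrative.

```python
# A minimal LSTM classifier over fixed-length windows of wearable-sensor samples.
import torch
import torch.nn as nn

class FallLSTM(nn.Module):
    def __init__(self, n_channels=6, hidden=32):     # e.g., 3-axis accelerometer + 3-axis gyroscope
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)             # two classes: fall / no fall

    def forward(self, window):                       # window: (batch, samples, channels)
        _, (h, _) = self.lstm(window)
        return self.head(h[-1])

window = torch.randn(8, 100, 6)                      # 8 windows of 100 samples (2 s at 50 Hz)
fall_logits = FallLSTM()(window)                     # (8, 2)
```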

The authors would like to expand the framework in the future to include several types of cloud platforms, sensors, and parallel data processing pipelines, providing a system for monitoring patients in clinics, hospitals, and retirement homes. They also plan to use the MbientLab MetaTracker to build ML models that identify additional activities and analyze biometrics such as the subject’s heartbeat before and after a fall, sleep pattern, and mobility pattern, as well as track patients’ activity.

4.4. Smart Homes in Health Care

Many individuals, particularly the elderly and the ill, can live alone and keep their independence and comfort in smart homes. This aim can only be achieved if smart homes monitor all activities in the house and any anomalies are quickly reported to family or nurses. As shown in Figure 3, smart homes feature a multilayered design with four levels: the physical layer (environment, objects, and inhabitants), the communication layer (wired and wireless sensor networks), the data processing layer (data storage and machine learning techniques), and the interface layer (software such as a mobile phone application). Sensors collect data about inhabitants’ activities and the state of the environment, then send it to the data processing layer on a server, where it is evaluated. Users receive the results (such as alarms) and interact with the smart home through a software interface. Edge sensors make it easier to monitor various metrics over time; the data is then sent to another device for processing and prediction, lifting the processing burden from the sensors to more capable devices. Ref. [ 74 ] proposes an architecture for smart cameras that allows them to perform high-level inference directly within the sensor without sending the data to another device.

Figure 3. Multilayered architecture of a smart home. The figure was taken from [ 75 ].

The most common uses of smart homes in health care are automation tasks aimed at activity recognition for a range of objectives, such as activity reminders for Alzheimer’s patients and remote monitoring of people’s health by monitoring their vital signs.

4.4.1. Anomaly Detection Using Deep Learning

The authors of [ 76 ] used raw outputs from binary sensors, such as motion and door sensors, to train a recurrent network to anticipate which sensor would be turned on or off in the next event and how long it would remain in that state. They then expanded this prediction into k sequences of successive events using beam search to discover the likely range of forthcoming actions. Several novel measures of spatio-temporal sequence similarity were used to evaluate the prediction error, i.e., the distance between these candidate sequences and the true string of events. By modeling this error as a Gaussian distribution, an anomaly likelihood score can be determined, and input sequences scoring above a specific threshold are regarded as abnormal activities. The trials showed that this approach can detect aberrant behaviors with a high level of accuracy.

The suggested method’s general scheme is depicted in Figure 4. The raw sensor events are first preprocessed, which comprises the steps below (a minimal code sketch of the resulting next-event prediction follows the list):

  • The SA value is derived by combining the sensor identifier S and the action A.
  • The SA character string is encoded, either with one-hot encoding or with a word embedding.
  • The duration D is determined by subtracting the previous event timestamp from the current one.
  • Timestamps are converted into features that capture time of day, periodicity, and cyclic behavior.
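The sketch below illustrates only the next-event prediction core: encoded SA tokens are fed to an LSTM that outputs a distribution over the next event, and a low predicted probability for the observed event yields a high "surprise" score. The vocabulary size, sequence length, and random training data are assumptions; the full method in [76] also predicts durations and expands predictions with beam search before scoring anomalies.

```python
# Minimal sketch of a next-sensor-event predictor for anomaly scoring.
import numpy as np
import tensorflow as tf

VOCAB = 20      # number of distinct sensor/action (SA) tokens, assumed
SEQ_LEN = 16    # number of past events used for each prediction

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN,)),
    tf.keras.layers.Embedding(VOCAB, 16),                 # word-embedding encoding of SA tokens
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(VOCAB, activation="softmax"),   # distribution over the next event
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Toy training data: random event histories and their "next" events.
X = np.random.randint(0, VOCAB, size=(512, SEQ_LEN))
y = np.random.randint(0, VOCAB, size=(512,))
model.fit(X, y, epochs=2, verbose=0)

probs = model.predict(X[:1])[0]
surprise = -np.log(probs[y[0]] + 1e-9)   # high surprise -> candidate anomaly
print(surprise)
```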

Figure 4. The overall scheme of the proposed method [ 76 ].

4.4.2. Anomaly Detection Using Bayesian Networks

A Bayesian network is a representation of a joint probability distribution of a set of random variables with a possible mutual causal relationship. The network consists of nodes representing the random variables, edges between pairs of nodes representing the causal relationship of these nodes, and a conditional probability distribution in each of the nodes. The main objective of the method is to model the posterior conditional probability distribution of outcome (often causal) variable(s) after observing new evidence [ 77 ].

The goal of [ 75 ] is to identify abnormalities at the proper moment so that harmful situations can be avoided when a person interacts with household objects. It aims to improve anomaly detection in smart homes by extending the functionality to evaluate raw sensory data and generate suitable directed probabilistic graphical models (Bayesian networks). The idea is to determine the probability of the current sensor turning on and have the model sound an alarm if that probability falls below a specific threshold. To do this, the authors create many Bayesian network models of various sizes and analyze them to find the optimal network with adequate causal links between random variables. The study is novel in using Bayesian networks to model and train sensory data to detect abnormalities in smart homes. Furthermore, by providing an approach for removing unneeded random variables, identifying the ideal structure of the Bayesian network leads to better assessment metrics and smaller model size. (The authors examine the first-order Markov property as well as training and evaluating Bayesian networks with various subsets of random variables.)
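To make the thresholding idea concrete, the sketch below estimates how likely the current sensor firing is given the previous one and raises an alarm when that probability is unusually low. For brevity it uses a single conditional probability table learned from event counts, a deliberate simplification of the full Bayesian networks used in [75]; the event log and the alarm threshold are illustrative assumptions.

```python
# Minimal sketch of probability-threshold anomaly flagging on a sensor event log.
from collections import Counter, defaultdict

events = ["bed", "hall", "bath", "hall", "bed", "hall", "kitchen", "hall", "bed"]  # toy log

pair_counts, prev_counts = defaultdict(Counter), Counter()
for prev, cur in zip(events, events[1:]):
    pair_counts[prev][cur] += 1
    prev_counts[prev] += 1

def p_next(prev, cur):
    # Conditional probability P(current sensor | previous sensor), estimated from counts.
    return pair_counts[prev][cur] / prev_counts[prev] if prev_counts[prev] else 0.0

ALARM_THRESHOLD = 0.1            # assumed
observed = ("bed", "kitchen")    # bed sensor followed directly by kitchen sensor
if p_next(*observed) < ALARM_THRESHOLD:
    print("possible anomaly:", observed)
```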

The authors use Bayesian network models to analyze sensory data in smart homes to detect abnormalities and improve occupant safety and health. Pre-processing, model learning, model assessment, and anomaly detection are the four primary steps of the proposed technique (Figure 5).

Figure 5. The proposed architecture for anomaly detection in smart homes. The figure has been taken from [ 75 ].

4.4.3. Anomaly Detection Using a Graph-Based Approach

Another data-analysis approach was presented in [ 78 ] for sensor-based smart home settings, which have been effectively deployed in recent years to help elderly persons live more independently. Smart homes are designed not to interfere with inhabitants’ routine activities and to lower the cost of the health care connected with their care. Because senior inhabitants are more prone to cognitive health difficulties, analyzing their daily activity with an automated tool based on sensor data can offer valuable information about their health state. The work demonstrates that one way to achieve this is to apply a graph-based approach to data collected from residents’ activities. It also presents case studies for cognitively impaired participants and discusses how to link the detected anomalies to decline in their cognitive abilities, providing clinicians and caregivers with important information about their patients. An unsupervised graph technique is employed to discover temporal, spatial, and behavioral abnormalities in senior residents’ everyday activities using activity data from smart home sensors, and the authors hypothesize that these unusual actions may indicate a participant’s cognitive deterioration. Smart home activity data can be generated in real time as a data stream. Three cognitively impaired participants were recruited at random for the trial. The authors would like to vary the sample and conduct several trials in the future to see whether comparable anomalies can be found, and to examine the robustness of the graph topology to determine how much a change in topology affects the outcome of anomaly detection. Furthermore, they intend to enlist a doctor as a domain expert to confirm their theory that these abnormalities are true signs of cognitive deterioration (or mild cognitive impairment, MCI).

They also plan to expand the tests to a real-time data stream in the future, converting real-time sensor logs into graph streams and searching for abnormalities in those graph streams, which could enable a real-time health monitoring tool for residents and assist doctors and nurses.

4.5. AI for Localizing Neural Posture

A study under review addressed the elderly and their struggle to live independently without relying on others. The goal of the study [ 79 ] was to compare automated learning algorithms used to track their biological functions and movements. Using reference features, the support vector machine achieved the highest accuracy rate, 95 percent, among the eight machine learning algorithms evaluated. Several occupations require long periods of sitting, which can lead to long-term spine injuries and nervous system illnesses. Some surveys aided in the development of sitting position monitoring systems (SPMS), which use sensors attached to a chair to measure the position of the seated individual. The suggested technique had the disadvantage of requiring too many sensors.

This problem was addressed by designing sitting posture monitoring systems (SPMSs) to help assess the posture of a seated person in real time and improve sitting posture. To date, SPMS studies have required many sensors mounted on the backrest plate and seat plate of a chair. The study in [ 80 ], therefore, developed a system that measures a total of six sitting postures, including a posture that applies load to the backrest plate, with four load cells mounted only on the seat plate. Various machine learning algorithms were applied to the body-weight ratios measured by the developed SPMS to identify the method that most accurately classified the actual sitting posture of the seated person. After comparing several classifiers, a support vector machine using the radial basis function kernel obtained average and maximum classification rates of 97.20 percent and 97.94 percent, respectively, across nine subjects. The suggested SPMS was able to categorize six sitting postures, including one with backrest loading, and demonstrated that sitting posture can be classified even when the number of sensors is reduced.
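The classification step maps a small vector of load-cell ratios to one of six posture classes. The sketch below mirrors the classifier choice in [80] (an RBF-kernel SVM) but uses synthetic feature values and labels as placeholders, not data from the study.

```python
# Minimal sketch of posture classification from seat-plate load-cell ratios with an RBF SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_class, n_classes = 60, 6                      # six sitting postures
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(n_per_class, 4))   # 4 load-cell ratios
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```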

Posture monitoring also matters for patients who are in the hospital for an extended period, for whom pressure ulcer prevention is critical; a human body lying posture (HBLP) monitoring system is required to schedule posture changes for such patients. The traditional technique of HBLP monitoring, video surveillance, has several drawbacks, including subject privacy and field-of-view occlusion. With no sensors or wires attached to the body and no restrictions imposed on the subject, the paper [ 81 ] presented an autonomous technique for identifying the four standard HBLPs in healthy adult subjects: supine, prone, left lateral, and right lateral. Experiments were conducted on 12 healthy persons (age 27.35 ± 5.39 years) using a set of textile pressure sensors embedded in a cover placed beneath the bedsheet. A supervised artificial neural network classification model was given histogram of oriented gradients (HOG) and local binary pattern (LBP) features, and the model was trained with scaled conjugate gradient back-propagation. Nested cross-validation with an exhaustive outer validation loop was used to evaluate the classification’s generalization performance. A high testing prediction accuracy of 97.9% was found, with a Cohen’s kappa coefficient of 97.2 percent. In contrast to most previous similar studies, the classification successfully separated prone and supine postures. The authors found that combining body weight distribution information with shape and edge information improves classification performance and the capacity to distinguish between supine and prone positions. The findings are encouraging for unobtrusively monitoring posture for ulcer prevention, and sleep studies, post-surgical treatment, and other applications that need HBLP identification can also benefit from the approach.
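A minimal sketch of that feature pipeline is shown below: HOG and LBP descriptors are computed from a pressure image and fed to a small neural network. The pressure maps are synthetic stand-ins, and scikit-learn's MLP is used here merely as a convenient substitute for the scaled-conjugate-gradient network trained in [81].

```python
# Minimal sketch of HOG + LBP features from pressure images, classified by a small MLP.
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.neural_network import MLPClassifier

def features(pressure_map):
    img = (pressure_map * 255).astype(np.uint8)
    h = hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([h, lbp_hist])

rng = np.random.default_rng(1)
X = np.array([features(rng.random((64, 32))) for _ in range(80)])   # fake pressure maps
y = rng.integers(0, 4, size=80)                                     # supine/prone/left/right
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)
print(clf.predict(X[:3]))
```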

In patients with myopathy, peripheral neuropathy, plexopathy, or cervical/lumbar radiculopathy, needle electromyography (EMG) is used to diagnose neurological injury. Because needle EMG is an invasive examination, it is critical to keep discomfort to a minimum during inspection. The Electrodiagnosis Support System (ESS), a clinical decision support system specialized for diagnosing upper-limb neurological damage, is described in [ 82 ]. ESS can guide users through the diagnostic process, help choose the best option for eliminating unnecessary examinations, and serve as a teaching tool for medical students. Users can input the results of needle EMG testing and obtain diagnostic findings via ESS’s graphical user interface, which depicts the neurological anatomy of the upper limb. The authors used the diagnostic data of 133 real patients to test the system’s accuracy.

4.6. AI for Monitoring Patients

Over the recent decade, automated patient monitoring in hospital settings has received considerable attention. An essential issue is the behavior analysis of psychiatric patients, where good monitoring can reduce the risk of injury to hospital workers, property, and the patients themselves.

For this task, a computer vision system was created to monitor patients in hospital safe rooms, evaluating their movements and determining the danger of hazardous behavior by extracting visual data from cameras mounted in their rooms. To identify harmful behavior, the proposed technique leverages statistics of optical flow vectors computed from patient motions. Additionally, the approach uses foreground segmentation and blob tracking to extract the shape and temporal properties of blobs for activities such as arriving in and leaving the room, sleeping, fighting, conversing, and attempting to escape, as shown in Figure 6. Preliminary findings suggest that the technology could be used in a real hospital setting to help prevent harm to patients and employees. A more advanced classification framework for merging the features could be used to increase system performance and attain a practically low error rate.
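As a minimal sketch of the motion features such a system builds on, the code below computes dense optical flow between consecutive frames with OpenCV and reports simple per-frame magnitude statistics; sustained high values could then feed a behaviour classifier. The video path is a placeholder, and the behaviour classification itself is omitted.

```python
# Minimal sketch of per-frame optical-flow statistics from room video.
import cv2
import numpy as np

cap = cv2.VideoCapture("safe_room.mp4")    # hypothetical camera recording
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY) if ok else None

while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    print("mean motion:", magnitude.mean(), "max motion:", magnitude.max())
    prev_gray = gray
cap.release()
```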

Figure 6. Example of activities to be detected. These images have been taken from [ 83 ].

Smart health care equipment and applications rely on intelligent sensing devices and wireless communication networks. The goal of this integration is to improve patient monitoring and make the detection of minor illnesses easier.

The study conducted in [ 84 ] presents a multilevel decision system (MDS) for recognizing and monitoring patient behavior based on sensed data. Wearable sensing devices placed on the body detect physiological changes at set intervals, and the data collected by these sensors is used by the health care system (HS) to diagnose and predict illnesses. The proposed MDS has two layers of decision-making: the first is aimed at speeding up data collection and fusion, while the second uses data correlation to detect specific behaviors. Inter-level optimization reduces errors by fusing multi-window sensor data, enabling correlation; this optimization acts as a bridge between the two decision-making stages. The wearable sensors and the health care system are depicted as part of the decision-making process in Figure 7. Using multi-window fusion decision-making, the HS performs activity/behavior extraction, data fusion, and feature extraction, and its data-streaming characteristics make decisions easier even with nonlinear sensor outputs. Storage, updating, analysis, and correlation of sensor data are carried out in the second decision-making phase. The data from the body-worn wearable sensors is compiled on a smart handheld device (e.g., a cellphone or other digital gadget) and sent to the HS over the Internet.

Figure 7. Wearable sensors (WS) connected to the health care system. The figure has been taken from [ 84 ].

Based on this information, the patient’s behavior and the type of ailment are recognized for use in future diagnosis and prediction. MDS also uses flexible information analysis to match patient behavioral analysis and produce improved recommendations. Experimental analysis demonstrates the dependability of MDS, which improves the true positive rate, F-measure, accuracy, and fusion latency.

4.7. AI and Patient’s Lower Limb Movement

Human lower limb motion analysis refers to the qualitative and quantitative study of climbing, running, and walking. It is based on kinematic concepts as well as human anatomy and physiology, and it is frequently used in augmented and virtual reality, foot navigation, and medical rehabilitation, among other applications [ 85 ].

4.7.1. Evaluation of Paraplegics’ Legged Mobility

The inability to walk and stand, along with a general reduction in movement, is one of the most significant disabilities caused by paraplegia. This research [ 86 ] examined a lower limb exoskeleton for paraplegics who require leg movement. It offers a single-subject case study of a patient with a T10 motor- and sensory-complete injury, comparing legged movement using an exoskeleton versus locomotion using knee–ankle–foot orthoses (KAFOs). The timed up-and-go test, the Ten-Meter Walk Test (10 MWT), and the six-minute walk test (6 MWT) are used to measure the subject’s capacity to stand, walk, turn, and sit, and the Physiological Cost Index was used to determine the level of exertion associated with each assessment. Results indicate that the subject was able to perform the respective assessments 25%, 70%, and 80% faster with the exoskeleton relative to the KAFOs for the timed up-and-go test, the 10 MWT, and the 6 MWT, respectively. Measurements of exertion indicate that the exoskeleton requires 1.6, 5.2, and 3.2 times less exertion than the KAFOs for each respective assessment. The results indicate that the enhancement in speed and the reduction in exertion are more significant during walking than during gait transitions.

4.7.2. Estimating Clinically Important Change in Gait Speed in People with Stroke

In persons who have had a stroke, gait speed is routinely used to determine walking capacity, but it is unclear how large a difference in gait speed corresponds to a significant change in walking capacity. The goal of the study [ 87 ] was to quantify clinically significant changes in gait speed using two distinct anchors for “significant”: stroke survivors’ and physical therapists’ perceptions of improvement in walking capacity. After a first-time stroke, participants received outpatient physical therapy (mean 56 days post-stroke). Self-selected walking speed was assessed at admission and discharge. On a 15-point ordinal global rating of change (GROC) scale, subjects and their physical therapists rated the perceived change in walking ability at discharge. Estimated meaningful change values for gait speed were determined using receiver operating characteristic curves with the participants’ and physical therapists’ GROC ratings as anchors. The subjects’ initial gait speed was 0.56 (0.22) m/s on average. Depending on the anchor, the estimated significant change in gait speed was between 0.175 m/s (participants’ perceived change in walking ability) and 0.190 m/s (physical therapists’ perceived change in walking ability). Individuals who increase their gait speed by 0.175 m/s or more during the subacute period of rehabilitation are more likely to experience a considerable improvement in walking ability. Clinicians and researchers can use the estimated clinically relevant change value of 0.175 m/s to set goals and analyze change in individual patients, as well as to compare meaningful changes between groups.
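The ROC-based threshold estimation described above can be sketched in a few lines: gait-speed change is the predictor, the anchor (perceived improvement) is the label, and the cut-off maximizing Youden's index is reported. The data here are synthetic placeholders, not the study's measurements.

```python
# Minimal sketch of deriving a clinically meaningful change threshold from an ROC curve.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
improved = rng.integers(0, 2, size=120)                     # anchor: perceived improvement (0/1)
speed_change = rng.normal(0.10 + 0.12 * improved, 0.08)     # gait-speed change in m/s, synthetic

fpr, tpr, thresholds = roc_curve(improved, speed_change)
best = np.argmax(tpr - fpr)                                 # Youden's index
print("estimated meaningful change: %.3f m/s" % thresholds[best])
```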

4.7.3. Measuring Parkinson’s Gait Quality

Wearable sensors that monitor gait quality during daily activities have the potential to improve the medical evaluation of Parkinson’s disease (PD). Four gait partitioning strategies were examined in [ 88 ], two based on machine learning and two based on thresholds, all using the four-phase gait model. The approaches were evaluated during walking tasks on 26 PD patients, in both ON and OFF levodopa conditions, as well as on 11 healthy volunteers. All participants wore inertial sensors on their feet, and the reference time sequence of gait phases was obtained from force-sensitive resistors. The accuracy of gait phase estimation was assessed using the goodness index (G). For gait quality evaluation, a new synthetic index termed the gait phase quality index (GPQI) was developed. The results indicated that three of the examined techniques had optimal performance (G < 0.25) and one threshold approach had acceptable performance (0.25 < G < 0.70). The GPQI was considerably higher in PD patients than in healthy controls, with a moderate correlation with clinical scale scores. Furthermore, GPQI was higher in the OFF state than in the ON state in individuals with significant gait impairment. These findings show that real-time gait segmentation based on wearable sensors can be used to assess gait quality in people with Parkinson’s disease.

4.8. Remark

Recent advancements in low-cost smart home devices and wireless sensor technology have resulted in an explosion of small, portable sensors that can measure body motion rapidly and precisely, and practical, beneficial movement-tracking technologies are now available. Therapists need to be aware of the possible benefits and drawbacks of such new technology. As noted in [ 89 ], therapists may in the future be able to undertake telerehabilitation using body-worn sensors to assess compliance with home exercise regimens and the quality of patients’ natural movement in the community. Therapists want technology tools that are simple to use and give actionable data and reports to their patients and referring doctors, and they should look for systems that have been evaluated against gold-standard accuracy as well as clinically relevant outcomes such as fall risk and impairment severity.

5. AI and COVID-19

5.1. Overview

The medical sector is seeking innovative tools to monitor and manage the spread of COVID-19 in this global health disaster. Artificial intelligence (AI), the Internet of Things (IoT), big data, and machine learning are technologies that can readily track the transmission of this virus, identify high-risk individuals, anticipate new illnesses, and aid in real-time infection management. These technologies might also forecast mortality risk by thoroughly evaluating patients’ historical data.

The study by [ 90 ] examined the role of artificial intelligence (AI) as a critical tool for analyzing, preparing for, and combating COVID-19 (Coronavirus) and other pandemics. AI can aid in the fight against the virus by providing population screening, medical assistance, notification, and infection control recommendations. As an evidence-based medical tool, this technology has the potential to enhance the COVID-19 patient’s planning, treatment, and reported outcomes.

Artificial Intelligence (AI) is an emerging and promising technology for detecting early coronavirus infections and monitoring the state of affected individuals. It can monitor the COVID-19 outbreak at many scales, including medical, molecular, and epidemiological applications. It is also beneficial to aid viral research by evaluating the existing data. Artificial intelligence can aid in the creation of effective treatment regimens, preventative initiatives, and medication and vaccine development.

The basic approach of AI and non-AI-based programs that assist general physicians in identifying COVID-19 symptoms is shown in Figure 8. The flow diagram illustrates and contrasts a minimal non-AI workflow with an AI-based one, demonstrating how AI is used in key aspects of high-accuracy therapy while reducing the complexity and time required. With the AI application, the physician is not only focused on the patient’s therapy but also on illness control. AI is used to analyze major symptoms and test results with a high level of accuracy, and it reduces the overall number of steps in the process, making it more readily accessible.

Figure 8. The general procedure of AI and non-AI-based applications that help general physicians identify COVID-19 symptoms. This figure has been taken from [ 90 ].

5.2. AI Training Techniques

The medical field makes use of two different AI paradigms. In supervised learning, the data is labeled and the model learns to map features to an outcome that is known beforehand, making it easy to score the model and track its performance. The other technique, unsupervised learning, uses unlabeled and unstructured data that is fed to a model, giving it the opportunity to learn and extract useful information from the data as it sees fit; such techniques are used for various other tasks, such as early warning systems and faster cure discovery.

5.2.1. Supervised Learning

Supervised Learning is one of the most often used techniques in the health care system, and it is well-established. This learning approach makes use of labeled data X with a provided target Y to learn how to predict the correct value of Y given input X.

Supervised learning can help provide a solid foundation for planned COVID-19 observation and forecasting. A neural network might also be developed to extract the visual features of this disease, which would aid in the proper diagnosis and treatment of those affected. An Xception-style CNN based on depth-wise separable convolutions has been presented in [ 91 ]: two convolution layers at the top are followed by a related layer, four convolution layers, and depth-wise separable convolution layers. In research from [ 80 ], it was used to identify bed positions using a variety of bed pressure sensors. Because of its capabilities and high-efficiency outcomes, it can be a beneficial tool in battling COVID-19.
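The building block of such an architecture is the depth-wise separable convolution. The sketch below assembles a small CNN around it; the input size, layer counts, and two-class output are assumptions for illustration, not the exact architecture of [91].

```python
# Minimal sketch of a small CNN using depth-wise separable convolutions (Xception-style block).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.SeparableConv2D(128, 3, activation="relu"),   # depth-wise separable
    tf.keras.layers.SeparableConv2D(128, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # e.g., COVID-19 vs. non-COVID-19
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```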

5.2.2. Unsupervised Learning

Instead of using labeled data as in the previous learning strategy, this technique uses data without target signals. It is widely used to discover hidden structure in data and divide the data into small groups, and its primary purpose is to construct a clear differentiation among the data. It is a promising way of meeting the general AI requirement, although it lags far behind the previously described learning approach. The autoencoder [ 92 ] and K-means [ 93 ] are the most well-known unsupervised techniques. Anomaly detection [ 94 ] is one of the most common uses of this learning strategy in the medical field: normal data points follow a similar distribution, and any data point that deviates from it, as an outlier, can be flagged or observed without difficulty. There are also many relatively cheap solutions that allow deploying AI models quickly, such as Nvidia’s Jetson Nano kit, a Raspberry Pi, or Google’s Coral. This concept may therefore be applied to CT scan images as well as other medical applications, such as COVID-19.
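As a minimal sketch of this outlier-flagging idea, the code below trains an autoencoder [92] to reconstruct "normal" feature vectors and flags samples whose reconstruction error is unusually high. The feature vectors are synthetic placeholders, and the 3-sigma threshold is an assumed convention rather than a prescribed value.

```python
# Minimal sketch of unsupervised anomaly flagging with an autoencoder.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(3)
normal = rng.normal(0, 1, size=(500, 20)).astype("float32")

autoencoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),
    tf.keras.layers.Dense(8, activation="relu"),    # bottleneck
    tf.keras.layers.Dense(20),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(normal, normal, epochs=10, verbose=0)

errors = np.mean((autoencoder.predict(normal) - normal) ** 2, axis=1)
threshold = errors.mean() + 3 * errors.std()

outlier = rng.normal(5, 1, size=(1, 20)).astype("float32")    # clearly off-distribution
err = np.mean((autoencoder.predict(outlier) - outlier) ** 2)
print("flagged as anomaly:", err > threshold)
```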

The authors of [ 95 ] proposed a new framework of this kind for opportunistic cameras that record motion data from a stream. A neural network is then used to predict how the event stream will move, and this motion estimate is used to remove motion that is concealed in the streaming images. To further explain this concept, the applied learning approach is depicted in Figure 9, where the training and testing attributes collected from the patient are denoted by the symbol X. In this case, accuracy is not the priority; instead, the approach aims to uncover any interesting patterns present in the available data, and additional information can be used to corroborate or refute the samples it detects.

Figure 9. Network architecture for both the optical flow and the egomotion and depth networks. This figure has been taken from [ 95 ].

5.3. Real World Use Cases

The contribution of AI to the fight against COVID-19, as well as the present limitations of these efforts, is discussed in [ 96 ]. Six areas where AI may help in the battle against COVID-19 are identified: (i) early warnings and alerts, (ii) prediction and tracking, (iii) data dashboards, (iv) diagnosis, (v) cures and treatment, and (vi) reducing health care workers’ workloads. The conclusion is that AI has yet to make a substantial impact on COVID-19; its use is restricted by a lack of reliable data on one hand and an abundance of noisy data on the other. Overcoming these limitations will require a careful balance between data privacy and public health, as well as rigorous human–AI interaction, and these issues are unlikely to be addressed in time to be of much use during the current pandemic. Meanwhile, a large-scale collection of diagnostic data on who is infectious will be required to save lives, train AI, and reduce economic losses. In [ 93 ], different AI techniques are utilized for COVID-19 detection; despite their major differences, these techniques provide admirable results that made it easier and faster to detect the spread of COVID-19, and they are explained in detail below.

5.3.1. Early Warnings and Alerts

AI can swiftly identify unusual symptoms and other red flags, alerting patients and health care providers [ 97 ]. It enables faster, more cost-effective decision making. Through relevant algorithms, it aids in the development of novel diagnosis and management strategies for COVID-19 patients. With the use of medical imaging technologies such as computed tomography (CT) and magnetic resonance imaging (MRI) scans of human body parts, AI can assist in identifying infected patients.

For example, BlueDot, a Canadian AI model, demonstrates how a low-cost AI tool (BlueDot was supported by startup investment of roughly US$ 9 million) may outperform humans at detecting infectious disease epidemics, as shown in [ 98 ]. According to reports, BlueDot foresaw the epidemic at the end of 2019, giving a warning to its clients on 31 December 2019, before the World Health Organization did so on 9 January 2020. In [ 99 ], a group of academics worked with BlueDot and compiled a list of the top 20 destinations for travelers flying from Wuhan after the epidemic, cautioning that these cities might be at the forefront of the global spread of the disease.

While BlueDot is unquestionably a strong tool, much of the press around it has been exaggerated and undervalues the contribution of human scientists. Notably, while BlueDot raised an alarm on 31 December 2019, another AI-based model at Boston Children’s Hospital (USA), reading the HealthMap [ 100 ], raised a warning on 30 December 2019.

5.3.2. Prediction and Tracking

AI may be used to track and forecast the spread of COVID-19 over time and space. A neural network may be built to extract the visual aspects of this condition, which would aid in adequate monitoring [ 101 ]. It has the potential to offer daily information on patients as well as remedies to be applied during the COVID-19 pandemic.

For example, during a previous outbreak in 2015, a dynamic neural network was constructed to anticipate the spread of the Zika virus. Models such as these, however, will need to be retrained using data from the COVID-19 pandemic, and various projects are underway to collect training data from the present epidemic, as detailed below.

Various issues plague accurate pandemic predictions; see, for example, [ 102 ]. These include a dearth of historical data on which to train the AI, panic behavior that causes “noise” on social media, and the fact that COVID-19 infections have different characteristics than prior pandemics. Not only is there a paucity of historical data, but there are also issues with employing “big data”, such as information gleaned from social media. The risks of big data and AI in the context of infectious illnesses, as demonstrated by Google Flu Trends’ notable failure, remain valid: “big data hubris and algorithm dynamics”, as [ 103 ] put it. For example, as the virus spreads and the quantity of social media traffic around it grows, so does the amount of noise that must be filtered out before meaningful patterns can be recognized.

AI estimates of COVID-19 spread are not yet particularly accurate or dependable because of a lack of data, big data hubris, algorithmic dynamics, and noisy social media.

As a result, most tracking and forecasting models do not employ AI technologies. Instead, most forecasters choose well-established epidemiological models, often known as SIR models after the susceptible, infected, and removed populations in a given area. The Future of Humanity Institute at Oxford University, for example, uses the GLEAMviz epidemiological model to anticipate the virus’s spread; see [ 104 ].
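For readers unfamiliar with the model family, the sketch below integrates the classic SIR differential equations with SciPy. The contact rate, recovery rate, population size, and seed cases are illustrative assumptions, not fitted COVID-19 values.

```python
# Minimal sketch of the SIR compartmental model used by many epidemic forecasters.
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    S, I, R = y
    N = S + I + R
    return [-beta * S * I / N,              # newly infected leave S
            beta * S * I / N - gamma * I,   # infections minus recoveries
            gamma * I]                      # recoveries enter R

N0 = 1_000_000
y0 = [N0 - 10, 10, 0]                        # 10 initial cases, assumed
t = np.linspace(0, 180, 181)                 # days
S, I, R = odeint(sir, y0, t, args=(0.3, 0.1)).T   # beta = 0.3/day, gamma = 0.1/day (assumed)
print("peak infections ~", int(I.max()), "on day", int(t[I.argmax()]))
```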

Metabiota, a San Francisco-based startup, offers an Epidemic Tracker model of illness propagation [ 105 ]. In a YouTube video, Crawford, an Oxford University mathematician, gives a simple and concise explanation of SIR models [ 106 ].

The Robert Koch Institute in Berlin employs an epidemiological SIR model that incorporates government containment measures such as quarantines, lockdowns, and social distancing; the model is explained in [ 107 ]. Recently, an enhanced SIR model that takes public health interventions against the pandemic into consideration and uses data from China has been pre-published and made accessible in R format [ 108 ].

The Robert Koch Institute’s model has already been applied to the case of China to show that containment can be effective in slowing the spread to less than exponential rates [ 107 ].

5.3.3. Data Dashboards

COVID-19 tracking and forecasting has spawned a cottage industry of data dashboards for visualizing the actual and predicted spread. The MIT Technology Review [ 109 ] has ranked these dashboards: according to the review, HealthMap, UpCode, Thebaselab, NextStrain, the BBC, Johns Hopkins’ CSSE, and the New York Times have the best dashboards. Microsoft Bing’s COVID-19 Tracker is another important dashboard (see Figure 10).

Figure 10. Microsoft Bing’s COVID-19 Tracker. Note(s): screenshot of Bing’s COVID-19 Tracker, 9 February 2022.

While these dashboards provide a global overview, an increasing number of countries and cities also have their own dashboards in place. For example, South Africa established the COVID-19 ZA South Africa Dashboard, which is maintained by the University of Pretoria’s Data Science for Social Impact Research Group [ 110 ].

Tableau has produced a COVID-19 data hub with a COVID-19 Starter Workbook to help with the creation of data visualizations and dashboards for the epidemic [ 111 ].

5.3.4. Diagnosis

COVID-19 diagnosis that is quick and accurate can save lives, prevent disease transmission, and produce data on which AI models can be trained. AI can be helpful here, especially when establishing a diagnosis based on chest radiography images. A recent assessment of artificial intelligence applications against coronaviruses demonstrated that AI can be as accurate as humans, save radiologists’ time, and provide a diagnosis faster and more cheaply than standard COVID-19 tests [ 112 ].

For COVID-19, AI can save radiologists time and help them diagnose the disease faster and more affordably than current diagnostics, using either X-rays or computed tomography (CT) scans. A tutorial on diagnosing COVID-19 from X-ray images using deep learning is provided in [ 113 ]; its author points out that COVID-19 tests are “in low supply and costly”, but “all hospitals have X-ray machines”. A method for scanning CT images with mobile phones has been presented in [ 114 ].
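Such tutorials typically follow a transfer-learning recipe: a pretrained backbone with a new binary head is fine-tuned on labelled X-rays. The sketch below illustrates that recipe under stated assumptions; the "xray_data" folder layout and the ResNet50 backbone are illustrative choices, not necessarily those of [113].

```python
# Minimal sketch of transfer learning for a binary chest X-ray classifier.
import tensorflow as tf

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
base.trainable = False                      # keep pretrained features frozen at first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # COVID-19 vs. non-COVID-19
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical dataset layout: xray_data/covid/*.png and xray_data/normal/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "xray_data", image_size=(224, 224), batch_size=16)
model.fit(train_ds, epochs=3)
```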

Several projects are under way in this context. COVID-Net, created by [ 115 ], is a deep convolutional neural network (see, for example, [ 116 ]) that can diagnose coronavirus from chest X-ray images. It was trained on data from roughly 13,000 individuals with diverse lung diseases, including COVID-19, drawn from an open repository. However, as the authors point out, it is “far from a production-ready solution”, and they urge the scientific community to continue working on it, especially to “increase sensitivity” (ibid., p. 6). A deep learning model for diagnosing COVID-19 from CT scans has been presented in [ 117 ] (not yet peer-reviewed), concluding that “The deep learning model showed comparable performance with an expert radiologist, and greatly improve the efficiency of radiologists in clinical practice. It holds great potential to relieve the pressure off frontline radiologists, improve early diagnosis, isolation, and treatment, and thus contribute to the control of the epidemic” (ibid., p. 1).

Researchers from Delft University in the Netherlands, for example, developed an AI model for detecting coronavirus from X-rays at the end of March 2020. On their website [ 111 ], this model, dubbed CAD4COVID, is described as an “artificial intelligence program that triages COVID-19 suspicions on chest X-ray pictures”. It is based on earlier AI models for tuberculosis diagnosis created by the institution.

Although it has been claimed that a handful of Chinese hospitals have installed “AI-assisted” radiology technologies (see, for example, the report in [ 118 ]), the promise has yet to be realized. Radiologists in other countries have voiced concern that there is not enough data to train AI models, that most COVID-19 images come from Chinese hospitals and may be biased, and that using CT scanners and X-ray machines might contaminate equipment and spread the disease further.

Finally, once someone has been diagnosed with the disease, the question arises of whether and how severely that person will be affected; COVID-19 does not always require intensive treatment. Being able to predict who will be impacted more severely can aid in targeting assistance and in allocating and utilizing medical resources. Using data from only 29 patients at Tongji Hospital in Wuhan, China, the authors of [ 119 ] developed a prognostic prediction algorithm to forecast the mortality risk of an infected person. The authors of [ 120 ] have offered an AI that can predict, with 80% accuracy, who will suffer acute respiratory distress syndrome (ARDS) after contracting COVID-19.

5.3.5. Faster Cure Discovery

Long before the coronavirus epidemic, AI was praised for its ability to aid in the development of novel drugs; see, for example, [ 121 ]. In the case of coronavirus, several research institutes and data centers have already said that AI will be used to find therapies and a vaccine for the virus. The hope is that artificial intelligence will speed up both the discovery of new medications and the repurposing of current ones. By assessing the existing data on COVID-19, AI is employed for medication research, and it may be used to design and develop drug delivery systems. The technique is used to speed up drug testing in real time, where normal testing takes a long time, and so helps to considerably accelerate a process that would be impossible for a human to match. For example, Google’s DeepMind, best known for its AlphaGo game-playing algorithm, has used AI in [ 122 ] to predict the structure of viral proteins, which might aid in the development of novel treatments. DeepMind, on the other hand, makes it explicit on its COVID-19 web page (2020) that “we emphasize that these structure predictions have not been experimentally verified…we can’t be certain of the accuracy of the structures we are providing”.

5.3.6. Repurposing Existing Drugs

Beck et al. [ 123 ] provide findings from a study that used machine learning to determine whether an existing medicine, atazanavir, may be repurposed for coronavirus treatment. In collaboration with BenevolentAI, a UK AI business, [ 101 ] identified baricitinib, a drug used to treat myelofibrosis and rheumatoid arthritis, as a viable COVID-19 therapy. AI can assist in the discovery of effective medications to treat coronavirus patients and has evolved into a useful tool for developing diagnostic tests and vaccines [ 124 ]. In research from [ 125 ], AI aids in the creation of vaccines and therapies at a much faster rate than before, as well as in clinical trials during vaccine development.

5.4. AI and Health Care Workers’ Workloads Reduction

Health care workers are overworked because of the sudden and significant increase in the number of patients during the COVID-19 epidemic, and artificial intelligence (AI) is employed to lessen their burden. In research from [ 85 ], the classification of confirmed coronavirus (COVID-19) cases, one of the pandemic illnesses, was studied as a severe problem in the sustainable development process. Binary classification modeling was employed as one of the artificial intelligence approaches, using a group method of data handling (GMDH) type of neural network. The suggested model was built using the Hubei province of China as a case study, with significant characteristics such as minimum, average, and maximum city density, relative humidity, daily temperature, and wind speed as the input dataset, and the number of confirmed cases over 30 days as the output dataset.

The suggested binary classification model outperforms the competition in terms of predicting confirmed cases. In addition, regression analysis was performed, and the trend of confirmed cases was compared with daily changes in weather parameters (humidity, average temperature, and wind).

The relative maximum day temperature and humidity had the greatest influence on the verified cases, according to the findings. The confirmed cases were impacted positively by the relative humidity in the primary case study, which averaged 77.9%, and adversely by the highest daily temperature, which averaged 15.4 °C.

Digital techniques and decision science can also be used to offer the best possible training to students and clinicians on this emerging illness [ 126 ]. AI can improve future patient care and handle more potential difficulties, reducing doctors’ burden.

5.5. Remark

From an epidemiological, diagnostic, and pharmacological standpoint, AI has yet to play a substantial part in the fight against coronavirus. Its application is limited by a shortage of data, outlier data, and an abundance of noise, and it is vital to create unbiased time series data for artificial intelligence training. While the expanding number of worldwide activities in this area is promising, more diagnostic testing is required, not just for supplying training data to AI models but also for better controlling the epidemic and lowering its cost in terms of human lives and economic harm.

Data is crucial in determining whether AI can be used to combat future diseases and pandemics. As previously stated in [ 96 ], the risk is that public health reasons will override data privacy concerns, and governments may choose to continue the unprecedented surveillance of their populations long after the epidemic has passed. As a result, worries regarding data privacy are reasonable.

6. Significance of the Study (Body Language Symptoms for COVID-19)

Communication is one of the most crucial skills a physician should have, according to patient surveys. However, communication encompasses more than just what is spoken. From the time a patient first visits a physician, his or her nonverbal communication, or body language, shapes the course of therapy. Body language encompasses all nonverbal forms of communication, including posture, facial expression, and body movements. Being aware of such habits can help doctors gain better access to their patients. Patient involvement, compliance, and outcome can all be influenced by effective nonverbal communication [ 127 ].

Pandemic and epidemic illnesses are a worldwide threat that can kill millions of people, and doctors have limited means to recognize and treat victims. Human and technological resources are still in short supply when it comes to epidemic and pandemic conditions. To improve the treatment process, and when the patient is unable to travel to the place of treatment, remote diagnosis is necessary and the patient’s status should be examined automatically. Altered facial wrinkles, movements of the eyes and eyebrows, some protrusion of the nose, changes in the lips, and the appearance of certain motions of the hands, shoulders, chest, head, and other areas of the body are all characteristics of pandemic and epidemic illnesses. Artificial intelligence technology has shown promise in understanding these motions and cues in some cases. As a result, the concept of applying body language analysis to identifying epidemic diseases in patients early, treating them early, and assisting doctors in recognizing them arose, owing to the speed with which such diseases spread and kill. It should be emphasized that COVID-19, which horrified the entire world and transformed everyday life, was the major and crucial motivator for this study, after we surveyed the body language analysis research in health care and defined the automatic recognition framework that uses artificial intelligence to recognize various body language elements.

As researchers in information technology and computer science, we must contribute to discussing an automatic gesture recognition model that helps better identify the external symptoms of epidemic and pandemic diseases for helping mankind.

7. Conclusions

In this paper, we reviewed the recent literature analyzing patients’ body language using deep learning techniques. Since most of this research is ongoing, we focused on body language analysis research in health care. In these recent works, most of the research in health care has aimed to define an automatic recognition framework that uses artificial intelligence to recognize various body language elements. It will be interesting to discuss an automatic gesture recognition model that helps better identify the external symptoms of epidemic and pandemic diseases.

The body language analysis of patients using artificial intelligence to identify the external symptoms of epidemic and pandemic diseases is a motivating issue for future research to improve the treatment process: when the patient cannot reach the place of treatment, remote diagnosis is required and the patient’s condition should be analyzed automatically.

Acknowledgments

The authors would like to thank the Research Management Center, Malaysia International Islamic University for funding this work by Grant RMCG20-023-0023. Also, the authors would like to thank the United Arab Emirates University for funding this work under UAEU Strategic Research Grant G00003676 (Fund No.: 12R136) through Big Data Analytics Center.

Funding Statement

This research was funded by Malaysia International Islamic University Research Management Center Grant RMCG20-023-0023, and United Arab Emirates University Strategic Research Grant G00003676 (Fund No.: 12R136) through Big Data Analytics Center.

Author Contributions

Conceptualization, R.A. and A.A.; Methodology, R.A., S.T. and S.W.; Validation, R.A. and M.A.H.A.; Formal analysis, R.A. and S.T.; Resources, S.T.; Data curation, R.A. and A.A.; Writing – original draft, A.A.; Writing – review & editing, R.A., S.T., M.A.H.A. and S.W.; Visualization, M.A.H.A.; Supervision, R.A. and S.W.; Project administration, R.A. and S.T.; Funding acquisition, A.A. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement, Informed Consent Statement, Data Availability Statement, Conflicts of Interest

The authors declare no conflict of interest.

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

How to Decipher Guinea Pig Sounds, Noises, and Body Language

Understanding What Your Pet Is Communicating


Guinea noises can mean lots of different things—it's how these little creatures communicate. By using sounds and postures just like capybaras , guinea pigs can actually say a lot. Though you may not understand all the noises they sometimes make, there are things they do that have a fairly clear meaning and that can help you understand your guinea pigs. Read their body language and interpret their sounds—the chitters, squeaks, and purrs—to know what they're saying.

Interpreting Your Guinea Pig's Sounds and Body Language

Guinea Pig Sounds

Guinea pigs make a variety of sounds or vocalizations, some of which most owners will recognize. Content guinea pigs just going about their day often make a variety of squeaks, chortles, and quiet grunts that also seem to accompany casual interactions. Along with these frequent squeaks and chortles, there are a variety of other quite distinctive noises you might hear from your guinea pig. Learn to recognize these!

  • Wheeking: This is a distinctive (and common) vocalization made by guinea pigs , and it is most often used to communicate anticipation or excitement, particularly about being fed. It sounds like a long and loud squeal or whistle and sometimes wheeking may simply serve as a call for attention. Many guinea pigs will make a very loud wheeking noise in anticipation of getting some tasty treats when their owners open the fridge or get out the food container.
  • Purring: Purrs have different meanings, depending on the pitch of the sound and the accompanying body language. Guinea pigs that feel contented and comfortable will make a deep purring sound, accompanied by a relaxed, calm posture. However, if the purr is higher pitched, especially towards the end of the purr, this is more likely a sound of annoyance. In fact, a guinea pig making this noise will be tense and may seem to even vibrate. A short purr, sometimes described as a "durr," may indicate fear or uncertainty, usually accompanied by the guinea pig remaining motionless.
  • Rumbling: A guinea pig rumble is deeper than a purring noise. It is made when a male romances a female and sometimes females in season also make it. Often accompanied by a sort of "mating dance," rumbling is also sometimes called "motorboating" or "rumble strutting".
  • Teeth Chattering: This is an aggressive vocalization that is a sign of an agitated or angry guinea pig. Teeth chattering is often accompanied by the guinea pig showing its teeth, which looks like a yawn, and it means "back off" or "stay away."
  • Hissing: Like teeth chattering, this is a sign of a guinea pig who's upset. It is just like the hissing noise that a cat makes.
  • Cooing: Cooing communicates reassurance in guinea pigs. It is a sound most often, but not exclusively, made by mother guinea pigs to their young .
  • Shrieking: A piercing, high-pitched squeak called a shriek is a fairly unmistakable call of alarm, fear, or pain from a guinea pig. If you hear this sound, it would be good to check on your guinea pigs to make sure everything is OK and none of them is hurt.
  • Whining: A whining or moaning type of squeak can communicate annoyance or dislike for something you or another guinea pig is doing.
  • Chirping: This sounds just like a bird chirping and is perhaps the least understood (or heard) noise that guinea pigs make. A chirping guinea pig may also appear to be in a trance-like state. The meaning of this "song" is the subject of much discussion, with no firm conclusions.

Guinea Pig Body Language 

Guinea pigs can also communicate via body language. It's a good idea to get to know what's normal for your guinea pigs so that you can spot changes in their movements and posture, which can act as clues about what is happening with them. Understand what your pet means by these:

  • Popcorning: Easy to recognize, popcorning consists of hopping straight up in the air, sometimes repeatedly, just like popcorn does while it is popping. It is most often seen in young guinea pigs who are especially happy, excited, or just feeling playful. Older pigs also popcorn, though they usually don't jump as high as younger pigs.
  • Freezing: A guinea pig that is startled or uncertain about something in its environment will stand motionless.
  • Sniffing: Sniffing is a way to check out what is going on around them and to get to know others. Guinea pigs particularly like to sniff each other around the nose, chin, ears, and back end.
  • Touching Noses: This is a friendly greeting between guinea pigs.
  • Aggressive Actions: These can include raising their heads and/or rising up on their hind ends with stiff legs, shuffling side to side on stiff legs, fluffing out their fur, and showing their teeth (yawning). These actions are often accompanied by hissing and/or teeth chattering. If your guinea pigs do this with each other, be on high alert for fighting.
  • Strutting: Moving side to side on stiff legs can be a sign of aggression, often accompanied by teeth chattering. Strutting around another guinea pig while rumbling is a typical mating dance and the origin of the term "rumble strutting."
  • Scent Marking: Guinea pigs will rub their chins, cheeks, and hind ends on items they wish to mark as theirs. They may also urinate on things or other guinea pigs to show their dominance.
  • Mounting: This can be either sexual behavior (males to females) or behavior used to show dominance within the guinea pig herd's social structure, especially between females.
  • Fidgeting While Being Held: This can often be a sign that your guinea pig needs to go to the bathroom or that your guinea pig is just tired of being held. Either way, try returning your guinea pig to its cage for a bit.
  • Tossing Head in the Air: A guinea pig getting annoyed with being petted will toss its head back as a way of asking you to stop.
  • Licking: Most owners consider this a sign of guinea pig affection, though it is possible that they just like the taste of the salt on your skin.
  • Running Away From Being Picked Up: Guinea pigs tend to be timid, especially at first. Running away from you is not a rejection but rather a natural defense mechanism. Given time and patience, almost all guinea pigs will come to accept being picked up for cuddles and playtime outside of the cage.


Human Reviewers Can’t Keep Up With Police Bodycam Videos. AI Now Gets the Job

“Who will watch the watchmen?” In the age of police body cameras, the answer may be “artificial intelligence.”

“For us, it’s a game changer,” says  Jennifer Eberhardt , a psychology professor at Stanford whose work on race and crime won her a MacArthur “genius grant.”

She leads a team of researchers who used AI to help review and analyze videos of nearly 600 traffic stops by Oakland police.

“We could look at the first 27 seconds of the stop, the first roughly 45 words that the officer spoke, and we could use this model to predict whether that driver was going to be handcuffed, searched or arrested by the end of the stop,” she says.

Read the whole story (subscription may be required): NPR

