A. Mehrabian and S. R. Ferris, Inference of attitudes from nonverbal communication in two channels, J. Consult. Psychol, vol.31, issue.3, p.248, 1967.

Ring Buyers Warm Up to Quartz Jewelry That Is Said to Reflect Their Emotions, The Wall Street Journal, p.9, 1975.

R. W. Picard, Affective Computing, MIT Press, 1997.

R. W. Picard, E. Vyzas, and J. Healey, Toward machine emotional intelligence: analysis of affective physiological state, IEEE Trans. Pattern Anal. Mach. Intell, vol.23, issue.10, pp.1175-1191, 2001.

J. Hernandez, AutoEmotive: bringing empathy to the driving experience to manage stress, DIS 2014, 2014.

A. Zadeh, R. Zellers, E. Pincus, and L. P. Morency, Multimodal sentiment intensity analysis in videos: facial gestures and verbal messages, IEEE Intell. Syst, vol.31, issue.6, pp.82-88, 2016.

M. Wöllmer, YouTube movie reviews: sentiment analysis in an audio-visual context, IEEE Intell. Syst, vol.28, issue.3, pp.46-53, 2013.

V. Pérez-Rosas, R. Mihalcea, and L. P. Morency, Utterance-level multimodal sentiment analysis, ACL, vol.1, pp.973-982, 2013.

A. Zadeh, M. Chen, S. Poria, E. Cambria, and L. P. Morency, Tensor fusion network for multimodal sentiment analysis, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp.1103-1114, 2017.

S. Poria, E. Cambria, D. Hazarika, N. Majumder, A. Zadeh et al., Context-dependent sentiment analysis in user-generated videos, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, vol.1, pp.873-883, 2017.

S. Poria, E. Cambria, N. Howard, G. B. Huang, and A. Hussain, Fusing audio, visual and textual clues for sentiment analysis from multimodal content, Neurocomputing, vol.174, pp.50-59, 2016.

B. Liu, Sentiment analysis and opinion mining, Synth. Lect. Hum. Lang. Technol, vol.5, issue.1, pp.1-167, 2012.

B. Pang and L. Lee, Opinion mining and sentiment analysis, Found. Trends Inf. Retr, vol.2, issue.1-2, pp.1-135, 2008.

G. Dziczkowski and K. Węgrzyn-Wolska, RRSS - rating reviews support system purpose built for movies recommendation, Advances in Intelligent Web Mastering. Advances in Soft Computing, vol.43, pp.87-93, 2007.
URL : https://hal.archives-ouvertes.fr/hal-00861400

G. Dziczkowski and K. Węgrzyn-Wolska, An autonomous system designed for automatic detection and rating of film. Extraction and linguistic analysis of sentiments, Proceedings of WIC, 2008.
URL : https://hal.archives-ouvertes.fr/hal-00861362

G. Dziczkowski and K. Węgrzyn-Wolska, Tool of the intelligence economic: recognition function of reviews critics, ICSOFT 2008 Proceedings, 2008.
URL : https://hal.archives-ouvertes.fr/hal-00861380

Kepios: Digital in 2018, essential insights into internet, social media, mobile, and ecommerce use around the world, 2018.

M. Ghiassi, J. Skinner, and D. Zimbra, Twitter brand sentiment analysis: a hybrid system using n-gram analysis and dynamic artificial neural network, Expert Syst. Appl, vol.40, issue.16, pp.6266-6282, 2013.

X. Zhou, X. Tao, J. Yong, and Z. Yang, Sentiment analysis on tweets for social events, Proceedings of the 2013 IEEE 17th International Conference on Computer Supported Cooperative Work in Design, CSCWD 2013, pp.557-562, 2013.

M. Salathé, D. Q. Vu, S. Khandelwal, and D. R. Hunter, The dynamics of health behavior sentiments on a large online social network, EPJ Data Sci, vol.2, p.4, 2013.

B. Sriram, D. Fuhry, E. Demir, H. Ferhatosmanoglu, and M. Demirbas, Short text classification in Twitter to improve information filtering, Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp.841-842, 2010.

E. M. Seabrook, M. L. Kern, B. D. Fulcher, and N. S. Rickard, Predicting depression from language-based emotion dynamics: longitudinal analysis of Facebook and Twitter status updates, J. Med. Internet Res, vol.20, issue.5, p.168, 2018.

W. Wang, I. Hernandez, D. A. Newman, J. He, and J. Bian, Twitter analysis: studying US weekly trends in work stress and emotion, Appl. Psychol, vol.65, issue.2, pp.355-378, 2016.

A. G. Reece, A. J. Reagan, K. L. Lix, P. S. Dodds, C. M. Danforth et al., Forecasting the onset and course of mental illness with Twitter data (Unpublished manuscript).

J. Park, D. S. Lee, and H. Shablack, When perceptions defy reality: the relationships between depression and actual and perceived Facebook social support, J. Affect. Disord, vol.200, pp.37-44, 2016.

M. Burke and M. Develin, Once more with feeling: supportive responses to social sharing on Facebook, Proceedings of the ACM 2016 Conference on Computer Supported Cooperative Work, pp.1462-1474, 2016.

A. Go, R. Bhayani, and L. Huang, Twitter sentiment classification using distant supervision, CS224N Proj. Rep., Stanford, vol.1, p.12, 2009.

K. L. Liu, W. J. Li, and M. Guo, Emoticon smoothed language models for Twitter sentiment analysis, 2012.

K. Węgrzyn-Wolska, L. Bougueroua, H. Yu, and J. Zhong, Explore the effects of emoticons on Twitter sentiment analysis, Proceedings of the Third International Conference on Computer Science & Engineering (CSEN 2016), pp.27-28, 2016.

D. Bitouk, R. Verma, and A. Nenkova, Class-level spectral features for emotion recognition, Speech Commun, vol.52, issue.7-8, pp.613-625, 2010.

C. Busso, Analysis of emotion recognition using facial expressions, speech and multimodal information, Sixth International Conference on Multimodal Interfaces, pp.205-211, 2004.

F. Dellaert, T. Polzin, and A. Waibel, Recognizing emotion in speech, International Conference on Spoken Language Processing (ICSLP 1996), vol.3, pp.1970-1973, 1996.

C. M. Lee, Emotion recognition based on phoneme classes, 8th International Conference on Spoken Language Processing, pp.889-892, 2004.

J. Deng, X. Xu, Z. Zhang, S. Frühholz, D. Grandjean et al., Fisher kernels on phase-based features for speech emotion recognition, Dialogues with Social Robots. LNEE, vol.427, pp.195-203, 2017.

S. Steidl, Automatic classification of emotion-related user states in spontaneous children's speech, 2009.

S. Lugovic, M. Horvat, and I. Dunder, Techniques and applications of emotion recognition in speech, MIPRO 2016/CIS, 2016.

D. Kukolja, S. Popović, M. Horvat, B. Kovač, and K. Ćosić, Comparative analysis of emotion estimation methods based on physiological measurements for real-time applications, Int. J. Hum.-Comput. Stud, vol.72, issue.10, pp.717-727, 2014.

A. Davletcharova, S. Sugathan, B. Abraham, and A. P. James, Detection and analysis of emotion from speech signals, Procedia Comput. Sci, vol.58, pp.91-96, 2015.

K. Tyburek, P. Prokopowicz, and P. Kotlarz, Fuzzy system for the classification of sounds of birds based on the audio descriptors, in: L. Rutkowski, M. Korytkowski et al. (eds.), ICAISC 2014, LNCS (LNAI), vol.8468, pp.700-709, 2014.

K. Tyburek, P. Prokopowicz, P. Kotlarz, and R. Michal, Comparison of the efficiency of time and frequency descriptors based on different classification conceptions, in: L. Rutkowski et al. (eds.), ICAISC 2015, LNCS (LNAI), vol.9119, pp.491-502, 2015.

T. Chaspari, C. Soldatos, and P. Maragos, The development of the Athens Emotional States Inventory (AESI): collection, validation and automatic processing of emotionally loaded sentences, World J. Biol. Psychiatry, vol.16, issue.5, pp.312-322, 2015.

A. Arruti, I. Cearreta, A. Alvarez, E. Lazkano, and B. Sierra, Feature selection for speech emotion recognition in Spanish and Basque: on the use of machine learning to improve human-computer interaction, PLoS ONE, vol.9, issue.10, p.108975, 2014.

P. Ekman, Facial expression and emotion, Am. Psychol, vol.48, pp.384-392, 1993.

R. E. Jack and P. G. Schyns, The human face as a dynamic tool for social communication, Curr. Biol, vol.25, issue.14, pp.621-634, 2015.

P. Ekman, W. Friesen, and J. Hager, Facial action coding system: Research Nexus, Network Research Information, 2002.

C. H. Hjortsjö, Man's face and mimic language, 1969.

P. Ekman, T. S. Huang, and T. J. Sejnowski, Final report to NSF of the planning workshop on facial expression understanding, vol.378, 1993.

S. Afzal, T. M. Sezgin, Y. Gao, and P. Robinson, Perception of emotional expressions in different representations using facial feature points, 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops (ACII 2009), IEEE, 2009.

F. De la Torre, W. S. Chu, X. Xiong, F. Vicente, X. Ding et al., IntraFace, IEEE International Conference on Automatic Face and Gesture Recognition Workshops, 2015.

T. Amira, I. Dan, and B. Az-eddine, Monitoring chronic disease at home using connected devices, 2018 13th Annual Conference on System of Systems Engineering (SoSE), pp.400-407, 2018.

L. Shu, A review of emotion recognition using physiological signals, Sensors (Basel), vol.18, issue.7, p.2074, 2018.

W. Wei, Q. Jia, Y. Feng, and G. Chen, Emotion recognition based on weighted fusion strategy of multichannel physiological signals, Comput. Intell. Neurosci, vol.2018, p.5296523, 2018.

M. S. Özerdem and H. Polat, Emotion recognition based on EEG features in movie clips with channel selection, Brain Inform, vol.4, issue.4, pp.241-252, 2017.

E. H. Jang, B. J. Park, M. S. Park, S. H. Kim, and J. H. Sohn, Analysis of physiological signals for recognition of boredom, pain, and surprise emotions, J. Physiol. Anthropol, vol.34, p.25, 2015.

J. Kortelainen, S. Tiinanen, X. Huang, X. Li, S. Laukka et al., Multimodal emotion recognition by combining physiological signals and facial expressions: a preliminary study, Conference Proceedings of the IEEE Engineering in Medicine and Biology Society, vol.2012, pp.5238-5241, 2012.

H. Zacharatos, C. Gatzoulis, and Y. L. Chrysanthou, Automatic emotion recognition based on body movement analysis: a survey, IEEE Comput. Graph Appl, vol.34, issue.6, pp.35-45, 2014.

W. H. Tsui, P. Lee, and T. C. Hsiao, The effect of emotion on keystroke: an experimental study using facial feedback hypothesis, Conference Proceedings of the IEEE Engineering in Medicine and Biology Society, pp.2870-2873, 2013.

S. Li, L. Cui, C. Zhu, B. Li, N. Zhao et al., Emotion recognition using Kinect motion capture data of human gaits, PeerJ, vol.4, p.2364, 2016.

A. Goshvarpour, A. Abbasi, and A. Goshvarpour, Fusion of heart rate variability and pulse rate variability for emotion recognition using lagged Poincaré plots, Australas. Phys. Eng. Sci. Med, vol.40, issue.3, pp.617-629, 2017.

M. Khezri, M. Firoozabadi, and A. R. Sharafat, Reliable emotion recognition system based on dynamic adaptive fusion of forehead biopotentials and physiological signals, Comput. Methods Programs Biomed, vol.122, issue.2, pp.149-164, 2015.

K. Gouizi, F. Bereksi-reguig, and C. Maaoui, Emotion recognition from physiological signals, J. Med. Eng. Technol, vol.35, issue.6-7, pp.300-307, 2011.
URL : https://hal.archives-ouvertes.fr/hal-01346643

G. K. Verma and U. S. Tiwary, Multimodal fusion framework: a multiresolution approach for emotion classification and recognition from physiological signals, Neuroimage, vol.102, pp.162-172, 2014.

H. Yang, A. Willis, A. De-roeck, and B. Nuseibeh, A hybrid model for automatic emotion recognition in suicide notes, Biomed. Inform. Insights, vol.5, pp.17-30, 2012.

F. Eyben, F. Weninger, M. Wöllmer, and B. Schuller, Open-Source Media Interpretation by Large Feature-Space Extraction, 2016.

F. Eyben, M. Wöllmer, and B. Schuller, openEAR - introducing the Munich open-source emotion and affect recognition toolkit, 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, 2009.

H. O'Reilly, The EU-Emotion Stimulus Set: a validation study, Behav. Res, vol.48, pp.567-576, 2015.

B. Schuller, Affective and behavioural computing: lessons learnt from the first computational paralinguistics challenge, Comput. Speech Lang, vol.53, pp.156-180, 2019.
URL : https://hal.archives-ouvertes.fr/hal-01993250