Deep Learning-based Multimodal Emotion Recognition Technologies, 3D Motion and Multisensory Object Representation Tools, and Visual Matching and Image Processing Algorithms for Idealized Beauty Standards and Social Media-related Body Dissatisfaction
Juraj Cug1, Ana-Maria-Sonia Petreanu2, Alice AlAkoum2, Mustafa Latif Emek3, and Simona Stamule2

ABSTRACT. Despite the relevance of augmented reality-based virtual try-on experiences to idealized beauty standards, social media-related body dissatisfaction, and visual and haptic imagery, only limited research has been conducted on this topic. Throughout May 2023, a quantitative literature review of the Web of Science, Scopus, and ProQuest databases was performed, with search terms including “idealized beauty standards” and “social media-related body dissatisfaction” + “deep learning-based multimodal emotion recognition technologies,” “3D motion and multisensory object representation tools,” and “visual matching and image processing algorithms.” As research published between 2017 and 2022 was inspected, only 177 articles satisfied the eligibility criteria, and 25 mainly empirical sources were selected. Data visualization tools included Dimensions (bibliometric mapping) and VOSviewer (layout algorithms). Reporting quality was assessed with PRISMA, and methodological quality with AXIS, Dedoose, DistillerSR, and MMAT.
Keywords: deep learning; emotion recognition; multisensory; visual matching; image processing; idealized beauty standard; social media-related body dissatisfaction
How to cite: Cug, J., Petreanu, A.-M.-S., AlAkoum, A., Emek, M. L., and Stamule, S. (2023). “Deep Learning-based Multimodal Emotion Recognition Technologies, 3D Motion and Multisensory Object Representation Tools, and Visual Matching and Image Processing Algorithms for Idealized Beauty Standards and Social Media-related Body Dissatisfaction,” Journal of Research in Gender Studies 13(2): 24–38. doi: 10.22381/JRGS13220232.
Received 26 June 2023 • Received in revised form 27 December 2023
Accepted 29 December 2023 • Available online 30 December 2023