Here are some samples of my research projects. Check my GitHub and Google Scholar for more.
Publications
Publication · IEEE Transactions on Affective Computing
TouchTales: Recognizing Naturalistic Emotions Through Touch in Storytelling
2026
Rúbia Guerra, Laura Cang, et al. (2026)
A care-centered protocol combining autobiographical storytelling with a touch-sensitive object to elicit and label authentic emotions, with ML results reaching accuracy comparable to physiological models. This is my first PhD publication; I co-led the work with Laura, designing the session structure, developing the touch data analysis protocol, and co-writing the ethical and methodological framing for sustainable affective data collection, including the discussion and reflection sections.
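For a flavour of what touch-based emotion recognition of this kind involves, here is a minimal sketch assuming windowed statistical features over a single pressure channel fed to a random-forest classifier. The window sizes, feature set, and labels are illustrative placeholders, not the TouchTales protocol.

```python
# Illustrative only: windowed features from a touch-pressure stream,
# classified with a random forest. Not the actual TouchTales pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(signal, win=128, hop=64):
    """Slice a 1-D pressure stream into windows of simple summary statistics."""
    feats = []
    for start in range(0, len(signal) - win + 1, hop):
        w = signal[start:start + win]
        feats.append([w.mean(), w.std(), w.max(), np.abs(np.diff(w)).mean()])
    return np.array(feats)

rng = np.random.default_rng(0)
pressure = rng.normal(size=5000)        # placeholder sensor stream
X = window_features(pressure)
y = rng.integers(0, 3, size=len(X))     # placeholder emotion labels (3 classes)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```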
Examines challenges in computational reproducibility and proposes a taxonomy and workflow for creating artifacts that support not only re-execution but also comprehension and scrutiny, including integration with tooling and UX studies. I contributed to defining the conceptual space, shaping the design of artifact structures, and analysing how documentation and tool choices affect researchers trying to understand and reuse computational experiments.
Modeling the 'Kiss my Ass'-Smile: Appearance and Functions of Smiles in Negative Social Situations
2024
Mirella Hladký, Rúbia Guerra, et al. (2024)
Builds a corpus of smiles in negative social situations, proposes a functional category system, and trains models that classify smile functions from morphological signals at above-chance performance. Mirella was the primary researcher; beyond data collection, I contributed to modelling smile functions from visual features and to interpreting how mismatches between expression and internal emotion challenge automatic emotion recognition.
Uses a soft capacitive sensor array to measure shear and normal stress during expressive touch, revealing how individual differences shape the tactile signatures of affective gestures and informing haptic sensing and touch-aware interface design. Beyond data collection, I worked on feature design and data analysis, performing statistical and ML analyses that relate mechanical patterns of touch to affective interpretation, and helped situate the findings within affective computing and haptic interaction.
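A minimal sketch of the stress decomposition involved, assuming each taxel reports two shear components and one normal component; the array shape, sampling, and summary features are assumptions for illustration, not the sensor's real layout.

```python
# Hypothetical sketch: shear magnitude and normal-stress summaries from a
# capacitive sensor array. Shapes and feature names are assumptions.
import numpy as np

# Simulated recording: (time, rows, cols, 3) with per-taxel components
# (shear_x, shear_y, normal).
rng = np.random.default_rng(1)
frames = rng.normal(size=(200, 4, 4, 3))

shear = np.linalg.norm(frames[..., :2], axis=-1)   # tangential magnitude
normal = frames[..., 2]

# Per-gesture descriptors of the kind a statistical analysis might compare:
features = {
    "peak_normal": normal.max(),
    "mean_shear": shear.mean(),
    "shear_to_normal_ratio": shear.mean() / (np.abs(normal).mean() + 1e-9),
}
print(features)
```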
Publication · IEEE Transactions on Haptics
Feeling (key)Pressed: Comparing The Ways in Which Force and Self-Reports Reveal Emotion
2023
Laura Cang, Rúbia Guerra, et al. (2023)
Explores how keyboard typing force during stressful videogame play reflects emotional state, comparing force measurements to self-reported affect and EEG to propose a passive, unobtrusive sensing approach to emotion recognition. I led the ML modelling and statistical analyses comparing force and EEG (EEG modelling by Bereket Guta) against multi-pass labels, and helped interpret how these results connect haptic interaction, affective computing, and behaviour-aware systems. Published in IEEE Transactions on Haptics; invited for presentation at the World Haptics Conference 2025.
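To illustrate the modality comparison at the heart of this analysis, here is a hedged sketch that trains the same classifier on synthetic "force" and "EEG" feature matrices. The feature dimensions and the SVM pipeline are assumptions for illustration, not the paper's exact models.

```python
# Illustrative comparison of two modalities under one classifier;
# all data here is synthetic and the feature choices are assumed.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 300
y = rng.integers(0, 2, size=n)       # placeholder affect labels
X_force = rng.normal(size=(n, 12))   # e.g. per-keystroke force statistics
X_eeg = rng.normal(size=(n, 64))     # e.g. band-power features

for name, X in [("force", X_force), ("eeg", X_eeg)]:
    model = make_pipeline(StandardScaler(), SVC())
    print(name, cross_val_score(model, X, y, cv=5).mean())
```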
Choose or Fuse: Enriching Data Views with Multi-label Emotion Dynamics
2022
Rúbia Guerra, et al. (2022)
Addresses how to represent emotion as a dynamic multi-label phenomenon during interactive gameplay, comparing strategies that choose between or fuse multiple views of self-reported affect to enrich emotion recognition models. I helped conceptualize and statistically analyse these different label views and contributed to the discussion of how these choices affect the robustness of emotion prediction over time.
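A small sketch of what "choosing" versus "fusing" label views can look like over multiple self-report passes. The specific aggregation rules shown (keeping the last pass, averaging, retaining disagreement) are generic illustrations, not the paper's strategies.

```python
# Generic illustration of choose-vs-fuse over annotation passes.
import numpy as np

rng = np.random.default_rng(3)
# Three annotation passes of a continuous affect trace (timesteps x passes).
passes = rng.normal(size=(100, 3))

# Choose: keep a single view, e.g. the final retrospective pass.
chosen = passes[:, -1]

# Fuse: combine views, e.g. a mean trace plus disagreement as extra signal.
fused_mean = passes.mean(axis=1)
fused_disagreement = passes.std(axis=1)

print(chosen.shape, fused_mean.shape, fused_disagreement.shape)
```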
Introduces a multimodal system using haptic feedback via a pantograph device and auditory cues to help novice musicians with dyslexia explore and learn musical notation, evaluated in a preliminary study with dyslexic participants. I co-prototyped the haptic interactions on the Haply force-feedback device with Hannah, designing how notes and bars are rendered through force cues, and contributed to the interaction design, study, and writing around accessibility for dyslexic learners.
Automatic Translation of Sign Language with Multi-stream 3D CNN and Generation of Artificial Depth Maps
Rúbia Guerra, et al.
Combines RGB video, artificially generated depth maps, and motion features to recognize signs in Brazilian, Indian, and Korean sign language datasets, improving accuracy without requiring depth sensors. This work was primarily led by Giulia Zanon; I contributed to designing the ML models and discussing implications for accessible, AI-driven communication technology under resource constraints.
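For intuition, here is a minimal two-stream 3D CNN in PyTorch that fuses an RGB stream with a (possibly artificial) depth stream before classification. Layer sizes and the late-fusion scheme are simplifications, not the published architecture.

```python
# Minimal multi-stream 3D CNN sketch; architecture details are assumptions.
import torch
import torch.nn as nn

class Stream(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # (B, 16, 1, 1, 1)
        )

    def forward(self, x):
        return self.net(x).flatten(1)  # (B, 16)

class MultiStream3DCNN(nn.Module):
    def __init__(self, n_classes=20):
        super().__init__()
        self.rgb = Stream(3)
        self.depth = Stream(1)          # e.g. artificially generated depth
        self.head = nn.Linear(32, n_classes)

    def forward(self, rgb, depth):
        return self.head(torch.cat([self.rgb(rgb), self.depth(depth)], dim=1))

model = MultiStream3DCNN()
rgb = torch.randn(2, 3, 16, 64, 64)    # (batch, channels, frames, H, W)
depth = torch.randn(2, 1, 16, 64, 64)
print(model(rgb, depth).shape)          # torch.Size([2, 20])
```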
Development of a Database of Libras' Signs for Machine Learning: 3DCNN Case Study
2019
Giulia Zanon, Rúbia Guerra, et al. (2019)
Describes the recording of a new Brazilian Sign Language dataset with 20 signs produced by 10 signers, and evaluates a 3D CNN that achieves 72.6% accuracy as a baseline for future work. I contributed to the ML study, including data processing and 3D CNN experiments, and helped interpret what this early dataset and accuracy level mean for subsequent Libras recognition research.
Facial Expression Analysis in Brazilian Sign Language for Sign Recognition
2018
Rúbia Guerra, Tamires Rezende, Frederico Gadelha, et al. (2018)
Investigates approaches to facial feature extraction for Libras recognition and compares Random Forests to SVM and k-NN, showing that RF can match or exceed the performance of these methods. I worked on modelling and feature extraction for facial expressions and helped interpret the results as evidence that non-manual parameters are a consistent, technically viable component for sign language recognition systems.
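The comparison itself is straightforward to reproduce in spirit with scikit-learn; the sketch below uses synthetic features in place of the actual Libras facial-expression data.

```python
# Classifier comparison in the spirit of the study, on synthetic features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=30, random_state=0)

models = {
    "RF": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
}
for name, clf in models.items():
    print(name, cross_val_score(clf, X, y, cv=5).mean().round(3))
```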
Publication · II Congresso de Inovação e Metodologias de Ensino
Analysis of Quality of Life of Electrical Engineering Students
2016
Lucas Chaves, Mateus Rodrigues, Rúbia Guerra, et al. (2016)
Analyses how academic workload, learning environment, and personal factors relate to reported well-being among undergraduate students in an engineering program. I contributed primarily to interpreting the findings, writing the discussion, and presenting the work, highlighting links between quantitative results and students' lived experiences.
Uses the FEEL dataset to compare keyboard force and EEG for modelling dynamic emotions during horror game play, develops strategies for handling multi-pass self-reports, and shows that force signals can outperform EEG for tracking emotion dynamics. I designed and executed the complete analysis pipeline, from data processing and model development to evaluation, and articulated implications for building richer, temporally sensitive affective models grounded in real interaction.
Investigates detection and segmentation of regions of interest to support sign language recognition systems, using deep models to focus on relevant parts of the visual scene. I designed the models, carried out the experiments, and analysed how different region-of-interest strategies affect recognition performance in the context of assistive technologies for sign language users.