Understanding human emotions from everyday behavior is an important goal in digital health, affective computing, and wearable sensing. In our latest publication in the IEEE Internet of Things Journal, we investigate how movement data from wearable sensors can be used to infer emotional states.
The paper “From Steps to Sentiments: Cross-Domain Transfer Learning for Activity-Based Emotion Detection in Wearable IoT Systems” explores how machine learning models trained for activity recognition can be adapted to detect emotional states. Using a cross-domain transfer learning approach, the study shows that knowledge learned from physical activity data transfers effectively to emotion recognition tasks.
Wearable devices continuously capture movement patterns, step dynamics, and physical activity signals, which provide valuable information about human behavior in everyday environments. By leveraging these signals, the proposed approach opens new opportunities for non-invasive emotion monitoring using wearable IoT systems.
Such technologies have potential applications in mental health monitoring, digital phenotyping, and personalized healthcare, where unobtrusive sensing and intelligent data analysis can support early detection of behavioral and emotional changes.
This publication also continues a long-standing research collaboration between Qaiser Riaz and Björn Krüger, focusing on machine learning methods for human movement analysis and wearable sensor systems.
The full publication can be accessed via IEEE Xplore:
https://ieeexplore.ieee.org/document/11404152
2026
Imran, Hamza Ali; Riaz, Qaiser; Hamza, Kiran; Muhammad, Shaida; Krüger, Björn
From Steps to Sentiments: Cross-Domain Transfer Learning for Activity-Based Emotion Detection in Wearable IoT Systems (Journal Article)
In: IEEE Internet of Things Journal, pp. 1-1, 2026.
@article{Imran2026a,
title = {From Steps to Sentiments: Cross-Domain Transfer Learning for Activity-Based Emotion Detection in Wearable IoT Systems},
author = {Hamza Ali Imran and Qaiser Riaz and Kiran Hamza and Shaida Muhammad and Björn Krüger},
doi = {10.1109/JIOT.2026.3666469},
year = {2026},
date = {2026-02-20},
urldate = {2026-02-20},
journal = {IEEE Internet of Things Journal},
pages = {1-1},
abstract = {Context-aware, gait-based sentiment analysis and emotion perception is an emerging research area within Internet of Things (IoT), aiming to make smart systems more intuitive and responsive. Recognizing emotions from wearable inertial sensor data is challenging due to subtle and compound emotional cues, variability across individuals and contexts, and limited, imbalanced datasets. To address these challenges, we propose Jazbat-Net, a lightweight neural network that leverages Transfer Learning (TL). The model is first trained on a large-scale, publicly available multi-activity dataset collected using wearable inertial sensors, and then retrained on a multi-class emotion dataset, effectively transferring knowledge from the pretraining phase. We evaluate Jazbat-Net with and without TL, across both smartwatch and smartphone based data, and for input dimensions ranging from 1D to 6D. The best results are achieved when pretrained on smartphone-based activity data and retrained on smartphone-based emotion data using a 1D input size. The proposed model attains an average classification accuracy of 95%, with a precision score of 95%, a recall score of 97%, and an F1-score of 96%. Moreover, Jazbat-Net achieves a low theoretical time complexity and requires only ≈ 6.96 M Multiply–Accumulate Operations (MACs), which is about 95% fewer computations than the previous State-of-the-Art (SOTA) model. Its space complexity is also low, with a model size of only ≈ 110 KB and peak activation memory of ≈ 0.35 MB. On-device evaluation on a Xiaomi 13T smartphone demonstrates that Jazbat-Net achieves a median inference latency of only ≈ 90.96 ms with a TFLite 32-bit floating point precision (FP32) model size of just ≈ 0.158 MB, making it ≈ 20× smaller and ≈ 20% faster than the previous SOTA model while maintaining comparable accuracy.
},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
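The two-phase recipe described in the abstract, first pretraining on a multi-activity dataset and then retraining on a multi-class emotion dataset, can be illustrated with a minimal sketch. This is not the actual Jazbat-Net architecture; it is a generic NumPy toy example (a small tanh feature extractor pretrained on synthetic "activity" labels, then frozen while a new softmax head is fitted on synthetic "emotion" labels), and all function names, dimensions, and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def make_data(n, dim, n_classes, spread=3.0):
    """Synthetic IMU-like feature windows: one Gaussian blob per class."""
    centers = rng.normal(size=(n_classes, dim)) * spread
    y = rng.integers(0, n_classes, size=n)
    X = centers[y] + rng.normal(scale=0.5, size=(n, dim))
    return X, y

def pretrain(X, y, n_classes, hidden=16, lr=0.1, steps=500):
    """Phase 1: train feature extractor + classification head on activity labels."""
    d = X.shape[1]
    W1 = rng.normal(scale=0.1, size=(d, hidden)); b1 = np.zeros(hidden)
    W2 = np.zeros((hidden, n_classes)); b2 = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]
    for _ in range(steps):
        A = np.tanh(X @ W1 + b1)                 # hidden features
        G = (softmax(A @ W2 + b2) - Y) / len(X)  # softmax cross-entropy gradient
        dZ = (G @ W2.T) * (1 - A ** 2)           # backprop through tanh
        W2 -= lr * A.T @ G; b2 -= lr * G.sum(0)
        W1 -= lr * X.T @ dZ; b1 -= lr * dZ.sum(0)
    return W1, b1  # keep only the feature extractor; the activity head is discarded

def finetune_head(X, y, W1, b1, n_classes, lr=0.5, steps=400):
    """Phase 2: freeze the extractor, train a fresh head on emotion labels."""
    H = np.tanh(X @ W1 + b1)
    W = np.zeros((H.shape[1], n_classes)); b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]
    for _ in range(steps):
        G = (softmax(H @ W + b) - Y) / len(H)
        W -= lr * H.T @ G; b -= lr * G.sum(0)
    return W, b

# Phase 1: "activity" pretraining (6-D accel+gyro features, 5 activity classes)
Xa, ya = make_data(400, 6, 5)
W1, b1 = pretrain(Xa, ya, n_classes=5)

# Phase 2: "emotion" fine-tuning (same 6-D input, 3 emotion classes, new head)
Xe, ye = make_data(200, 6, 3)
W, b = finetune_head(Xe, ye, W1, b1, n_classes=3)

pred = np.argmax(np.tanh(Xe @ W1 + b1) @ W + b, axis=1)
acc = (pred == ye).mean()
```

The key idea mirrored here is that the pretrained feature extractor is reused while only a lightweight task-specific head is retrained, which is what keeps the fine-tuning stage cheap on wearable-class hardware.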