<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>wearable IoT &#8211; Digital Health Bonn</title>
	<atom:link href="https://digital-health-bonn.de/tag/wearable-iot/feed/" rel="self" type="application/rss+xml" />
	<link>https://digital-health-bonn.de</link>
	<description>Working Group &#34;Personalized Digital Health and Telemedicine&#34;</description>
	<lastBuildDate>Mon, 09 Mar 2026 15:42:06 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://digital-health-bonn.de/wp-content/uploads/2024/02/cropped-cropped-uni-ukb-logo-32x32.png</url>
	<title>wearable IoT &#8211; Digital Health Bonn</title>
	<link>https://digital-health-bonn.de</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>New Publication: Activity-Based Emotion Detection Using Wearable Sensors and Transfer Learning</title>
		<link>https://digital-health-bonn.de/new-publication-activity-based-emotion-detection-using-wearable-sensors-and-transfer-learning/</link>
		
		<dc:creator><![CDATA[Björn Krüger]]></dc:creator>
		<pubDate>Wed, 25 Feb 2026 15:40:24 +0000</pubDate>
				<category><![CDATA[Allgemein]]></category>
		<category><![CDATA[Publications]]></category>
		<category><![CDATA[activity recognition]]></category>
		<category><![CDATA[affective computing]]></category>
		<category><![CDATA[digital health]]></category>
		<category><![CDATA[emotion detection]]></category>
		<category><![CDATA[emotion recognition from movement]]></category>
		<category><![CDATA[human activity sensing]]></category>
		<category><![CDATA[machine learning]]></category>
		<category><![CDATA[transfer learning]]></category>
		<category><![CDATA[wearable IoT]]></category>
		<category><![CDATA[wearable machine learning]]></category>
		<category><![CDATA[wearable sensors]]></category>
		<guid isPermaLink="false">https://digital-health-bonn.de/?p=773</guid>

					<description><![CDATA[Understanding human emotions from everyday behavior is an important goal in digital health, affective computing, and wearable sensing. In our latest publication in the IEEE Internet of Things Journal, we investigate how movement data from wearable sensors can be used [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Understanding human emotions from everyday behavior is an important goal in <strong>digital health, affective computing, and wearable sensing</strong>. In our latest publication in the <strong>IEEE Internet of Things Journal</strong>, we investigate how <strong>movement data from wearable sensors</strong> can be used to infer emotional states.</p>



<p>The paper <strong>“From Steps to Sentiments: Cross-Domain Transfer Learning for Activity-Based Emotion Detection in Wearable IoT Systems”</strong> explores how <strong>machine learning models trained for activity recognition can be adapted to detect emotional states</strong>. Using a <strong>cross-domain transfer learning approach</strong>, the study demonstrates how knowledge learned from physical activity data can be transferred to emotion recognition tasks.</p>
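<p>To make the transfer-learning recipe above concrete, here is a minimal, hedged sketch (not the authors' code, and not Jazbat-Net): a small feature extractor is "pretrained" on an activity-recognition task, then reused on the emotion task by keeping its weights and attaching a freshly initialized classification head for the new label set. All names and dimensions (<code>FEATURE_DIM</code>, <code>N_ACTIVITY_CLASSES</code>, etc.) are illustrative assumptions, and the training loops themselves are elided.</p>

```python
# Hedged sketch of cross-domain transfer learning in plain NumPy.
# Illustrative only: dimensions, names, and the two-layer model are
# assumptions, not the architecture from the paper.
import numpy as np

rng = np.random.default_rng(0)

INPUT_DIM = 6            # e.g. 3-axis accelerometer + 3-axis gyroscope
FEATURE_DIM = 16         # size of the shared feature representation
N_ACTIVITY_CLASSES = 8   # source task: activity recognition
N_EMOTION_CLASSES = 4    # target task: emotion recognition

def new_model(n_classes, backbone=None):
    """Build a {backbone, head} model; reuse a pretrained backbone if given."""
    if backbone is None:
        backbone = rng.standard_normal((INPUT_DIM, FEATURE_DIM)) * 0.1
    head = rng.standard_normal((FEATURE_DIM, n_classes)) * 0.1
    return {"backbone": backbone, "head": head}

def transfer(source_model, n_target_classes):
    """Keep the pretrained feature extractor, re-initialize only the head."""
    return new_model(n_target_classes, backbone=source_model["backbone"].copy())

# 1) pretrain on the large activity dataset (training loop elided here)
activity_model = new_model(N_ACTIVITY_CLASSES)

# 2) transfer: same backbone, fresh head sized for the emotion labels,
#    which would then be retrained on the emotion dataset
emotion_model = transfer(activity_model, N_EMOTION_CLASSES)

# the transferred model shares the pretrained features but predicts
# a different label space
assert np.array_equal(activity_model["backbone"], emotion_model["backbone"])
assert emotion_model["head"].shape == (FEATURE_DIM, N_EMOTION_CLASSES)
```

<p>The point of the sketch is only the knowledge-transfer step: the representation learned from abundant activity data is carried over unchanged, so the scarce emotion data only has to fit a small task-specific head.</p>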



<p>Wearable devices continuously capture <strong>movement patterns, step dynamics, and physical activity signals</strong>, which provide valuable information about human behavior in everyday environments. By leveraging these signals, the proposed approach opens new opportunities for <strong>non-invasive emotion monitoring using wearable IoT systems</strong>.</p>



<p>Such technologies have potential applications in <strong>mental health monitoring, digital phenotyping, and personalized healthcare</strong>, where unobtrusive sensing and intelligent data analysis can support early detection of behavioral and emotional changes.</p>



<p>This publication also continues a <strong>long-standing research collaboration between Qaiser Riaz and Björn Krüger</strong>, focusing on <strong>machine learning methods for human movement analysis and wearable sensor systems</strong>.</p>



<p>The full publication can be accessed via IEEE Xplore:<br><a href="https://ieeexplore.ieee.org/document/11404152">https://ieeexplore.ieee.org/document/11404152</a></p>


<div class="teachpress_pub_list"><form name="tppublistform" method="get"><a name="tppubs" id="tppubs"></a></form><div class="teachpress_publication_list"><h3 class="tp_h3" id="tp_h3_2026">2026</h3><div class="tp_publication tp_publication_article"><div class="tp_pub_info"><p class="tp_pub_author"> Imran, Hamza Ali;  Riaz, Qaiser;  Hamza, Kiran;  Muhammad, Shaida;  Krüger, Björn</p><p class="tp_pub_title"><a class="tp_title_link" onclick="teachpress_pub_showhide('122','tp_links')" style="cursor:pointer;">From Steps to Sentiments: Cross-Domain Transfer Learning for Activity-Based Emotion Detection in Wearable IoT Systems</a> <span class="tp_pub_type tp_  article">Journal Article</span> </p><p class="tp_pub_additional"><span class="tp_pub_additional_in">In: </span><span class="tp_pub_additional_journal">IEEE Internet of Things Journal, </span><span class="tp_pub_additional_pages">pp. 1-1, </span><span class="tp_pub_additional_year">2026</span>.</p><p class="tp_pub_menu"><span class="tp_abstract_link"><a id="tp_abstract_sh_122" class="tp_show" onclick="teachpress_pub_showhide('122','tp_abstract')" title="Show abstract" style="cursor:pointer;">Abstract</a></span> | <span class="tp_resource_link"><a id="tp_links_sh_122" class="tp_show" onclick="teachpress_pub_showhide('122','tp_links')" title="Show links and resources" style="cursor:pointer;">Links</a></span> | <span class="tp_bibtex_link"><a id="tp_bibtex_sh_122" class="tp_show" onclick="teachpress_pub_showhide('122','tp_bibtex')" title="Show BibTeX entry" style="cursor:pointer;">BibTeX</a></span></p><div class="tp_bibtex" id="tp_bibtex_122" style="display:none;"><div class="tp_bibtex_entry"><pre>@article{Imran2026a,<br />
title = {From Steps to Sentiments: Cross-Domain Transfer Learning for Activity-Based Emotion Detection in Wearable IoT Systems},<br />
author = {Hamza Ali Imran and Qaiser Riaz and Kiran Hamza and Shaida Muhammad and Björn Krüger},<br />
doi = {10.1109/JIOT.2026.3666469},<br />
year  = {2026},<br />
date = {2026-02-20},<br />
urldate = {2026-02-20},<br />
journal = {IEEE Internet of Things Journal},<br />
pages = {1-1},<br />
abstract = {Context-aware, gait-based sentiment analysis and emotion perception is an emerging research area within Internet of Things (IoT), aiming to make smart systems more intuitive and responsive. Recognizing emotions from wearable inertial sensor data is challenging due to subtle and compound emotional cues, variability across individuals and contexts, and limited, imbalanced datasets. To address these challenges, we propose Jazbat-Net, a lightweight neural network that leverages Transfer Learning (TL). The model is first trained on a large-scale, publicly available multi-activity dataset collected using wearable inertial sensors, and then retrained on a multi-class emotion dataset, effectively transferring knowledge from the pretraining phase. We evaluate Jazbat-Net with and without TL, across both smartwatch and smartphone based data, and for input dimensions ranging from 1D to 6D. The best results are achieved when pretrained on smartphone-based activity data and retrained on smartphone-based emotion data using a 1D input size. The proposed model attains an average classification accuracy of 95%, with a precision score of 95%, a recall score of 97%, and an F1-score of 96%. Moreover, Jazbat-Net achieves a low theoretical time complexity and requires only ≈ 6.96 M Multiply–Accumulate Operations (MACs), which is about 95% fewer computations than the previous State-of-the-Art (SOTA) model. Its space complexity is also low, with a model size of only ≈ 110 KB and peak activation memory of ≈ 0.35 MB. On-device evaluation on a Xiaomi 13T smartphone demonstrates that Jazbat-Net achieves a median inference latency of only ≈ 90.96 ms with a TFLite 32-bit floating point precision (FP32) model size of just ≈ 0.158 MB, making it ≈ 20× smaller and ≈ 20% faster than the previous SOTA model while maintaining comparable accuracy.<br />
},<br />
keywords = {},<br />
pubstate = {published},<br />
tppubtype = {article}<br />
}<br />
</pre></div><p class="tp_close_menu"><a class="tp_close" onclick="teachpress_pub_showhide('122','tp_bibtex')">Close</a></p></div><div class="tp_abstract" id="tp_abstract_122" style="display:none;"><div class="tp_abstract_entry">Context-aware, gait-based sentiment analysis and emotion perception is an emerging research area within Internet of Things (IoT), aiming to make smart systems more intuitive and responsive. Recognizing emotions from wearable inertial sensor data is challenging due to subtle and compound emotional cues, variability across individuals and contexts, and limited, imbalanced datasets. To address these challenges, we propose Jazbat-Net, a lightweight neural network that leverages Transfer Learning (TL). The model is first trained on a large-scale, publicly available multi-activity dataset collected using wearable inertial sensors, and then retrained on a multi-class emotion dataset, effectively transferring knowledge from the pretraining phase. We evaluate Jazbat-Net with and without TL, across both smartwatch and smartphone based data, and for input dimensions ranging from 1D to 6D. The best results are achieved when pretrained on smartphone-based activity data and retrained on smartphone-based emotion data using a 1D input size. The proposed model attains an average classification accuracy of 95%, with a precision score of 95%, a recall score of 97%, and an F1-score of 96%. Moreover, Jazbat-Net achieves a low theoretical time complexity and requires only ≈ 6.96 M Multiply–Accumulate Operations (MACs), which is about 95% fewer computations than the previous State-of-the-Art (SOTA) model. Its space complexity is also low, with a model size of only ≈ 110 KB and peak activation memory of ≈ 0.35 MB. 
On-device evaluation on a Xiaomi 13T smartphone demonstrates that Jazbat-Net achieves a median inference latency of only ≈ 90.96 ms with a TFLite 32-bit floating point precision (FP32) model size of just ≈ 0.158 MB, making it ≈ 20× smaller and ≈ 20% faster than the previous SOTA model while maintaining comparable accuracy.<br />
</div><p class="tp_close_menu"><a class="tp_close" onclick="teachpress_pub_showhide('122','tp_abstract')">Close</a></p></div><div class="tp_links" id="tp_links_122" style="display:none;"><div class="tp_links_entry"><ul class="tp_pub_list"><li><i class="ai ai-doi"></i><a class="tp_pub_list" href="https://dx.doi.org/10.1109/JIOT.2026.3666469" title="Follow DOI:10.1109/JIOT.2026.3666469" target="_blank">doi:10.1109/JIOT.2026.3666469</a></li></ul></div><p class="tp_close_menu"><a class="tp_close" onclick="teachpress_pub_showhide('122','tp_links')">Close</a></p></div></div></div></div></div>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
