<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Posts by Melanie Maier - Mobile USTP MKL</title>
	<atom:link href="https://mobile.fhstp.ac.at/author/it251510/feed/" rel="self" type="application/rss+xml" />
	<link>https://mobile.fhstp.ac.at/author/it251510/</link>
	<description>The &#34;Mobile Research Group&#34; at the USTP, collecting everything here on the topics of design, UX and development of mobile applications</description>
	<lastBuildDate>Wed, 21 Jan 2026 10:42:12 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://mobile.fhstp.ac.at/wp-content/uploads/2025/03/icon-120x120.webp</url>
	<title>Posts by Melanie Maier - Mobile USTP MKL</title>
	<link>https://mobile.fhstp.ac.at/author/it251510/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>My Life Planning App: Semester One Progress</title>
		<link>https://mobile.fhstp.ac.at/allgemein/my-life-planning-app-semester-one-progress/</link>
		
		<dc:creator><![CDATA[Melanie Maier]]></dc:creator>
		<pubDate>Wed, 21 Jan 2026 10:42:12 +0000</pubDate>
				<category><![CDATA[Allgemein]]></category>
		<category><![CDATA[firebase]]></category>
		<category><![CDATA[h2]]></category>
		<category><![CDATA[ionic]]></category>
		<category><![CDATA[mobile]]></category>
		<category><![CDATA[nest]]></category>
		<category><![CDATA[React]]></category>
		<category><![CDATA[Web-App]]></category>
		<guid isPermaLink="false">https://mobile.fhstp.ac.at/?p=15292</guid>

					<description><![CDATA[<p>As my project in the first semester, I set out to develop a life planning application that allows users to organize their schedules, manage events and track important dates. The scope for this initial phase was clear: implement a functional login and registration system, create a working calendar where you can add and delete events <a class="read-more" href="https://mobile.fhstp.ac.at/allgemein/my-life-planning-app-semester-one-progress/">[...]</a></p>
<p>The post <a href="https://mobile.fhstp.ac.at/allgemein/my-life-planning-app-semester-one-progress/">My Life Planning App: Semester One Progress</a> appeared first on <a href="https://mobile.fhstp.ac.at">Mobile USTP MKL</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>As my project in the first semester, I set out to develop a life planning application that allows users to organize their schedules, manage events and track important dates. The scope for this initial phase was clear: implement a functional login and registration system, create a working calendar where you can add and delete events and create the design &amp; operations concept for the app.</p>



<h2 class="wp-block-heading">Tools and Technologies</h2>



<p>To achieve this, I used a combination of modern frontend and backend technologies:</p>



<ul class="wp-block-list">
<li><strong>Frontend:</strong> Ionic + React – for building a responsive, mobile-friendly user interface.</li>



<li><strong>Backend:</strong> NestJS – providing a structured, scalable server-side environment (see the sketch after this list).</li>



<li><strong>Database:</strong> H2 Database – a lightweight, in-memory database perfect for rapid development.</li>



<li><strong>Authentication:</strong> Firebase – managing user registration and login securely.</li>
</ul>
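
<p>To make this stack more concrete, here is a minimal sketch of what an events endpoint in the NestJS backend could look like. It is an illustrative assumption for this post (names like <code>EventsController</code> and <code>CreateEventDto</code> are made up), not the project&#8217;s actual code, and it uses an in-memory array in place of the real database layer.</p>

<pre class="wp-block-code"><code>// events.controller.ts – hypothetical sketch, not the actual project code
import { Body, Controller, Delete, Get, Param, Post } from '@nestjs/common';

// Assumed shape of a calendar event; the real DTO may differ.
class CreateEventDto {
  title: string;
  date: string; // ISO date string, e.g. "2026-01-21"
}

@Controller('events')
export class EventsController {
  // In-memory store standing in for the real persistence layer.
  private events: Array&lt;CreateEventDto &amp; { id: number }&gt; = [];
  private nextId = 1;

  @Get()
  findAll() {
    return this.events;
  }

  @Post()
  create(@Body() dto: CreateEventDto) {
    const event = { id: this.nextId++, ...dto };
    this.events.push(event);
    return event;
  }

  @Delete(':id')
  remove(@Param('id') id: string) {
    this.events = this.events.filter((e) =&gt; e.id !== Number(id));
    return { deleted: true };
  }
}</code></pre>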



<h2 class="wp-block-heading">Semester One Scope</h2>



<p>The goals for the first semester were:</p>



<ol class="wp-block-list">
<li><strong>User Authentication:</strong><br>I implemented login and registration using Firebase, ensuring that users can securely create accounts and access the app (see the sketch after this list).</li>



<li><strong>Calendar Functionality:</strong><br>The app includes a fully functional calendar where users can <strong>add and delete events</strong>. The frontend and backend are already connected, and event indicators show which days have events saved. While the indicators are only partially complete, they already provide visual cues in the calendar.</li>



<li><strong>Design and Operations Concept:</strong><br>I developed a design and operations concept to guide future development. While the app’s visual design is not yet finalized, the underlying structure is ready to support it.</li>
</ol>
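
<p>As a rough illustration of the authentication goal above, the following is a minimal sketch using the Firebase Web SDK&#8217;s email/password flow. The config values are placeholders and error handling is omitted; the actual project code may look different.</p>

<pre class="wp-block-code"><code>// auth.ts – hypothetical sketch of the Firebase login/registration flow
import { initializeApp } from 'firebase/app';
import {
  createUserWithEmailAndPassword,
  getAuth,
  signInWithEmailAndPassword,
} from 'firebase/auth';

// Placeholder config; a real app uses its own Firebase project keys.
const app = initializeApp({ apiKey: '...', authDomain: '...', projectId: '...' });
const auth = getAuth(app);

// Create a new account with email and password.
export async function register(email: string, password: string) {
  const cred = await createUserWithEmailAndPassword(auth, email, password);
  return cred.user;
}

// Sign an existing user in.
export async function login(email: string, password: string) {
  const cred = await signInWithEmailAndPassword(auth, email, password);
  return cred.user;
}</code></pre>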



<h2 class="wp-block-heading">Going Beyond the Initial Scope</h2>



<p>In addition to the semester goals, I implemented several features that extend the app’s functionality:</p>



<ul class="wp-block-list">
<li><strong>Edit Events:</strong> Users can now update existing events directly in the calendar.</li>



<li><strong>Navigation Bar:</strong> A fully implemented navigation bar allows seamless movement between different parts of the app.</li>



<li><strong>Indicators for Events:</strong> The calendar partially highlights days with saved events, providing a quick visual overview.</li>
</ul>



<h2 class="wp-block-heading">What’s Next</h2>



<p>While the first semester focused on functionality, the final design and advanced features are planned for upcoming phases. Future improvements will include:</p>



<ul class="wp-block-list">
<li>Completing the event indicators in the calendar.</li>



<li>Finalizing the app’s visual design to match the conceptual layout.</li>



<li>Adding additional features to enhance user experience and usability.</li>
</ul>



<p>Overall, the first semester successfully laid the foundation for a robust, scalable life planning application. With both front- and backend connected and key features implemented, the project is well on its way to becoming a full-fledged productivity tool.</p>
<p>The post <a href="https://mobile.fhstp.ac.at/allgemein/my-life-planning-app-semester-one-progress/">My Life Planning App: Semester One Progress</a> appeared first on <a href="https://mobile.fhstp.ac.at">Mobile USTP MKL</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>SOTA &#124; State of the art of Multimodal Mobile Stress Detection</title>
		<link>https://mobile.fhstp.ac.at/allgemein/sota-state-of-the-art-of-multimodal-mobile-stress-detection/</link>
		
		<dc:creator><![CDATA[Melanie Maier]]></dc:creator>
		<pubDate>Wed, 21 Jan 2026 10:22:19 +0000</pubDate>
				<category><![CDATA[Allgemein]]></category>
		<category><![CDATA[mobile]]></category>
		<category><![CDATA[multimodal]]></category>
		<category><![CDATA[SOTA]]></category>
		<category><![CDATA[stressdetection]]></category>
		<guid isPermaLink="false">https://mobile.fhstp.ac.at/?p=15234</guid>

					<description><![CDATA[<p>Evaluating Multimodal Sensor Fusion, Machine Learning Models and&#160;Mobile Feasibility in Stress Detection Systems&#160; Abstract &#160;Over the last few years, mobile and wearable stress detection has evolved rapidly. This is due to the maturing of sensor technology, machine learning (ML) and multimodal fusion strategies. Today&#8217;s fast-paced world critically increases stress-induced illnesses, so stress detection becomes more <a class="read-more" href="https://mobile.fhstp.ac.at/allgemein/sota-state-of-the-art-of-multimodal-mobile-stress-detection/">[...]</a></p>
<p>The post <a href="https://mobile.fhstp.ac.at/allgemein/sota-state-of-the-art-of-multimodal-mobile-stress-detection/">SOTA | State of the art of Multimodal Mobile Stress Detection</a> appeared first on <a href="https://mobile.fhstp.ac.at">Mobile USTP MKL</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading">Evaluating Multimodal Sensor Fusion, Machine Learning Models and&nbsp;Mobile Feasibility in Stress Detection Systems&nbsp;</h2>



<h3 class="wp-block-heading">Abstract</h3>



<p>Over the last few years, mobile and wearable stress detection has evolved rapidly. This is due to the maturing of sensor technology, machine learning (ML) and multimodal fusion strategies. Today’s fast-paced world critically increases stress-induced illnesses, so stress detection becomes more and more important. This State-of-the-Art article critically compares 11 recent and influential papers (2022-2025) about multimodal stress detection.&nbsp;</p>



<p>The analysis examines datasets, preprocessing strategies, fusion architecture, machine learning approaches, evaluation protocols and the extent, to which current models are feasible for real-world mobile deployment. Findings reveal strong progress in multimodal fusion and deep learning, but limited attention to on-device constraints, poor cross-subject generalization, and an overreliance on laboratory datasets. Clear research gaps and design recommendations are identified to guide future work toward robust, scalable, and ecologically valid mobile stress-detection systems.&nbsp;</p>



<h2 class="wp-block-heading">&nbsp;<strong>KEYWORDS&nbsp;</strong></h2>



<p>Stress Detection, Mobile, Machine Learning, Sensors, Mobile Feasibility&nbsp;</p>



<h2 class="wp-block-heading"><strong>1 Introduction&nbsp;</strong></h2>



<p>Stress is a multidimensional physiological and psychological response involving changes in the autonomic nervous system, endocrine activity, metabolism, behavior, cognition and movement. Today, wearables such as smartwatches enable continuous monitoring of the human physiological state (Zhao, Masood, &amp; Ning, 2025). Wearables and mobile systems have proven to be promising platforms for continuous stress monitoring due to high user acceptance, broad availability and the increasing fidelity of embedded sensors.&nbsp;</p>



<p>However, mobile stress detection in real-world settings remains challenging. Signals from wearable sensors are easily affected by motion artifacts, environmental conditions, sensor placement variability and individual physiological differences. Approaches that rely on a single sensor, such as PPG (which detects changes in blood volume using a pulse oximeter) or EDA (which measures the electrical conductivity of the skin), generally struggle with robustness under real-world conditions.&nbsp;</p>



<p>Multimodal stress detection attempts to resolve this by combining heterogeneous sensor modalities (e.g. PPG, EDA, ACC, Temp). The resulting sensor fusion pipelines can capture complementary aspects of the stress response, making them more reliable and generalizable. </p>



<p>This SOTA analyzes eleven key papers from 2022-2025, comparing their multimodality, ML models, fusion techniques, dataset characteristics and mobile feasibility. The goal is to build a structured understanding of the field’s strengths, limitations and future directions.&nbsp;</p>



<h2 class="wp-block-heading"><strong>2 Multimodal Datasets and Data Characteristics&nbsp;</strong></h2>



<p>Multimodal datasets are datasets that contain multiple types of data (modalities) collected from different sources, in this case different sensors. Instead of relying on one kind of information, multimodal datasets combine several complementary data types to provide a richer and more robust understanding of whatever is studied. Each analyzed paper was based on either an established or a custom dataset.&nbsp;</p>



<h3 class="wp-block-heading"><strong>2.1 WESAD as the benchmark&nbsp;</strong></h3>



<p>Five of the reviewed papers make use of the WESAD dataset. It’s a laboratory-based multimodal dataset capturing EDA, ECG, BVP/PPG, accelerometer data, respiration, EMG and skin temperature during a Trier Social Stress Test (TSST) scenario. </p>



<p>It provides a broad set of sensor data connected to stress and therefore allows for recognition of stress in individuals using a combination of data inputs.&nbsp;</p>



<p>Despite its popularity, WESAD has key limitations:&nbsp;</p>



<ul class="wp-block-list">
<li>Only 15 subjects </li>



<li>Heavily lab-controlled </li>



<li>Limited ecological validity </li>
</ul>



<p>Nevertheless, its multimodal nature makes it valuable for fusion research.&nbsp;</p>



<h3 class="wp-block-heading"><strong>2.2 Rare real-world datasets&nbsp;</strong></h3>



<p>Only two studies (Islam &amp; Washington, 2023; Darwish, et al., 2025) include real-life data. These capture stress under natural conditions (e.g. daily life, workload), but suffer from event uncertainty, self-report noise and device variability.&nbsp;</p>



<h3 class="wp-block-heading"><strong>2.3 Emerging multimodal datasets&nbsp;</strong></h3>



<p>A major 2025 contribution is EmpathicSchool (Hosseini, et al., 2025), a large multimodal dataset. It combines:&nbsp;</p>



<ul class="wp-block-list">
<li>Facial video </li>



<li>PPG, ECG, EDA </li>



<li>Accelerometer </li>



<li>Structured academic stressors </li>
</ul>



<p>This dataset expands the field toward a richer and visually enhanced multimodality, though it’s not yet optimized for mobile devices.&nbsp;</p>



<h2 class="wp-block-heading"><strong>3 Preprocessing and Feature Engineering&nbsp;</strong></h2>



<p>Preprocessing and feature engineering are two essential steps in machine learning that transform raw data from datasets into a form that machine learning models can understand and learn from effectively. They happen before training the model and often determine whether the model performs poorly or achieves the desired results. </p>



<p>The analyzed papers have shown that most studies follow a consistent preprocessing pipeline (a toy sketch follows the list):&nbsp;</p>



<ol class="wp-block-list">
<li>Signal cleaning through band-pass filtering or artifact removal: </li>



<li>Normalization: </li>



<li>Segmentation (typical window sizes 30-60 seconds) </li>



<li>Feature extraction: Physiological: HRV metrics, EDA peaks, spectral features </li>



<li>Behavioral: ACC-derived activity levels </li>



<li>Visual: CNN-extracted features </li>



<li>Contextual: metadata, event labels </li>



<li></li>
</ol>
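
<p>To make steps 2 to 4 more tangible, here is a toy sketch that normalizes a single signal, segments it into fixed windows and derives a few simple statistical features. It is a simplified illustration of the general pipeline described above, with made-up values; it is not code from any of the reviewed papers.</p>

<pre class="wp-block-code"><code>// Toy preprocessing sketch: normalization, windowing, feature extraction.
// Real pipelines would first remove motion artifacts and filter the signal.

const rawPpg: number[] = Array.from({ length: 960 }, () =&gt; Math.random());

// Step 2: z-score normalization of the whole signal.
function zScoreNormalize(signal: number[]): number[] {
  const mean = signal.reduce((a, b) =&gt; a + b, 0) / signal.length;
  const std = Math.sqrt(
    signal.reduce((a, b) =&gt; a + (b - mean) ** 2, 0) / signal.length,
  );
  return signal.map((v) =&gt; (v - mean) / (std || 1));
}

// Step 3: segmentation into non-overlapping windows,
// e.g. 60 s at a 4 Hz sampling rate = 240 samples per window.
function segment(signal: number[], windowSize: number): number[][] {
  const windows: number[][] = [];
  for (let i = 0; i + windowSize &lt;= signal.length; i += windowSize) {
    windows.push(signal.slice(i, i + windowSize));
  }
  return windows;
}

// Step 4: hand-crafted features per window (mean, deviation, peak count).
function extractFeatures(window: number[]) {
  const mean = window.reduce((a, b) =&gt; a + b, 0) / window.length;
  const std = Math.sqrt(
    window.reduce((a, b) =&gt; a + (b - mean) ** 2, 0) / window.length,
  );
  let peaks = 0;
  for (let i = 1; i &lt; window.length - 1; i++) {
    if (window[i] &gt; window[i - 1] &amp;&amp; window[i] &gt; window[i + 1]) peaks++;
  }
  return { mean, std, peaks };
}

const features = segment(zScoreNormalize(rawPpg), 240).map(extractFeatures);</code></pre>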



<p>A recurring issue is inconsistent preprocessing documentation, which reduces transparency and reproducibility. Only a few papers (e.g. (Md Santo, et al., 2025), (Zhao, Masood, &amp; Ning, 2025)) describe their pipelines in sufficient detail for replication.&nbsp;</p>



<h2 class="wp-block-heading"><strong>4 Multimodal Fusion Strategies&nbsp;</strong></h2>



<p>Fusion strategies define how different modalities are combined to yield a stress prediction. Three main paradigms dominate.&nbsp;</p>



<h3 class="wp-block-heading"><strong>4.1 Early Fusion&nbsp;</strong></h3>



<p>Early fusion means that features or raw data are concatenated before feeding them to a model.&nbsp;</p>



<p>Advantages: simple, efficient&nbsp;</p>



<p>Disadvantages: sensitive to missing modalities&nbsp;</p>



<p>Used in:&nbsp;</p>



<ol class="wp-block-list">
<li>TEANet (Md Santo, et al., 2025) </li>



<li>Image-encoding CNN models (Ghosh, Kim, Ijaz, Singh, &amp; Mahmud, 2022) </li>
</ol>



<h3 class="wp-block-heading"><strong>4.2 Late Fusion&nbsp;</strong></h3>



<p>Late fusion means that each modality is processed independently and then predictions are combined.&nbsp;</p>



<p>Advantages: robust to missing or weak signals&nbsp;</p>



<p>Disadvantages: limited modeling of cross-modality relationships&nbsp;</p>



<p>Used in:&nbsp;</p>



<ul class="wp-block-list">
<li>From lab to real-life (Darwish, et al., 2025) </li>



<li>EmpathicSchool Dataset (Hosseini, et al., 2025) </li>
</ul>



<h3 class="wp-block-heading"><strong>4.3 Hybrid / Cross-Modality Fusion&nbsp;</strong></h3>



<p>In hybrid or cross-modality fusion, ML models learn relationships between modalities through different methods, e.g.:&nbsp;</p>



<ul class="wp-block-list">
<li>Transformer cross-attention (Oliver &amp; Dakshit, 2025) </li>



<li>Privileged modality learning (Zhao, Masood, &amp; Ning, 2025) </li>



<li>Context-aware adaptive fusion (Rashid, Mortlock, &amp; Al Faruque, 2023) </li>
</ul>



<p>These methods achieve the best results but also have the highest computational cost, making mobile deployment difficult without compression techniques.&nbsp;</p>
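
<p>As a purely conceptual sketch (not taken from any of the reviewed papers), the difference between early and late fusion comes down to where the combination happens: before a single classifier sees the data, or after per-modality classifiers have made their predictions. The toy classifiers below are assumptions for illustration.</p>

<pre class="wp-block-code"><code>// Conceptual contrast between early and late fusion (toy classifiers).
type Features = number[];
type Classifier = (x: Features) =&gt; number; // returns a stress probability

// Early fusion: concatenate per-modality features, then classify once.
function earlyFusion(ppg: Features, eda: Features, clf: Classifier): number {
  return clf([...ppg, ...eda]);
}

// Late fusion: classify each modality separately, then combine decisions.
function lateFusion(
  ppg: Features, eda: Features,
  ppgClf: Classifier, edaClf: Classifier,
): number {
  return (ppgClf(ppg) + edaClf(eda)) / 2; // simple average of predictions
}</code></pre>

<p>Hybrid approaches sit between these extremes and learn the combination itself, which is precisely what makes them heavier to run on-device.</p>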



<h2 class="wp-block-heading"><strong>5 Machine Learning Model Landscape&nbsp;</strong></h2>



<h3 class="wp-block-heading"><strong>5.1 Classical ML&nbsp;</strong></h3>



<p>Classical machine learning models are for example Random Forests, SVMs and logistic regression. They:&nbsp;</p>



<ul class="wp-block-list">
<li>Have low computational cost </li>



<li>Depend on hand-crafted features and </li>



<li>Are less effective on high-dimensional multimodal signals </li>
</ul>



<p>An example of such a model is the Global HRV + RF model (Dahal, Bogue-Jimenez, &amp; Doblas, 2023). It demonstrates high mobile feasibility but low multimodal richness.&nbsp;</p>



<h3 class="wp-block-heading"><strong>5.2 Deep Learning Models&nbsp;</strong></h3>



<p>The analyzed studies have shown different sorts of deep learning models:&nbsp;</p>



<p><em>5.2.1 CNNs and CNN-LSTM hybrids. </em>A CNN (Convolutional Neural Network) is a deep learning model originally developed for images but widely used for signal processing. CNN-LSTM combines that with a Long Short-Term Memory network for modeling long-term time dependencies. That means the CNN captures local, short-term patterns while the LSTM captures temporal evolution across windows. </p>



<p>Strength: automatic feature extraction&nbsp;</p>



<p>Weakness: mobile computational limitations&nbsp;</p>



<p><em>5.2.2 Transformers. </em>Transformers are a deep learning architecture designed to handle sequences of data (like language, signals, video, sensor time series). They learn global structure, while CNNs learn local features and LSTMs learn sequential dependencies. They outperform RNNs and CNNs in cross-modality modeling (Oliver &amp; Dakshit, 2025). They are based on self-attention, a mechanism that allows the model to look at any part of the input at any time, and parallel processing, which lets them process all time steps simultaneously rather than one by one. This makes them extremely fast on GPUs and very good for long signals, multimodal fusion, cross-modal attention (Oliver &amp; Dakshit, 2025) and complex temporal patterns. However, they have a high memory footprint and high inference latency, and are therefore unsuitable for on-device execution without quantization or pruning. </p>



<p><em>5.2.3 Self-Supervised Learning. </em>Self-Supervised Learning is an approach in which the model derives its own learning objectives from the data in order to learn useful representations, without relying on costly human-labeled data. It is particularly powerful when large amounts of unlabeled data are available, as is often the case with physiological or sensor-based stress data. Islam and Washington show that SSL pretraining significantly improves person-specific stress detection, reducing training data requirements (Islam &amp; Washington, 2023). </p>



<p><em>5.2.4 Autoencoders. </em>Autoencoders are a class of neural networks designed to learn efficient representations of data, typically for purposes like dimensionality reduction, feature learning, or denoising. They are self-supervised in nature because they use the input data itself as the target for training, meaning no external labels are required. TEANet (Md Santo, et al., 2025) uses a transpose-enhanced autoencoder for feature compression, promising for low-resource mobile inference. </p>



<h2 class="wp-block-heading"><strong>6 Mobile Feasibility Analysis&nbsp;</strong></h2>



<p>Mobile feasibility refers to whether a stress-detection system (the machine-learning model together with its sensor setup) can realistically run on a mobile device such as:&nbsp;</p>



<ul class="wp-block-list">
<li>A smartphone </li>



<li>A smartwatch </li>



<li>A fitness tracker </li>



<li>An embedded IoT health device </li>
</ul>



<p>A crucial dimension often missing in published research is evaluation on actual mobile or wearable hardware.&nbsp;</p>



<p>From the papers analyzed:&nbsp;</p>



<ul class="wp-block-list">
<li>None provide full mobile benchmarking such as latency, battery and resource usage </li>



<li>Only three report any on-device considerations </li>
</ul>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td><strong>Paper / Study&nbsp;</strong></td><td><strong>Sensors Used for Training&nbsp;</strong></td><td><strong>Sensors Required at Deployment (Inference)&nbsp;</strong></td><td><strong>Comment&nbsp;</strong></td></tr><tr><td>PULSE (Zhao, Masood, &amp; Ning, 2025)&nbsp;</td><td>EDA, PPG/BVP, ECG, ACC, Temperature&nbsp;</td><td>PPG + ACC only (EDA used only during training)&nbsp;</td><td>Privileged knowledge transfer enables strong sensor reduction&nbsp;</td></tr><tr><td>TEANet (Md Santo, et al., 2025)&nbsp;</td><td>PPG/BVP, EDA, ACC&nbsp;</td><td>PPG/BVP + ACC&nbsp;</td><td>EDA used for reconstruction during training; not required at inference&nbsp;</td></tr><tr><td>Cross-Modality Transformer (Oliver &amp; Dakshit, 2025)&nbsp;</td><td>ECG, EDA, EMG, RESP, TEMP, ACC&nbsp;</td><td>All modalities required&nbsp;</td><td>No sensor reduction, computationally heavy&nbsp;</td></tr><tr><td>Individualized SSL (Islam &amp; Washington, 2023)&nbsp;</td><td>HRV (ECG/PPG), EDA, ACC&nbsp;</td><td>HRV + ACC&nbsp;</td><td>EDA optional; personalization reduces training burden&nbsp;</td></tr><tr><td>SELF-CARE (Rashid, Mortlock, &amp; Al Faruque, 2023)&nbsp;</td><td>PPG, EDA, ACC, contextual signals (GPS, phone logs)&nbsp;</td><td>PPG + ACC + contextual signals&nbsp;</td><td>EDA helpful but optional, high sensor complexity&nbsp;</td></tr><tr><td>From Lab to Real-Life (Darwish, et al., 2025)&nbsp;</td><td>PPG, EDA, ACC&nbsp;</td><td>PPG + ACC&nbsp;</td><td>Classical ML models, mobile-feasible&nbsp;</td></tr><tr><td>Image-Encoding CNN (Ghosh, Kim, Ijaz, Singh, &amp; Mahmud, 2022)&nbsp;</td><td>PPG, EDA, ECG&nbsp;</td><td>PPG + EDA + ECG&nbsp;</td><td>No reduction, image-encoding pipeline requires all modalities&nbsp;</td></tr><tr><td>Global HRV + RF (Dahal, Bogue-Jimenez, &amp; Doblas, 2023)&nbsp;</td><td>ECG-derived HRV features&nbsp;</td><td>ECG or PPG HRV only&nbsp;</td><td>Only true single-sensor solution&nbsp;</td></tr><tr><td>EmpathicSchool (Hosseini, et al., 2025)&nbsp;</td><td>Facial video, EDA, ECG, PPG, ACC&nbsp;</td><td>Video + physiological signals&nbsp;</td><td>Very sensor-intensive, not suitable for mobile deployment&nbsp;</td></tr><tr><td>Stressor Type Matters (Prajod, Mahesh, &amp; André, 2024)&nbsp;</td><td>ECG, EDA, ACC, RESP&nbsp;</td><td>All modalities required&nbsp;</td><td>Focus on generalization, no deployment optimization </td></tr></tbody></table></figure>



<p>Table 1: Training and Deployment Sensors </p>



<h3 class="wp-block-heading"><strong>6.1 Critical barriers&nbsp;</strong></h3>



<p>The analyzed studies reveal the following critical barriers to the mobile feasibility of the presented stress-detection systems:&nbsp;</p>



<ol class="wp-block-list">
<li>Large model sizes (Transformers, CNN-LSTM hybrids)</li>



<li>High inference latency for multimodal pipelines</li>



<li>Sensor synchronization issues in mobile settings</li>



<li>Energy consumption rarely measured</li>



<li>Multimodal dropout (missing modalities in real life)</li>
</ol>



<h3 class="wp-block-heading"><strong>6.2 Promising mobile-oriented techniques&nbsp;</strong></h3>



<p>There are still some promising mobile-oriented techniques:&nbsp;</p>



<ol class="wp-block-list">
<li>Model compression (quantization, pruning)</li>



<li>Teacher-student learning (Zhao, Masood, &amp; Ning,2025)</li>



<li>Lightweight multimodal autoencoders (Md Santo, et al.,2025)</li>



<li>Contextual gating (Rashid, Mortlock, &amp; Al Faruque,2023)</li>
</ol>



<p>These approaches are promising but still underexplored.&nbsp;</p>



<h2 class="wp-block-heading"><strong>7 Key Comparative Insights&nbsp;</strong></h2>



<p>Across the reviewed studies, five clear patterns emerge:&nbsp;</p>



<ol class="wp-block-list">
<li>Multimodal fusion consistently outperformsunimodal approaches, but its computational cost isrestrictive for mobile implementation</li>



<li>Cross-subject generalization remains weak, person-specific models still perform better</li>



<li>Transformer architectures lead in accuracy, butremain unsuitable for real-time mobile inferencewithout compression</li>



<li>Real-world datasets are severely lacking, leading tolimiting ecological validity</li>



<li>Mobile feasibility is the largest research gap, almostentirely unaddressed in current literature</li>
</ol>



<h2 class="wp-block-heading"><strong>8 Research Gaps and Future Directions&nbsp;</strong></h2>



<ol class="wp-block-list">
<li>Multimodal real-world datasets. The field urgentlyneeds datasets collected under natural conditions, withmotion artifacts, lighting changes and real-worldstressors to test stress-detection systems under realconditions.</li>



<li>Standardization of preprocessing pipelines. Currentinconsistency makes cross-paper comparisonsunreliable.</li>



<li>Explicit modeling of modality dropout. In mobilecontexts, sensors fail frequently.</li>



<li>Energy-aware model design. Most publishedarchitectures are unfitting for wearables.</li>



<li>Fairness and personalization. Little work exists onhow stress-detection performance varies across gender,age, or physiology.</li>



<li>Cross-dataset generalizability. Transfer learning anddomain adaptation are required for practical real-worlddeployment.</li>
</ol>



<h2 class="wp-block-heading"><strong>9 Conclusion&nbsp;</strong></h2>



<p>Multimodal mobile stress detection is progressing rapidly, particularly in fusion architectures and learning models. However, the field remains dominated by laboratory datasets and computationally expensive models. Practical mobile feasibility, real-world robustness, subject variability and missing-modality sensitivity are insufficiently addressed. Future research must integrate lightweight multimodal models, mobile-optimized architectures and real-world datasets to build usable, scalable and ethically sound stress-detection systems.</p>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td><strong>Study&nbsp;</strong></td><td><strong>Dataset&nbsp;</strong></td><td><strong>Modalities Used&nbsp;</strong></td><td><strong>Fusion Type&nbsp;</strong></td><td><strong>ML Model&nbsp;</strong></td><td><strong>Metrics&nbsp;</strong></td><td><strong>Mobile Feasibility&nbsp;</strong></td><td><strong>Key Limitation&nbsp;</strong></td></tr><tr><td>PULSE (Zhao, Masood, &amp; Ning, 2025)&nbsp;</td><td>WESAD (and derivatives)&nbsp;</td><td>EDA, ECG, BVP/PPG, ACC, Temperature&nbsp;</td><td>Privileged modality (teacher-student) fusion&nbsp;</td><td>Deep Learning (teacher-student privileged knowledge transfer)&nbsp;</td><td>Accuracy, F1-score&nbsp;</td><td>Medium (model compression intended; not fully evaluated on-device)&nbsp;</td><td>Small lab dataset; EDA required during training; limited real-world evaluation&nbsp;</td></tr><tr><td>TEANet (Md Santo, et al., 2025)&nbsp;</td><td>WESAD&nbsp;</td><td>BVP/PPG, EDA, ACC&nbsp;</td><td>Early fusion (feature-level); autoencoder compression&nbsp;</td><td>Transpose-enhanced Autoencoder (DL)&nbsp;</td><td>Accuracy, F1-score, Kappa&nbsp;</td><td>Medium (feature compression promising; no full mobile benchmark)&nbsp;</td><td>Limited reporting on multimodality and energy/latency metrics&nbsp;</td></tr><tr><td>Cross-Modality Investigation (Oliver &amp; Dakshit, 2025)&nbsp;</td><td>WESAD&nbsp;</td><td>ECG, EDA, EMG, RESP, TEMP, ACC&nbsp;</td><td>Attention-based cross-modality fusion (Transformer)&nbsp;</td><td>Transformer (cross-modal)&nbsp;</td><td>Accuracy, F1-score&nbsp;</td><td>Low (high compute &amp; memory; not mobile-friendly)&nbsp;</td><td>High computational cost; lacks on-device evaluation&nbsp;</td></tr><tr><td>Individualized SSL (Islam &amp; Washington, 2023)&nbsp;</td><td>Custom wearable + phone (real-world)&nbsp;</td><td>HRV (from PPG/ECG), EDA, ACC&nbsp;</td><td>Late fusion (separate encoders, combined features)&nbsp;</td><td>Self-Supervised Learning encoder + classifier</td><td>F1-score, ROC-AUC&nbsp;</td><td>Medium (personalized models reduce data needs; mobile deployment possible but not fully demonstrated)&nbsp;</td><td>Less generalizable across users; requires SSL pretraining </td></tr><tr><td>SELF-CARE (Rashid, Mortlock, &amp; Al Faruque, 2023) <br></td><td>Custom context-aware dataset <br></td><td>PPG, EDA, ACC, contextual sensors (phone logs/GPS) <br></td><td>Hybrid/adaptive fusion (context-aware gating) <br></td><td>DNN + context module (hybrid) <br></td><td>Accuracy, F1-score <br></td><td>Accuracy, F1-score <br>Medium (adaptive methods promising; requires many sensors) </td><td>High system complexity; many sensors reduce deployability </td></tr><tr><td>From Lab to Real-Life (Darwish, et al., 2025) <br></td><td>Wearable study (lab + real-world) <br></td><td>PPG, EDA, ACC <br></td><td>Late fusion (decision-level aggregation) <br></td><td>Classical ML (Random Forest, SVM) and simple ensembles <br></td><td>Accuracy (real-world vs lab comparisons) <br></td><td>High (simpler models validated in real-world; low latency/energy reported) </td><td>Decrease in accuracy in wild conditions; device variability <br></td></tr><tr><td>Image-encoding CNN (Prajod, Mahesh, &amp; André, 2024) <br></td><td>WESAD + other datasets <br></td><td>PPG, EDA, ECG (time series encoded as images) <br></td><td>Early fusion via image-encoding (GAF/Recurrence plots) </td><td>CNN on image-encoded signals <br></td><td>Accuracy <br></td><td>Low (image encoding + CNN pipeline is compute-heavy) 
<br></td><td>Complex pipeline; not optimized for on-device inference </td></tr><tr><td>Global HRV + RF (Dahal, Bogue-Jimenez, &amp; Doblas, 2023) <br></td><td>Custom HRV dataset (real-world) <br></td><td>HRV only (ECG-derived features) <br></td><td>Single-modality (no fusion) <br></td><td>Random Forest (classical ML) <br></td><td>Accuracy, F1-score <br></td><td>High (lightweight; mobile-ready) <br></td><td>Limited to HRV signal; lacks multimodal robustness <br></td></tr><tr><td>Recent Advances Review (Ghonge, Shukla, Pradeep, &amp; Solanki, 2025) </td><td>Multiple (review) <br></td><td>Multiple (physiological, contextual, visual) <br></td><td>Survey of fusion strategies (various) <br></td><td>Survey (various ML/DL approaches) <br></td><td>N/A (review) <br></td><td>N/A (review paper; discusses feasibility conceptually) <br></td><td>Not empirical; synthesizes literature only <br></td></tr><tr><td>EmpathicSchool (Hosseini, et al., 2025) </td><td>EmpathicSchool (new multimodal dataset) </td><td>Facial video, EDA, ECG, PPG, ACC <br></td><td>Late multimodal fusion (visual + physiological) <br></td><td>CNN (visual) + physiological models; late fusion <br></td><td>Accuracy, F1-score <br></td><td>Low (video processing heavy; not optimized for wearables) </td><td>High sensor cost and compute; limited mobile applicability <br></td></tr><tr><td>Stressor Type Matters (Prajod, Mahesh, &amp; André, 2024) </td><td>WESAD + multiple datasets (cross-dataset) </td><td>ECG, EDA, ACC, RESP (varies) <br></td><td>Analysis of modelling factors; mixed approaches <br></td><td>Mixed (classical ML + DL experiments) <br></td><td>Accuracy, F1-score (cross-dataset evaluation) <br></td><td>Low (study highlights generalization issues; no mobile eval) </td><td>Poor cross-dataset generalization; stressor sensitivity <br></td></tr></tbody></table></figure>



<p>Table 2: Overview of reviewed studies</p>



<h2 class="wp-block-heading"><strong>REFERENCES&nbsp;</strong></h2>



<p>[1] Dahal, K., Bogue-Jimenez, B., &amp; Doblas, A. (2023). Global Stress Detection Framework Combining a Reduced Set of HRV Features and Random Forest Model. Sensors, Basel, Switzerland. https://doi.org/10.3390/s23115220</p>



<p>[2] Darwish, B. A., Rehman, S. U., Sadek, I., Salem, N. M., Kareem, G., &amp; Mahmoud, L. N. (2025). From lab to real-life: A three-stage validation of wearable technology for stress monitoring. MethodsX. https://doi.org/10.1016/j.mex.2025.103205</p>



<p>[3] Ghonge, M., Shukla, V. K., Pradeep, N., &amp; Solanki, R. K. (2025). Recent Advances in Multimodal Deep Learning for Stress Prediction: Toward Cycle-Aware and Gender-Sensitive Health Analytics. https://doi.org/10.1051/epjconf/202534101059</p>



<p>[4] Ghosh, S., Kim, S., Ijaz, M. F., Singh, P. K., &amp; Mahmud, M. (2022). Classification of Mental Stress from Wearable Physiological Sensors Using Image-Encoding-Based Deep Neural Network. Biosensors. https://doi.org/10.3390/bios12121153</p>



<p>[5] Hosseini, M., Sohrab, F., Gottumukkala, R., Bhupatiraju, R. T., Katragadda, S., Raitoharju, J., . . . Gabbouj, M. (2025). A multimodal stress detection dataset with facial expressions and physiological signals. https://doi.org/10.1038/s41597-025-05812-0</p>



<p>[6] Islam, T., &amp; Washington, P. (2023). Individualized Stress Mobile Sensing Using Self-Supervised Pre-Training. Applied Sciences. https://doi.org/10.3390/app132112035</p>



<p>[7] Md Santo, A., Sapnil Sarker, B., Mohammod, A. M., Sumaiya, K., Manish, S., &amp; Chowdhury, M. (2025). TEANet: A Transpose-Enhanced Autoencoder Network for Wearable Stress Monitoring. Retrieved from https://arxiv.org/abs/2503.12657</p>



<p>[8] Oliver, E., &amp; Dakshit, S. (2025). Cross-Modality Investigation on WESAD Stress Classification. Retrieved from https://arxiv.org/abs/2502.18733</p>



<p>[9] Prajod, P., Mahesh, B., &amp; André, E. (2024). Stressor Type Matters! &#8212; Exploring Factors Influencing Cross-Dataset Generalizability of Physiological Stress Detection. https://doi.org/10.48550/arXiv.2405.09563</p>



<p>[10] Rashid, N., Mortlock, T., &amp; Al Faruque, M. A. (2023). Stress Detection using Context-Aware Sensor Fusion from Wearable Devices. Retrieved from https://arxiv.org/abs/2303.08215</p>



<p>[11] Zhao, Z., Masood, M., &amp; Ning, Y. (2025). PULSE: Privileged Knowledge Transfer from Electrodermal Activity to Low-Cost Sensors for Stress Monitoring. CA, USA. https://doi.org/10.48550/arXiv.2510.24058</p>
<p>The post <a href="https://mobile.fhstp.ac.at/allgemein/sota-state-of-the-art-of-multimodal-mobile-stress-detection/">SOTA | State of the art of Multimodal Mobile Stress Detection</a> appeared first on <a href="https://mobile.fhstp.ac.at">Mobile USTP MKL</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Predictive UI &#8211; Using AI to personalize user experience in real-time</title>
		<link>https://mobile.fhstp.ac.at/allgemein/predictive-ui-using-ai-to-personalize-user-experience-in-real-time/</link>
		
		<dc:creator><![CDATA[Melanie Maier]]></dc:creator>
		<pubDate>Wed, 21 Jan 2026 09:55:24 +0000</pubDate>
				<category><![CDATA[Allgemein]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[mobile]]></category>
		<category><![CDATA[UI-Design]]></category>
		<guid isPermaLink="false">https://mobile.fhstp.ac.at/?p=15280</guid>

					<description><![CDATA[<p>Hello dear blog readers! This blog post is about predictive UI, the use of AI to personalize user experience and therefore user interfaces in real-time, depending on user behavior. With AI gaining more and more interest among everyday users, and with the digital-first era we are living in right now, user expectations are evolving rapidly. Today, it&#8217;s <a class="read-more" href="https://mobile.fhstp.ac.at/allgemein/predictive-ui-using-ai-to-personalize-user-experience-in-real-time/">[...]</a></p>
<p>The post <a href="https://mobile.fhstp.ac.at/allgemein/predictive-ui-using-ai-to-personalize-user-experience-in-real-time/">Predictive UI &#8211; Using AI to personalize user experience in real-time</a> appeared first on <a href="https://mobile.fhstp.ac.at">Mobile USTP MKL</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Hello dear blog readers! This blog post is about predictive UI, the use of AI to personalize user experience and therefore user interfaces in real-time, depending on user behavior. </p>



<p>With AI gaining more and more interest among everyday users, and with the digital-first era we are living in right now, user expectations are evolving rapidly. Today, it&#8217;s no longer enough for user interfaces to <strong>respond</strong> to user interaction; they need to be able to <strong>anticipate</strong>, <strong>personalize</strong> and even <strong>predict</strong> user needs. This transformative shift in UI/UX design is driven by AI-powered personalization and predictive interfaces. </p>



<p>Predictive user interfaces do not only respond faster to user input, but actively anticipate it. By analyzing past interactions and behavioral patterns, AI-powered systems can pre-render interface elements before users explicitly request them. This proactive approach significantly reduces perceived latency and creates smoother, more seamless interactions, especially in complex or content-heavy applications.</p>



<h2 class="wp-block-heading">What Is AI-Powered Personalization?</h2>



<p>AI-driven personalization uses machine learning to tailor every aspect of a user interface to the individual user. This includes everything from content to layout. Based on behavior patterns, every element of the UI adapts in real time to preferences and context.</p>



<p id="c859">This kind of personalization is called <strong>hyper-personalization</strong>. Interfaces are now designed to operate at the level of the &#8220;user of one&#8221;. This means that the content, tone, layout and even imagery are uniquely for each individual. An example for such a hyper-personalized interface would be Netflix. Netflix tailors their streaming suggestions and even their thumbnails to each user, providing a unique user interface that fits perfectly to the users personal preferences.</p>



<figure class="wp-block-image size-full"><img fetchpriority="high" decoding="async" width="2732" height="2048" src="https://mobile.fhstp.ac.at/wp-content/uploads/2026/01/IMG_3911.jpg" alt="Highly customized Netflix interface" class="wp-image-15283" srcset="https://mobile.fhstp.ac.at/wp-content/uploads/2026/01/IMG_3911.jpg 2732w, https://mobile.fhstp.ac.at/wp-content/uploads/2026/01/IMG_3911-1536x1151.jpg 1536w, https://mobile.fhstp.ac.at/wp-content/uploads/2026/01/IMG_3911-2048x1535.jpg 2048w" sizes="(max-width: 2732px) 100vw, 2732px" /><figcaption class="wp-element-caption">Highly customized Netflix interface</figcaption></figure>



<p>This level of personalization goes far beyond visual customization. Streaming platforms like Netflix also use predictive models to preload and buffer content they expect a user to watch next. By anticipating user behavior at the system level, these interfaces feel instant and highly responsive, reinforcing the impression of an experience that is tailored not only to preferences, but also to performance expectations.</p>



<h2 class="wp-block-heading">Predictive Interfaces: Designing What Comes Next</h2>



<p>While UI designers always aim to create (static) interfaces that make interaction as easy as possible, predictive interfaces go a step further: using AI, they aim to forecast user intent and proactively adapt to it.</p>



<p>Traditional user interfaces are usually event-driven: the system waits for an explicit action before reacting. Predictive interfaces, in contrast, rely on AI models to infer probable next steps and adapt proactively. Layouts, navigation structures, and interface elements can dynamically recalibrate based on predicted user intent, rather than remaining fixed until a user interaction occurs.</p>



<p id="2dc7">One example of forecasting user intent are smart suggestions and pre-filling input. Pre-filling includes predictive search bars and auto-filled forms. Most users are also already familiar with smart suggestions, such as context-aware menus that surface for example the most relevant options based on time of day, recent interactions or location.</p>



<p>These predictions are often based on sequential interaction patterns, such as frequently used navigation paths or repeated task flows. By learning from session-based behavior, predictive interfaces can surface the most relevant options at the right moment, reducing cognitive load and helping users reach their goals faster and with fewer interactions.</p>
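
<p>As a toy illustration of this session-based idea (my own sketch, not a description of any specific product), a first-order model can simply count which action tends to follow which, and suggest the most frequent follow-up:</p>

<pre class="wp-block-code"><code>// Toy first-order next-action predictor based on observed transitions.
class NextActionPredictor {
  private counts = new Map&lt;string, Map&lt;string, number&gt;&gt;();

  // Record that `next` followed `current` in a session.
  observe(current: string, next: string): void {
    const row = this.counts.get(current) ?? new Map&lt;string, number&gt;();
    row.set(next, (row.get(next) ?? 0) + 1);
    this.counts.set(current, row);
  }

  // Return the most frequently observed follow-up action, if any.
  predict(current: string): string | undefined {
    const row = this.counts.get(current);
    if (!row) return undefined;
    return [...row.entries()].sort((a, b) =&gt; b[1] - a[1])[0][0];
  }
}

const predictor = new NextActionPredictor();
predictor.observe('open-calendar', 'add-event');
predictor.observe('open-calendar', 'add-event');
predictor.observe('open-calendar', 'share');
predictor.predict('open-calendar'); // =&gt; 'add-event'</code></pre>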



<p>One real-world case of such interface behavior involved an e-commerce platform that dynamically re-arranged its navigation menu during high-traffic periods, yielding significantly higher click-through rates.</p>



<p>Another concept of predictive interfaces is Zero-Click Predictive UI. It&#8217;s changing the rules of interaction. Instead of guiding users through multiple clicks or menus, modern AI-powered websites can anticipate user needs and deliver content, recommendations, or actions instantly—often before the user even asks. In practice, this means the site predicts what a visitor is looking for using data like browsing behavior, location, or even voice and gesture input, and presents the right information immediately. This approach reduces friction, shortens user journeys, increases engagement, and marks a new era in human-centered digital experiences.</p>



<h2 class="wp-block-heading">Technology &amp; Research</h2>



<p id="1a27">Modern adaptive and predictive user interfaces are being shaped by rapid advances in AI research and technology. Instead of static layouts, today’s interfaces can adjust themselves in real time based on who the user is, what they are trying to do, and how experienced they are. Adaptive User Interfaces, for example, can simplify the experience for newcomers by hiding advanced features, while power users might see shortcuts or navigation structures tailored to their workflows.</p>



<p>Many of these adaptive behaviors are powered by machine learning models that analyze user interactions over time. Techniques such as behavioral pattern recognition, session-based learning, and feedback-driven optimization allow interfaces to continuously improve. Reinforcement learning, in particular, enables systems to experiment with small UI changes and learn which design decisions lead to higher engagement or better usability outcomes.</p>
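
<p>A minimal sketch of that reinforcement-learning idea, assuming a simple epsilon-greedy bandit that picks between UI variants and learns from engagement feedback (the variant names and reward scheme are made up for illustration):</p>

<pre class="wp-block-code"><code>// Epsilon-greedy bandit choosing between UI variants from reward feedback.
class UiBandit {
  private pulls: number[];
  private rewards: number[];

  constructor(private variants: string[], private epsilon = 0.1) {
    this.pulls = variants.map(() =&gt; 0);
    this.rewards = variants.map(() =&gt; 0);
  }

  // Mostly exploit the best-known variant, occasionally explore.
  choose(): number {
    if (Math.random() &lt; this.epsilon) {
      return Math.floor(Math.random() * this.variants.length);
    }
    const means = this.rewards.map((r, i) =&gt; r / (this.pulls[i] || 1));
    return means.indexOf(Math.max(...means));
  }

  // reward: e.g. 1 if the user engaged with the variant, 0 otherwise.
  update(variant: number, reward: number): void {
    this.pulls[variant]++;
    this.rewards[variant] += reward;
  }
}

const bandit = new UiBandit(['compact-menu', 'expanded-menu']);
const shown = bandit.choose();
bandit.update(shown, 1); // user clicked: positive engagement signal</code></pre>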



<p>Personalization plays a key role in this evolution. Recommender systems—powered by techniques such as collaborative filtering, embeddings, and hybrid AI models—analyze user behavior to surface content that feels relevant and timely. These systems are already familiar from streaming and e-commerce platforms, but similar ideas are now influencing how entire interfaces are structured and presented.</p>



<p>On the research side, reinforcement learning opens up new possibilities for interfaces that improve themselves over time. By continuously experimenting with small UI changes and measuring user engagement, these systems can learn which design decisions work best in different situations. This process is guided by predictive human–computer interaction models, allowing interfaces to adapt in a more informed and data-driven way.</p>



<p>Another important aspect of predictive interfaces is their ability to optimize performance in the background. By intelligently preloading and pre-rendering interface components that are likely to be needed next, AI-driven systems can minimize unnecessary computations and avoid redundant rendering cycles. This not only improves responsiveness, but also makes more efficient use of system resources.</p>
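
<p>One common way to realize this kind of preloading in a React code base is to prefetch a lazily loaded component as soon as a predictor (or even just a hover) signals likely use. Below is a minimal sketch assuming a React + TypeScript setup; the <code>./StatsView</code> module path is a placeholder.</p>

<pre class="wp-block-code"><code>// Prefetching a lazily loaded view when the user is likely to need it next.
import React, { lazy, Suspense, useState } from 'react';

// The dynamic import is shared so that React.lazy and the manual
// prefetch resolve to the same module chunk.
const loadStats = () =&gt; import('./StatsView'); // placeholder path
const StatsView = lazy(loadStats);

export function Dashboard() {
  const [showStats, setShowStats] = useState(false);
  return (
    &lt;div&gt;
      {/* Prefetch on hover: by the time the user clicks, the chunk is cached. */}
      &lt;button
        onMouseEnter={() =&gt; { void loadStats(); }}
        onClick={() =&gt; setShowStats(true)}
      &gt;
        Open stats
      &lt;/button&gt;
      {showStats &amp;&amp; (
        &lt;Suspense fallback={&lt;p&gt;Loading…&lt;/p&gt;}&gt;
          &lt;StatsView /&gt;
        &lt;/Suspense&gt;
      )}
    &lt;/div&gt;
  );
}</code></pre>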



<p id="1a27">Generative AI adds another powerful layer. Diffusion-based models can create personalized interface designs from simple inputs like text descriptions or rough sketches, then refine those designs through automated feedback loops. Finally, human–AI collaborative design agents, such as tools like “PrototypeAgent,” are beginning to support designers directly by translating intent into UI components through iterative, multi-agent workflows. Together, these technologies point toward a future where interfaces are not just designed once, but continuously learned, generated, and optimized.</p>



<h2 class="wp-block-heading">Design Principles and Ethical Considerations </h2>



<p>As AI becomes more deeply embedded in user experiences, strong design principles and ethical considerations are essential to ensure these systems remain helpful, trustworthy, and inclusive. One key element is the use of feedback loops that actively involve users in the evolution of AI-driven interfaces. By allowing people to rate, adjust, or refine AI suggestions, systems can better align their behavior with real user needs instead of making opaque decisions in the background.</p>



<p>Transparency is equally important. Users should be able to clearly recognize when and how AI is influencing the interface, whether through labeled suggestions, adaptive layouts, or automated recommendations. Making AI-driven actions visible—similar to how tools like Grammarly label AI-generated suggestions—helps build trust and sets appropriate expectations. Alongside transparency, user control must remain a central design goal. Rather than enforcing AI-generated decisions, interfaces should allow users to customize, override, or completely reject predictive suggestions and layout changes.</p>



<p>In addition to transparency and control, predictive interfaces must be resilient to incorrect assumptions. AI models can misinterpret user intent, which makes robust fallback mechanisms essential. When predictions fail, interfaces should gracefully revert to standard interaction patterns instead of forcing users into confusing or irreversible UI states.</p>



<p>Finally, ethical AI design requires a strong focus on bias mitigation and inclusivity. AI systems learn from data, and if that data is limited or unbalanced, personalization can quickly become skewed or unfair. Ensuring diverse training data and regularly evaluating outcomes across different user groups helps create experiences that are not only intelligent, but also equitable and accessible for everyone.</p>



<h2 class="wp-block-heading">Final Thoughts</h2>



<p>AI-powered personalization and predictive interfaces aren&#8217;t futuristic anymore. They are already widely used and, to some extent, even expected by users. As AI-powered personalization and predictive interfaces continue to evolve, user interfaces will no longer be static artifacts designed once and shipped. Instead, they will become adaptive systems that continuously learn, generate, and optimize themselves over time. This shift will not only redefine user expectations, but also fundamentally change how designers and developers think about creating truly human-centric digital experiences.</p>



<div class="wp-block-group"><div class="wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained">
<h2 class="wp-block-heading">Sources</h2>
</div></div>



<ul class="wp-block-list">
<li>https://medium.com/@harsh.mudgal_27075/ai-powered-personalization-predictive-interfaces-in-ui-ux-design-a16259916663</li>



<li>https://dev.to/raajaryan/advanced-ai-strategies-for-predictive-ui-component-rendering-in-react-3a01</li>



<li>https://www.fullstack.com/labs/resources/blog/ai-powered-user-interfaces-how-machine-learning-and-react-shape-web-apps</li>
</ul>



<p>The post <a href="https://mobile.fhstp.ac.at/allgemein/predictive-ui-using-ai-to-personalize-user-experience-in-real-time/">Predictive UI &#8211; Using AI to personalize user experience in real-time</a> appeared first on <a href="https://mobile.fhstp.ac.at">Mobile USTP MKL</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Print2Mobile &#124; Pa(w)ls &#8211; Pawsome new social media</title>
		<link>https://mobile.fhstp.ac.at/allgemein/print2mobile-pawls-pawsome-new-social-media/</link>
		
		<dc:creator><![CDATA[Melanie Maier]]></dc:creator>
		<pubDate>Sun, 19 Oct 2025 20:08:56 +0000</pubDate>
				<category><![CDATA[Allgemein]]></category>
		<category><![CDATA[Print-to-mobile]]></category>
		<category><![CDATA[QR-Code]]></category>
		<guid isPermaLink="false">https://mobile.fhstp.ac.at/?p=14990</guid>

					<description><![CDATA[<p>Have you ever moved to a new city and suddenly missed the presence of your childhood pet? Maybe you love animals but can’t have one in your apartment or you just wish you could meet people who share your love of animals. Idea That’s where Pa(w)ls comes in. The idea behind Pa(w)ls is to create a place where <a class="read-more" href="https://mobile.fhstp.ac.at/allgemein/print2mobile-pawls-pawsome-new-social-media/">[...]</a></p>
<p>The post <a href="https://mobile.fhstp.ac.at/allgemein/print2mobile-pawls-pawsome-new-social-media/">Print2Mobile | Pa(w)ls &#8211; Pawsome new social media</a> appeared first on <a href="https://mobile.fhstp.ac.at">Mobile USTP MKL</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Have you ever moved to a new city and suddenly missed the presence of your childhood pet? Maybe you love animals but can’t have one in your apartment or you just wish you could meet people who share your love of animals.</p>



<h2 class="wp-block-heading">Idea</h2>



<p>That’s where Pa(w)ls comes in.<br>The idea behind Pa(w)ls is to create a place where people can connect through animals &#8211; whether they want to meet pets from others, let people meet their own animals or visit animal shelters to spend time with furry friends.</p>



<p>To promote the concept, I also created a poster campaign. Each poster includes a QR code that directly links to the website, allowing people to quickly learn more about the concept and sign up. Matching stickers could be used to advertise Pa(w)ls around cities.</p>



<figure class="wp-block-gallery aligncenter has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex">
<figure class="wp-block-image size-full is-style-default"><img decoding="async" width="595" height="842" data-id="15001" src="https://mobile.fhstp.ac.at/wp-content/uploads/2025/10/Plakat_klein-2.jpg" alt="" class="wp-image-15001"/></figure>



<figure class="wp-block-image size-full"><img decoding="async" width="2372" height="3162" data-id="15007" src="https://mobile.fhstp.ac.at/wp-content/uploads/2025/10/IMG_4346-2-1.jpg" alt="" class="wp-image-15007" srcset="https://mobile.fhstp.ac.at/wp-content/uploads/2025/10/IMG_4346-2-1.jpg 2372w, https://mobile.fhstp.ac.at/wp-content/uploads/2025/10/IMG_4346-2-1-1152x1536.jpg 1152w, https://mobile.fhstp.ac.at/wp-content/uploads/2025/10/IMG_4346-2-1-1536x2048.jpg 1536w" sizes="(max-width: 2372px) 100vw, 2372px" /><figcaption class="wp-element-caption">Pa(w)ls Sticker</figcaption></figure>
</figure>



<h2 class="wp-block-heading">Implementation</h2>



<p>I designed and built the website prototype using Figma Sites, focusing on creating a soft, friendly and inviting design, something that reflects the emotional and community-driven nature of the idea. Using Figma Sites allowed me to design a fully responsive web application without coding, letting me concentrate on the design, the concept and a visually pleasing appearance across all devices. </p>



<p>The website allows users to learn more about this new kind of social media. It provides information about how it works, what the motivation behind it is and what benefits it brings for its users. </p>



<p>Pa(w)ls&#8217; visual identity revolves around bright turquoise tones, rounded shapes and a playful layout, emphasizing friendliness and comfort. The slogan &#8220;People. Pets. Pawsome memories.&#8221; is intended to captivate the reader and immediately convey a connection to a pet-related topic.</p>



<p>The poster design plays a key role: it visually introduces the project and makes it easy for people to access the website instantly by scanning the QR code with their phone. Paw prints on the poster and the website enhance the recognition value of the project.</p>



<p>Pa(w)ls is more than just a website concept &#8211; it’s about building connections and sharing joy, one paw at a time. The combination of design, emotional storytelling and accessibility aims to create a space where people and animals bring out the best in each other.</p>






<figure class="wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-2 is-layout-flex wp-block-gallery-is-layout-flex">
<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="1080" data-id="15009" src="https://mobile.fhstp.ac.at/wp-content/uploads/2025/10/Post_img_1-1.jpg" alt="" class="wp-image-15009" srcset="https://mobile.fhstp.ac.at/wp-content/uploads/2025/10/Post_img_1-1.jpg 1920w, https://mobile.fhstp.ac.at/wp-content/uploads/2025/10/Post_img_1-1-1536x864.jpg 1536w" sizes="auto, (max-width: 1920px) 100vw, 1920px" /></figure>
</figure>






<p>Link to the website: <a href="https://ginger-money-77817699.figma.site">https://ginger-money-77817699.figma.site</a> </p>



<div class="wp-block-buttons is-vertical is-content-justification-center is-nowrap is-layout-flex wp-container-core-buttons-is-layout-534da0b3 wp-block-buttons-is-layout-flex">
<div class="wp-block-button has-custom-width wp-block-button__width-50 is-style-fill"><a class="wp-block-button__link has-background has-medium-font-size has-text-align-center has-custom-font-size wp-element-button" href="https://ginger-money-77817699.figma.site" style="background-color:#00bccb" rel="https://ginger-money-77817699.figma.site">Visite Website</a></div>
</div>



<p>Sources:</p>



<ul class="wp-block-list">
<li>Cat image: <a href="https://unsplash.com/de/fotos/weisse-und-braune-langfellkatze-ZCHj_2lJP00">https://unsplash.com/de/fotos/weisse-und-braune-langfellkatze-ZCHj_2lJP00</a></li>



<li>Image in Section Pet Companions: <a href="https://unsplash.com/de/fotos/person-mit-goldring-in-dunklem-raum-XB_yndXE4ks">https://unsplash.com/de/fotos/person-mit-goldring-in-dunklem-raum-XB_yndXE4ks</a></li>



<li>Image in Section Pet Hosts: <a href="https://unsplash.com/de/fotos/getigerte-katze-beruhrt-die-handflache-einer-person-xulIYVIbYIc">https://unsplash.com/de/fotos/getigerte-katze-beruhrt-die-handflache-einer-person-xulIYVIbYIc</a></li>



<li>Image in Section Shelter Connections: <a href="https://unsplash.com/de/fotos/flachfokusfotografie-von-schwarzem-katzchen-On6bRQRn5lY">https://unsplash.com/de/fotos/flachfokusfotografie-von-schwarzem-katzchen-On6bRQRn5lY</a></li>



<li>Dog image: <a href="https://unsplash.com/de/fotos/weisses-und-braunes-langes-fell-grosser-hund-U3aF7hgUSrk">https://unsplash.com/de/fotos/weisses-und-braunes-langes-fell-grosser-hund-U3aF7hgUSrk</a></li>
</ul>



<p><br><em>Legal Notice:</em> The platform presented here does not represent a real or functioning service. All concepts, designs, texts and ideas shown on this website are the intellectual property of the creator and are protected under applicable copyright and intellectual property laws. Any reproduction, distribution, or use of this material without explicit permission from the author is strictly prohibited.</p>



<p>The post <a href="https://mobile.fhstp.ac.at/allgemein/print2mobile-pawls-pawsome-new-social-media/">Print2Mobile | Pa(w)ls &#8211; Pawsome new social media</a> appeared first on <a href="https://mobile.fhstp.ac.at">Mobile USTP MKL</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
