Project description
Current e-commerce chatbots often suffer from "conversational coldness" and high friction during complex tasks. This research shifts the focus from purely verbal logic to Visual Interaction Cues (VICs) and Avatar Embodiment.
The goal
Transform technical AI capabilities into predictable business value by examining how visual cues impact user performance and emotional comfort during order tracking and voucher redemption flows
Research Questions
Precision-Led Inquiry: Defining the Scope of AI UX.
RQ1 (Functional):
How do visual interaction cues and avatar embodiment influence task-oriented metrics such as completion time and error rate?
RQ2 (Emotional):
How do these elements influence emotion-oriented metrics such as satisfaction, trust, and emotional comfort?
Experiment Design
Experimental Framework
Conducted a rigorous 2×2 between-subjects experiment (visual interaction cues × avatar embodiment) with 32 participants, eight per cell, across four high-fidelity prototype variants (VA, VB, OA, OB)
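Balanced assignment to the four cells can be sketched as below. This is a hypothetical helper, not the study's actual tooling; the VA/VB/OA/OB labels come from the study, but the mapping of labels to factor levels is assumed.

```python
import random

# 2x2 cells from the study; which label maps to which factor combination is assumed
CONDITIONS = ["VA", "VB", "OA", "OB"]

def assign_participants(n_participants, conditions, seed=0):
    """Balanced random assignment: each condition receives exactly
    n_participants / len(conditions) participants."""
    assert n_participants % len(conditions) == 0
    slots = conditions * (n_participants // len(conditions))
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    rng.shuffle(slots)
    return dict(enumerate(slots, start=1))  # participant id -> condition

assignment = assign_participants(32, CONDITIONS)
```

With 32 participants this yields eight per cell, matching the design described above.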

Wrong Path Design
To avoid "process theater," I embedded a simulated system failure (undisclosed voucher entry error). This allowed for the authentic observation of user frustration and recovery behavior in "non-ideal" scenarios, a key senior-level signal for risk reduction
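The "Wrong Path" mechanism can be illustrated with a minimal sketch: a voucher gate that deterministically rejects the first submission regardless of validity, forcing users into error recovery. This is a hypothetical reconstruction; the actual study implemented the failure inside a Voiceflow prototype, and the class, codes, and messages below are invented for illustration.

```python
class VoucherGate:
    """Simulates the undisclosed system failure: the first voucher
    submission always fails, so recovery behavior can be observed."""

    def __init__(self, valid_codes):
        self.valid_codes = set(valid_codes)
        self.attempts = 0

    def redeem(self, code):
        self.attempts += 1
        if self.attempts == 1:
            # Injected failure: reject the first attempt unconditionally
            return (False, "Sorry, something went wrong. Please try again.")
        if code in self.valid_codes:
            return (True, "Voucher applied!")
        return (False, "That code is not valid.")

gate = VoucherGate({"SAVE10"})      # "SAVE10" is a made-up example code
first = gate.redeem("SAVE10")       # fails by design
second = gate.redeem("SAVE10")      # succeeds on retry
```

The deterministic first failure keeps the "non-ideal" scenario identical for every participant, which is what makes frustration and recovery comparable across conditions.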

Technical Realization: LLM-Integrated High-Fidelity Prototyping
Moving beyond static mockups, I architected a functional conversational system using Voiceflow integrated via API with GPT-3.5 Turbo.
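The integration pattern can be sketched as the request body a Voiceflow API step would send to OpenAI's chat completions endpoint. The system prompt, temperature, and helper name below are assumptions for illustration; only the endpoint shape and the `gpt-3.5-turbo` model name come from the project description.

```python
import json

def build_chat_payload(user_message, history=None, model="gpt-3.5-turbo"):
    """Builds the JSON body for POST https://api.openai.com/v1/chat/completions.
    The system prompt here is a placeholder, not the study's actual prompt."""
    messages = [{"role": "system",
                 "content": "You are a helpful e-commerce support assistant."}]
    messages += history or []  # prior turns, as {"role": ..., "content": ...} dicts
    messages.append({"role": "user", "content": user_message})
    return {"model": model, "messages": messages, "temperature": 0.7}

payload = build_chat_payload("Where is my order?")
body = json.dumps(payload)  # serialized request body for the API step
```

Keeping the payload construction in one place makes it easy to swap models or prompts per prototype variant without touching the conversation flow.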

The Validation Framework
Specialized Benchmarking
Utilized the CUQ (Chatbot Usability Questionnaire) developed by Ulster University
High-Fidelity Behavioral Tracking
Logged behavioral metrics such as task duration and error counts throughout the tracking and voucher redemption flows
Qualitative Insight Discovery
Conducted direct observation and Think Aloud protocols during the undisclosed "Wrong Path" scenario
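The CUQ benchmarking above can be sketched as a scoring function. This follows Ulster University's published scoring scheme (16 items on a 5-point scale, odd-numbered items positively worded, even-numbered items negatively worded, scaled to 0-100); the function itself is an illustrative helper, not the study's code.

```python
def cuq_score(responses):
    """Scores one Chatbot Usability Questionnaire response on a 0-100 scale.
    `responses` is a list of 16 ratings (1-5) in questionnaire order."""
    assert len(responses) == 16 and all(1 <= r <= 5 for r in responses)
    pos = sum(responses[0::2])  # items 1, 3, ..., 15 (positively worded)
    neg = sum(responses[1::2])  # items 2, 4, ..., 16 (negatively worded)
    # Normalize each half to 0-32, then scale the 0-64 total to 0-100
    return ((pos - 8) + (40 - neg)) * 1.5625

best = cuq_score([5, 1] * 8)     # ideal responses
neutral = cuq_score([3] * 16)    # all-neutral responses
```

Scoring each participant this way makes the four prototype variants directly comparable on a single usability scale.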
Key Results

+24% Task Efficiency
Systematic visual interaction cues (VICs) significantly shortened task completion time and reduced interaction friction.
-65% Error Reduction
Visual scaffolding sharply reduced user errors across the tracking and voucher redemption flows, including during recovery from the embedded failure.
Affective Nuance
Avatar embodiment significantly enhanced initial "welcomingness" (p=.008) but showed a "ceiling effect" on establishing long-term trust.
Cognitive Threshold
Identified a "visual noise" limit where excessive scaffolding without precise verbal guidance increases load during error recovery.
Retrospective: Learnings & Growth

Advanced Trust Calibration
Moving toward 7-point measurement scales and high-stakes tasks to detect nuanced shifts beyond the current "ceiling effect"
Adaptive Interaction Logic
Transitioning from static scaffolding to adaptive AI interfaces that dynamically balance visual richness with verbal guidance
Inclusive System Scaling
Prioritizing platform-neutral UI and demographic parity to ensure interaction robustness across both digital-native and novice populations






