AI in Automated and Remote UX Evaluation: A Systematic Review (2014–2024)

Jia Liu

2025-01-01
Computer Science · Journal Article · Review

Abstract

This systematic literature review examines the integration of artificial intelligence (AI) into automated and remote usability and user experience (UX) evaluation methods. Synthesizing insights from 55 peer-reviewed articles published between 2014 and 2024, the review identifies key AI technologies, such as machine learning, large language models (LLMs), generative AI (GenAI), and ChatGPT, and their roles in enhancing UX evaluation practices. While these technologies contribute to behavior modeling, sentiment analysis, feedback generation, and user simulation, their increasing use also introduces critical challenges. AI models often function as opaque “black boxes,” raising concerns about transparency, hallucinated outputs, contextual misinterpretation, and data bias. The review underscores the need for explainable, human-in-the-loop AI systems, standardized evaluation frameworks, and responsible deployment practices. Specific implications for practice include integrating explainable AI (XAI) methods such as SHAP and counterfactual explanations to improve the transparency of UX insights; adopting domain adaptation and transfer learning to improve generalizability across platforms, demographics, and task contexts; and ensuring human oversight through human-in-the-loop mechanisms to mitigate risks such as hallucinations and contextual misinterpretation. This study offers a comprehensive foundation for both future research and informed adoption of AI technologies in UX workflows, supporting more efficient, data-driven, and human-centered evaluation strategies.
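The human-in-the-loop mechanism the abstract recommends can be illustrated with a minimal sketch. All names here (`Finding`, `triage`, the 0.8 threshold) are hypothetical illustrations, not from the reviewed studies: AI-generated UX findings whose confidence falls below a threshold are routed to a human reviewer instead of being accepted automatically, mitigating hallucination and misinterpretation risks.

```python
from dataclasses import dataclass


@dataclass
class Finding:
    """A single AI-generated UX evaluation finding (hypothetical schema)."""
    description: str
    confidence: float  # model's self-reported confidence, 0..1


def triage(findings, threshold=0.8):
    """Split findings into auto-accepted ones and ones escalated for
    human review -- a simple human-in-the-loop gate."""
    accepted, needs_review = [], []
    for f in findings:
        (accepted if f.confidence >= threshold else needs_review).append(f)
    return accepted, needs_review


findings = [
    Finding("Checkout button has insufficient contrast", 0.93),
    Finding("Review text may be sarcastic, sentiment unclear", 0.41),
]
auto, review = triage(findings)
# The low-confidence sentiment finding is escalated to a human reviewer.
```

In a real pipeline the threshold would be calibrated against reviewer workload and the cost of a missed usability issue, rather than fixed in code.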