Saffron Zainchkovskaya
Created on November 8, 2023
Recommender systems: exploring the effects of explainability on user trust
Saffron Zainchkovskaya
Dr Sunčica Hadžidedić
BACKGROUND
- Recommender Systems (RSs) are algorithms that provide personalised recommendations to users.
- RSs analyse patterns and infer preferences from historical user data.
- past purchases
- viewing histories
- ratings
- Commonly used in large products encountered on a daily basis:
- e-commerce ( Amazon )
- entertainment ( Spotify )
- content consumption ( TikTok )
ISSUES
- RSs operate as "black boxes"
- making it challenging to understand how recommendations are generated.
- The lack of explainability in RSs could lower user trust and satisfaction.
- RSs have far-reaching implications for people's daily lives, in domains such as job searching
- Central problem addressed by this project
- the lack of knowledge surrounding the impact of explainability in RSs on user trust
- Uniquely tackles a widespread gap in the literature: the effects of explainability on user trust are typically examined only within isolated domains
LACK OF TRANSPARENCY
USER TRUST
RSs HAVE HIGH IMPACT ON LIFE
Project Importance
- RSs are becoming widespread in various industries, affecting daily decisions
- Lack of explainability currently erodes user trust
- Critical needs in today's digital era:
- Responsible AI: Prioritising human-centric design to provide users with informed choices.
- The Challenge:
- Many models lack explainability, leading to user hesitation & lost trust.
RSs USED OFTEN & INFLUENCE US DAILY
USERS HESITATE TO RELY ON RSs
VITAL TO BOTH USERS AND ORGANISATIONS
MOTIVATION
- Motivation behind this project:
- Addressing an unsolved challenge:
- Multiple RS models lack clear explainability.
- Understanding the impact of explainability on the end user at a deeper level.
- WHY
- By addressing user trust concerns, we can enhance user satisfaction and system adoption
- benefitting both users and organisations.
RESEARCH QUESTION
To what extent does the integration of explainability into a recommendation system enhance the level of user trust, spanning both high and low-impact domains?
How It Will Be Addressed
- Develop 4 hybrid switching RSs
- Content-based filtering combined with collaborative filtering [1]
- For each impact level (low/high)
- 1 base and 1 explainable RS.
- Unique project features:
- Pioneering a multi-domain investigation
- Integration of user-centric design principles
- To tailor explainability features that resonate with user intuition and understanding.
- Thorough evaluation through both online and offline studies.
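The hybrid switching design named above (content-based filtering combined with collaborative filtering) can be sketched as below. This is a minimal illustration on invented toy data, not the project's actual implementation; the switching rule here (fall back to content-based scoring when a user has too few ratings for collaborative filtering) is one common choice of switching criterion, assumed for the example.

```python
# Sketch of a switching hybrid recommender: use collaborative filtering
# when a user has enough ratings, otherwise fall back to content-based
# filtering. All data below is invented for illustration.
import math

# user -> {item: rating}
ratings = {
    "u1": {"i1": 5, "i2": 4, "i3": 1},
    "u2": {"i1": 4, "i2": 5, "i4": 5},
    "u3": {"i3": 2},  # cold-start user: too few ratings for CF
}
# item -> feature vector (e.g. genre weights)
features = {"i1": [1, 0], "i2": [1, 0], "i3": [0, 1], "i4": [1, 1]}

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def collaborative_score(user, item):
    # weight neighbours' ratings by user-user similarity over co-rated items
    num = den = 0.0
    for other, their in ratings.items():
        if other == user or item not in their:
            continue
        common = set(ratings[user]) & set(their)
        if not common:
            continue
        sim = cosine([ratings[user][i] for i in common],
                     [their[i] for i in common])
        num += sim * their[item]
        den += abs(sim)
    return num / den if den else 0.0

def content_score(user, item):
    # similarity between the item and a rating-weighted user profile
    profile = [0.0, 0.0]
    for i, r in ratings[user].items():
        profile = [p + r * f for p, f in zip(profile, features[i])]
    return cosine(profile, features[item])

def recommend(user, min_ratings=3):
    # switching strategy: CF if the user has enough ratings, else content-based
    unseen = [i for i in features if i not in ratings[user]]
    score = collaborative_score if len(ratings[user]) >= min_ratings else content_score
    return max(unseen, key=lambda i: score(user, i))
```

For example, `recommend("u3")` takes the content-based branch, because the cold-start user has only one rating; `recommend("u1")` takes the collaborative branch.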
DEVELOP 4 RS BACK END
DEVELOP THE FRONT END
Multi-faceted approach
CONDUCT USER STUDIES
CONDUCT OFFLINE TESTS
[1] R. Burke, "Hybrid Web Recommender Systems," in The Adaptive Web, Springer, 2007, pp. 377–408, doi: 10.1007/978-3-540-72079-9_12.
DOMAINS
LOW IMPACT
HIGH IMPACT
- Low stakes domain
- Decisions / recommendations are less critical.
- Typically involves areas that affect personal preferences and leisure.
- In this study, the low impact domain chosen is:
- ENTERTAINMENT
- High stakes domain
- Decisions / recommendations can have significant consequences.
- These domains often involve critical services that affect people's lives and well-being
- In this study, the high impact domain chosen is:
- JOB SEARCH
DELIVERABLES
ADVANCED
INTERMEDIATE
BASIC
- Launch a complete mobile application for the RSs in the high impact domain.
- Execute a comprehensive user study to refine explainability and trust in the RS.
- Expand the RS & ERS to a mobile application in a low impact domain and implement the back end for the high impact domain (RS* & ERS*).
- Perform offline comparisons to assess the impact of domain on explainability and trust.
- Develop a baseline recommendation system (RS) and an explainable RS (ERS) in the low impact domain.
- Conduct offline tests to compare the explainable and trustworthy RS against the baseline
PLANNING
PROJECT EVALUATION
Product Evaluation Methodologies:
- User Study: 20+ individuals
- Offline Test: Full system testing on metrics below
- Metrics:
- Standard: RMSE, F1, Novelty
- Explainability and Trust: Microsoft Guidelines for Human-AI Interaction, trust-based weights.
- Comparison & Conclusion:
- Analyse and compare study results across domains to assess project impact and methodology effectiveness.
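The standard offline metrics listed above can be computed as in the sketch below. The formulas are the standard ones (RMSE over paired predictions, F1 as the harmonic mean of precision and recall on a top-N list, and novelty as mean self-information of recommended items); the toy inputs are invented, and novelty in particular has several variants in the literature, so this is one assumed formulation.

```python
# Illustrative implementations of the offline metrics named above:
# RMSE, F1, and novelty. Toy inputs are invented for demonstration.
import math

def rmse(predicted, actual):
    # root-mean-square error over paired rating predictions
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(actual))

def f1(recommended, relevant):
    # harmonic mean of precision and recall over a top-N recommendation list
    hits = len(set(recommended) & set(relevant))
    if hits == 0:
        return 0.0
    precision = hits / len(recommended)
    recall = hits / len(relevant)
    return 2 * precision * recall / (precision + recall)

def novelty(recommended, popularity, n_users):
    # mean self-information of recommended items: rarer items score higher
    return sum(-math.log2(popularity[i] / n_users)
               for i in recommended) / len(recommended)
```

For instance, `rmse([3, 4], [3, 2])` is √2, and recommending a single item rated by 25 of 100 users gives a novelty of −log₂(0.25) = 2 bits.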
USER STUDY
OFFLINE TEST
COMPARISON
Project Evaluation - USER STUDY
- User study performed
- 20+ participants
- Process:
- Let users interact with the RS (observed)
- Questions are permitted, but users aren't guided through the process
- Users interact with the base RS and ETRS in one of the two domains (randomised order)
- Questionnaire
- Break
- Users then interact with the base RS and ETRS in the other domain
- Questionnaire
THANK YOU!