Copy - VidPP
Saffron Zainchkovskaya
Created on November 8, 2023
Transcript
Dr Sunčica Hadžidedić
Saffron Zainchkovskaya
Recommender systems: exploring the effects of explainability on user trust
BACKGROUND
- Recommender Systems (RSs) are algorithms that provide personalised recommendations to users.
- RSs analyse patterns and infer preferences from historical user data.
- past purchases
- viewing histories
- ratings
- Commonly used in large products used on a daily basis:
- e-commerce ( Amazon )
- entertainment ( Spotify )
- content consumption ( TikTok )
- RSs operate as "black boxes"
- making it challenging to understand how recommendations are generated.
- The lack of explainability in RSs could lower user trust and satisfaction.
- RSs have far-reaching implications for people's daily lives, in domains such as job searching
- Central problem addressed by this project
- the lack of knowledge surrounding the impact of explainability in RSs on user trust
- Uniquely tackles a widespread issue in the literature, where the effects of explainability on user trust are examined only within isolated domains
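The pattern-inference idea above can be sketched in a few lines: a user-based collaborative filter that scores a user's unseen items by the similarity-weighted ratings of other users. The ratings matrix, user names, and item names here are illustrative assumptions, not project data.

```python
# Minimal sketch of collaborative filtering: infer preferences for unseen
# items from the historical ratings of similar users.
import math

# Illustrative ratings matrix (user -> item -> rating), not project data.
ratings = {
    "alice": {"film_a": 5, "film_b": 3, "film_c": 4},
    "bob":   {"film_a": 4, "film_b": 2, "film_c": 5, "film_d": 4},
    "carol": {"film_b": 5, "film_d": 1},
}

def cosine(u, v):
    # cosine similarity between two sparse rating vectors
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

def recommend(user, k=2):
    # score items the user has not rated by similarity-weighted ratings
    seen = ratings[user]
    scores = {}
    for other, their in ratings.items():
        if other == user:
            continue
        sim = cosine(seen, their)
        for item, r in their.items():
            if item not in seen:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # -> ['film_d'] (the only item alice hasn't rated)
```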
USER TRUST
RSs HAVE HIGH IMPACT ON LIFE
LACK OF TRANSPARENCY
ISSUES
USERS HESITATE TO RELY ON RSs
VITAL TO BOTH USERS AND ORGANISATIONS
RSs USED OFTEN & INFLUENCE US DAILY
Project Importance
- RSs are becoming widespread in various industries, affecting daily decisions
- Lack of explainability currently erodes user trust
- Critical needs in today's digital era:
- Responsible AI: Prioritising human-centric design to provide users with informed choices.
- The Challenge:
- Many models lack explainability, leading to user hesitation & lost trust.
MOTIVATION
- Motivation behind this project:
- Addressing an unsolved challenge:
- Multiple RS models lack clear explainability.
- Understand the impact of explainability on the end user on a deeper level.
- WHY
- By addressing user trust concerns we can enhance user satisfaction and system adoption
- benefitting both users and organisations.
RESEARCH QUESTION
To what extent does the integration of explainability into a recommendation system enhance the level of user trust, spanning both high- and low-impact domains?
[1] R. Burke, "Hybrid Web Recommender Systems," in The Adaptive Web, pp. 377–408, 2007, doi: 10.1007/978-3-540-72079-9_12.
DEVELOP 4 RS BACK END
CONDUCT OFFLINE TESTS
Multi-faceted approach
CONDUCT USER STUDIES
DEVELOP THE FRONT END
- Develop 4 hybrid switching RSs
- Content-based filtering with collaborative filtering [1]
- For each impact level (low/high)
- 1 base and 1 explainable RS.
- Unique project features:
- Pioneering a multi-domain investigation
- Integration of user-centric design principles
- To tailor explainability features that resonate with user intuition and understanding.
- Thorough research through both online and offline studies.
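A minimal sketch of the switching-hybrid idea [1], under the assumption that the switch is triggered by a cold-start threshold: too few ratings falls back to content-based filtering, otherwise collaborative filtering is used. The threshold, genre tags, ratings, and the deliberately simple collaborative stand-in are all illustrative, not the project's implementation.

```python
# Switching hybrid sketch: pick content-based or collaborative filtering
# per request, based on how much history the user has (cold-start check).

# Illustrative item metadata and ratings.
item_genres = {
    "film_a": {"action"}, "film_b": {"drama"},
    "film_c": {"action", "drama"}, "film_d": {"comedy"},
}
ratings = {
    "alice":  {"film_a": 5, "film_b": 3, "film_c": 4},
    "bob":    {"film_a": 4, "film_d": 4},
    "newbie": {"film_a": 5},
}

def content_based(liked_items, n=1):
    # recommend items sharing genres with what the user already liked
    liked_genres = set().union(*(item_genres[i] for i in liked_items))
    candidates = [i for i in item_genres if i not in liked_items]
    return sorted(candidates,
                  key=lambda i: len(item_genres[i] & liked_genres),
                  reverse=True)[:n]

def collaborative(user, all_ratings, n=1):
    # simple stand-in: average rating of items the user hasn't seen
    seen = all_ratings[user]
    totals = {}
    for their in all_ratings.values():
        for item, r in their.items():
            if item not in seen:
                totals.setdefault(item, []).append(r)
    avg = {i: sum(rs) / len(rs) for i, rs in totals.items()}
    return sorted(avg, key=avg.get, reverse=True)[:n]

def switching_hybrid(user, all_ratings, threshold=3):
    # cold start -> content-based; enough history -> collaborative
    user_ratings = all_ratings.get(user, {})
    if len(user_ratings) < threshold:
        liked = [i for i, r in user_ratings.items() if r >= 4]
        return content_based(liked)
    return collaborative(user, all_ratings)

print(switching_hybrid("newbie", ratings))  # cold start -> content-based
print(switching_hybrid("alice", ratings))   # enough history -> collaborative
```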
How It Will Be Addressed
- Low stakes domain
- Decisions / recommendations are less critical.
- Typically involves areas that affect personal preferences and leisure.
- In this study the low impact domain chosen:
- ENTERTAINMENT
- High stakes domain
- Decisions / recommendations can have significant consequences.
- These domains often involve critical services that affect people's lives and well-being
- In this study the high impact domain chosen:
- JOB SEARCH
LOW IMPACT
HIGH IMPACT
DOMAINS
- Launch a complete mobile application for the RSs in the high impact domain.
- Execute a comprehensive user study to refine explainability and trust in the RS.
- Expand the RS & ERS to a mobile application in a low impact domain and implement the back end for the high impact domain (RS* & ERS*).
- Perform offline comparisons to assess the impact of domain on explainability and trust.
- Develop a baseline recommendation system (RS) and an explainable RS (ERS) in the low impact domain.
- Conduct offline tests to compare the explainable and trustworthy RS against the baseline.
INTERMEDIATE
ADVANCED
BASIC
DELIVERABLES
PLANNING
Product Evaluation Methodologies:
- User Study: 20+ individuals
- Offline Test: Full system testing on metrics below
- Metrics:
- Standard: RMSE, F1, Novelty
- Explainability and Trust: Microsoft Guidelines for Human AI interaction, trust-based weights.
- Comparison & Conclusion:
- Analyse and compare study results across domains to assess project impact and methodology effectiveness.
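The standard offline metrics named above can be computed as follows: RMSE over predicted ratings, and F1 over a top-N recommendation list. The sample ratings and relevance sets are illustrative.

```python
# Sketch of the standard offline metrics: RMSE for rating-prediction error,
# F1 (precision/recall balance) for top-N recommendation quality.
import math

def rmse(actual, predicted):
    # root mean squared error between true and predicted ratings
    return math.sqrt(
        sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    )

def f1_at_n(recommended, relevant):
    # F1 of a top-N list against the set of truly relevant items
    hits = len(set(recommended) & set(relevant))
    if hits == 0:
        return 0.0
    precision = hits / len(recommended)
    recall = hits / len(relevant)
    return 2 * precision * recall / (precision + recall)

print(rmse([4, 3, 5], [3.5, 3, 4]))               # ~0.645
print(f1_at_n(["a", "b", "c"], ["b", "c", "d"]))  # ~0.667
```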
OFFLINE TEST
COMPARISON
USER STUDY
PROJECT EVALUATION
- User study performed
- 20+ participants
- Process:
- Let the user interact with the RS (observed)
- Questions are permitted, but users aren't guided through the process
- Users interact with the base and ETRS in one of the two domains (randomised order)
- Questionnaire
- Break
- Users then interact with the base and ETRS in the other domain
- Questionnaire
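The session protocol above can be sketched as a per-participant plan; here the randomisation is assumed to apply to the order of the two domains, which is one reading of "randomised order". Domain and step names are illustrative.

```python
# Sketch of the user-study session plan: both systems plus a questionnaire
# in one domain, a break, then the same in the other domain, with the
# starting domain randomised per participant (an assumed reading).
import random

DOMAINS = ["entertainment", "job_search"]

def session_plan(seed=None):
    rng = random.Random(seed)
    order = rng.sample(DOMAINS, k=2)  # randomised domain order
    plan = []
    for domain in order:
        plan.append((domain, "base RS"))
        plan.append((domain, "ETRS"))
        plan.append((domain, "questionnaire"))
    plan.insert(3, ("-", "break"))  # break between the two domains
    return plan

for step in session_plan(seed=42):
    print(step)
```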
Project Evaluation - USER STUDY
Recommender systems: exploring the effects of explainability on user trust
SAFFRON ZAINCHKOVSKAYA
THANK YOU!
SUCCESS METRICS- PRODUCT
Evaluation Criteria:
- Accuracy: How accurate are the RSs? Has the incorporation of explainability lowered the accuracy?
- Explainability: Is the incorporation of explainable recommendations within systems beneficial to users?
- User Trust: Does this incorporation enhance the trust users have with the system?
- Domains: Is the incorporation of explanations more important in a high- or low-impact domain?
- Project Completion: How effectively was the project completed within the stipulated timeframe and project scope? Does the completion status of the project reflect on the quality and reliability of the RSs?
- The project will be evaluated on a bi-weekly basis to ensure it is consistently on track.
- During this evaluation the project components below will be examined
SUCCESS METRICS- PROJECT
PROJECT EVALUATION
PROJECT PROBLEM
- Central problem addressed by this project:
- the lack of knowledge surrounding the effects of incorporating explainability within RSs
- and the impact this has on a user's trust in the system
- Also addresses a problem seen commonly within the literature
- where only one domain is investigated