Problem finding
The Interpretability of Artificial Intelligence and the Impact of Outcome Feedback on Trust: A Comparative Study | Xue Zhirong's knowledge base
- The results show that outcome feedback improves users' trust in AI more significantly than explainability does, but this increased trust does not yield a corresponding improvement in performance. Further analysis suggests that feedback can induce over-trust (accepting the AI's suggestions when it is wrong) or under-trust (ignoring the AI's suggestions when it is correct), which may negate the benefits of increased trust and produce a "trust-performance paradox". The researchers call for future work on designing explanation strategies that foster appropriate trust and thereby improve the efficiency of human-AI collaboration.
- To assess trust more accurately, the researchers used a behavioral measure of trust, Weight of Advice (WoA), which captures how far users shift their predictions toward the AI's recommendations and is independent of the model's accuracy. By comparing WoA across conditions, the researchers can analyze the relationship between trust and performance.
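As a rough illustration of the behavioral trust measure described above, the sketch below uses the standard Weight of Advice formula from the advice-taking literature (the paper's exact operationalization may differ; the function name and example numbers are hypothetical):

```python
# Weight of Advice (WoA): how far a user moves from their initial
# estimate toward the AI's recommendation.
#   WoA = (final - initial) / (advice - initial)
# 0 means the advice was ignored; 1 means it was fully adopted.
# This is the conventional definition, assumed here for illustration.

def weight_of_advice(initial: float, advice: float, final: float) -> float:
    if advice == initial:
        # WoA is undefined when the advice equals the initial estimate.
        raise ValueError("WoA undefined: advice equals initial estimate")
    return (final - initial) / (advice - initial)

# Hypothetical example: user first estimates 50, the AI suggests 80,
# and the user revises to 65 -> the user moved halfway toward the advice.
print(weight_of_advice(50, 80, 65))  # 0.5
```

Because WoA measures movement toward the advice rather than final accuracy, it stays interpretable whether the AI's recommendation happens to be right or wrong, which is why it can be compared across conditions independently of model accuracy.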
- Interview
- Q1: How does feedback affect users' trust in AI?