
dc.creator: Bauer, Kevin
dc.creator: von Zahn, Moritz
dc.creator: Hinz, Oliver
dc.date.accessioned: 2022-01-31T10:42:52Z
dc.date.available: 2022-01-31T10:42:52Z
dc.date.issued: 2021-06-25
dc.identifier.uri: https://fif.hebis.de/xmlui/handle/123456789/2421
dc.description.abstract: This paper explores the interplay of feature-based explainable AI (XAI) techniques, information processing, and human beliefs. Using a novel experimental protocol, we study the impact of providing users with explanations of how an AI system weighs inputted information to produce individual predictions (LIME; see the illustrative sketch after this record) on users' weighting of information and their beliefs about the task-relevance of that information. On the one hand, we find that feature-based explanations cause users to alter their mental weighting of available information in line with the observed explanations. On the other hand, explanations lead to asymmetric belief adjustments that we interpret as a manifestation of confirmation bias. Trust in the prediction accuracy plays an important moderating role in XAI-enabled belief adjustments. Our results show that feature-based XAI does not merely influence decisions superficially but actually changes internal cognitive processes, bearing the potential to manipulate human beliefs and reinforce stereotypes. Hence, current regulatory efforts aimed at enhancing algorithmic transparency may benefit from going hand in hand with measures that ensure the exclusion of sensitive personal information from XAI systems. Overall, our findings put assertions that XAI is the silver bullet solving all of AI systems' (black box) problems into perspective.
dc.rights: Attribution-ShareAlike 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-sa/4.0/
dc.subject: Financial Markets
dc.subject: Experiment Center
dc.title: Expl(AI)ned: The impact of explainable artificial intelligence on cognitive processes
dc.type: Working Paper
dcterms.references: https://fif.hebis.de/xmlui/handle/123456789/2436?Survey_WP315_BZH_2021
dc.source.filename: 315_SSRN-id3872711
dc.identifier.safeno: 315
dc.subject.keywords: xai
dc.subject.keywords: explainable machine learning
dc.subject.keywords: information processing
dc.subject.keywords: belief updating
dc.subject.keywords: algorithmic transparency
dc.subject.topic1: player
dc.subject.topic1: daniel
dc.subject.topic1: interaction
dc.subject.topic2: influence
dc.subject.topic2: opponent
dc.subject.topic2: julie
dc.subject.topic3: wellDesigned
dc.subject.topic3: proposal
dc.subject.topic3: gdpr
dc.subject.topic1name: Investor Behaviour
dc.subject.topic2name: Saving and Borrowing
dc.subject.topic3name: Corporate Governance
dc.identifier.doi: 10.2139/ssrn.3872711
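
The abstract above refers to LIME-style, feature-based explanations. As a rough illustration only, the following Python sketch shows how the open-source lime package produces per-feature weights for a single prediction; the classifier, dataset, and parameters here are assumptions made for the sake of a runnable example and do not reflect the paper's actual experimental materials.

    # Minimal sketch of a feature-based (LIME) explanation for one prediction.
    # Assumptions: a generic sklearn classifier and demo dataset, not the paper's setup.
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    # Train any black-box classifier on tabular data.
    data = load_breast_cancer()
    model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    # LIME fits a local linear surrogate model around a single instance and
    # reports per-feature weights -- the kind of explanation shown to users.
    explainer = LimeTabularExplainer(
        training_data=data.data,
        feature_names=list(data.feature_names),
        class_names=list(data.target_names),
        mode="classification",
    )
    explanation = explainer.explain_instance(
        data_row=data.data[0],
        predict_fn=model.predict_proba,
        num_features=5,  # the five locally most influential features
    )

    # Each pair is (feature condition, signed weight toward the explained class).
    for feature, weight in explanation.as_list():
        print(f"{feature}: {weight:+.3f}")

Weights of this kind, shown alongside an AI's individual predictions, are what the experiment exposes participants to before measuring changes in their information weighting and beliefs.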


