Been Kim
Google DeepMind
Verified email at csail.mit.edu - Homepage
Title
Cited by
Year
Towards a rigorous science of interpretable machine learning
F Doshi-Velez, B Kim
arXiv preprint arXiv:1702.08608, 2017
5076 · 2017
Smoothgrad: removing noise by adding noise
D Smilkov, N Thorat, B Kim, F Viégas, M Wattenberg
arXiv preprint arXiv:1706.03825, 2017
2538 · 2017
Sanity checks for saliency maps
J Adebayo, J Gilmer, M Muelly, I Goodfellow, M Hardt, B Kim
Advances in Neural Information Processing Systems, 9505-9515, 2018
2379 · 2018
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
B Kim, M Wattenberg, J Gilmer, C Cai, J Wexler, ...
https://arxiv.org/abs/1711.11279, 2018
2156* · 2018
Examples are not enough, learn to criticize! Criticism for interpretability
B Kim, R Khanna, OO Koyejo
Advances in Neural Information Processing Systems, 2280-2288, 2016
1111 · 2016
Concept bottleneck models
PW Koh, T Nguyen, YS Tang, S Mussmann, E Pierson, B Kim, P Liang
International Conference on Machine Learning, 5338-5348, 2020
830 · 2020
The (un)reliability of saliency methods
PJ Kindermans, S Hooker, J Adebayo, M Alber, KT Schütt, S Dähne, ...
Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 267-280, 2019
798 · 2019
A benchmark for interpretability methods in deep neural networks
S Hooker, D Erhan, PJ Kindermans, B Kim
Advances in Neural Information Processing Systems, 9737-9748, 2019
776 · 2019
Towards automatic concept-based explanations
A Ghorbani, J Wexler, JY Zou, B Kim
Advances in Neural Information Processing Systems, 9277-9286, 2019
748 · 2019
To trust or not to trust a classifier
H Jiang, B Kim, M Guan, M Gupta
Advances in Neural Information Processing Systems, 5541-5552, 2018
544 · 2018
Human-centered tools for coping with imperfect algorithms during medical decision-making
CJ Cai, E Reif, N Hegde, J Hipp, B Kim, D Smilkov, M Wattenberg, ...
Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems …, 2019
499 · 2019
Visualizing and Measuring the Geometry of BERT
E Reif, A Yuan, M Wattenberg, FB Viegas, A Coenen, A Pearce, B Kim
Advances in Neural Information Processing Systems, 8594-8603, 2019
473 · 2019
Learning how to explain neural networks: PatternNet and PatternAttribution
PJ Kindermans, KT Schütt, M Alber, KR Müller, D Erhan, B Kim, S Dähne
arXiv preprint arXiv:1705.05598, 2017
415 · 2017
Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification
B Kim, C Rudin, J Shah
Neural Information Processing Systems (NIPS), 2014
407 · 2014
On Completeness-aware Concept-Based Explanations in Deep Neural Networks
CK Yeh, B Kim, S Arik, CL Li, T Pfister, P Ravikumar
Advances in Neural Information Processing Systems 33, 2020
327 · 2020
How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation
M Narayanan, E Chen, J He, B Kim, S Gershman, F Doshi-Velez
arXiv preprint arXiv:1802.00682, 2018
284 · 2018
Multiple relative pose graphs for robust cooperative mapping
B Kim, M Kaess, L Fletcher, J Leonard, A Bachrach, N Roy, S Teller
2010 IEEE International Conference on Robotics and Automation (ICRA), 3185-3192, 2010
256 · 2010
An Evaluation of the Human-Interpretability of Explanation
I Lage, E Chen, J He, M Narayanan, B Kim, S Gershman, F Doshi-Velez
arXiv preprint arXiv:1902.00006, 2019
238 · 2019
Considerations for evaluation and generalization in interpretable machine learning
F Doshi-Velez, B Kim
Explainable and Interpretable Models in Computer Vision and Machine Learning …, 2018
227 · 2018
Debugging Tests for Model Explanations
J Adebayo, M Muelly, I Liccardi, B Kim
Advances in Neural Information Processing Systems 33, 2020
202 · 2020
Articles 1–20