Michael Saxon
PhD Student, NLP Lab, UC Santa Barbara
Verified email at ucsb.edu - Homepage
Title · Cited by · Year
Automatically Correcting Large Language Models: Surveying the Landscape of Diverse Automated Correction Strategies
L Pan, M Saxon, W Xu, D Nathani, X Wang, WY Wang
Transactions of the Association for Computational Linguistics 12, 484-506, 2024
192* · 2024
Large language models are latent variable models: Explaining and finding good demonstrations for in-context learning
X Wang, W Zhu, M Saxon, M Steyvers, WY Wang
Advances in Neural Information Processing Systems 36, 2024
164* · 2024
Causal Balancing for Domain Generalization
X Wang, M Saxon, J Li, H Zhang, K Zhang, WY Wang
The Eleventh International Conference on Learning Representations, https …, 2023
29 · 2023
Visual chain of thought: bridging logical gaps with multimodal infillings
D Rose, V Himakunthala, A Ouyang, R He, A Mei, Y Lu, M Saxon, ...
arXiv preprint arXiv:2305.02317, 2023
28* · 2023
End-to-End Spoken Language Understanding for Generalized Voice Assistants
M Saxon, S Choudhary, JP McKenna, A Mouchtaris
Interspeech 2021, 4738-4742, 2021
28 · 2021
Investigating Memorization of Conspiracy Theories in Text Generation
S Levy, M Saxon, WY Wang
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 …, 2021
23* · 2021
Objective measures of plosive nasalization in hypernasal speech
M Saxon, J Liss, V Berisha
ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and …, 2019
21 · 2019
WikiWhy: Answering and Explaining Cause-and-Effect Questions
M Ho, A Sharma, J Chang, M Saxon, S Levy, Y Lu, WY Wang
The Eleventh International Conference on Learning Representations, 2023
16 · 2023
Robust Estimation of Hypernasality in Dysarthria with Acoustic Model Likelihood Features
M Saxon, A Tripathi, Y Jiao, J Liss, V Berisha
IEEE/ACM Transactions on Audio, Speech, and Language Processing 28, 2511-2522, 2020
16* · 2020
Say What? A Dataset for Exploring the Error Patterns That Two ASR Engines Make
M Moore, M Saxon, H Venkateswara, V Berisha, S Panchanathan
Proc. Interspeech 2019, 2528-2532, 2019
16 · 2019
Semantic Complexity in End-to-End Spoken Language Understanding
JP McKenna, S Choudhary, M Saxon, GP Strimel, A Mouchtaris
Proc. Interspeech 2020, 4273-4277, 2020
14 · 2020
Multilingual Conceptual Coverage in Text-to-Image Models
M Saxon, WY Wang
Proceedings of the 61st Annual Meeting of the Association for Computational …, 2023
12 · 2023
Not All Errors are Equal: Learning Text Generation Metrics using Stratified Error Synthesis
W Xu, Y Tuan, Y Lu, M Saxon, L Li, WY Wang
Findings of the Association for Computational Linguistics: EMNLP 2022, 6559–6574, 2022
12 · 2022
Let's Think Frame by Frame with VIP: A Video Infilling and Prediction Dataset for Evaluating Video Chain-of-Thought
V Himakunthala, A Ouyang, D Rose, R He, A Mei, Y Lu, C Sonar, ...
arXiv preprint arXiv:2305.13903, 2023
11* · 2023
Users are the North Star for AI Transparency
A Mei, M Saxon, S Chang, ZC Lipton, WY Wang
arXiv preprint arXiv:2303.05500, 2023
10 · 2023
Word pair convolutional model for happy moment classification
M Saxon, S Bhandari, L Ruskin, G Honda
Proceedings of the 2nd Workshop on Affective Content Analysis@ AAAI …, 2019
8 · 2019
PECO: Examining Single Sentence Label Leakage in Natural Language Inference Datasets through Progressive Evaluation of Cluster Outliers
M Saxon, X Wang, W Xu, WY Wang
Proceedings of the 17th Conference of the European Chapter of the …, 2023
7 · 2023
Self-supervised knowledge assimilation for expert-layman text style transfer
W Xu, M Saxon, M Sra, WY Wang
Proceedings of the AAAI Conference on Artificial Intelligence 36 (10), 11566 …, 2022
7 · 2022
Counterfactual maximum likelihood estimation for training deep networks
X Wang, W Chen, M Saxon, WY Wang
Advances in Neural Information Processing Systems 34, 25072-25085, 2021
6 · 2021
UncommonVoice: A Crowdsourced Dataset of Dysphonic Speech
M Moore, P Papreja, M Saxon, V Berisha, S Panchanathan
Proc. Interspeech 2020, 2532-2536, 2020
6 · 2020
Articles 1–20