Xinyun Chen
Google DeepMind
Verified email at berkeley.edu - Homepage
Title
Cited by
Year
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, Y Li, X Wang, ...
Journal of Machine Learning Research 25 (70), 1-53, 2024
Cited by 2038* · 2024
Delving into transferable adversarial examples and black-box attacks
Y Liu, X Chen, C Liu, D Song
arXiv preprint arXiv:1611.02770, 2016
Cited by 1937 · 2016
Targeted backdoor attacks on deep learning systems using data poisoning
X Chen, C Liu, B Li, K Lu, D Song
arXiv preprint arXiv:1712.05526, 2017
Cited by 1715 · 2017
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 782 · 2023
Competition-level code generation with AlphaCode
Y Li, D Choi, J Chung, N Kushman, J Schrittwieser, R Leblond, T Eccles, ...
Science 378 (6624), 1092-1097, 2022
Cited by 706 · 2022
Adversarial example defense: Ensembles of weak defenses are not strong
W He, J Wei, X Chen, N Carlini, D Song
11th USENIX workshop on offensive technologies (WOOT 17), 2017
Cited by 432 · 2017
Learning to perform local rewriting for combinatorial optimization
X Chen, Y Tian
Advances in neural information processing systems 32, 2019
Cited by 365 · 2019
Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses
M Goldblum, D Tsipras, C Xie, X Chen, A Schwarzschild, D Song, ...
IEEE Transactions on Pattern Analysis and Machine Intelligence 45 (2), 1563-1580, 2022
Cited by 284* · 2022
Tree-to-tree neural networks for program translation
X Chen, C Liu, D Song
Advances in neural information processing systems 31, 2018
Cited by 284 · 2018
Teaching large language models to self-debug
X Chen, M Lin, N Schärli, D Zhou
arXiv preprint arXiv:2304.05128, 2023
Cited by 263 · 2023
Large language models as optimizers
C Yang, X Wang, Y Lu, H Liu, QV Le, D Zhou, X Chen
arXiv preprint arXiv:2309.03409, 2023
Cited by 238 · 2023
Large language models can be easily distracted by irrelevant context
F Shi, X Chen, K Misra, N Scales, D Dohan, EH Chi, N Schärli, D Zhou
International Conference on Machine Learning, 31210-31227, 2023
Cited by 215* · 2023
Larger language models do in-context learning differently
J Wei, J Wei, Y Tay, D Tran, A Webson, Y Lu, X Chen, H Liu, D Huang, ...
arXiv preprint arXiv:2303.03846, 2023
Cited by 180 · 2023
Execution-guided neural program synthesis
X Chen, C Liu, D Song
International Conference on Learning Representations, 2018
Cited by 150 · 2018
REFIT: a unified watermark removal framework for deep learning systems with limited data
X Chen, W Wang, C Bender, Y Ding, R Jia, B Li, D Song
Proceedings of the 2021 ACM Asia Conference on Computer and Communications …, 2021
Cited by 120* · 2021
Large language models cannot self-correct reasoning yet
J Huang, X Chen, S Mishra, HS Zheng, AW Yu, X Song, D Zhou
arXiv preprint arXiv:2310.01798, 2023
Cited by 119 · 2023
Fooling vision and language models despite localization and attention mechanism
X Xu, X Chen, C Liu, A Rohrbach, T Darrell, D Song
Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2018
Cited by 116* · 2018
Neural symbolic reader: Scalable integration of distributed and symbolic representations for reading comprehension
X Chen, C Liang, AW Yu, D Zhou, D Song, QV Le
International Conference on Learning Representations, 2019
Cited by 112 · 2019
Latent attention for if-then program synthesis
C Liu, X Chen, EC Shin, M Chen, D Song
Advances in Neural Information Processing Systems 29, 2016
Cited by 103 · 2016
Large language models as tool makers
T Cai, X Wang, T Ma, X Chen, D Zhou
arXiv preprint arXiv:2305.17126, 2023
Cited by 98 · 2023
Articles 1–20