Tom B Brown
Anthropic
Verified email at anthropic.com - Homepage
Title
Cited by
Year
Language models are few-shot learners
TB Brown
arXiv preprint arXiv:2005.14165, 2020
Cited by 40352*, 2020
Deep reinforcement learning from human preferences
PF Christiano, J Leike, T Brown, M Martic, S Legg, D Amodei
Advances in neural information processing systems 30, 2017
Cited by 2844, 2017
Scaling laws for neural language models
J Kaplan, S McCandlish, T Henighan, TB Brown, B Chess, R Child, ...
arXiv preprint arXiv:2001.08361, 2020
Cited by 2391*, 2020
Extracting training data from large language models
N Carlini, F Tramer, E Wallace, M Jagielski, A Herbert-Voss, K Lee, ...
30th USENIX Security Symposium (USENIX Security 21), 2633-2650, 2021
Cited by 1612, 2021
Fine-tuning language models from human preferences
DM Ziegler, N Stiennon, J Wu, TB Brown, A Radford, D Amodei, ...
arXiv preprint arXiv:1909.08593, 2019
Cited by 1243, 2019
Training a helpful and harmless assistant with reinforcement learning from human feedback
Y Bai, A Jones, K Ndousse, A Askell, A Chen, N DasSarma, D Drain, ...
arXiv preprint arXiv:2204.05862, 2022
Cited by 1205, 2022
Scaling laws for neural language models
J Kaplan, S McCandlish, T Henighan, TB Brown, B Chess, R Child, S Gray, A Radford, J Wu, D Amodei
arXiv preprint arXiv:2001.08361 1 (2), 4, 2020
Cited by 1193, 2020
Adversarial patch
TB Brown, D Mané, A Roy, M Abadi, J Gilmer
arXiv preprint arXiv:1712.09665, 2017
Cited by 1035, 2017
Constitutional AI: Harmlessness from AI feedback
Y Bai, S Kadavath, S Kundu, A Askell, J Kernion, A Jones, A Chen, ...
arXiv preprint arXiv:2212.08073, 2022
Cited by 952, 2022
Technical report on the CleverHans v2.1.0 adversarial examples library
N Papernot, F Faghri, N Carlini, I Goodfellow, R Feinman, A Kurakin, ...
arXiv preprint arXiv:1610.00768, 2016
Cited by 581*, 2016
Adversarial patch
TB Brown, D Mané, A Roy, M Abadi, J Gilmer
31st Conference on Neural Information Processing Systems (NIPS 2017), 2017
Cited by 518, 2017
Language Models are Few-Shot Learners
TB Brown
arXiv, 2020
Cited by 463, 2005
Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned
D Ganguli, L Lovitt, J Kernion, A Askell, Y Bai, S Kadavath, B Mann, ...
arXiv preprint arXiv:2209.07858, 2022
Cited by 357, 2022
cleverhans v2.0.0: an adversarial machine learning library
N Papernot, I Goodfellow, R Sheatsley, R Feinman, P McDaniel
arXiv preprint arXiv:1610.00768 10, 2016
Cited by 328, 2016
A general language assistant as a laboratory for alignment
A Askell, Y Bai, A Chen, D Drain, D Ganguli, T Henighan, A Jones, ...
arXiv preprint arXiv:2112.00861, 2021
Cited by 325, 2021
Scaling laws for autoregressive generative modeling
T Henighan, J Kaplan, M Katz, M Chen, C Hesse, J Jackson, H Jun, ...
arXiv preprint arXiv:2010.14701, 2020
Cited by 316, 2020
In-context learning and induction heads
C Olsson, N Elhage, N Nanda, N Joseph, N DasSarma, T Henighan, ...
arXiv preprint arXiv:2209.11895, 2022
Cited by 278, 2022
Predictability and surprise in large generative models
D Ganguli, D Hernandez, L Lovitt, A Askell, Y Bai, A Chen, T Conerly, ...
Proceedings of the 2022 ACM Conference on Fairness, Accountability, and …, 2022
Cited by 254, 2022
A mathematical framework for transformer circuits
N Elhage, N Nanda, C Olsson, T Henighan, N Joseph, B Mann, A Askell, ...
Transformer Circuits Thread 1 (1), 12, 2021
Cited by 238, 2021
Language Models are Few-Shot Learners. 2020. doi: 10.48550
TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ...
arxiv, 5-7, 2005
Cited by 205, 2005
Articles 1–20