Jason Wei
OpenAI
Verified email at google.com - Homepage
Title · Cited by · Year
Chain-of-thought prompting elicits reasoning in large language models
J Wei, X Wang, D Schuurmans, M Bosma, B Ichter, F Xia, E Chi, Q Le, ...
NeurIPS 2022, 2022
Cited by 10153* · 2022
GPT-4 technical report
J Achiam, S Adler, S Agarwal, L Ahmad, I Akkaya, FL Aleman, D Almeida, ...
arXiv preprint arXiv:2303.08774, 2023
Cited by 6103 · 2023
PaLM: Scaling language modeling with Pathways
A Chowdhery, S Narang, J Devlin, M Bosma, G Mishra, A Roberts, ...
JMLR 2023, 2022
Cited by 5167 · 2022
Finetuned language models are zero-shot learners
J Wei, M Bosma, V Zhao, K Guu, A Yu, B Lester, N Du, A Dai, Q Le
ICLR 2022, 2022
Cited by 3198 · 2022
Emergent abilities of large language models
J Wei, Y Tay, R Bommasani, C Raffel, B Zoph, S Borgeaud, D Yogatama, ...
TMLR 2022, 2022
Cited by 3081* · 2022
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, E Li, X Wang, ...
JMLR 2024, 2022
Cited by 3066 · 2022
Easy data augmentation techniques for boosting performance on text classification tasks
J Wei, K Zou
EMNLP 2019, 2019
Cited by 2561 · 2019
Self-consistency improves chain-of-thought reasoning in language models
X Wang, J Wei, D Schuurmans, Q Le, E Chi, D Zhou
ICLR 2023, 2023
Cited by 2397* · 2023
Large language models encode clinical knowledge
K Singhal, S Azizi, T Tu, SS Mahdavi, J Wei, HW Chung, N Scales, ...
Nature 2023, 2023
Cited by 2036 · 2023
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
TMLR 2023, 2022
Cited by 1169 · 2022
Least-to-most prompting enables complex reasoning in large language models
D Zhou, N Schärli, L Hou, J Wei, N Scales, X Wang, D Schuurmans, ...
ICLR 2023, 2023
Cited by 1084 · 2023
A survey of data augmentation approaches for NLP
S Feng, V Gangal, J Wei, S Chandar, S Vosoughi, T Mitamura, E Hovy
ACL Findings 2021, 2021
Cited by 884 · 2021
The Flan Collection: Designing data and methods for effective instruction tuning
S Longpre, L Hou, T Vu, A Webson, HW Chung, Y Tay, D Zhou, QV Le, ...
ICML 2023, 2023
Cited by 606 · 2023
Challenging BIG-bench tasks and whether chain-of-thought can solve them
M Suzgun, N Scales, N Schärli, S Gehrmann, Y Tay, HW Chung, ...
ACL Findings 2023, 2022
Cited by 558 · 2022
Unifying language learning paradigms
Y Tay, M Dehghani, VQ Tran, X Garcia, D Bahri, T Schuster, HS Zheng, ...
ICLR 2023, 2023
Cited by 420* · 2023
Pathologist-level classification of histologic patterns on resected lung adenocarcinoma slides with deep neural networks
J Wei, L Tafe, Y Linnik, L Vaickus, N Tomita, S Hassanpour
Scientific Reports, 2019
Cited by 327 · 2019
Larger language models do in-context learning differently
J Wei, J Wei, Y Tay, D Tran, A Webson, Y Lu, X Chen, H Liu, D Huang, ...
arXiv preprint arXiv:2303.03846, 2023
Cited by 271 · 2023
Language models are multilingual chain-of-thought reasoners
F Shi, M Suzgun, M Freitag, X Wang, S Srivats, S Vosoughi, HW Chung, ...
ICLR 2023, 2023
Cited by 238 · 2023
Attention-based deep neural networks for detection of cancerous and precancerous esophagus tissue on histopathological slides
N Tomita, B Abdollahi, J Wei, B Ren, A Suriawinata, S Hassanpour
JAMA Network Open, 2019
Cited by 184 · 2019
A recipe for arbitrary text style transfer with large language models
E Reif, D Ippolito, A Yuan, A Coenen, C Callison-Burch, J Wei
ACL 2022, 2022
Cited by 157 · 2022