Nan Duan
Senior Principal Research Manager, Microsoft Research
Verified email at microsoft.com - Homepage
Title
Cited by
Year
CodeBERT: A pre-trained model for programming and natural languages
Z Feng, D Guo, D Tang, N Duan, X Feng, M Gong, L Shou, B Qin, T Liu, ...
arXiv preprint arXiv:2002.08155, 2020
2686 · 2020
GraphCodeBERT: Pre-training code representations with data flow
D Guo, S Ren, S Lu, Z Feng, D Tang, S Liu, L Zhou, N Duan, ...
arXiv preprint arXiv:2009.08366, 2020
992 · 2020
Unicoder-VL: A universal encoder for vision and language by cross-modal pre-training
G Li, N Duan, Y Fang, M Gong, D Jiang
Proceedings of the AAAI Conference on Artificial Intelligence 34 (07), 11336 …, 2020
959 · 2020
CodeXGLUE: A machine learning benchmark dataset for code understanding and generation
S Lu, D Guo, S Ren, J Huang, A Svyatkovskiy, A Blanco, C Clement, ...
arXiv preprint arXiv:2102.04664, 2021
813 · 2021
Visual ChatGPT: Talking, drawing and editing with visual foundation models
C Wu, S Yin, W Qi, X Wang, Z Tang, N Duan
arXiv preprint arXiv:2303.04671, 2023
637 · 2023
K-Adapter: Infusing knowledge into pre-trained models with adapters
R Wang, D Tang, N Duan, Z Wei, X Huang, G Cao, D Jiang, M Zhou
arXiv preprint arXiv:2002.01808, 2020
592 · 2020
UniXcoder: Unified cross-modal pre-training for code representation
D Guo, S Lu, N Duan, Y Wang, M Zhou, J Yin
arXiv preprint arXiv:2203.03850, 2022
529 · 2022
CLIP4Clip: An empirical study of CLIP for end to end video clip retrieval and captioning
H Luo, L Ji, M Zhong, Y Chen, W Lei, N Duan, T Li
Neurocomputing 508, 293-304, 2022
507 · 2022
ProphetNet: Predicting future n-gram for sequence-to-sequence pre-training
W Qi, Y Yan, Y Gong, D Liu, N Duan, J Chen, R Zhang, M Zhou
arXiv preprint arXiv:2001.04063, 2020
483 · 2020
UniVL: A unified video and language pre-training model for multimodal understanding and generation
H Luo, L Ji, B Shi, H Huang, N Duan, T Li, J Li, T Bharti, M Zhou
arXiv preprint arXiv:2002.06353, 2020
482 · 2020
Question generation for question answering
N Duan, D Tang, P Chen, M Zhou
Proceedings of the 2017 conference on empirical methods in natural language …, 2017
342 · 2017
XGLUE: A new benchmark dataset for cross-lingual pre-training, understanding and generation
Y Liang, N Duan, Y Gong, N Wu, F Guo, W Qi, M Gong, L Shou, D Jiang, ...
arXiv preprint arXiv:2004.01401, 2020
325 · 2020
AGIEval: A human-centric benchmark for evaluating foundation models
W Zhong, R Cui, Y Guo, Y Liang, S Lu, Y Wang, A Saied, W Chen, ...
arXiv preprint arXiv:2304.06364, 2023
322 · 2023
CLIP4Clip: An empirical study of CLIP for end to end video clip retrieval
H Luo, L Ji, M Zhong, Y Chen, W Lei, N Duan, T Li
arXiv preprint arXiv:2104.08860, 2021
318 · 2021
NÜWA: Visual synthesis pre-training for neural visual world creation
C Wu, J Liang, L Ji, F Yang, Y Fang, D Jiang, N Duan
European conference on computer vision, 720-736, 2022
306 · 2022
ImageBERT: Cross-modal pre-training with large-scale weak-supervised image-text data
D Qi, L Su, J Song, E Cui, T Bharti, A Sacheti
arXiv preprint arXiv:2001.07966, 2020
296 · 2020
Constraint-based question answering with knowledge graph
J Bao, N Duan, Z Yan, M Zhou, T Zhao
Proceedings of COLING 2016, the 26th international conference on …, 2016
296 · 2016
Pretraining-based natural language generation for text summarization
H Zhang, J Xu, J Wang
arXiv preprint arXiv:1902.09243, 2019
280 · 2019
GraphCodeBERT: Pre-training code representations with data flow
D Guo, S Ren, S Lu, Z Feng, D Tang, S Liu, ..., D Drain, N Sundaresan, J Yin, D Jiang, M Zhou
9th International Conference on Learning Representations (ICLR), 2021
274 · 2021
Articles 1–20