Wei Han
OpenAI
Verified email at illinois.edu
Title
Cited by
Year
Conformer: Convolution-augmented Transformer for Speech Recognition
A Gulati, J Qin, CC Chiu, N Parmar, Y Zhang, J Yu, W Han, S Wang, ...
arXiv preprint arXiv:2005.08100, 2020
Cited by 3341 · 2020
Scalability in perception for autonomous driving: Waymo open dataset
P Sun, H Kretzschmar, X Dotiwalla, A Chouard, V Patnaik, P Tsui, J Guo, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020
Cited by 3153 · 2020
NTIRE 2017 challenge on single image super-resolution: Methods and results
R Timofte, E Agustsson, L Van Gool, MH Yang, L Zhang
Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2017
Cited by 2148 · 2017
Scaling Autoregressive Models for Content-Rich Text-to-Image Generation
J Yu, Y Xu, JY Koh, T Luong, G Baid, Z Wang, V Vasudevan, A Ku, Y Yang, ...
arXiv preprint arXiv:2206.10789, 2022
Cited by 1013 · 2022
Deep networks for image super-resolution with sparse prior
Z Wang, D Liu, J Yang, W Han, T Huang
Proceedings of the IEEE International Conference on Computer Vision, 370-378, 2015
Cited by 984 · 2015
Heterogeneous network embedding via deep architectures
S Chang, W Han, J Tang, GJ Qi, CC Aggarwal, TS Huang
Proceedings of the 21th ACM SIGKDD International Conference on Knowledge …, 2015
Cited by 717 · 2015
W2v-bert: Combining contrastive learning and masked language modeling for self-supervised speech pre-training
YA Chung, Y Zhang, W Han, CC Chiu, J Qin, R Pang, Y Wu
2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU …, 2021
Cited by 429 · 2021
Seq-nms for video object detection
W Han, P Khorrami, TL Paine, P Ramachandran, M Babaeizadeh, H Shi, ...
arXiv preprint arXiv:1602.08465, 2016
Cited by 384 · 2016
Dilated recurrent neural networks
S Chang, Y Zhang, W Han, M Yu, X Guo, W Tan, X Cui, M Witbrock, ...
Advances in Neural Information Processing Systems, 77-87, 2017
Cited by 380 · 2017
Pushing the Limits of Semi-Supervised Learning for Automatic Speech Recognition
Y Zhang, J Qin, DS Park, W Han, CC Chiu, R Pang, QV Le, Y Wu
arXiv preprint arXiv:2010.10504, 2020
Cited by 374 · 2020
ContextNet: Improving Convolutional Neural Networks for Automatic Speech Recognition with Global Context
W Han, Z Zhang, Y Zhang, J Yu, CC Chiu, J Qin, A Gulati, R Pang, Y Wu
arXiv preprint arXiv:2005.03191, 2020
Cited by 323 · 2020
Robust single image super-resolution via deep networks with sparse prior
D Liu, Z Wang, B Wen, J Yang, W Han, TS Huang
IEEE Transactions on Image Processing 25 (7), 3194-3207, 2016
Cited by 302 · 2016
Improved Noisy Student Training for Automatic Speech Recognition
DS Park, Y Zhang, Y Jia, W Han, CC Chiu, B Li, Y Wu, QV Le
arXiv preprint arXiv:2005.09629, 2020
Cited by 272 · 2020
Image super-resolution via dual-state recurrent networks
W Han, S Chang, D Liu, M Yu, M Witbrock, TS Huang
Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2018
Cited by 270 · 2018
Google USM: Scaling Automatic Speech Recognition Beyond 100 Languages
Y Zhang, W Han, J Qin, Y Wang, A Bapna, Z Chen, N Chen, B Li, ...
arXiv preprint arXiv:2303.01037, 2023
Cited by 261 · 2023
Bigssl: Exploring the frontier of large-scale semi-supervised learning for automatic speech recognition
Y Zhang, DS Park, W Han, J Qin, A Gulati, J Shor, A Jansen, Y Xu, ...
IEEE Journal of Selected Topics in Signal Processing 16 (6), 1519-1532, 2022
Cited by 189 · 2022
Noise2music: Text-conditioned music generation with diffusion models
Q Huang, DS Park, T Wang, TI Denk, A Ly, N Chen, Z Zhang, Z Zhang, ...
arXiv preprint arXiv:2302.03917, 2023
Cited by 167 · 2023
AudioPaLM: A Large Language Model That Can Speak and Listen
PK Rubenstein, C Asawaroengchai, DD Nguyen, A Bapna, Z Borsos, ...
arXiv preprint arXiv:2306.12925, 2023
Cited by 153 · 2023
Learning semantic relationships for better action retrieval in images
V Ramanathan, C Li, J Deng, W Han, Z Li, K Gu, Y Song, S Bengio, ...
Proceedings of the IEEE conference on computer vision and pattern …, 2015
Cited by 145 · 2015
Balanced two-stage residual networks for image super-resolution
Y Fan, H Shi, J Yu, D Liu, W Han, H Yu, Z Wang, X Wang, TS Huang
Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2017
Cited by 133 · 2017
Articles 1–20