Yangyang Shi
Meta
Verified email at fb.com
Title
Cited by
Year
Recurrent neural networks for language understanding
K Yao, G Zweig, MY Hwang, Y Shi, D Yu
In Fourteenth Annual Conference of the International Speech Communication …, 2013
Cited by 420 · 2013
Spoken language understanding using long short-term memory neural networks
K Yao, B Peng, Y Zhang, D Yu, G Zweig, Y Shi
2014 IEEE Spoken Language Technology Workshop (SLT), 189-194, 2014
Cited by 409 · 2014
TorchAudio: Building blocks for audio and speech processing
YY Yang, M Hira, Z Ni, A Astafurov, C Chen, C Puhrsch, D Pollack, ...
ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and …, 2022
Cited by 208 · 2022
LLM-QAT: Data-free quantization aware training for large language models
Z Liu, B Oguz, C Zhao, E Chang, P Stock, Y Mehdad, Y Shi, ...
arXiv preprint arXiv:2305.17888, 2023
Cited by 200 · 2023
Emformer: Efficient memory transformer based acoustic model for low latency streaming speech recognition
Y Shi, Y Wang, C Wu, CF Yeh, J Chan, F Zhang, D Le, M Seltzer
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
Cited by 181 · 2021
Contextual Spoken Language Understanding Using Recurrent Neural Networks
Y Shi, K Yao, H Chen, YC Pan, MY Hwang, B Peng
IEEE International Conference on Acoustics, Speech and Signal Processing, 2015
Cited by 93 · 2015
Contextualized streaming end-to-end speech recognition with trie-based deep biasing and shallow fusion
D Le, M Jain, G Keren, S Kim, Y Shi, J Mahadeokar, J Chan, ...
arXiv preprint arXiv:2104.02194, 2021
Cited by 84 · 2021
Streaming transformer-based acoustic models using self-attention with augmented memory
C Wu, Y Wang, Y Shi, CF Yeh, F Zhang
arXiv preprint arXiv:2005.08042, 2020
Cited by 73 · 2020
Deep LSTM based feature mapping for query classification
Y Shi, K Yao, L Tian, D Jiang
Proceedings of the 2016 Conference of the North American Chapter of the …, 2016
Cited by 73 · 2016
MobileLLM: Optimizing sub-billion parameter language models for on-device use cases
Z Liu, C Zhao, F Iandola, C Lai, Y Tian, I Fedorov, Y Xiong, E Chang, ...
arXiv preprint arXiv:2402.14905, 2024
Cited by 51 · 2024
Recurrent neural network language model adaptation with curriculum learning
Y Shi, M Larson, CM Jonker
Computer Speech & Language 33 (1), 136-154, 2015
Cited by 51 · 2015
Towards Recurrent Neural Networks Language Models with Linguistic and Contextual Features
Y Shi, P Wiggers, CM Jonker
Interspeech 12, 1664-1667, 2012
Cited by 50 · 2012
Weak-attention suppression for transformer based speech recognition
Y Shi, Y Wang, C Wu, C Fuegen, F Zhang, D Le, CF Yeh, ML Seltzer
arXiv preprint arXiv:2005.09137, 2020
Cited by 31 · 2020
Mining effective negative training samples for keyword spotting
J Hou, Y Shi, M Ostendorf, MY Hwang, L Xie
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
Cited by 28 · 2020
Region proposal network based small-footprint keyword spotting
J Hou, Y Shi, M Ostendorf, MY Hwang, L Xie
IEEE Signal Processing Letters 26 (10), 1471-1475, 2019
Cited by 28 · 2019
Dissecting user-perceived latency of on-device E2E speech recognition
Y Shangguan, R Prabhavalkar, H Su, J Mahadeokar, Y Shi, J Zhou, C Wu, ...
arXiv preprint arXiv:2104.02207, 2021
Cited by 26 · 2021
Knowledge distillation for recurrent neural network language modeling with trust regularization
Y Shi, MY Hwang, X Lei, H Sheng
ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and …, 2019
Cited by 26 · 2019
Higher order iteration schemes for unconstrained optimization
Y Shi, P Pan
American Journal of Operations Research 1 (03), 73, 2011
Cited by 24 · 2011
Transformer in action: a comparative study of transformer-based acoustic models for large scale speech recognition applications
Y Wang, Y Shi, F Zhang, C Wu, J Chan, CF Yeh, A Xiao
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
Cited by 23 · 2021
Evaluations of interventions using mathematical models with exponential and non-exponential distributions for disease stages: the case of Ebola
X Wang, Y Shi, Z Feng, J Cui
Bulletin of mathematical biology 79, 2149-2173, 2017
Cited by 19 · 2017
Articles 1–20