Deep and sparse learning in speech and language processing: An overview
Springer, Cham, Switzerland, 2016
Large-scale deep neural models, e.g., deep neural networks (DNNs) and recurrent neural networks (RNNs), have demonstrated significant success in solving various challenging tasks of speech and language processing (SLP), including speech recognition, speech synthesis, document classification and question answering. This growing impact corroborates the neurobiological evidence concerning the presence of layer-wise deep processing in the human brain. On the other hand, sparse coding representations have gained similar success in SLP, particularly in signal processing, demonstrating sparsity as another important neurobiological characteristic. Recently, research in these two directions has been leading to increasing cross-fertilisation of ideas, so a unified sparse deep or deep sparse learning framework warrants much attention. This paper provides an overview of the growing interest in this unified framework and outlines future research possibilities in this multi-disciplinary area.
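The sparse coding representation mentioned in the abstract is commonly computed by solving an L1-regularised reconstruction problem. As a minimal, hedged sketch (not taken from the paper itself), the following implements ISTA, a standard iterative soft-thresholding algorithm for sparse coding a signal over a fixed dictionary; the dictionary and signal below are illustrative toy values:

```python
import numpy as np

def ista(x, D, lam=0.1, step=None, iters=200):
    """Sparse coding of signal x over dictionary D via ISTA:
    minimise 0.5 * ||x - D a||^2 + lam * ||a||_1."""
    if step is None:
        # Step size 1/L with L = ||D||_2^2 (Lipschitz constant of the
        # gradient of the squared-error term) guarantees convergence.
        step = 1.0 / np.linalg.norm(D, 2) ** 2
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ a - x)                 # gradient of 0.5*||x - D a||^2
        z = a - step * grad                      # gradient descent step
        # Soft-thresholding (proximal operator of the L1 penalty)
        # drives small coefficients exactly to zero, yielding sparsity.
        a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return a

# Toy example: with an orthonormal dictionary, ISTA recovers a sparse code
# whose coefficients are the signal's components shrunk by the threshold.
D = np.eye(8)                  # hypothetical 8-atom orthonormal dictionary
x = np.zeros(8)
x[0], x[3] = 3.0, -2.0         # signal built from two atoms
a = ista(x, D, lam=0.1)
```

The soft-thresholding step is what distinguishes sparse coding from plain least-squares fitting: it zeroes out coefficients of unused dictionary atoms rather than merely shrinking them.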
| Title | Deep and sparse learning in speech and language processing: An overview |
|---|---|
| Author(s) / Contributors | Wang, Dong; Zhou, Qiang; Hussain, Amir; Liu, CL; Hussain, A; Luo, B; Tan, KC; Zeng, Y; Zhang, Z; Engineering and Physical Sciences Research Council; Tsinghua University; Computing Science; orcid:0000-0002-8080-082X |
| Publication | Springer, Cham, Switzerland, 2016 |
| Media type | Conference paper |
| DOI | 10.1007/978-3-319-49685-6_16 |