This page is an online demo of our recent research on singing voice separation with recurrent neural networks.
The manuscript and complete results can be found in our paper, "A Recurrent Encoder-Decoder Approach with Skip-Filtering Connections for Monaural Singing Voice Separation," submitted to MLSP 2017.
Code can be found here.
Trained models can be found here.
The paper can be found here.
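As a rough, toy-sized sketch (not the paper's implementation), the skip-filtering connection can be illustrated as follows: rather than predicting the singing-voice spectrogram directly, the decoder output is squashed into a time-frequency mask in (0, 1), which is then multiplied element-wise with the input mixture magnitude spectrogram. All sizes, values, and function names below are hypothetical and for illustration only.

```python
import math

def sigmoid(x):
    # Squash a raw decoder activation into a mask value in (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def skip_filter(mixture_mag, decoder_out):
    """Skip-filtering connection (illustrative sketch): the network output
    becomes a time-frequency mask that is applied element-wise to the input
    mixture magnitude spectrogram, so the voice estimate can never exceed
    the mixture magnitude in any time-frequency bin."""
    return [[sigmoid(d) * m for d, m in zip(d_row, m_row)]
            for d_row, m_row in zip(decoder_out, mixture_mag)]

# Toy example: 2 time frames x 3 frequency bins (hypothetical values).
mixture = [[0.8, 0.2, 0.5],
           [0.1, 0.9, 0.4]]
dec_out = [[2.0, -2.0, 0.0],   # stands in for the RNN decoder output
           [-1.0, 1.0, 3.0]]
voice_est = skip_filter(mixture, dec_out)
```

Because the mask is bounded between 0 and 1, each estimated bin lies between zero and the corresponding mixture bin, which is the practical effect of filtering the mixture with the learned mask.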