2014:Singing Voice Separation
Description
The singing voice separation task solicits competing entries to blindly separate the singer's voice from pop music recordings. The entries are evaluated using standard metrics (see Evaluation below).
Task specific mailing list
All discussions take place on the MIREX "EvalFest" list. If you have a question or comment, simply include the task name in the subject heading.
Data
A collection of 100 clips of recorded pop music (vocals plus music) is used to evaluate the singing voice separation algorithms; these are the hidden parts of the iKala dataset (http://mac.citi.sinica.edu.tw/ikala/). If your algorithm is supervised, you are welcome to use the public part of the iKala dataset for training.
Collection statistics:
- Size of collection: 100 clips
- Audio details: 16-bit, mono, 44.1kHz, WAV
- Duration of each clip: 30 seconds
For more information about the iKala dataset, see T.-S. Chan, T.-C. Yeh, Z.-C. Fan, H.-W. Chen, L. Su, Y.-H. Yang, and R. Jang, "Vocal activity informed singing voice separation with the iKala dataset," in Proc. IEEE Int. Conf. Acoust., Speech and Signal Process., 2015, pp. 718-722.
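As a quick sanity check of the audio format, a clip can be loaded and compared against the statistics above. The following is a minimal sketch, with 'clip.wav' as a placeholder filename and the older wavread interface used for consistency with the examples elsewhere on this page:

 % Minimal sketch: load one clip and verify it matches the stated format.
 % 'clip.wav' is a placeholder filename.
 [y, fs, nbits] = wavread('clip.wav');            % samples, sample rate, bit depth
 assert(fs == 44100, 'expected 44.1 kHz audio');
 assert(nbits == 16, 'expected 16-bit audio');
 assert(size(y, 2) == 1, 'expected mono audio');
 fprintf('duration: %.1f s\n', size(y, 1) / fs);  % should be about 30 seconds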
Evaluation
For evaluation we use Vincent et al.'s (2012) Source to Distortion Ratio (SDR), Source to Interferences Ratio (SIR), and Sources to Artifacts Ratio (SAR) (http://hal.inria.fr/inria-00630985/PDF/vincent_SigPro11.pdf), as implemented by bss_eval_sources.m in BSS Eval Version 3.0 (http://bass-db.gforge.inria.fr/bss_eval/). Everything will be normalized to enable a fairer evaluation. More specifically, the function will be invoked as follows:
 >> trueVoice = wavread('trueVoice.wav');
 >> trueKaraoke = wavread('trueKaraoke.wav');
 >> trueMixed = trueVoice + trueKaraoke;
 >> [estimatedVoice, estimatedKaraoke] = wrapper_function_calling_your_separation_algorithm(trueMixed);
 >> [SDR, SIR, SAR] = bss_eval_sources([estimatedVoice estimatedKaraoke]' / norm(estimatedVoice + estimatedKaraoke), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));
 >> % Scoring the unprocessed mixture against the references gives a baseline;
 >> % the normalized metrics below are the improvement of the estimates over that baseline.
 >> [NSDR, NSIR, NSAR] = bss_eval_sources([trueMixed trueMixed]' / norm(trueMixed + trueMixed), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));
 >> NSDR = SDR - NSDR;
 >> NSIR = SIR - NSIR;
 >> NSAR = SAR - NSAR;
The final scores will be determined by the mean over all 100 clips (note that GSIR and GSAR are not normalized):
GNSDR = \frac{\sum_{i=1}^{100} NSDR_i}{100},
GSIR = \frac{\sum_{i=1}^{100} SIR_i}{100},
GSAR = \frac{\sum_{i=1}^{100} SAR_i}{100}.
In addition, the standard deviation (sd), minimum, maximum, and median will also be reported.
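To make the aggregation concrete, the sketch below shows how the reported statistics could be assembled from the per-clip scores; the vector names NSDR_all, SIR_all and SAR_all (each holding the 100 per-clip values from the procedure above) are assumptions, not part of the official evaluation code:

 % Minimal sketch with assumed variable names: NSDR_all, SIR_all and SAR_all
 % are 1x100 vectors of per-clip scores computed as in the snippet above.
 GNSDR = mean(NSDR_all);   % normalized
 GSIR  = mean(SIR_all);    % not normalized
 GSAR  = mean(SAR_all);    % not normalized
 % The additional statistics reported for each metric, e.g. for NSDR:
 NSDR_stats = [std(NSDR_all), min(NSDR_all), max(NSDR_all), median(NSDR_all)];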
Submission format
Participants are required to submit an entry that takes an input filename (a full native pathname ending in *.wav) and an output directory as arguments. Entries must write their separated voice and music outputs to *-voice.wav and *-music.wav under the output directory. For example:
 function singing_voice_separation(infile, outdir)
 [~, name, ext] = fileparts(infile);
 your_algorithm(infile, fullfile(outdir, [name '-voice' ext]), fullfile(outdir, [name '-music' ext]));

 function your_algorithm(infile, voiceoutfile, musicoutfile)
 mixed = wavread(infile);

 % Insert your algorithm here

 wavwrite(voice, 44100, voiceoutfile);
 wavwrite(music, 44100, musicoutfile);
If scratch space is required, please use the three-argument format instead:
 function singing_voice_separation(infile, outdir, tmpdir)
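For example, a submission following either format might be invoked like this (the pathnames are hypothetical):

 >> singing_voice_separation('/path/to/input/clip01.wav', '/path/to/output');
 >> % three-argument form, when scratch space is needed:
 >> singing_voice_separation('/path/to/input/clip01.wav', '/path/to/output', '/path/to/scratch');

Either call should leave clip01-voice.wav and clip01-music.wav in the output directory.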
Following the convention of other MIREX tasks, an extended abstract is also required (see MIREX 2014 Submission Instructions below).
Packaging submissions
All submissions should be statically linked to all libraries (the presence of dynamically linked libraries cannot be guaranteed).
- Be sure to follow the Best Coding Practices for MIREX.
- Be sure to follow the MIREX 2014 Submission Instructions. For example, under Very Important Things to Note, Clause 6 states that if you plan to submit more than one algorithm or algorithm variant to a given task, EACH algorithm or variant needs its own complete submission, including the README and binary bundle upload. Each package will be given its own unique identifier. In the README, tell us the priority of each algorithm in case we have to limit the task to only one or two algorithms/variants per submitter/team. [Note: our current limit is two entries per team.]
All submissions should include a README file with the following information; an illustrative example is given below:
- Command line calling format for all executables and an example formatted set of commands
- Number of threads/cores used or whether this should be specified on the command line
- Expected memory footprint
- Expected runtime
- Approximately how much scratch disk space will the submission need to store any feature/cache files?
- Any required environments/architectures (and versions), e.g. python, java, bash, matlab.
- Any special notices regarding running your algorithm
Note that the information that you place in the README file is extremely important in ensuring that your submission is evaluated properly.
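For illustration only, a README covering the points above might look like the following (every concrete value here is hypothetical):

 Command line:   singing_voice_separation <infile.wav> <outdir> [<tmpdir>]
                 example: singing_voice_separation /data/clip01.wav /results /scratch
 Threads/cores:  single-threaded
 Memory:         about 2 GB peak
 Runtime:        about 1 minute per 30-second clip
 Scratch space:  about 100 MB for cached features
 Environment:    MATLAB R2013a, 64-bit Linux
 Special notes:  none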
Time and hardware limits
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified.
A hard limit of 24 hours will be imposed on runs. Submissions that exceed this runtime may not receive a result.
Potential Participants
name / email