<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Tak-Shing+Chan</id>
	<title>MIREX Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Tak-Shing+Chan"/>
	<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/wiki/Special:Contributions/Tak-Shing_Chan"/>
	<updated>2026-04-13T19:51:23Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.1</generator>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=User:Tak-Shing_Chan&amp;diff=12066</id>
		<title>User:Tak-Shing Chan</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=User:Tak-Shing_Chan&amp;diff=12066"/>
		<updated>2017-06-21T11:00:08Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: Reverted edits by Tak-Shing Chan (talk) to last revision by Kahyun Choi&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Tak-Shing Chan received his PhD from the University of London. In 2011, he worked as a Research Associate at the Hong Kong Polytechnic University. He is currently a Postdoctoral Fellow at the Academia Sinica. His research interests include sparse coding, signal processing, music cognition, and distributed systems.&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=User:Tak-Shing_Chan&amp;diff=12065</id>
		<title>User:Tak-Shing Chan</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=User:Tak-Shing_Chan&amp;diff=12065"/>
		<updated>2017-06-21T10:56:37Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Tak-Shing T. Chan received the Ph.D. degree in computing from the University of London in 2008. From 2006 to 2008, he was a Scientific Programmer at the University of Sheffield. In 2011, he worked as a Research Associate at the Hong Kong Polytechnic University. He is currently a Postdoctoral Fellow at Academia Sinica, Taiwan.&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=User:Tak-Shing_Chan&amp;diff=12064</id>
		<title>User:Tak-Shing Chan</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=User:Tak-Shing_Chan&amp;diff=12064"/>
		<updated>2017-06-21T10:56:14Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Tak-Shing T. Chan received the Ph.D. degree in computing from the University of London in 2008. From 2006 to 2008, he was a Scientific Programmer at the University of Sheffield. In 2011, he worked as a Research Associate at the Hong Kong Polytechnic University. He is currently a Postdoctoral Fellow at Academia Sinica, Taiwan.&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2016:Singing_Voice_Separation&amp;diff=11845</id>
		<title>2016:Singing Voice Separation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2016:Singing_Voice_Separation&amp;diff=11845"/>
		<updated>2016-08-04T05:09:12Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: /* Data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
The singing voice separation task solicits competing entries to blindly separate the singer's voice from pop music recordings. The entries are evaluated using standard metrics (see Evaluation below).&lt;br /&gt;
&lt;br /&gt;
=== Task specific mailing list ===&lt;br /&gt;
&lt;br /&gt;
All discussions take place on the MIREX &amp;quot;EvalFest&amp;quot; list. If you have a question or comment, simply include the task name in the subject heading.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
&lt;br /&gt;
A collection of 100 clips of recorded pop music (vocals plus music) is used to evaluate the singing voice separation algorithms (these are the hidden parts of the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset). If your algorithm is a supervised one, you are welcome to use the public part of the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset for training. In addition, you can train with 30-second segments from the SiSEC [https://sisec.inria.fr/home/2016-professionally-produced-music-recordings/ MUS] challenge.&lt;br /&gt;
&lt;br /&gt;
Collection statistics:&lt;br /&gt;
&lt;br /&gt;
# Size of collection: 100 clips&lt;br /&gt;
# Audio details: 16-bit, mono, 44.1kHz, WAV&lt;br /&gt;
# Duration of each clip: 30 seconds&lt;br /&gt;
&lt;br /&gt;
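The collection statistics above can be sanity-checked programmatically. The following is a hypothetical Python helper (not part of the official evaluation pipeline) that uses only the standard-library wave module to confirm a clip matches the stated format; the function name check_clip is illustrative.

```python
import wave

def check_clip(path):
    """Check that a WAV file matches the stated iKala clip format:
    16-bit, mono, 44.1 kHz, 30 seconds."""
    with wave.open(path, "rb") as w:
        assert w.getsampwidth() == 2       # 16-bit samples
        assert w.getnchannels() == 1       # mono
        assert w.getframerate() == 44100   # 44.1 kHz
        duration = w.getnframes() / w.getframerate()
        assert abs(duration - 30.0) < 0.1  # 30-second clip
    return True
```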
For more information about the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset, see T.-S. Chan, T.-C. Yeh, Z.-C. Fan, H.-W. Chen, L. Su, Y.-H. Yang, and R. Jang, &amp;quot;Vocal activity informed singing voice separation with the iKala dataset,&amp;quot; in Proc. IEEE Int. Conf. Acoust., Speech and Signal Process., 2015, pp. 718-722.&lt;br /&gt;
&lt;br /&gt;
Remark. The hidden parts of the iKala dataset are quite similar to the public set, but the training set from SiSEC is quite different.&lt;br /&gt;
&lt;br /&gt;
== Evaluation ==&lt;br /&gt;
&lt;br /&gt;
For evaluation we use [http://hal.inria.fr/inria-00630985/PDF/vincent_SigPro11.pdf Vincent ''et al.'''s (2012)] Source to Distortion Ratio (SDR), Source to Interferences Ratio (SIR), and Sources to Artifacts Ratio (SAR), as implemented by [http://bass-db.gforge.inria.fr/bss_eval/bss_eval_sources.m bss_eval_sources.m] in [http://bass-db.gforge.inria.fr/bss_eval/ BSS Eval Version 3.0] (but with the permutation part removed, as the ability to classify signals is part of the singing voice separation challenge). Everything will be normalized to enable a fairer evaluation. More specifically, their function will be invoked as follows:&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; trueVoice = wavread('trueVoice.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueKaraoke = wavread('trueKaraoke.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueMixed = trueVoice + trueKaraoke;&lt;br /&gt;
 &amp;gt;&amp;gt; [estimatedVoice, estimatedKaraoke] = wrapper_function_calling_your_separation_algorithm(trueMixed);&lt;br /&gt;
 &amp;gt;&amp;gt; [SDR, SIR, SAR] = bss_eval_sources([estimatedVoice estimatedKaraoke]' / norm(estimatedVoice + estimatedKaraoke), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; [NSDR, NSIR, NSAR] = bss_eval_sources([trueMixed trueMixed]' / norm(trueMixed + trueMixed), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; NSDR = SDR - NSDR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSIR = SIR - NSIR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSAR = SAR - NSAR;&lt;br /&gt;
&lt;br /&gt;
The final scores will be determined by the mean over all 100 clips (note that GSIR and GSAR are not normalized):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GNSDR=\frac{\sum_{i=1}^{100}NSDR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSIR=\frac{\sum_{i=1}^{100}SIR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSAR=\frac{\sum_{i=1}^{100}SAR_i}{100}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition, the sd, min, max and median will be reported.&lt;br /&gt;
&lt;br /&gt;
Remark. You may assume that trueMixed is always in the range of [-1,1].&lt;br /&gt;
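The normalization and aggregation steps above can be sketched in Python. This is an illustrative sketch, not the official scoring script: it assumes the per-clip SDR values have already been computed (for the algorithm's output and for the unseparated mixture baseline), and the function names normalized_scores and summarize are hypothetical.

```python
from statistics import mean, stdev, median

def normalized_scores(sdr, baseline_sdr):
    """NSDR_i = SDR_i minus the SDR of the unseparated mixture for clip i."""
    return [s - b for s, b in zip(sdr, baseline_sdr)]

def summarize(scores):
    """GNSDR-style summary over all clips: mean, plus sd, min, max and median."""
    return {
        "mean": mean(scores),
        "sd": stdev(scores),
        "min": min(scores),
        "max": max(scores),
        "median": median(scores),
    }
```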
&lt;br /&gt;
== Submission format ==&lt;br /&gt;
&lt;br /&gt;
Participants are required to submit an entry that takes an input filename (full native pathname ending in *.wav) and an output directory as arguments. The entry must write its separated outputs to *-voice.wav and *-music.wav under the output directory. For example:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir)&lt;br /&gt;
 [~, name, ext] = fileparts(infile);&lt;br /&gt;
 your_algorithm(infile, fullfile(outdir, [name '-voice' ext]), fullfile(outdir, [name '-music' ext]));&lt;br /&gt;
 &lt;br /&gt;
 function your_algorithm(infile, voiceoutfile, musicoutfile)&lt;br /&gt;
 mixed = wavread(infile);&lt;br /&gt;
 &lt;br /&gt;
 % Insert your algorithm here&lt;br /&gt;
 &lt;br /&gt;
 wavwrite(voice, 44100, voiceoutfile);&lt;br /&gt;
 wavwrite(music, 44100, musicoutfile);&lt;br /&gt;
&lt;br /&gt;
If scratch space is required, please use the three-argument format instead:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir, tmpdir)&lt;br /&gt;
&lt;br /&gt;
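For participants not working in MATLAB, the filename convention can be reproduced with os.path equivalents of fileparts/fullfile. This is a hedged sketch of only the path-handling part; the function name output_paths is illustrative and the separation algorithm itself is left out.

```python
import os

def output_paths(infile, outdir):
    """Derive the required *-voice.wav and *-music.wav output paths
    from the input filename, mirroring the MATLAB wrapper above."""
    name, ext = os.path.splitext(os.path.basename(infile))
    voice = os.path.join(outdir, name + "-voice" + ext)
    music = os.path.join(outdir, name + "-music" + ext)
    return voice, music
```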
Following the convention of other MIREX tasks, an extended abstract is also required (see the MIREX 2016 Submission Instructions below). For supervised submissions, please provide training details (e.g. the datasets used) in the extended abstract.&lt;br /&gt;
&lt;br /&gt;
== Packaging submissions ==&lt;br /&gt;
All submissions should be statically linked to all libraries (the presence of dynamically linked libraries cannot be guaranteed).&lt;br /&gt;
# Be sure to follow the [[2006:Best Coding Practices for MIREX | Best Coding Practices for MIREX]].&lt;br /&gt;
# Be sure to follow the [[MIREX 2016 Submission Instructions]]. For example, under '''Very Important Things to Note''', Clause 6 states that if you plan to submit more than one algorithm or algorithm variant to a given task, EACH algorithm or variant needs its own complete submission to be made including the README and binary bundle upload. Each package will be given its own unique identifier. Tell us in the README the priority of a given algorithm in case we have to limit a task to only one or two algorithms/variants per submitter/team. [Note: our current limit is two entries per team.]&lt;br /&gt;
&lt;br /&gt;
All submissions should include a README file with the following information:&lt;br /&gt;
# Command line calling format for all executables and an example formatted set of commands&lt;br /&gt;
# Number of threads/cores used or whether this should be specified on the command line&lt;br /&gt;
# Expected memory footprint&lt;br /&gt;
# Expected runtime&lt;br /&gt;
# Approximate scratch disk space needed to store any feature/cache files&lt;br /&gt;
# Any required environments/architectures (and versions), e.g. python, java, bash, matlab.&lt;br /&gt;
# Any special notices regarding running your algorithm&lt;br /&gt;
&lt;br /&gt;
Note that the information that you place in the README file is '''extremely''' important in ensuring that your submission is evaluated properly.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified.&lt;br /&gt;
&lt;br /&gt;
A hard limit of 24 hours will be imposed on runs. Submissions that exceed this runtime may not receive a result. &lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
name / email&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2016:Singing_Voice_Separation&amp;diff=11844</id>
		<title>2016:Singing Voice Separation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2016:Singing_Voice_Separation&amp;diff=11844"/>
		<updated>2016-08-04T04:50:09Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: /* Evaluation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
The singing voice separation task solicits competing entries to blindly separate the singer's voice from pop music recordings. The entries are evaluated using standard metrics (see Evaluation below).&lt;br /&gt;
&lt;br /&gt;
=== Task specific mailing list ===&lt;br /&gt;
&lt;br /&gt;
All discussions take place on the MIREX &amp;quot;EvalFest&amp;quot; list. If you have a question or comment, simply include the task name in the subject heading.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
&lt;br /&gt;
A collection of 100 clips of recorded pop music (vocals plus music) is used to evaluate the singing voice separation algorithms (these are the hidden parts of the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset). If your algorithm is a supervised one, you are welcome to use the public part of the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset for training. In addition, you can train with 30-second segments from the SiSEC [https://sisec.inria.fr/home/2016-professionally-produced-music-recordings/ MUS] challenge.&lt;br /&gt;
&lt;br /&gt;
Collection statistics:&lt;br /&gt;
&lt;br /&gt;
# Size of collection: 100 clips&lt;br /&gt;
# Audio details: 16-bit, mono, 44.1kHz, WAV&lt;br /&gt;
# Duration of each clip: 30 seconds&lt;br /&gt;
&lt;br /&gt;
For more information about the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset, see T.-S. Chan, T.-C. Yeh, Z.-C. Fan, H.-W. Chen, L. Su, Y.-H. Yang, and R. Jang, &amp;quot;Vocal activity informed singing voice separation with the iKala dataset,&amp;quot; in Proc. IEEE Int. Conf. Acoust., Speech and Signal Process., 2015, pp. 718-722.&lt;br /&gt;
&lt;br /&gt;
This is a comment by Marius Miron: how does the hidden dataset differ from the known one? Supervised approaches need to know this in order to decide which transformations to include in the training part. Does it have different tempos, voices, timbres, genres, or audio amplitudes? What are the factors that you need to make your algorithm robust to?&lt;br /&gt;
&lt;br /&gt;
== Evaluation ==&lt;br /&gt;
&lt;br /&gt;
For evaluation we use [http://hal.inria.fr/inria-00630985/PDF/vincent_SigPro11.pdf Vincent ''et al.'''s (2012)] Source to Distortion Ratio (SDR), Source to Interferences Ratio (SIR), and Sources to Artifacts Ratio (SAR), as implemented by [http://bass-db.gforge.inria.fr/bss_eval/bss_eval_sources.m bss_eval_sources.m] in [http://bass-db.gforge.inria.fr/bss_eval/ BSS Eval Version 3.0] (but with the permutation part removed, as the ability to classify signals is part of the singing voice separation challenge). Everything will be normalized to enable a fairer evaluation. More specifically, their function will be invoked as follows:&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; trueVoice = wavread('trueVoice.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueKaraoke = wavread('trueKaraoke.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueMixed = trueVoice + trueKaraoke;&lt;br /&gt;
 &amp;gt;&amp;gt; [estimatedVoice, estimatedKaraoke] = wrapper_function_calling_your_separation_algorithm(trueMixed);&lt;br /&gt;
 &amp;gt;&amp;gt; [SDR, SIR, SAR] = bss_eval_sources([estimatedVoice estimatedKaraoke]' / norm(estimatedVoice + estimatedKaraoke), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; [NSDR, NSIR, NSAR] = bss_eval_sources([trueMixed trueMixed]' / norm(trueMixed + trueMixed), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; NSDR = SDR - NSDR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSIR = SIR - NSIR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSAR = SAR - NSAR;&lt;br /&gt;
&lt;br /&gt;
The final scores will be determined by the mean over all 100 clips (note that GSIR and GSAR are not normalized):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GNSDR=\frac{\sum_{i=1}^{100}NSDR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSIR=\frac{\sum_{i=1}^{100}SIR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSAR=\frac{\sum_{i=1}^{100}SAR_i}{100}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition, the sd, min, max and median will be reported.&lt;br /&gt;
&lt;br /&gt;
Remark. You may assume that trueMixed is always in the range of [-1,1].&lt;br /&gt;
&lt;br /&gt;
== Submission format ==&lt;br /&gt;
&lt;br /&gt;
Participants are required to submit an entry that takes an input filename (full native pathname ending in *.wav) and an output directory as arguments. The entry must write its separated outputs to *-voice.wav and *-music.wav under the output directory. For example:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir)&lt;br /&gt;
 [~, name, ext] = fileparts(infile);&lt;br /&gt;
 your_algorithm(infile, fullfile(outdir, [name '-voice' ext]), fullfile(outdir, [name '-music' ext]));&lt;br /&gt;
 &lt;br /&gt;
 function your_algorithm(infile, voiceoutfile, musicoutfile)&lt;br /&gt;
 mixed = wavread(infile);&lt;br /&gt;
 &lt;br /&gt;
 % Insert your algorithm here&lt;br /&gt;
 &lt;br /&gt;
 wavwrite(voice, 44100, voiceoutfile);&lt;br /&gt;
 wavwrite(music, 44100, musicoutfile);&lt;br /&gt;
&lt;br /&gt;
If scratch space is required, please use the three-argument format instead:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir, tmpdir)&lt;br /&gt;
&lt;br /&gt;
Following the convention of other MIREX tasks, an extended abstract is also required (see the MIREX 2016 Submission Instructions below). For supervised submissions, please provide training details (e.g. the datasets used) in the extended abstract.&lt;br /&gt;
&lt;br /&gt;
== Packaging submissions ==&lt;br /&gt;
All submissions should be statically linked to all libraries (the presence of dynamically linked libraries cannot be guaranteed).&lt;br /&gt;
# Be sure to follow the [[2006:Best Coding Practices for MIREX | Best Coding Practices for MIREX]].&lt;br /&gt;
# Be sure to follow the [[MIREX 2016 Submission Instructions]]. For example, under '''Very Important Things to Note''', Clause 6 states that if you plan to submit more than one algorithm or algorithm variant to a given task, EACH algorithm or variant needs its own complete submission to be made including the README and binary bundle upload. Each package will be given its own unique identifier. Tell us in the README the priority of a given algorithm in case we have to limit a task to only one or two algorithms/variants per submitter/team. [Note: our current limit is two entries per team.]&lt;br /&gt;
&lt;br /&gt;
All submissions should include a README file with the following information:&lt;br /&gt;
# Command line calling format for all executables and an example formatted set of commands&lt;br /&gt;
# Number of threads/cores used or whether this should be specified on the command line&lt;br /&gt;
# Expected memory footprint&lt;br /&gt;
# Expected runtime&lt;br /&gt;
# Approximate scratch disk space needed to store any feature/cache files&lt;br /&gt;
# Any required environments/architectures (and versions), e.g. python, java, bash, matlab.&lt;br /&gt;
# Any special notices regarding running your algorithm&lt;br /&gt;
&lt;br /&gt;
Note that the information that you place in the README file is '''extremely''' important in ensuring that your submission is evaluated properly.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified.&lt;br /&gt;
&lt;br /&gt;
A hard limit of 24 hours will be imposed on runs. Submissions that exceed this runtime may not receive a result. &lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
name / email&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2016:Singing_Voice_Separation_Results&amp;diff=11838</id>
		<title>2016:Singing Voice Separation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2016:Singing_Voice_Separation_Results&amp;diff=11838"/>
		<updated>2016-08-03T08:56:21Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: /* Legend */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
=== Description ===&lt;br /&gt;
&lt;br /&gt;
These are the results for the 2016 running of the Singing Voice Separation task set. The evaluation set is kindly provided by [http://mac.citi.sinica.edu.tw/ikala/ iKala]. If you need to cite this page, please also cite T.-S. Chan, T.-C. Yeh, Z.-C. Fan, H.-W. Chen, L. Su, Y.-H. Yang, and R. Jang, &amp;quot;Vocal activity informed singing voice separation with the iKala dataset,&amp;quot; in Proc. IEEE Int. Conf. Acoust., Speech and Signal Process., 2015, pp. 718-722. For more information about this task set please refer to the [[2016:Singing Voice Separation]] page.&lt;br /&gt;
&lt;br /&gt;
=== Legend ===&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! width=&amp;quot;80&amp;quot; | Submission code &lt;br /&gt;
	! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
	! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract PDF&lt;br /&gt;
	! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
	! width=&amp;quot;80&amp;quot; | Training set&lt;br /&gt;
	|-&lt;br /&gt;
	! GD1&lt;br /&gt;
	| Harmonic Modeling of Singing Voice for Source Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/GD1.pdf PDF] || Georgi Dzhambazov || Unknown&lt;br /&gt;
        |-&lt;br /&gt;
	! HC1&lt;br /&gt;
	| MIREX 2016 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/HC1.pdf PDF] || Yi-Chun Huang, Tai-Shih Chi || iKala&lt;br /&gt;
        |-&lt;br /&gt;
	! LCP1&lt;br /&gt;
	| Deep Clustering for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/LCP1.pdf PDF] || Yi Luo, Zhuo Chen, John R. Hershey, Jonathan Le Roux, Daniel P. W. Ellis || SiSEC&lt;br /&gt;
        |-&lt;br /&gt;
	! LCP2&lt;br /&gt;
	| Deep Clustering for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/LCP2.pdf PDF] || Yi Luo, Zhuo Chen, John R. Hershey, Jonathan Le Roux, Daniel P. W. Ellis || SiSEC&lt;br /&gt;
        |-&lt;br /&gt;
	! MC2&lt;br /&gt;
	| MIREX 2016 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | - || Marius Miron, Pritish Chandna || Unknown&lt;br /&gt;
        |-&lt;br /&gt;
	! MC3&lt;br /&gt;
	| MIREX 2016 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | - || Marius Miron, Pritish Chandna || Unknown&lt;br /&gt;
        |-&lt;br /&gt;
	! RSGP1&lt;br /&gt;
	| Singing Voice Separation Using Deep Neural Networks and F0 Estimation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/RSGP1.pdf PDF] || Gerard Roma, Emad M. Grais, Andrew J. R. Simpson, Mark D. Plumbley || iKala&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
==== Evaluation Criteria ====&lt;br /&gt;
&lt;br /&gt;
'''GNSDR''' = Global Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''NSDR''' = Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SIR''' = Signal-to-Interference Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SAR''' = Signal-to-Artifacts Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
=== Summary Results ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;sortable&amp;quot; border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! Algorithm !! Voice GNSDR (dB) !! Music GNSDR (dB) !! Runtime (m)&lt;br /&gt;
	|-&lt;br /&gt;
	| GD1 || -2.2810 || 0.3954 || 26.4413&lt;br /&gt;
	|-&lt;br /&gt;
	| HC1 || 4.6309 || 7.8180 || 28.9727&lt;br /&gt;
	|-&lt;br /&gt;
	| LCP1 || 6.0726 || 10.9256 || 37.8235&lt;br /&gt;
	|-&lt;br /&gt;
	| LCP2 || 6.3414 || 11.1878 || 32.4800&lt;br /&gt;
	|-&lt;br /&gt;
	| MC2 || 5.2891 || 9.6678 || 34.8084&lt;br /&gt;
	|-&lt;br /&gt;
	| MC3 || 5.4920 || 9.8049 || 36.7194&lt;br /&gt;
	|-&lt;br /&gt;
	| RSGP1 || 3.2589 || 8.7664 || 32.3578&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
== NSDR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/nsdr-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/nsdr-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2016-svs-nsdr.png]]&lt;br /&gt;
&lt;br /&gt;
== SIR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/sir-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/sir-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2016-svs-sir.png]]&lt;br /&gt;
&lt;br /&gt;
== SAR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/sar-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/sar-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2016-svs-sar.png]]&lt;br /&gt;
&lt;br /&gt;
== Runtime Data ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/runtime.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Singing_Voice_Separation_Results&amp;diff=11837</id>
		<title>2014:Singing Voice Separation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Singing_Voice_Separation_Results&amp;diff=11837"/>
		<updated>2016-08-03T08:42:54Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: /* Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
=== Description ===&lt;br /&gt;
&lt;br /&gt;
These are the results for the 2014 running of the Singing Voice Separation task set. The evaluation set is kindly provided by [http://mac.citi.sinica.edu.tw/ikala/ iKala]. If you need to cite this page, please also cite T.-S. Chan, T.-C. Yeh, Z.-C. Fan, H.-W. Chen, L. Su, Y.-H. Yang, and R. Jang, &amp;quot;Vocal activity informed singing voice separation with the iKala dataset,&amp;quot; in Proc. IEEE Int. Conf. Acoust., Speech and Signal Process., 2015, pp. 718-722. For more information about this task set please refer to the [[2014:Singing Voice Separation]] page.&lt;br /&gt;
&lt;br /&gt;
=== Legend ===&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! width=&amp;quot;80&amp;quot; | Submission code &lt;br /&gt;
	! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
	! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract PDF&lt;br /&gt;
	! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
	|-&lt;br /&gt;
	! GW1&lt;br /&gt;
	| Bayesian Singing-Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/GW1.pdf PDF] || Guan-Xiang Wang, Po-Kai Yang, Chung-Chien Hsu, Jen-Tzung Chien&lt;br /&gt;
        |-&lt;br /&gt;
	! HKHS1&lt;br /&gt;
	| Singing-Voice Separation using Deep Recurrent Neural Networks || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/HKHS1.pdf PDF] || Po-Sen Huang, Minje Kim, Mark Hasegawa-Johnson, Paris Smaragdis&lt;br /&gt;
	|-&lt;br /&gt;
	! HKHS2&lt;br /&gt;
	| Singing-Voice Separation using Deep Recurrent Neural Networks || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/HKHS2.pdf PDF] || Po-Sen Huang, Minje Kim, Mark Hasegawa-Johnson, Paris Smaragdis&lt;br /&gt;
        |-&lt;br /&gt;
	! HKHS3&lt;br /&gt;
	| Singing-Voice Separation using Deep Recurrent Neural Networks || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/HKHS3.pdf PDF] || Po-Sen Huang, Minje Kim, Mark Hasegawa-Johnson, Paris Smaragdis&lt;br /&gt;
	|-&lt;br /&gt;
	! IIY1&lt;br /&gt;
	| Singing Voice Separation and Vocal F0 Estimation based on Robust PCA and Subharmonic Summation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/IIY1.pdf PDF] || Yukara Ikemiya, Katsutoshi Itoyama, Kazuyoshi Yoshii&lt;br /&gt;
        |-&lt;br /&gt;
	! IIY2&lt;br /&gt;
	| Singing Voice Separation and Vocal F0 Estimation based on Robust PCA and Subharmonic Summation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/IIY2.pdf PDF] || Yukara Ikemiya, Katsutoshi Itoyama, Kazuyoshi Yoshii&lt;br /&gt;
        |-&lt;br /&gt;
	! JL1&lt;br /&gt;
	| Singing Voice Separation Based on Sparse Nature and Spectral/Temporal Discontinuity || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/JL1.pdf PDF] || Il-Young Jeong, Kyogu Lee&lt;br /&gt;
        |-&lt;br /&gt;
	! LFR1&lt;br /&gt;
	| Kernel Additive Modelling with light models || style=&amp;quot;text-align: center;&amp;quot; | [http://dx.doi.org/10.1109/ICASSP.2015.7177935 PDF] || Antoine Liutkus, Derry Fitzgerald, Zafar Rafii&lt;br /&gt;
        |-&lt;br /&gt;
	! RNA1&lt;br /&gt;
	| Singing Voice Separation using Adaptive Window Harmonic Sinusoidal Modeling || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/RNA1.pdf PDF] || Preeti Rao, Nagesh Nayak, Sharath Adavanne&lt;br /&gt;
        |-&lt;br /&gt;
	! RP1&lt;br /&gt;
	| REPET-SIM for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/RP1.pdf PDF] || Zafar Rafii, Bryan Pardo&lt;br /&gt;
        |-&lt;br /&gt;
	! YC1&lt;br /&gt;
	| MIREX 2014 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/YC1.pdf PDF] || Frederick Yen, Tai-Shih Chi&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
==== Evaluation Criteria ====&lt;br /&gt;
&lt;br /&gt;
'''GNSDR''' = Global Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''NSDR''' = Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SIR''' = Signal-to-Interference Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SAR''' = Signal-to-Artifacts Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
=== Summary Results ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;sortable&amp;quot; border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! Algorithm !! Voice GNSDR (dB) !! Music GNSDR (dB) !! Runtime (hh)&lt;br /&gt;
	|-&lt;br /&gt;
	| GW1 || 2.8861 || 5.2549 || 24&lt;br /&gt;
	|-&lt;br /&gt;
	| HKHS1 || -1.3988 || 0.3483 || 06&lt;br /&gt;
	|-&lt;br /&gt;
	| HKHS2 || -1.9413 || 0.5239 || 06&lt;br /&gt;
	|-&lt;br /&gt;
	| HKHS3 || -2.4807 || 0.1414 || 06&lt;br /&gt;
	|-&lt;br /&gt;
	| IIY1 || 4.2190 || 7.7893 || 02&lt;br /&gt;
	|-&lt;br /&gt;
	| IIY2 || 4.4764 || 7.8661 || 02&lt;br /&gt;
	|-&lt;br /&gt;
	| JL1 || 4.1564 || 5.6304 || 01&lt;br /&gt;
	|-&lt;br /&gt;
	| LFR1 || 0.6499 || 3.0867 || 03&lt;br /&gt;
	|-&lt;br /&gt;
	| RNA1 || 3.6915 || 7.3153 || 06&lt;br /&gt;
	|-&lt;br /&gt;
	| RP1 || 2.8602 || 5.0306 || 01&lt;br /&gt;
	|-&lt;br /&gt;
	| YC1 || -0.8202 || -3.1150 || 13&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
== NSDR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/nsdr-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/nsdr-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2014-svs-nsdr.png]]&lt;br /&gt;
&lt;br /&gt;
== SIR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/sir-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/sir-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2014-svs-sir.png]]&lt;br /&gt;
&lt;br /&gt;
== SAR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/sar-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/sar-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2014-svs-sar.png]]&lt;br /&gt;
&lt;br /&gt;
== Individual Spectrograms ==&lt;br /&gt;
&lt;br /&gt;
As the MIREX test set is private, we use three other songs with similar characteristics to demonstrate the algorithms.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
	|-&lt;br /&gt;
	| [[File:2014-svs-gw1.png|thumb|Spectrograms for GW1]]&lt;br /&gt;
	| [[File:2014-svs-hkhs1.png|thumb|Spectrograms for HKHS1]]&lt;br /&gt;
	| [[File:2014-svs-hkhs2.png|thumb|Spectrograms for HKHS2]]&lt;br /&gt;
	| [[File:2014-svs-hkhs3.png|thumb|Spectrograms for HKHS3]]&lt;br /&gt;
	|-&lt;br /&gt;
	| [[File:2014-svs-iiy1.png|thumb|Spectrograms for IIY1]]&lt;br /&gt;
	| [[File:2014-svs-iiy2.png|thumb|Spectrograms for IIY2]]&lt;br /&gt;
	| [[File:2014-svs-jl1.png|thumb|Spectrograms for JL1]]&lt;br /&gt;
	| [[File:2014-svs-lfr1.png|thumb|Spectrograms for LFR1]]&lt;br /&gt;
	|-&lt;br /&gt;
	| [[File:2014-svs-rna1.png|thumb|Spectrograms for RNA1]]&lt;br /&gt;
	| [[File:2014-svs-rp1.png|thumb|Spectrograms for RP1]]&lt;br /&gt;
	| [[File:2014-svs-yc1.png|thumb|Spectrograms for YC1]]&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
=== Labels ===&lt;br /&gt;
&lt;br /&gt;
'''a''' = input mixture ''x'' &amp;lt;br /&amp;gt;&lt;br /&gt;
'''b''' = ground truth voice for ''x'' &amp;lt;br /&amp;gt;&lt;br /&gt;
'''c''' = extracted voice from ''x'' &amp;lt;br /&amp;gt;&lt;br /&gt;
'''d''' = input mixture ''y'' &amp;lt;br /&amp;gt;&lt;br /&gt;
'''e''' = ground truth voice for ''y'' &amp;lt;br /&amp;gt;&lt;br /&gt;
'''f''' = extracted voice from ''y'' &amp;lt;br /&amp;gt;&lt;br /&gt;
'''g''' = input mixture ''z'' &amp;lt;br /&amp;gt;&lt;br /&gt;
'''h''' = ground truth voice for ''z'' &amp;lt;br /&amp;gt;&lt;br /&gt;
'''i''' = extracted voice from ''z'' &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Runtime Data ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/runtime.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Singing_Voice_Separation_Results&amp;diff=11836</id>
		<title>2015:Singing Voice Separation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Singing_Voice_Separation_Results&amp;diff=11836"/>
		<updated>2016-08-03T08:42:14Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: /* Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
=== Description ===&lt;br /&gt;
&lt;br /&gt;
These are the results for the 2015 running of the Singing Voice Separation task set. The evaluation set is kindly provided by [http://mac.citi.sinica.edu.tw/ikala/ iKala]. If you need to cite this page, please also cite T.-S. Chan, T.-C. Yeh, Z.-C. Fan, H.-W. Chen, L. Su, Y.-H. Yang, and R. Jang, &amp;quot;Vocal activity informed singing voice separation with the iKala dataset,&amp;quot; in Proc. IEEE Int. Conf. Acoust., Speech and Signal Process., 2015, pp. 718-722. For more information about this task set please refer to the [[2015:Singing Voice Separation]] page.&lt;br /&gt;
&lt;br /&gt;
=== Legend ===&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! width=&amp;quot;80&amp;quot; | Submission code &lt;br /&gt;
	! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
	! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract PDF&lt;br /&gt;
	! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
	|-&lt;br /&gt;
	! FJ1&lt;br /&gt;
	| MIREX 2015 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | - || Zhe-Cheng Fan, Jyh-Shing Roger Jang&lt;br /&gt;
	|-&lt;br /&gt;
	! FJ2&lt;br /&gt;
	| MIREX 2015 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | - || Zhe-Cheng Fan, Jyh-Shing Roger Jang&lt;br /&gt;
	|-&lt;br /&gt;
	! IIY3&lt;br /&gt;
	| MIREX2015: Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/IIY3.pdf PDF] || Yukara Ikemiya, Katsutoshi Itoyama, Kazuyoshi Yoshii&lt;br /&gt;
	|-&lt;br /&gt;
	! IIY4&lt;br /&gt;
	| MIREX2015: Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/IIY4.pdf PDF] || Yukara Ikemiya, Katsutoshi Itoyama, Kazuyoshi Yoshii&lt;br /&gt;
	|-&lt;br /&gt;
	! MD3&lt;br /&gt;
	| An Ensemble Method for Learning to Extract Vocals from Polyphonic Musical Audio || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/MD3.pdf PDF] || Matt McVicar, Tijl De Bie&lt;br /&gt;
	|-&lt;br /&gt;
	! MD4&lt;br /&gt;
	| An Ensemble Method for Learning to Extract Vocals from Polyphonic Musical Audio || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/MD4.pdf PDF] || Matt McVicar, Tijl De Bie&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
==== Evaluation Criteria ====&lt;br /&gt;
&lt;br /&gt;
'''GNSDR''' = Global Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''NSDR''' = Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SIR''' = Signal-to-Interference Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SAR''' = Signal-to-Artifacts Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
=== Summary Results ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;sortable&amp;quot; border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! Algorithm !! Voice GNSDR (dB) !! Music GNSDR (dB) !! Runtime (m)&lt;br /&gt;
	|-&lt;br /&gt;
	| FJ1 || 6.8236 || 10.135 || 3.4014&lt;br /&gt;
	|-&lt;br /&gt;
	| FJ2 || 6.3487 || 9.8678 || 2.9135&lt;br /&gt;
	|-&lt;br /&gt;
	| IIY3 || 4.9862 || 8.2138 || 99.1737&lt;br /&gt;
	|-&lt;br /&gt;
	| IIY4 || 5.3953 || 8.77 || 46.6621&lt;br /&gt;
	|-&lt;br /&gt;
	| MD3 || 2.9831 || 6.3671 || 121.6655&lt;br /&gt;
	|-&lt;br /&gt;
	| MD4 || 3.1022 || 7.4657 || 121.1348&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
== NSDR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/nsdr-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/nsdr-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2015-svs-nsdr.png]]&lt;br /&gt;
&lt;br /&gt;
== SIR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/sir-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/sir-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2015-svs-sir.png]]&lt;br /&gt;
&lt;br /&gt;
== SAR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/sar-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/sar-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2015-svs-sar.png]]&lt;br /&gt;
&lt;br /&gt;
== Runtime Data ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/runtime.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2016:Singing_Voice_Separation_Results&amp;diff=11835</id>
		<title>2016:Singing Voice Separation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2016:Singing_Voice_Separation_Results&amp;diff=11835"/>
		<updated>2016-08-03T08:41:29Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: /* Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
=== Description ===&lt;br /&gt;
&lt;br /&gt;
These are the results for the 2016 running of the Singing Voice Separation task set. The evaluation set is kindly provided by [http://mac.citi.sinica.edu.tw/ikala/ iKala]. If you need to cite this page, please also cite T.-S. Chan, T.-C. Yeh, Z.-C. Fan, H.-W. Chen, L. Su, Y.-H. Yang, and R. Jang, &amp;quot;Vocal activity informed singing voice separation with the iKala dataset,&amp;quot; in Proc. IEEE Int. Conf. Acoust., Speech and Signal Process., 2015, pp. 718-722. For more information about this task set please refer to the [[2016:Singing Voice Separation]] page.&lt;br /&gt;
&lt;br /&gt;
=== Legend ===&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! width=&amp;quot;80&amp;quot; | Submission code &lt;br /&gt;
	! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
	! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract PDF&lt;br /&gt;
	! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
	|-&lt;br /&gt;
	! GD1&lt;br /&gt;
	| Harmonic Modeling of Singing Voice for Source Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/GD1.pdf PDF] || Georgi Dzhambazov&lt;br /&gt;
	|-&lt;br /&gt;
	! HC1&lt;br /&gt;
	| MIREX 2016 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/HC1.pdf PDF] || Yi-Chun Huang, Tai-Shih Chi&lt;br /&gt;
	|-&lt;br /&gt;
	! LCP1&lt;br /&gt;
	| Deep Clustering for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/LCP1.pdf PDF] || Yi Luo, Zhuo Chen, John R. Hershey, Jonathan Le Roux, Daniel P. W. Ellis&lt;br /&gt;
	|-&lt;br /&gt;
	! LCP2&lt;br /&gt;
	| Deep Clustering for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/LCP2.pdf PDF] || Yi Luo, Zhuo Chen, John R. Hershey, Jonathan Le Roux, Daniel P. W. Ellis&lt;br /&gt;
	|-&lt;br /&gt;
	! MC2&lt;br /&gt;
	| MIREX 2016 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | - || Marius Miron, Pritish Chandna&lt;br /&gt;
	|-&lt;br /&gt;
	! MC3&lt;br /&gt;
	| MIREX 2016 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | - || Marius Miron, Pritish Chandna&lt;br /&gt;
	|-&lt;br /&gt;
	! RSGP1&lt;br /&gt;
	| Singing Voice Separation Using Deep Neural Networks and F0 Estimation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/RSGP1.pdf PDF] || Gerard Roma, Emad M. Grais, Andrew J. R. Simpson, Mark D. Plumbley&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
==== Evaluation Criteria ====&lt;br /&gt;
&lt;br /&gt;
'''GNSDR''' = Global Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''NSDR''' = Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SIR''' = Signal-to-Interference Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SAR''' = Signal-to-Artifacts Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
=== Summary Results ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;sortable&amp;quot; border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! Algorithm !! Voice GNSDR (dB) !! Music GNSDR (dB) !! Runtime (m)&lt;br /&gt;
	|-&lt;br /&gt;
	| GD1 || -2.2810 || 0.3954 || 26.4413&lt;br /&gt;
	|-&lt;br /&gt;
	| HC1 || 4.6309 || 7.8180 || 28.9727&lt;br /&gt;
	|-&lt;br /&gt;
	| LCP1 || 6.0726 || 10.9256 || 37.8235&lt;br /&gt;
	|-&lt;br /&gt;
	| LCP2 || 6.3414 || 11.1878 || 32.4800&lt;br /&gt;
	|-&lt;br /&gt;
	| MC2 || 5.2891 || 9.6678 || 34.8084&lt;br /&gt;
	|-&lt;br /&gt;
	| MC3 || 5.4920 || 9.8049 || 36.7194&lt;br /&gt;
	|-&lt;br /&gt;
	| RSGP1 || 3.2589 || 8.7664 || 32.3578&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
== NSDR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/nsdr-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/nsdr-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2016-svs-nsdr.png]]&lt;br /&gt;
&lt;br /&gt;
== SIR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/sir-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/sir-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2016-svs-sir.png]]&lt;br /&gt;
&lt;br /&gt;
== SAR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/sar-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/sar-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2016-svs-sar.png]]&lt;br /&gt;
&lt;br /&gt;
== Runtime Data ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/runtime.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2016:Singing_Voice_Separation_Results&amp;diff=11801</id>
		<title>2016:Singing Voice Separation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2016:Singing_Voice_Separation_Results&amp;diff=11801"/>
		<updated>2016-08-01T08:23:42Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
=== Description ===&lt;br /&gt;
&lt;br /&gt;
These are the results for the 2016 running of the Singing Voice Separation task set. For more information about this task set please refer to the [[2016:Singing Voice Separation]] page.&lt;br /&gt;
&lt;br /&gt;
=== Legend ===&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! width=&amp;quot;80&amp;quot; | Submission code &lt;br /&gt;
	! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
	! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract PDF&lt;br /&gt;
	! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
	|-&lt;br /&gt;
	! GD1&lt;br /&gt;
	| Harmonic Modeling of Singing Voice for Source Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/GD1.pdf PDF] || Georgi Dzhambazov&lt;br /&gt;
	|-&lt;br /&gt;
	! HC1&lt;br /&gt;
	| MIREX 2016 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/HC1.pdf PDF] || Yi-Chun Huang, Tai-Shih Chi&lt;br /&gt;
	|-&lt;br /&gt;
	! LCP1&lt;br /&gt;
	| Deep Clustering for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/LCP1.pdf PDF] || Yi Luo, Zhuo Chen, John R. Hershey, Jonathan Le Roux, Daniel P. W. Ellis&lt;br /&gt;
	|-&lt;br /&gt;
	! LCP2&lt;br /&gt;
	| Deep Clustering for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/LCP2.pdf PDF] || Yi Luo, Zhuo Chen, John R. Hershey, Jonathan Le Roux, Daniel P. W. Ellis&lt;br /&gt;
	|-&lt;br /&gt;
	! MC2&lt;br /&gt;
	| MIREX 2016 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | - || Marius Miron, Pritish Chandna&lt;br /&gt;
	|-&lt;br /&gt;
	! MC3&lt;br /&gt;
	| MIREX 2016 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | - || Marius Miron, Pritish Chandna&lt;br /&gt;
	|-&lt;br /&gt;
	! RSGP1&lt;br /&gt;
	| Singing Voice Separation Using Deep Neural Networks and F0 Estimation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/RSGP1.pdf PDF] || Gerard Roma, Emad M. Grais, Andrew J. R. Simpson, Mark D. Plumbley&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
==== Evaluation Criteria ====&lt;br /&gt;
&lt;br /&gt;
'''GNSDR''' = Global Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''NSDR''' = Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SIR''' = Signal-to-Interference Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SAR''' = Signal-to-Artifacts Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
=== Summary Results ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;sortable&amp;quot; border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! Algorithm !! Voice GNSDR (dB) !! Music GNSDR (dB) !! Runtime (m)&lt;br /&gt;
	|-&lt;br /&gt;
	| GD1 || -2.2810 || 0.3954 || 26.4413&lt;br /&gt;
	|-&lt;br /&gt;
	| HC1 || 4.6309 || 7.8180 || 28.9727&lt;br /&gt;
	|-&lt;br /&gt;
	| LCP1 || 6.0726 || 10.9256 || 37.8235&lt;br /&gt;
	|-&lt;br /&gt;
	| LCP2 || 6.3414 || 11.1878 || 32.4800&lt;br /&gt;
	|-&lt;br /&gt;
	| MC2 || 5.2891 || 9.6678 || 34.8084&lt;br /&gt;
	|-&lt;br /&gt;
	| MC3 || 5.4920 || 9.8049 || 36.7194&lt;br /&gt;
	|-&lt;br /&gt;
	| RSGP1 || 3.2589 || 8.7664 || 32.3578&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
== NSDR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/nsdr-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/nsdr-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2016-svs-nsdr.png]]&lt;br /&gt;
&lt;br /&gt;
== SIR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/sir-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/sir-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2016-svs-sir.png]]&lt;br /&gt;
&lt;br /&gt;
== SAR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/sar-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/sar-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2016-svs-sar.png]]&lt;br /&gt;
&lt;br /&gt;
== Runtime Data ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/runtime.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2016:Singing_Voice_Separation_Results&amp;diff=11800</id>
		<title>2016:Singing Voice Separation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2016:Singing_Voice_Separation_Results&amp;diff=11800"/>
		<updated>2016-08-01T08:20:49Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
=== Description ===&lt;br /&gt;
&lt;br /&gt;
These are the results for the 2016 running of the Singing Voice Separation task set. For more information about this task set please refer to the [[2016:Singing Voice Separation]] page.&lt;br /&gt;
&lt;br /&gt;
=== Legend ===&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! width=&amp;quot;80&amp;quot; | Submission code &lt;br /&gt;
	! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
	! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract PDF&lt;br /&gt;
	! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
	|-&lt;br /&gt;
	! GD1&lt;br /&gt;
	| Harmonic Modeling of Singing Voice for Source Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/GD1.pdf PDF] || Georgi Dzhambazov&lt;br /&gt;
	|-&lt;br /&gt;
	! HC1&lt;br /&gt;
	| MIREX 2016 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/HC1.pdf PDF] || Yi-Chun Huang, Tai-Shih Chi&lt;br /&gt;
	|-&lt;br /&gt;
	! LCP1&lt;br /&gt;
	| Deep Clustering for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/LCP1.pdf PDF] || Yi Luo, Zhuo Chen, John R. Hershey, Jonathan Le Roux, Daniel P. W. Ellis&lt;br /&gt;
	|-&lt;br /&gt;
	! LCP2&lt;br /&gt;
	| Deep Clustering for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/LCP2.pdf PDF] || Yi Luo, Zhuo Chen, John R. Hershey, Jonathan Le Roux, Daniel P. W. Ellis&lt;br /&gt;
	|-&lt;br /&gt;
	! MC2&lt;br /&gt;
	| MIREX 2016 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | - || Marius Miron, Pritish Chandna&lt;br /&gt;
	|-&lt;br /&gt;
	! MC3&lt;br /&gt;
	| MIREX 2016 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | - || Marius Miron, Pritish Chandna&lt;br /&gt;
	|-&lt;br /&gt;
	! RSGP1&lt;br /&gt;
	| Singing Voice Separation using Deep Neural Networks and F0 Estimation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/RSGP1.pdf PDF] || Gerard Roma, Emad M. Grais, Andrew J. R. Simpson, Mark D. Plumbley&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
==== Evaluation Criteria ====&lt;br /&gt;
&lt;br /&gt;
'''GNSDR''' = Global Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''NSDR''' = Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SIR''' = Signal-to-Interference Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SAR''' = Signal-to-Artifacts Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
=== Summary Results ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;sortable&amp;quot; border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! Algorithm !! Voice GNSDR (dB) !! Music GNSDR (dB) !! Runtime (m)&lt;br /&gt;
	|-&lt;br /&gt;
	| GD1 || -2.2810 || 0.3954 || 26.4413&lt;br /&gt;
	|-&lt;br /&gt;
	| HC1 || 4.6309 || 7.8180 || 28.9727&lt;br /&gt;
	|-&lt;br /&gt;
	| LCP1 || 6.0726 || 10.9256 || 37.8235&lt;br /&gt;
	|-&lt;br /&gt;
	| LCP2 || 6.3414 || 11.1878 || 32.4800&lt;br /&gt;
	|-&lt;br /&gt;
	| MC2 || 5.2891 || 9.6678 || 34.8084&lt;br /&gt;
	|-&lt;br /&gt;
	| MC3 || 5.4920 || 9.8049 || 36.7194&lt;br /&gt;
	|-&lt;br /&gt;
	| RSGP1 || 3.2589 || 8.7664 || 32.3578&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
== NSDR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/nsdr-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/nsdr-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2016-svs-nsdr.png]]&lt;br /&gt;
&lt;br /&gt;
== SIR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/sir-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/sir-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2016-svs-sir.png]]&lt;br /&gt;
&lt;br /&gt;
== SAR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/sar-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/sar-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2016-svs-sar.png]]&lt;br /&gt;
&lt;br /&gt;
== Runtime Data ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/runtime.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:2016-svs-sar.png&amp;diff=11799</id>
		<title>File:2016-svs-sar.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:2016-svs-sar.png&amp;diff=11799"/>
		<updated>2016-08-01T08:04:16Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: SAR&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;SAR&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:2016-svs-sir.png&amp;diff=11798</id>
		<title>File:2016-svs-sir.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:2016-svs-sir.png&amp;diff=11798"/>
		<updated>2016-08-01T08:03:51Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: SIR&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;SIR&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:2016-svs-nsdr.png&amp;diff=11797</id>
		<title>File:2016-svs-nsdr.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:2016-svs-nsdr.png&amp;diff=11797"/>
		<updated>2016-08-01T08:03:22Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: NSDR&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;NSDR&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2016:Singing_Voice_Separation_Results&amp;diff=11796</id>
		<title>2016:Singing Voice Separation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2016:Singing_Voice_Separation_Results&amp;diff=11796"/>
		<updated>2016-08-01T08:01:50Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: Created page with &amp;quot;== Introduction ==  === Description ===  These are the results for the 2016 running of the Singing Voice Separation task set. For more information about this task set please refe...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
=== Description ===&lt;br /&gt;
&lt;br /&gt;
These are the results for the 2016 running of the Singing Voice Separation task set. For more information about this task set please refer to the [[2016:Singing Voice Separation]] page.&lt;br /&gt;
&lt;br /&gt;
=== Legend ===&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! width=&amp;quot;80&amp;quot; | Submission code &lt;br /&gt;
	! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
	! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract PDF&lt;br /&gt;
	! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
	|-&lt;br /&gt;
	! GD1&lt;br /&gt;
	| Harmonic Modeling of Singing Voice for Source Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/GD1.pdf PDF] || Georgi Dzhambazov&lt;br /&gt;
	|-&lt;br /&gt;
	! HC1&lt;br /&gt;
	| MIREX 2016 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/HC1.pdf PDF] || Yi-Chun Huang, Tai-Shih Chi&lt;br /&gt;
	|-&lt;br /&gt;
	! LCP1&lt;br /&gt;
	| Deep Clustering for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/LCP1.pdf PDF] || Yi Luo, Zhuo Chen, John R. Hershey, Jonathan Le Roux, Daniel P. W. Ellis&lt;br /&gt;
	|-&lt;br /&gt;
	! LCP2&lt;br /&gt;
	| Deep Clustering for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/LCP2.pdf PDF] || Yi Luo, Zhuo Chen, John R. Hershey, Jonathan Le Roux, Daniel P. W. Ellis&lt;br /&gt;
	|-&lt;br /&gt;
	! MC2&lt;br /&gt;
	| MIREX 2016 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | - || Marius Miron, Pritish Chandna&lt;br /&gt;
	|-&lt;br /&gt;
	! MC3&lt;br /&gt;
	| MIREX 2016 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | - || Marius Miron, Pritish Chandna&lt;br /&gt;
	|-&lt;br /&gt;
	! RSGP1&lt;br /&gt;
	| MIREX || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/RSGP1.pdf PDF] || Gerard Roma, Emad M. Grais, Andrew J. R. Simpson, Mark D. Plumbley&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
==== Evaluation Criteria ====&lt;br /&gt;
&lt;br /&gt;
'''GNSDR''' = Global Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''NSDR''' = Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SIR''' = Signal-to-Interference Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SAR''' = Signal-to-Artifacts Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
=== Summary Results ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;sortable&amp;quot; border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! Algorithm !! Voice GNSDR (dB) !! Music GNSDR (dB) !! Runtime (m)&lt;br /&gt;
	|-&lt;br /&gt;
	| GD1 || -2.2810 || 0.3954 || 26.4413&lt;br /&gt;
	|-&lt;br /&gt;
	| HC1 || 4.6309 || 7.8180 || 28.9727&lt;br /&gt;
	|-&lt;br /&gt;
	| LCP1 || 6.0726 || 10.9256 || 37.8235&lt;br /&gt;
	|-&lt;br /&gt;
	| LCP2 || 6.3414 || 11.1878 || 32.4800&lt;br /&gt;
	|-&lt;br /&gt;
	| MC2 || 5.2891 || 9.6678 || 34.8084&lt;br /&gt;
	|-&lt;br /&gt;
	| MC3 || 5.4920 || 9.8049 || 36.7194&lt;br /&gt;
	|-&lt;br /&gt;
	| RSGP1 || 3.2589 || 8.7664 || 32.3578&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
== NSDR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/nsdr-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/nsdr-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2016-svs-nsdr.png]]&lt;br /&gt;
&lt;br /&gt;
== SIR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/sir-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/sir-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2016-svs-sir.png]]&lt;br /&gt;
&lt;br /&gt;
== SAR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/sar-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/sar-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2016-svs-sar.png]]&lt;br /&gt;
&lt;br /&gt;
== Runtime Data ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/svs/runtime.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2016:Singing_Voice_Separation&amp;diff=11711</id>
		<title>2016:Singing Voice Separation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2016:Singing_Voice_Separation&amp;diff=11711"/>
		<updated>2016-04-29T07:54:45Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: /* Data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
The singing voice separation task solicits competing entries to blindly separate the singer's voice from pop music recordings. The entries are evaluated using standard metrics (see Evaluation below).&lt;br /&gt;
&lt;br /&gt;
=== Task specific mailing list ===&lt;br /&gt;
&lt;br /&gt;
All discussions take place on the MIREX &amp;quot;EvalFest&amp;quot; list. If you have a question or comment, simply include the task name in the subject heading.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
&lt;br /&gt;
A collection of 100 clips of recorded pop music (vocals plus music) is used to evaluate the singing voice separation algorithms (these are the hidden parts of the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset). If your algorithm is a supervised one, you are welcome to use the public part of the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset for training. In addition, you can train with 30-second segments from the SiSEC [https://sisec.inria.fr/home/2016-professionally-produced-music-recordings/ MUS] challenge.&lt;br /&gt;
&lt;br /&gt;
Collection statistics:&lt;br /&gt;
&lt;br /&gt;
# Size of collection: 100 clips&lt;br /&gt;
# Audio details: 16-bit, mono, 44.1kHz, WAV&lt;br /&gt;
# Duration of each clip: 30 seconds&lt;br /&gt;
&lt;br /&gt;
For more information about the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset, see T.-S. Chan, T.-C. Yeh, Z.-C. Fan, H.-W. Chen, L. Su, Y.-H. Yang, and R. Jang, &amp;quot;Vocal activity informed singing voice separation with the iKala dataset,&amp;quot; in Proc. IEEE Int. Conf. Acoust., Speech and Signal Process., 2015, pp. 718-722.&lt;br /&gt;
&lt;br /&gt;
== Evaluation ==&lt;br /&gt;
&lt;br /&gt;
For evaluation we use [http://hal.inria.fr/inria-00630985/PDF/vincent_SigPro11.pdf Vincent ''et al.'''s (2012)] Source to Distortion Ratio (SDR), Source to Interferences Ratio (SIR), and Sources to Artifacts Ratio (SAR), as implemented by [http://bass-db.gforge.inria.fr/bss_eval/bss_eval_sources.m bss_eval_sources.m] in [http://bass-db.gforge.inria.fr/bss_eval/ BSS Eval Version 3.0] (but with the permutation part removed, as the ability to classify signals is part of the singing voice separation challenge). Everything will be normalized to enable a fairer evaluation. More specifically, their function will be invoked as follows:&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; trueVoice = wavread('trueVoice.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueKaraoke = wavread('trueKaraoke.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueMixed = trueVoice + trueKaraoke;&lt;br /&gt;
 &amp;gt;&amp;gt; [estimatedVoice, estimatedKaraoke] = wrapper_function_calling_your_separation_algorithm(trueMixed);&lt;br /&gt;
 &amp;gt;&amp;gt; [SDR, SIR, SAR] = bss_eval_sources([estimatedVoice estimatedKaraoke]' / norm(estimatedVoice + estimatedKaraoke), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; [NSDR, NSIR, NSAR] = bss_eval_sources([trueMixed trueMixed]' / norm(trueMixed + trueMixed), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; NSDR = SDR - NSDR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSIR = SIR - NSIR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSAR = SAR - NSAR;&lt;br /&gt;
&lt;br /&gt;
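In other words, the normalization subtracts the score obtained when the unprocessed mixture itself is used as the estimate of each source. Writing &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; for the mixture, &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; for a true source, and &amp;lt;math&amp;gt;\hat{v}&amp;lt;/math&amp;gt; for its estimate, the code above computes&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;NSDR(\hat{v},v,x)=SDR(\hat{v},v)-SDR(x,v)&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
so a positive NSDR indicates an improvement over leaving the mixture unseparated.&lt;br /&gt;
&lt;br /&gt;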
The final scores will be determined by the mean over all 100 clips (note that GSIR and GSAR are not normalized):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GNSDR=\frac{\sum_{i=1}^{100}NSDR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSIR=\frac{\sum_{i=1}^{100}SIR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSAR=\frac{\sum_{i=1}^{100}SAR_i}{100}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition, sd, min, max and median will also be reported.&lt;br /&gt;
&lt;br /&gt;
== Submission format ==&lt;br /&gt;
&lt;br /&gt;
Participants are required to submit an entry that takes an input filename (full native pathname ending in *.wav) and an output directory as arguments. The entry must write its separated outputs to *-voice.wav and *-music.wav under the output directory. For example:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir)&lt;br /&gt;
 [~, name, ext] = fileparts(infile);&lt;br /&gt;
 your_algorithm(infile, fullfile(outdir, [name '-voice' ext]), fullfile(outdir, [name '-music' ext]));&lt;br /&gt;
 &lt;br /&gt;
 function your_algorithm(infile, voiceoutfile, musicoutfile)&lt;br /&gt;
 mixed = wavread(infile);&lt;br /&gt;
 &lt;br /&gt;
 % Insert your algorithm here&lt;br /&gt;
 &lt;br /&gt;
 wavwrite(voice, 44100, voiceoutfile);&lt;br /&gt;
 wavwrite(music, 44100, musicoutfile);&lt;br /&gt;
&lt;br /&gt;
If scratch space is required, please use the three-argument format instead:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir, tmpdir)&lt;br /&gt;
&lt;br /&gt;
Following the convention of other MIREX tasks, an extended abstract is also required (see MIREX 2016 Submission Instructions below). For supervised submissions, please provide training details in the extended abstract (e.g. datasets used).&lt;br /&gt;
&lt;br /&gt;
== Packaging submissions ==&lt;br /&gt;
All submissions should be statically linked to all libraries (the presence of dynamically linked libraries cannot be guaranteed).&lt;br /&gt;
# Be sure to follow the [[2006:Best Coding Practices for MIREX | Best Coding Practices for MIREX]].&lt;br /&gt;
# Be sure to follow the [[MIREX 2016 Submission Instructions]]. For example, under '''Very Important Things to Note''', Clause 6 states that if you plan to submit more than one algorithm or algorithm variant to a given task, EACH algorithm or variant needs its own complete submission to be made including the README and binary bundle upload. Each package will be given its own unique identifier. Tell us in the README the priority of a given algorithm in case we have to limit a task to only one or two algorithms/variants per submitter/team. [Note: our current limit is two entries per team.]&lt;br /&gt;
&lt;br /&gt;
All submissions should include a README file with the following information:&lt;br /&gt;
# Command line calling format for all executables and an example formatted set of commands&lt;br /&gt;
# Number of threads/cores used or whether this should be specified on the command line&lt;br /&gt;
# Expected memory footprint&lt;br /&gt;
# Expected runtime&lt;br /&gt;
# Approximately how much scratch disk space will the submission need to store any feature/cache files?&lt;br /&gt;
# Any required environments/architectures (and versions), e.g. python, java, bash, matlab.&lt;br /&gt;
# Any special notice regarding running your algorithm&lt;br /&gt;
&lt;br /&gt;
Note that the information that you place in the README file is '''extremely''' important in ensuring that your submission is evaluated properly.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified.&lt;br /&gt;
&lt;br /&gt;
A hard limit of 24 hours will be imposed on runs. Submissions that exceed this runtime may not receive a result. &lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
name / email&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Singing_Voice_Separation&amp;diff=11710</id>
		<title>2015:Singing Voice Separation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Singing_Voice_Separation&amp;diff=11710"/>
		<updated>2016-04-29T07:52:53Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: /* Data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
The singing voice separation task solicits competing entries to blindly separate the singer's voice from pop music recordings. The entries are evaluated using standard metrics (see Evaluation below).&lt;br /&gt;
&lt;br /&gt;
=== Task specific mailing list ===&lt;br /&gt;
&lt;br /&gt;
All discussions take place on the MIREX &amp;quot;EvalFest&amp;quot; list. If you have a question or comment, simply include the task name in the subject heading.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
&lt;br /&gt;
A collection of 100 clips of recorded pop music (vocals plus music) is used to evaluate the singing voice separation algorithms (these are the hidden parts of the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset). If your algorithm is a supervised one, you are welcome to use the public part of the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset for training. In addition, you can train with 30-second segments from the SiSEC [https://sisec.inria.fr/sisec-2015/2015-professionally-produced-music-recordings/ MUS] challenge.&lt;br /&gt;
&lt;br /&gt;
Collection statistics:&lt;br /&gt;
&lt;br /&gt;
# Size of collection: 100 clips&lt;br /&gt;
# Audio details: 16-bit, mono, 44.1kHz, WAV&lt;br /&gt;
# Duration of each clip: 30 seconds&lt;br /&gt;
&lt;br /&gt;
For more information about the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset, see T.-S. Chan, T.-C. Yeh, Z.-C. Fan, H.-W. Chen, L. Su, Y.-H. Yang, and R. Jang, &amp;quot;Vocal activity informed singing voice separation with the iKala dataset,&amp;quot; in Proc. IEEE Int. Conf. Acoust., Speech and Signal Process., 2015, pp. 718-722.&lt;br /&gt;
&lt;br /&gt;
== Evaluation ==&lt;br /&gt;
&lt;br /&gt;
For evaluation we use [http://hal.inria.fr/inria-00630985/PDF/vincent_SigPro11.pdf Vincent ''et al.'''s (2012)] Source to Distortion Ratio (SDR), Source to Interferences Ratio (SIR), and Sources to Artifacts Ratio (SAR), as implemented by [http://bass-db.gforge.inria.fr/bss_eval/bss_eval_sources.m bss_eval_sources.m] in [http://bass-db.gforge.inria.fr/bss_eval/ BSS Eval Version 3.0] (but with the permutation part removed, as the ability to classify signals is part of the singing voice separation challenge). Everything will be normalized to enable a fairer evaluation. More specifically, their function will be invoked as follows:&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; trueVoice = wavread('trueVoice.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueKaraoke = wavread('trueKaraoke.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueMixed = trueVoice + trueKaraoke;&lt;br /&gt;
 &amp;gt;&amp;gt; [estimatedVoice, estimatedKaraoke] = wrapper_function_calling_your_separation_algorithm(trueMixed);&lt;br /&gt;
 &amp;gt;&amp;gt; [SDR, SIR, SAR] = bss_eval_sources([estimatedVoice estimatedKaraoke]' / norm(estimatedVoice + estimatedKaraoke), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; [NSDR, NSIR, NSAR] = bss_eval_sources([trueMixed trueMixed]' / norm(trueMixed + trueMixed), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; NSDR = SDR - NSDR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSIR = SIR - NSIR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSAR = SAR - NSAR;&lt;br /&gt;
&lt;br /&gt;
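In other words, the normalization subtracts the score obtained when the unprocessed mixture itself is used as the estimate of each source. Writing &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; for the mixture, &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; for a true source, and &amp;lt;math&amp;gt;\hat{v}&amp;lt;/math&amp;gt; for its estimate, the code above computes&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;NSDR(\hat{v},v,x)=SDR(\hat{v},v)-SDR(x,v)&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
so a positive NSDR indicates an improvement over leaving the mixture unseparated.&lt;br /&gt;
&lt;br /&gt;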
The final scores will be determined by the mean over all 100 clips (note that GSIR and GSAR are not normalized):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GNSDR=\frac{\sum_{i=1}^{100}NSDR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSIR=\frac{\sum_{i=1}^{100}SIR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSAR=\frac{\sum_{i=1}^{100}SAR_i}{100}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition, sd, min, max and median will also be reported.&lt;br /&gt;
&lt;br /&gt;
== Submission format ==&lt;br /&gt;
&lt;br /&gt;
Participants are required to submit an entry that takes an input filename (full native pathname ending in *.wav) and an output directory as arguments. The entry must write its separated outputs to *-voice.wav and *-music.wav under the output directory. For example:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir)&lt;br /&gt;
 [~, name, ext] = fileparts(infile);&lt;br /&gt;
 your_algorithm(infile, fullfile(outdir, [name '-voice' ext]), fullfile(outdir, [name '-music' ext]));&lt;br /&gt;
 &lt;br /&gt;
 function your_algorithm(infile, voiceoutfile, musicoutfile)&lt;br /&gt;
 mixed = wavread(infile);&lt;br /&gt;
 &lt;br /&gt;
 % Insert your algorithm here&lt;br /&gt;
 &lt;br /&gt;
 wavwrite(voice, 44100, voiceoutfile);&lt;br /&gt;
 wavwrite(music, 44100, musicoutfile);&lt;br /&gt;
&lt;br /&gt;
If scratch space is required, please use the three-argument format instead:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir, tmpdir)&lt;br /&gt;
&lt;br /&gt;
Following the convention of other MIREX tasks, an extended abstract is also required (see MIREX 2015 Submission Instructions below). For supervised submissions, please provide training details in the extended abstract (e.g. datasets used).&lt;br /&gt;
&lt;br /&gt;
== Packaging submissions ==&lt;br /&gt;
All submissions should be statically linked to all libraries (the presence of dynamically linked libraries cannot be guaranteed).&lt;br /&gt;
# Be sure to follow the [[2006:Best Coding Practices for MIREX | Best Coding Practices for MIREX]].&lt;br /&gt;
# Be sure to follow the [[MIREX 2015 Submission Instructions]]. For example, under '''Very Important Things to Note''', Clause 6 states that if you plan to submit more than one algorithm or algorithm variant to a given task, EACH algorithm or variant needs its own complete submission to be made including the README and binary bundle upload. Each package will be given its own unique identifier. Tell us in the README the priority of a given algorithm in case we have to limit a task to only one or two algorithms/variants per submitter/team. [Note: our current limit is two entries per team.]&lt;br /&gt;
&lt;br /&gt;
All submissions should include a README file with the following information:&lt;br /&gt;
# Command line calling format for all executables and an example formatted set of commands&lt;br /&gt;
# Number of threads/cores used or whether this should be specified on the command line&lt;br /&gt;
# Expected memory footprint&lt;br /&gt;
# Expected runtime&lt;br /&gt;
# Approximately how much scratch disk space will the submission need to store any feature/cache files?&lt;br /&gt;
# Any required environments/architectures (and versions), e.g. python, java, bash, matlab.&lt;br /&gt;
# Any special notice regarding running your algorithm&lt;br /&gt;
&lt;br /&gt;
Note that the information that you place in the README file is '''extremely''' important in ensuring that your submission is evaluated properly.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified.&lt;br /&gt;
&lt;br /&gt;
A hard limit of 24 hours will be imposed on runs. Submissions that exceed this runtime may not receive a result. &lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
name / email&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2016:Singing_Voice_Separation&amp;diff=11709</id>
		<title>2016:Singing Voice Separation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2016:Singing_Voice_Separation&amp;diff=11709"/>
		<updated>2016-04-29T07:44:11Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: /* Data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
The singing voice separation task solicits competing entries to blindly separate the singer's voice from pop music recordings. The entries are evaluated using standard metrics (see Evaluation below).&lt;br /&gt;
&lt;br /&gt;
=== Task specific mailing list ===&lt;br /&gt;
&lt;br /&gt;
All discussions take place on the MIREX &amp;quot;EvalFest&amp;quot; list. If you have a question or comment, simply include the task name in the subject heading.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
&lt;br /&gt;
A collection of 100 clips of recorded pop music (vocals plus music) is used to evaluate the singing voice separation algorithms (these are the hidden parts of the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset). If your algorithm is a supervised one, you are welcome to use the public part of the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset for training. In addition, you can train with 30-second segments from the SiSEC [https://sisec.inria.fr/professionally-produced-music-recordings/ MUS] challenge.&lt;br /&gt;
&lt;br /&gt;
Collection statistics:&lt;br /&gt;
&lt;br /&gt;
# Size of collection: 100 clips&lt;br /&gt;
# Audio details: 16-bit, mono, 44.1kHz, WAV&lt;br /&gt;
# Duration of each clip: 30 seconds&lt;br /&gt;
&lt;br /&gt;
For more information about the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset, see T.-S. Chan, T.-C. Yeh, Z.-C. Fan, H.-W. Chen, L. Su, Y.-H. Yang, and R. Jang, &amp;quot;Vocal activity informed singing voice separation with the iKala dataset,&amp;quot; in Proc. IEEE Int. Conf. Acoust., Speech and Signal Process., 2015, pp. 718-722.&lt;br /&gt;
&lt;br /&gt;
== Evaluation ==&lt;br /&gt;
&lt;br /&gt;
For evaluation we use [http://hal.inria.fr/inria-00630985/PDF/vincent_SigPro11.pdf Vincent ''et al.'''s (2012)] Source to Distortion Ratio (SDR), Source to Interferences Ratio (SIR), and Sources to Artifacts Ratio (SAR), as implemented by [http://bass-db.gforge.inria.fr/bss_eval/bss_eval_sources.m bss_eval_sources.m] in [http://bass-db.gforge.inria.fr/bss_eval/ BSS Eval Version 3.0] (but with the permutation part removed, as the ability to classify signals is part of the singing voice separation challenge). Everything will be normalized to enable a fairer evaluation. More specifically, their function will be invoked as follows:&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; trueVoice = wavread('trueVoice.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueKaraoke = wavread('trueKaraoke.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueMixed = trueVoice + trueKaraoke;&lt;br /&gt;
 &amp;gt;&amp;gt; [estimatedVoice, estimatedKaraoke] = wrapper_function_calling_your_separation_algorithm(trueMixed);&lt;br /&gt;
 &amp;gt;&amp;gt; [SDR, SIR, SAR] = bss_eval_sources([estimatedVoice estimatedKaraoke]' / norm(estimatedVoice + estimatedKaraoke), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; [NSDR, NSIR, NSAR] = bss_eval_sources([trueMixed trueMixed]' / norm(trueMixed + trueMixed), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; NSDR = SDR - NSDR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSIR = SIR - NSIR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSAR = SAR - NSAR;&lt;br /&gt;
&lt;br /&gt;
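In other words, the normalization subtracts the score obtained when the unprocessed mixture itself is used as the estimate of each source. Writing &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; for the mixture, &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; for a true source, and &amp;lt;math&amp;gt;\hat{v}&amp;lt;/math&amp;gt; for its estimate, the code above computes&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;NSDR(\hat{v},v,x)=SDR(\hat{v},v)-SDR(x,v)&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
so a positive NSDR indicates an improvement over leaving the mixture unseparated.&lt;br /&gt;
&lt;br /&gt;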
The final scores will be determined by the mean over all 100 clips (note that GSIR and GSAR are not normalized):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GNSDR=\frac{\sum_{i=1}^{100}NSDR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSIR=\frac{\sum_{i=1}^{100}SIR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSAR=\frac{\sum_{i=1}^{100}SAR_i}{100}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition, sd, min, max and median will also be reported.&lt;br /&gt;
&lt;br /&gt;
== Submission format ==&lt;br /&gt;
&lt;br /&gt;
Participants are required to submit an entry that takes an input filename (full native pathname ending in *.wav) and an output directory as arguments. The entry must write its separated outputs to *-voice.wav and *-music.wav under the output directory. For example:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir)&lt;br /&gt;
 [~, name, ext] = fileparts(infile);&lt;br /&gt;
 your_algorithm(infile, fullfile(outdir, [name '-voice' ext]), fullfile(outdir, [name '-music' ext]));&lt;br /&gt;
 &lt;br /&gt;
 function your_algorithm(infile, voiceoutfile, musicoutfile)&lt;br /&gt;
 mixed = wavread(infile);&lt;br /&gt;
 &lt;br /&gt;
 % Insert your algorithm here&lt;br /&gt;
 &lt;br /&gt;
 wavwrite(voice, 44100, voiceoutfile);&lt;br /&gt;
 wavwrite(music, 44100, musicoutfile);&lt;br /&gt;
&lt;br /&gt;
If scratch space is required, please use the three-argument format instead:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir, tmpdir)&lt;br /&gt;
&lt;br /&gt;
Following the convention of other MIREX tasks, an extended abstract is also required (see MIREX 2016 Submission Instructions below). For supervised submissions, please provide training details in the extended abstract (e.g. datasets used).&lt;br /&gt;
&lt;br /&gt;
== Packaging submissions ==&lt;br /&gt;
All submissions should be statically linked to all libraries (the presence of dynamically linked libraries cannot be guaranteed).&lt;br /&gt;
# Be sure to follow the [[2006:Best Coding Practices for MIREX | Best Coding Practices for MIREX]].&lt;br /&gt;
# Be sure to follow the [[MIREX 2016 Submission Instructions]]. For example, under '''Very Important Things to Note''', Clause 6 states that if you plan to submit more than one algorithm or algorithm variant to a given task, EACH algorithm or variant needs its own complete submission to be made including the README and binary bundle upload. Each package will be given its own unique identifier. Tell us in the README the priority of a given algorithm in case we have to limit a task to only one or two algorithms/variants per submitter/team. [Note: our current limit is two entries per team.]&lt;br /&gt;
&lt;br /&gt;
All submissions should include a README file with the following information:&lt;br /&gt;
# Command line calling format for all executables and an example formatted set of commands&lt;br /&gt;
# Number of threads/cores used or whether this should be specified on the command line&lt;br /&gt;
# Expected memory footprint&lt;br /&gt;
# Expected runtime&lt;br /&gt;
# Approximately how much scratch disk space will the submission need to store any feature/cache files?&lt;br /&gt;
# Any required environments/architectures (and versions), e.g. python, java, bash, matlab.&lt;br /&gt;
# Any special notice regarding running your algorithm&lt;br /&gt;
&lt;br /&gt;
Note that the information that you place in the README file is '''extremely''' important in ensuring that your submission is evaluated properly.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified.&lt;br /&gt;
&lt;br /&gt;
A hard limit of 24 hours will be imposed on runs. Submissions that exceed this runtime may not receive a result. &lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
name / email&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Singing_Voice_Separation&amp;diff=11708</id>
		<title>2015:Singing Voice Separation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Singing_Voice_Separation&amp;diff=11708"/>
		<updated>2016-04-29T07:43:32Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: /* Data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
The singing voice separation task solicits competing entries to blindly separate the singer's voice from pop music recordings. The entries are evaluated using standard metrics (see Evaluation below).&lt;br /&gt;
&lt;br /&gt;
=== Task specific mailing list ===&lt;br /&gt;
&lt;br /&gt;
All discussions take place on the MIREX &amp;quot;EvalFest&amp;quot; list. If you have a question or comment, simply include the task name in the subject heading.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
&lt;br /&gt;
A collection of 100 clips of recorded pop music (vocals plus music) is used to evaluate the singing voice separation algorithms (these are the hidden parts of the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset). If your algorithm is a supervised one, you are welcome to use the public part of the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset for training. In addition, you can train with 30-second segments from the SiSEC [https://sisec.inria.fr/professionally-produced-music-recordings/ MUS] challenge.&lt;br /&gt;
&lt;br /&gt;
Collection statistics:&lt;br /&gt;
&lt;br /&gt;
# Size of collection: 100 clips&lt;br /&gt;
# Audio details: 16-bit, mono, 44.1kHz, WAV&lt;br /&gt;
# Duration of each clip: 30 seconds&lt;br /&gt;
&lt;br /&gt;
For more information about the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset, see T.-S. Chan, T.-C. Yeh, Z.-C. Fan, H.-W. Chen, L. Su, Y.-H. Yang, and R. Jang, &amp;quot;Vocal activity informed singing voice separation with the iKala dataset,&amp;quot; in Proc. IEEE Int. Conf. Acoust., Speech and Signal Process., 2015, pp. 718-722.&lt;br /&gt;
&lt;br /&gt;
== Evaluation ==&lt;br /&gt;
&lt;br /&gt;
For evaluation we use [http://hal.inria.fr/inria-00630985/PDF/vincent_SigPro11.pdf Vincent ''et al.'''s (2012)] Source to Distortion Ratio (SDR), Source to Interferences Ratio (SIR), and Sources to Artifacts Ratio (SAR), as implemented by [http://bass-db.gforge.inria.fr/bss_eval/bss_eval_sources.m bss_eval_sources.m] in [http://bass-db.gforge.inria.fr/bss_eval/ BSS Eval Version 3.0] (but with the permutation part removed, as the ability to classify signals is part of the singing voice separation challenge). Everything will be normalized to enable a fairer evaluation. More specifically, their function will be invoked as follows:&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; trueVoice = wavread('trueVoice.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueKaraoke = wavread('trueKaraoke.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueMixed = trueVoice + trueKaraoke;&lt;br /&gt;
 &amp;gt;&amp;gt; [estimatedVoice, estimatedKaraoke] = wrapper_function_calling_your_separation_algorithm(trueMixed);&lt;br /&gt;
 &amp;gt;&amp;gt; [SDR, SIR, SAR] = bss_eval_sources([estimatedVoice estimatedKaraoke]' / norm(estimatedVoice + estimatedKaraoke), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; [NSDR, NSIR, NSAR] = bss_eval_sources([trueMixed trueMixed]' / norm(trueMixed + trueMixed), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; NSDR = SDR - NSDR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSIR = SIR - NSIR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSAR = SAR - NSAR;&lt;br /&gt;
&lt;br /&gt;
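In other words, the normalization subtracts the score obtained when the unprocessed mixture itself is used as the estimate of each source. Writing &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; for the mixture, &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; for a true source, and &amp;lt;math&amp;gt;\hat{v}&amp;lt;/math&amp;gt; for its estimate, the code above computes&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;NSDR(\hat{v},v,x)=SDR(\hat{v},v)-SDR(x,v)&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
so a positive NSDR indicates an improvement over leaving the mixture unseparated.&lt;br /&gt;
&lt;br /&gt;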
The final scores will be determined by the mean over all 100 clips (note that GSIR and GSAR are not normalized):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GNSDR=\frac{\sum_{i=1}^{100}NSDR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSIR=\frac{\sum_{i=1}^{100}SIR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSAR=\frac{\sum_{i=1}^{100}SAR_i}{100}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition, sd, min, max and median will also be reported.&lt;br /&gt;
&lt;br /&gt;
== Submission format ==&lt;br /&gt;
&lt;br /&gt;
Participants are required to submit an entry that takes an input filename (full native pathname ending in *.wav) and an output directory as arguments. The entry must write its separated outputs to *-voice.wav and *-music.wav under the output directory. For example:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir)&lt;br /&gt;
 [~, name, ext] = fileparts(infile);&lt;br /&gt;
 your_algorithm(infile, fullfile(outdir, [name '-voice' ext]), fullfile(outdir, [name '-music' ext]));&lt;br /&gt;
 &lt;br /&gt;
 function your_algorithm(infile, voiceoutfile, musicoutfile)&lt;br /&gt;
 mixed = wavread(infile);&lt;br /&gt;
 &lt;br /&gt;
 % Insert your algorithm here&lt;br /&gt;
 &lt;br /&gt;
 wavwrite(voice, 44100, voiceoutfile);&lt;br /&gt;
 wavwrite(music, 44100, musicoutfile);&lt;br /&gt;
&lt;br /&gt;
If scratch space is required, please use the three-argument format instead:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir, tmpdir)&lt;br /&gt;
&lt;br /&gt;
Following the convention of other MIREX tasks, an extended abstract is also required (see MIREX 2015 Submission Instructions below). For supervised submissions, please provide training details in the extended abstract (e.g. datasets used).&lt;br /&gt;
&lt;br /&gt;
== Packaging submissions ==&lt;br /&gt;
All submissions should be statically linked to all libraries (the presence of dynamically linked libraries cannot be guaranteed).&lt;br /&gt;
# Be sure to follow the [[2006:Best Coding Practices for MIREX | Best Coding Practices for MIREX]].&lt;br /&gt;
# Be sure to follow the [[MIREX 2015 Submission Instructions]]. For example, under '''Very Important Things to Note''', Clause 6 states that if you plan to submit more than one algorithm or algorithm variant to a given task, EACH algorithm or variant needs its own complete submission to be made including the README and binary bundle upload. Each package will be given its own unique identifier. Tell us in the README the priority of a given algorithm in case we have to limit a task to only one or two algorithms/variants per submitter/team. [Note: our current limit is two entries per team.]&lt;br /&gt;
&lt;br /&gt;
All submissions should include a README file with the following information:&lt;br /&gt;
# Command line calling format for all executables and an example formatted set of commands&lt;br /&gt;
# Number of threads/cores used or whether this should be specified on the command line&lt;br /&gt;
# Expected memory footprint&lt;br /&gt;
# Expected runtime&lt;br /&gt;
# Approximate scratch disk space needed to store any feature/cache files&lt;br /&gt;
# Any required environments/architectures (and versions), e.g. python, java, bash, matlab.&lt;br /&gt;
# Any special notices regarding running your algorithm&lt;br /&gt;
&lt;br /&gt;
Note that the information that you place in the README file is '''extremely''' important in ensuring that your submission is evaluated properly.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified.&lt;br /&gt;
&lt;br /&gt;
A hard limit of 24 hours will be imposed on runs. Submissions that exceed this runtime may not receive a result. &lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
name / email&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Singing_Voice_Separation&amp;diff=11707</id>
		<title>2014:Singing Voice Separation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Singing_Voice_Separation&amp;diff=11707"/>
		<updated>2016-04-29T07:42:04Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: /* Data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
The singing voice separation task solicits competing entries to blindly separate the singer's voice from pop music recordings. The entries are evaluated using standard metrics (see Evaluation below).&lt;br /&gt;
&lt;br /&gt;
=== Task specific mailing list ===&lt;br /&gt;
&lt;br /&gt;
All discussions take place on the MIREX &amp;quot;EvalFest&amp;quot; list. If you have a question or comment, simply include the task name in the subject heading.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
&lt;br /&gt;
A collection of 100 clips of recorded pop music (vocals plus music) is used to evaluate the singing voice separation algorithms (these are the hidden parts of the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset). If your algorithm is supervised, you are welcome to use the public part of the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset for training.&lt;br /&gt;
&lt;br /&gt;
Collection statistics:&lt;br /&gt;
&lt;br /&gt;
# Size of collection: 100 clips&lt;br /&gt;
# Audio details: 16-bit, mono, 44.1kHz, WAV&lt;br /&gt;
# Duration of each clip: 30 seconds&lt;br /&gt;
&lt;br /&gt;
For more information about the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset, see T.-S. Chan, T.-C. Yeh, Z.-C. Fan, H.-W. Chen, L. Su, Y.-H. Yang, and R. Jang, &amp;quot;Vocal activity informed singing voice separation with the iKala dataset,&amp;quot; in Proc. IEEE Int. Conf. Acoust., Speech and Signal Process., 2015, pp. 718-722.&lt;br /&gt;
&lt;br /&gt;
== Evaluation ==&lt;br /&gt;
&lt;br /&gt;
For evaluation we use [http://hal.inria.fr/inria-00630985/PDF/vincent_SigPro11.pdf Vincent ''et al.'''s (2012)] Source to Distortion Ratio (SDR), Source to Interferences Ratio (SIR), and Sources to Artifacts Ratio (SAR), as implemented by [http://bass-db.gforge.inria.fr/bss_eval/bss_eval_sources.m bss_eval_sources.m] in [http://bass-db.gforge.inria.fr/bss_eval/ BSS Eval Version 3.0]. Everything will be normalized to enable a fairer evaluation. More specifically, their function will be invoked as follows:&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; trueVoice = wavread('trueVoice.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueKaraoke = wavread('trueKaraoke.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueMixed = trueVoice + trueKaraoke;&lt;br /&gt;
 &amp;gt;&amp;gt; [estimatedVoice, estimatedKaraoke] = wrapper_function_calling_your_separation_algorithm(trueMixed);&lt;br /&gt;
 &amp;gt;&amp;gt; [SDR, SIR, SAR] = bss_eval_sources([estimatedVoice estimatedKaraoke]' / norm(estimatedVoice + estimatedKaraoke), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; [NSDR, NSIR, NSAR] = bss_eval_sources([trueMixed trueMixed]' / norm(trueMixed + trueMixed), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; NSDR = SDR - NSDR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSIR = SIR - NSIR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSAR = SAR - NSAR;&lt;br /&gt;
&lt;br /&gt;
The final scores will be determined by the mean over all 100 clips (note that GSIR and GSAR are not normalized):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GNSDR=\frac{\sum_{i=1}^{100}NSDR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSIR=\frac{\sum_{i=1}^{100}SIR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSAR=\frac{\sum_{i=1}^{100}SAR_i}{100}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition, the standard deviation, minimum, maximum, and median will also be reported.&lt;br /&gt;
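As a rough sketch of the normalization above: the official scores come from bss_eval_sources, which optimally projects the estimate onto the reference before computing the ratios, but the subtraction step (NSDR equals the SDR of the estimate minus the SDR of the unprocessed mixture used as the estimate) can be illustrated with a simplified energy-ratio SDR. Absolute values from this toy version will differ from BSS Eval's:

```python
# Simplified illustration, not the bss_eval_sources implementation:
# SDR here is a plain energy ratio with no projection or permutation step.
import math

def sdr(reference, estimate):
    """10*log10(reference energy / error energy) for equal-length lists."""
    sig = sum(r * r for r in reference)
    err = sum((r - e) ** 2 for r, e in zip(reference, estimate))
    return 10.0 * math.log10(sig / err)

def nsdr(reference, estimate, mixture):
    """NSDR: improvement of the estimate over using the raw mixture."""
    return sdr(reference, estimate) - sdr(reference, mixture)
```

GNSDR would then be the mean of nsdr over the 100 clips, matching the formula above.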
&lt;br /&gt;
== Submission format ==&lt;br /&gt;
&lt;br /&gt;
Participants are required to submit an entry that takes an input filename (full native pathname ending in *.wav) and an output directory as arguments. Each entry must write its separated outputs to *-voice.wav and *-music.wav under the output directory. For example:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir)&lt;br /&gt;
 [~, name, ext] = fileparts(infile);&lt;br /&gt;
 your_algorithm(infile, fullfile(outdir, [name '-voice' ext]), fullfile(outdir, [name '-music' ext]));&lt;br /&gt;
 &lt;br /&gt;
 function your_algorithm(infile, voiceoutfile, musicoutfile)&lt;br /&gt;
 mixed = wavread(infile);&lt;br /&gt;
 &lt;br /&gt;
 % Insert your algorithm here, producing the vectors 'voice' and 'music'&lt;br /&gt;
 &lt;br /&gt;
 wavwrite(voice, 44100, voiceoutfile);&lt;br /&gt;
 wavwrite(music, 44100, musicoutfile);&lt;br /&gt;
&lt;br /&gt;
If scratch space is required, please use the three-argument format instead:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir, tmpdir)&lt;br /&gt;
&lt;br /&gt;
Following the convention of other MIREX tasks, an extended abstract is also required (see MIREX 2014 Submission Instructions below).&lt;br /&gt;
&lt;br /&gt;
== Packaging submissions ==&lt;br /&gt;
All submissions should be statically linked to all libraries (the presence of dynamically linked libraries cannot be guaranteed).&lt;br /&gt;
# Be sure to follow the [[2006:Best Coding Practices for MIREX | Best Coding Practices for MIREX]].&lt;br /&gt;
# Be sure to follow the [[MIREX 2014 Submission Instructions]]. For example, under '''Very Important Things to Note''', Clause 6 states that if you plan to submit more than one algorithm or algorithm variant to a given task, EACH algorithm or variant needs its own complete submission to be made including the README and binary bundle upload. Each package will be given its own unique identifier. Tell us in the README the priority of a given algorithm in case we have to limit a task to only one or two algorithms/variants per submitter/team. [Note: our current limit is two entries per team.]&lt;br /&gt;
&lt;br /&gt;
All submissions should include a README file with the following information:&lt;br /&gt;
# Command line calling format for all executables and an example formatted set of commands&lt;br /&gt;
# Number of threads/cores used or whether this should be specified on the command line&lt;br /&gt;
# Expected memory footprint&lt;br /&gt;
# Expected runtime&lt;br /&gt;
# Approximate scratch disk space needed to store any feature/cache files&lt;br /&gt;
# Any required environments/architectures (and versions), e.g. python, java, bash, matlab.&lt;br /&gt;
# Any special notices regarding running your algorithm&lt;br /&gt;
&lt;br /&gt;
Note that the information that you place in the README file is '''extremely''' important in ensuring that your submission is evaluated properly.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified.&lt;br /&gt;
&lt;br /&gt;
A hard limit of 24 hours will be imposed on runs. Submissions that exceed this runtime may not receive a result. &lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
name / email&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Singing_Voice_Separation&amp;diff=11706</id>
		<title>2014:Singing Voice Separation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Singing_Voice_Separation&amp;diff=11706"/>
		<updated>2016-04-29T07:37:04Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: /* Data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
The singing voice separation task solicits competing entries to blindly separate the singer's voice from pop music recordings. The entries are evaluated using standard metrics (see Evaluation below).&lt;br /&gt;
&lt;br /&gt;
=== Task specific mailing list ===&lt;br /&gt;
&lt;br /&gt;
All discussions take place on the MIREX &amp;quot;EvalFest&amp;quot; list. If you have a question or comment, simply include the task name in the subject heading.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
&lt;br /&gt;
A collection of 100 clips of recorded pop music (vocals plus music) is used to evaluate the singing voice separation algorithms.&lt;br /&gt;
&lt;br /&gt;
Collection statistics:&lt;br /&gt;
&lt;br /&gt;
# Size of collection: 100 clips&lt;br /&gt;
# Audio details: 16-bit, mono, 44.1kHz, WAV&lt;br /&gt;
# Duration of each clip: 30 seconds&lt;br /&gt;
&lt;br /&gt;
This collection comprises the hidden part of the [http://mac.citi.sinica.edu.tw/ikala/ iKala dataset]. See T.-S. Chan, T.-C. Yeh, Z.-C. Fan, H.-W. Chen, L. Su, Y.-H. Yang, and R. Jang, &amp;quot;Vocal activity informed singing voice separation with the iKala dataset,&amp;quot; in Proc. IEEE Int. Conf. Acoust., Speech and Signal Process., 2015, pp. 718-722.&lt;br /&gt;
&lt;br /&gt;
== Evaluation ==&lt;br /&gt;
&lt;br /&gt;
For evaluation we use [http://hal.inria.fr/inria-00630985/PDF/vincent_SigPro11.pdf Vincent ''et al.'''s (2012)] Source to Distortion Ratio (SDR), Source to Interferences Ratio (SIR), and Sources to Artifacts Ratio (SAR), as implemented by [http://bass-db.gforge.inria.fr/bss_eval/bss_eval_sources.m bss_eval_sources.m] in [http://bass-db.gforge.inria.fr/bss_eval/ BSS Eval Version 3.0]. Everything will be normalized to enable a fairer evaluation. More specifically, their function will be invoked as follows:&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; trueVoice = wavread('trueVoice.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueKaraoke = wavread('trueKaraoke.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueMixed = trueVoice + trueKaraoke;&lt;br /&gt;
 &amp;gt;&amp;gt; [estimatedVoice, estimatedKaraoke] = wrapper_function_calling_your_separation_algorithm(trueMixed);&lt;br /&gt;
 &amp;gt;&amp;gt; [SDR, SIR, SAR] = bss_eval_sources([estimatedVoice estimatedKaraoke]' / norm(estimatedVoice + estimatedKaraoke), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; [NSDR, NSIR, NSAR] = bss_eval_sources([trueMixed trueMixed]' / norm(trueMixed + trueMixed), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; NSDR = SDR - NSDR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSIR = SIR - NSIR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSAR = SAR - NSAR;&lt;br /&gt;
&lt;br /&gt;
The final scores will be determined by the mean over all 100 clips (note that GSIR and GSAR are not normalized):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GNSDR=\frac{\sum_{i=1}^{100}NSDR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSIR=\frac{\sum_{i=1}^{100}SIR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSAR=\frac{\sum_{i=1}^{100}SAR_i}{100}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition, the standard deviation, minimum, maximum, and median will also be reported.&lt;br /&gt;
&lt;br /&gt;
== Submission format ==&lt;br /&gt;
&lt;br /&gt;
Participants are required to submit an entry that takes an input filename (full native pathname ending in *.wav) and an output directory as arguments. Each entry must write its separated outputs to *-voice.wav and *-music.wav under the output directory. For example:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir)&lt;br /&gt;
 [~, name, ext] = fileparts(infile);&lt;br /&gt;
 your_algorithm(infile, fullfile(outdir, [name '-voice' ext]), fullfile(outdir, [name '-music' ext]));&lt;br /&gt;
 &lt;br /&gt;
 function your_algorithm(infile, voiceoutfile, musicoutfile)&lt;br /&gt;
 mixed = wavread(infile);&lt;br /&gt;
 &lt;br /&gt;
 % Insert your algorithm here, producing the vectors 'voice' and 'music'&lt;br /&gt;
 &lt;br /&gt;
 wavwrite(voice, 44100, voiceoutfile);&lt;br /&gt;
 wavwrite(music, 44100, musicoutfile);&lt;br /&gt;
&lt;br /&gt;
If scratch space is required, please use the three-argument format instead:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir, tmpdir)&lt;br /&gt;
&lt;br /&gt;
Following the convention of other MIREX tasks, an extended abstract is also required (see MIREX 2014 Submission Instructions below).&lt;br /&gt;
&lt;br /&gt;
== Packaging submissions ==&lt;br /&gt;
All submissions should be statically linked to all libraries (the presence of dynamically linked libraries cannot be guaranteed).&lt;br /&gt;
# Be sure to follow the [[2006:Best Coding Practices for MIREX | Best Coding Practices for MIREX]].&lt;br /&gt;
# Be sure to follow the [[MIREX 2014 Submission Instructions]]. For example, under '''Very Important Things to Note''', Clause 6 states that if you plan to submit more than one algorithm or algorithm variant to a given task, EACH algorithm or variant needs its own complete submission to be made including the README and binary bundle upload. Each package will be given its own unique identifier. Tell us in the README the priority of a given algorithm in case we have to limit a task to only one or two algorithms/variants per submitter/team. [Note: our current limit is two entries per team.]&lt;br /&gt;
&lt;br /&gt;
All submissions should include a README file with the following information:&lt;br /&gt;
# Command line calling format for all executables and an example formatted set of commands&lt;br /&gt;
# Number of threads/cores used or whether this should be specified on the command line&lt;br /&gt;
# Expected memory footprint&lt;br /&gt;
# Expected runtime&lt;br /&gt;
# Approximate scratch disk space needed to store any feature/cache files&lt;br /&gt;
# Any required environments/architectures (and versions), e.g. python, java, bash, matlab.&lt;br /&gt;
# Any special notices regarding running your algorithm&lt;br /&gt;
&lt;br /&gt;
Note that the information that you place in the README file is '''extremely''' important in ensuring that your submission is evaluated properly.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified.&lt;br /&gt;
&lt;br /&gt;
A hard limit of 24 hours will be imposed on runs. Submissions that exceed this runtime may not receive a result. &lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
name / email&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Singing_Voice_Separation_Results&amp;diff=11584</id>
		<title>2014:Singing Voice Separation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Singing_Voice_Separation_Results&amp;diff=11584"/>
		<updated>2015-11-19T02:27:37Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
=== Description ===&lt;br /&gt;
&lt;br /&gt;
These are the results for the 2014 running of the Singing Voice Separation task set. For more information about this task set please refer to the [[2014:Singing Voice Separation]] page.&lt;br /&gt;
&lt;br /&gt;
=== Legend ===&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! width=&amp;quot;80&amp;quot; | Submission code &lt;br /&gt;
	! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
	! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract PDF&lt;br /&gt;
	! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
	|-&lt;br /&gt;
	! GW1&lt;br /&gt;
	| Bayesian Singing-Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/GW1.pdf PDF] || Guan-Xiang Wang, Po-Kai Yang, Chung-Chien Hsu, Jen-Tzung Chien&lt;br /&gt;
        |-&lt;br /&gt;
	! HKHS1&lt;br /&gt;
	| Singing-Voice Separation using Deep Recurrent Neural Networks || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/HKHS1.pdf PDF] || Po-Sen Huang, Minje Kim, Mark Hasegawa-Johnson, Paris Smaragdis&lt;br /&gt;
	|-&lt;br /&gt;
	! HKHS2&lt;br /&gt;
	| Singing-Voice Separation using Deep Recurrent Neural Networks || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/HKHS2.pdf PDF] || Po-Sen Huang, Minje Kim, Mark Hasegawa-Johnson, Paris Smaragdis&lt;br /&gt;
        |-&lt;br /&gt;
	! HKHS3&lt;br /&gt;
	| Singing-Voice Separation using Deep Recurrent Neural Networks || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/HKHS3.pdf PDF] || Po-Sen Huang, Minje Kim, Mark Hasegawa-Johnson, Paris Smaragdis&lt;br /&gt;
	|-&lt;br /&gt;
	! IIY1&lt;br /&gt;
	| Singing Voice Separation and Vocal F0 Estimation based on Robust PCA and Subharmonic Summation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/IIY1.pdf PDF] || Yukara Ikemiya, Katsutoshi Itoyama, Kazuyoshi Yoshii&lt;br /&gt;
        |-&lt;br /&gt;
	! IIY2&lt;br /&gt;
	| Singing Voice Separation and Vocal F0 Estimation based on Robust PCA and Subharmonic Summation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/IIY2.pdf PDF] || Yukara Ikemiya, Katsutoshi Itoyama, Kazuyoshi Yoshii&lt;br /&gt;
        |-&lt;br /&gt;
	! JL1&lt;br /&gt;
	| Singing Voice Separation Based on Sparse Nature and Spectral/Temporal Discontinuity || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/JL1.pdf PDF] || Il-Young Jeong, Kyogu Lee&lt;br /&gt;
        |-&lt;br /&gt;
	! LFR1&lt;br /&gt;
	| Kernel Additive Modelling with light models || style=&amp;quot;text-align: center;&amp;quot; | [http://dx.doi.org/10.1109/ICASSP.2015.7177935 PDF] || Antoine Liutkus, Derry Fitzgerald, Zafar Rafii&lt;br /&gt;
        |-&lt;br /&gt;
	! RNA1&lt;br /&gt;
	| Singing Voice Separation using Adaptive Window Harmonic Sinusoidal Modeling || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/RNA1.pdf PDF] || Preeti Rao, Nagesh Nayak, Sharath Adavanne&lt;br /&gt;
        |-&lt;br /&gt;
	! RP1&lt;br /&gt;
	| REPET-SIM for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/RP1.pdf PDF] || Zafar Rafii, Bryan Pardo&lt;br /&gt;
        |-&lt;br /&gt;
	! YC1&lt;br /&gt;
	| MIREX 2014 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/YC1.pdf PDF] || Frederick Yen, Tai-Shih Chi&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
==== Evaluation Criteria ====&lt;br /&gt;
&lt;br /&gt;
'''GNSDR''' = Global Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''NSDR''' = Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SIR''' = Signal-to-Interference Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SAR''' = Signal-to-Artifacts Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
=== Summary Results ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;sortable&amp;quot; border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! Algorithm !! Voice GNSDR (dB) !! Music GNSDR (dB) !! Runtime (hours)&lt;br /&gt;
	|-&lt;br /&gt;
	| GW1 || 2.8861 || 5.2549 || 24&lt;br /&gt;
	|-&lt;br /&gt;
	| HKHS1 || -1.3988 || 0.3483 || 06&lt;br /&gt;
	|-&lt;br /&gt;
	| HKHS2 || -1.9413 || 0.5239 || 06&lt;br /&gt;
	|-&lt;br /&gt;
	| HKHS3 || -2.4807 || 0.1414 || 06&lt;br /&gt;
	|-&lt;br /&gt;
	| IIY1 || 4.2190 || 7.7893 || 02&lt;br /&gt;
	|-&lt;br /&gt;
	| IIY2 || 4.4764 || 7.8661 || 02&lt;br /&gt;
	|-&lt;br /&gt;
	| JL1 || 4.1564 || 5.6304 || 01&lt;br /&gt;
	|-&lt;br /&gt;
	| LFR1 || 0.6499 || 3.0867 || 03&lt;br /&gt;
	|-&lt;br /&gt;
	| RNA1 || 3.6915 || 7.3153 || 06&lt;br /&gt;
	|-&lt;br /&gt;
	| RP1 || 2.8602 || 5.0306 || 01&lt;br /&gt;
	|-&lt;br /&gt;
	| YC1 || -0.8202 || -3.1150 || 13&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
== NSDR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/nsdr-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/nsdr-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2014-svs-nsdr.png]]&lt;br /&gt;
&lt;br /&gt;
== SIR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/sir-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/sir-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2014-svs-sir.png]]&lt;br /&gt;
&lt;br /&gt;
== SAR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/sar-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/sar-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2014-svs-sar.png]]&lt;br /&gt;
&lt;br /&gt;
== Individual Spectrograms ==&lt;br /&gt;
&lt;br /&gt;
As the MIREX test set is private, we use three other songs with similar characteristics to demonstrate the algorithms.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
	|-&lt;br /&gt;
	| [[File:2014-svs-gw1.png|thumb|Spectrograms for GW1]]&lt;br /&gt;
	| [[File:2014-svs-hkhs1.png|thumb|Spectrograms for HKHS1]]&lt;br /&gt;
	| [[File:2014-svs-hkhs2.png|thumb|Spectrograms for HKHS2]]&lt;br /&gt;
	| [[File:2014-svs-hkhs3.png|thumb|Spectrograms for HKHS3]]&lt;br /&gt;
	|-&lt;br /&gt;
	| [[File:2014-svs-iiy1.png|thumb|Spectrograms for IIY1]]&lt;br /&gt;
	| [[File:2014-svs-iiy2.png|thumb|Spectrograms for IIY2]]&lt;br /&gt;
	| [[File:2014-svs-jl1.png|thumb|Spectrograms for JL1]]&lt;br /&gt;
	| [[File:2014-svs-lfr1.png|thumb|Spectrograms for LFR1]]&lt;br /&gt;
	|-&lt;br /&gt;
	| [[File:2014-svs-rna1.png|thumb|Spectrograms for RNA1]]&lt;br /&gt;
	| [[File:2014-svs-rp1.png|thumb|Spectrograms for RP1]]&lt;br /&gt;
	| [[File:2014-svs-yc1.png|thumb|Spectrograms for YC1]]&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
=== Labels ===&lt;br /&gt;
&lt;br /&gt;
'''a''' = input mixture ''x'' &amp;lt;br /&amp;gt;&lt;br /&gt;
'''b''' = ground truth voice for ''x'' &amp;lt;br /&amp;gt;&lt;br /&gt;
'''c''' = extracted voice from ''x'' &amp;lt;br /&amp;gt;&lt;br /&gt;
'''d''' = input mixture ''y'' &amp;lt;br /&amp;gt;&lt;br /&gt;
'''e''' = ground truth voice for ''y'' &amp;lt;br /&amp;gt;&lt;br /&gt;
'''f''' = extracted voice from ''y'' &amp;lt;br /&amp;gt;&lt;br /&gt;
'''g''' = input mixture ''z'' &amp;lt;br /&amp;gt;&lt;br /&gt;
'''h''' = ground truth voice for ''z'' &amp;lt;br /&amp;gt;&lt;br /&gt;
'''i''' = extracted voice from ''z'' &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Runtime Data ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/runtime.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Singing_Voice_Separation_Results&amp;diff=11329</id>
		<title>2015:Singing Voice Separation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Singing_Voice_Separation_Results&amp;diff=11329"/>
		<updated>2015-10-17T06:00:54Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: /* Legend */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
=== Description ===&lt;br /&gt;
&lt;br /&gt;
These are the results for the 2015 running of the Singing Voice Separation task set. For more information about this task set please refer to the [[2015:Singing Voice Separation]] page.&lt;br /&gt;
&lt;br /&gt;
=== Legend ===&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! width=&amp;quot;80&amp;quot; | Submission code &lt;br /&gt;
	! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
	! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract PDF&lt;br /&gt;
	! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
	|-&lt;br /&gt;
	! FJ1&lt;br /&gt;
	| MIREX 2015 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | - || Zhe-Cheng Fan, Jyh-Shing Roger Jang&lt;br /&gt;
        |-&lt;br /&gt;
	! FJ2&lt;br /&gt;
	| MIREX 2015 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | - || Zhe-Cheng Fan, Jyh-Shing Roger Jang&lt;br /&gt;
        |-&lt;br /&gt;
	! IIY3&lt;br /&gt;
	| MIREX2015: Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/IIY3.pdf PDF] || Yukara Ikemiya, Katsutoshi Itoyama, Kazuyoshi Yoshii&lt;br /&gt;
        |-&lt;br /&gt;
	! IIY4&lt;br /&gt;
	| MIREX2015: Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/IIY4.pdf PDF] || Yukara Ikemiya, Katsutoshi Itoyama, Kazuyoshi Yoshii&lt;br /&gt;
        |-&lt;br /&gt;
	! MD3&lt;br /&gt;
	| An Ensemble Method for Learning to Extract Vocals from Polyphonic Musical Audio || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/MD3.pdf PDF] || Matt McVicar, Tijl De Bie&lt;br /&gt;
        |-&lt;br /&gt;
	! MD4&lt;br /&gt;
	| An Ensemble Method for Learning to Extract Vocals from Polyphonic Musical Audio || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/MD4.pdf PDF] || Matt McVicar, Tijl De Bie&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
==== Evaluation Criteria ====&lt;br /&gt;
&lt;br /&gt;
'''GNSDR''' = Global Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''NSDR''' = Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SIR''' = Signal-to-Interference Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SAR''' = Signal-to-Artifacts Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
=== Summary Results ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;sortable&amp;quot; border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! Algorithm !! Voice GNSDR (dB) !! Music GNSDR (dB) !! Runtime (minutes)&lt;br /&gt;
	|-&lt;br /&gt;
	| FJ1 || 6.8236 || 10.135 || 3.4014&lt;br /&gt;
	|-&lt;br /&gt;
	| FJ2 || 6.3487 || 9.8678 || 2.9135&lt;br /&gt;
	|-&lt;br /&gt;
	| IIY3 || 4.9862 || 8.2138 || 99.1737&lt;br /&gt;
	|-&lt;br /&gt;
	| IIY4 || 5.3953 || 8.77 || 46.6621&lt;br /&gt;
	|-&lt;br /&gt;
	| MD3 || 2.9831 || 6.3671 || 121.6655&lt;br /&gt;
	|-&lt;br /&gt;
	| MD4 || 3.1022 || 7.4657 || 121.1348&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
== NSDR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/nsdr-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/nsdr-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2015-svs-nsdr.png]]&lt;br /&gt;
&lt;br /&gt;
== SIR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/sir-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/sir-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2015-svs-sir.png]]&lt;br /&gt;
&lt;br /&gt;
== SAR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/sar-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/sar-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2015-svs-sar.png]]&lt;br /&gt;
&lt;br /&gt;
== Runtime Data ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/runtime.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Singing_Voice_Separation_Results&amp;diff=11266</id>
		<title>2015:Singing Voice Separation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Singing_Voice_Separation_Results&amp;diff=11266"/>
		<updated>2015-10-06T04:14:58Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
=== Description ===&lt;br /&gt;
&lt;br /&gt;
These are the results for the 2015 running of the Singing Voice Separation task set. For more information about this task set please refer to the [[2015:Singing Voice Separation]] page.&lt;br /&gt;
&lt;br /&gt;
=== Legend ===&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! width=&amp;quot;80&amp;quot; | Submission code &lt;br /&gt;
	! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
	! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract PDF&lt;br /&gt;
	! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
	|-&lt;br /&gt;
	! FJ1&lt;br /&gt;
	| MIREX 2015 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | - || Zhe-Cheng Fan, Jyh-Shing Roger Jang&lt;br /&gt;
        |-&lt;br /&gt;
	! FJ2&lt;br /&gt;
	| MIREX 2015 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | - || Zhe-Cheng Fan, Jyh-Shing Roger Jang&lt;br /&gt;
        |-&lt;br /&gt;
	! IIY3&lt;br /&gt;
	| MIREX 2015 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | - || Yukara Ikemiya, Katsutoshi Itoyama, Kazuyoshi Yoshii&lt;br /&gt;
        |-&lt;br /&gt;
	! IIY4&lt;br /&gt;
	| MIREX 2015 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | - || Yukara Ikemiya, Katsutoshi Itoyama, Kazuyoshi Yoshii&lt;br /&gt;
        |-&lt;br /&gt;
	! MD3&lt;br /&gt;
	| An Ensemble Method for Learning to Extract Vocals from Polyphonic Musical Audio || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/MD3.pdf PDF] || Matt McVicar, Tijl De Bie&lt;br /&gt;
        |-&lt;br /&gt;
	! MD4&lt;br /&gt;
	| An Ensemble Method for Learning to Extract Vocals from Polyphonic Musical Audio || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/MD4.pdf PDF] || Matt McVicar, Tijl De Bie&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
==== Evaluation Criteria ====&lt;br /&gt;
&lt;br /&gt;
'''GNSDR''' = Global Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''NSDR''' = Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SIR''' = Signal-to-Interference Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SAR''' = Signal-to-Artifacts Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
=== Summary Results ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;sortable&amp;quot; border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! Algorithm !! Voice GNSDR (dB) !! Music GNSDR (dB) !! Runtime (m)&lt;br /&gt;
	|-&lt;br /&gt;
	| FJ1 || 6.8236 || 10.135 || 3.4014&lt;br /&gt;
	|-&lt;br /&gt;
	| FJ2 || 6.3487 || 9.8678 || 2.9135&lt;br /&gt;
	|-&lt;br /&gt;
	| IIY3 || 4.9862 || 8.2138 || 99.1737&lt;br /&gt;
	|-&lt;br /&gt;
	| IIY4 || 5.3953 || 8.77 || 46.6621&lt;br /&gt;
	|-&lt;br /&gt;
	| MD3 || 2.9831 || 6.3671 || 121.6655&lt;br /&gt;
	|-&lt;br /&gt;
	| MD4 || 3.1022 || 7.4657 || 121.1348&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
== NSDR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/nsdr-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/nsdr-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2015-svs-nsdr.png]]&lt;br /&gt;
&lt;br /&gt;
== SIR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/sir-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/sir-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2015-svs-sir.png]]&lt;br /&gt;
&lt;br /&gt;
== SAR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/sar-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/sar-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2015-svs-sar.png]]&lt;br /&gt;
&lt;br /&gt;
== Runtime Data ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/runtime.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:2015-svs-sir.png&amp;diff=11265</id>
		<title>File:2015-svs-sir.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:2015-svs-sir.png&amp;diff=11265"/>
		<updated>2015-10-06T03:47:43Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: SIR&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;SIR&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:2015-svs-sar.png&amp;diff=11264</id>
		<title>File:2015-svs-sar.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:2015-svs-sar.png&amp;diff=11264"/>
		<updated>2015-10-06T03:47:28Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: SAR&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;SAR&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:2015-svs-nsdr.png&amp;diff=11263</id>
		<title>File:2015-svs-nsdr.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:2015-svs-nsdr.png&amp;diff=11263"/>
		<updated>2015-10-06T03:47:08Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: NSDR&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;NSDR&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Singing_Voice_Separation_Results&amp;diff=11262</id>
		<title>2015:Singing Voice Separation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Singing_Voice_Separation_Results&amp;diff=11262"/>
		<updated>2015-10-05T15:55:55Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: Created page with &amp;quot;== Introduction ==  === Description ===  These are the results for the 2015 running of the Singing Voice Separation task set. For more information about this task set please refe...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
=== Description ===&lt;br /&gt;
&lt;br /&gt;
These are the results for the 2015 running of the Singing Voice Separation task set. For more information about this task set, please refer to the [[2015:Singing Voice Separation]] page.&lt;br /&gt;
&lt;br /&gt;
=== Legend ===&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! width=&amp;quot;80&amp;quot; | Submission code &lt;br /&gt;
	! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
	! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract PDF&lt;br /&gt;
	! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
	|-&lt;br /&gt;
	! FJ1&lt;br /&gt;
	| Submission name || style=&amp;quot;text-align: center;&amp;quot; | - || Contributors&lt;br /&gt;
        |-&lt;br /&gt;
	! FJ2&lt;br /&gt;
	| Submission name || style=&amp;quot;text-align: center;&amp;quot; | - || Contributors&lt;br /&gt;
        |-&lt;br /&gt;
	! IIY3&lt;br /&gt;
	| Submission name || style=&amp;quot;text-align: center;&amp;quot; | - || Contributors&lt;br /&gt;
        |-&lt;br /&gt;
	! IIY4&lt;br /&gt;
	| Submission name || style=&amp;quot;text-align: center;&amp;quot; | - || Contributors&lt;br /&gt;
        |-&lt;br /&gt;
	! MD3&lt;br /&gt;
	| Submission name || style=&amp;quot;text-align: center;&amp;quot; | - || Contributors&lt;br /&gt;
        |-&lt;br /&gt;
	! MD4&lt;br /&gt;
	| Submission name || style=&amp;quot;text-align: center;&amp;quot; | - || Contributors&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
==== Evaluation Criteria ====&lt;br /&gt;
&lt;br /&gt;
'''GNSDR''' = Global Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''NSDR''' = Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SIR''' = Signal-to-Interference Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SAR''' = Signal-to-Artifacts Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
=== Summary Results ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;sortable&amp;quot; border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! Algorithm !! Voice GNSDR (dB) !! Music GNSDR (dB) !! Runtime (m)&lt;br /&gt;
	|-&lt;br /&gt;
	| FJ1 ||  ||  || &lt;br /&gt;
	|-&lt;br /&gt;
	| FJ2 ||  ||  || &lt;br /&gt;
	|-&lt;br /&gt;
	| IIY3 ||  ||  || &lt;br /&gt;
	|-&lt;br /&gt;
	| IIY4 ||  ||  || &lt;br /&gt;
	|-&lt;br /&gt;
	| MD3 ||  ||  || &lt;br /&gt;
	|-&lt;br /&gt;
	| MD4 ||  ||  || &lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
== NSDR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/nsdr-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/nsdr-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2015-svs-nsdr.png]]&lt;br /&gt;
&lt;br /&gt;
== SIR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/sir-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/sir-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2015-svs-sir.png]]&lt;br /&gt;
&lt;br /&gt;
== SAR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/sar-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/sar-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2015-svs-sar.png]]&lt;br /&gt;
&lt;br /&gt;
== Runtime Data ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/svs/runtime.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Singing_Voice_Separation&amp;diff=11199</id>
		<title>2015:Singing Voice Separation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Singing_Voice_Separation&amp;diff=11199"/>
		<updated>2015-08-10T08:25:36Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: /* Data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
The singing voice separation task solicits competing entries to blindly separate the singer's voice from pop music recordings. The entries are evaluated using standard metrics (see Evaluation below).&lt;br /&gt;
&lt;br /&gt;
=== Task specific mailing list ===&lt;br /&gt;
&lt;br /&gt;
All discussions take place on the MIREX &amp;quot;EvalFest&amp;quot; list. If you have a question or comment, simply include the task name in the subject heading.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
&lt;br /&gt;
A collection of 100 clips of recorded pop music (vocals plus music) is used to evaluate the singing voice separation algorithms (these are the hidden parts of the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset). If your algorithm is a supervised one, you are welcome to use the public part of the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset for training. In addition, you can train with 30-second segments from the SiSEC [https://sisec.inria.fr/professionally-produced-music-recordings/ MUS] challenge.&lt;br /&gt;
&lt;br /&gt;
Collection statistics:&lt;br /&gt;
&lt;br /&gt;
# Size of collection: 100 clips&lt;br /&gt;
# Audio details: 16-bit, mono, 44.1kHz, WAV&lt;br /&gt;
# Duration of each clip: 30 seconds&lt;br /&gt;
&lt;br /&gt;
== Evaluation ==&lt;br /&gt;
&lt;br /&gt;
For evaluation we use [http://hal.inria.fr/inria-00630985/PDF/vincent_SigPro11.pdf Vincent ''et al.'''s (2012)] Source to Distortion Ratio (SDR), Source to Interferences Ratio (SIR), and Sources to Artifacts Ratio (SAR), as implemented by [http://bass-db.gforge.inria.fr/bss_eval/bss_eval_sources.m bss_eval_sources.m] in [http://bass-db.gforge.inria.fr/bss_eval/ BSS Eval Version 3.0] (but with the permutation part removed, as the ability to classify signals is part of the singing voice separation challenge). Everything will be normalized to enable a fairer evaluation. More specifically, their function will be invoked as follows:&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; trueVoice = wavread('trueVoice.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueKaraoke = wavread('trueKaraoke.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueMixed = trueVoice + trueKaraoke;&lt;br /&gt;
 &amp;gt;&amp;gt; [estimatedVoice, estimatedKaraoke] = wrapper_function_calling_your_separation_algorithm(trueMixed);&lt;br /&gt;
 &amp;gt;&amp;gt; [SDR, SIR, SAR] = bss_eval_sources([estimatedVoice estimatedKaraoke]' / norm(estimatedVoice + estimatedKaraoke), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; [NSDR, NSIR, NSAR] = bss_eval_sources([trueMixed trueMixed]' / norm(trueMixed + trueMixed), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; NSDR = SDR - NSDR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSIR = SIR - NSIR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSAR = SAR - NSAR;&lt;br /&gt;
&lt;br /&gt;
The final scores will be determined by the mean over all 100 clips (note that GSIR and GSAR are not normalized):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GNSDR=\frac{\sum_{i=1}^{100}NSDR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSIR=\frac{\sum_{i=1}^{100}SIR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSAR=\frac{\sum_{i=1}^{100}SAR_i}{100}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition, sd, min, max and median will also be reported.&lt;br /&gt;
&lt;br /&gt;
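As a rough illustration only (this is not part of the official MATLAB evaluation code), the per-clip NSDR values could be aggregated into GNSDR and the reported statistics along these lines; the function and variable names here are hypothetical:&lt;br /&gt;

```python
import statistics

def summarize(nsdr):
    """Aggregate per-clip NSDR values (in dB) into the reported statistics."""
    return {
        "GNSDR": sum(nsdr) / len(nsdr),      # mean over all clips
        "sd": statistics.stdev(nsdr),        # sample standard deviation
        "min": min(nsdr),
        "max": max(nsdr),
        "median": statistics.median(nsdr),
    }
```

This mirrors the GNSDR formula above: the mean of NSDR over the 100 clips, together with the sd, min, max and median that will also be reported.&lt;br /&gt;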
== Submission format ==&lt;br /&gt;
&lt;br /&gt;
Participants are required to submit an entry that takes in an input filename (full native pathname ending in *.wav) and an output directory as arguments. The entries must write their separated outputs to *-voice.wav and *-music.wav under the output directory. For example:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir)&lt;br /&gt;
 [~, name, ext] = fileparts(infile);&lt;br /&gt;
 your_algorithm(infile, fullfile(outdir, [name '-voice' ext]), fullfile(outdir, [name '-music' ext]));&lt;br /&gt;
 &lt;br /&gt;
 function your_algorithm(infile, voiceoutfile, musicoutfile)&lt;br /&gt;
 mixed = wavread(infile);&lt;br /&gt;
 &lt;br /&gt;
 % Insert your algorithm here&lt;br /&gt;
 &lt;br /&gt;
 wavwrite(voice, 44100, voiceoutfile);&lt;br /&gt;
 wavwrite(music, 44100, musicoutfile);&lt;br /&gt;
&lt;br /&gt;
If scratch space is required, please use the three-argument format instead:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir, tmpdir)&lt;br /&gt;
&lt;br /&gt;
Following the convention of other MIREX tasks, an extended abstract is also required (see MIREX 2015 Submission Instructions below). For supervised submissions, please provide training details in the extended abstract (e.g. datasets used).&lt;br /&gt;
&lt;br /&gt;
== Packaging submissions ==&lt;br /&gt;
All submissions should be statically linked to all libraries (the presence of dynamically linked libraries cannot be guaranteed).&lt;br /&gt;
# Be sure to follow the [[2006:Best Coding Practices for MIREX | Best Coding Practices for MIREX]].&lt;br /&gt;
# Be sure to follow the [[MIREX 2015 Submission Instructions]]. For example, under '''Very Important Things to Note''', Clause 6 states that if you plan to submit more than one algorithm or algorithm variant to a given task, EACH algorithm or variant needs its own complete submission to be made including the README and binary bundle upload. Each package will be given its own unique identifier. Tell us in the README the priority of a given algorithm in case we have to limit a task to only one or two algorithms/variants per submitter/team. [Note: our current limit is two entries per team.]&lt;br /&gt;
&lt;br /&gt;
All submissions should include a README file including the following information:&lt;br /&gt;
# Command line calling format for all executables and an example formatted set of commands&lt;br /&gt;
# Number of threads/cores used or whether this should be specified on the command line&lt;br /&gt;
# Expected memory footprint&lt;br /&gt;
# Expected runtime&lt;br /&gt;
# Approximately how much scratch disk space will the submission need to store any feature/cache files?&lt;br /&gt;
# Any required environments/architectures (and versions), e.g. python, java, bash, matlab.&lt;br /&gt;
# Any special notice regarding running your algorithm&lt;br /&gt;
&lt;br /&gt;
Note that the information that you place in the README file is '''extremely''' important in ensuring that your submission is evaluated properly.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified.&lt;br /&gt;
&lt;br /&gt;
A hard limit of 24 hours will be imposed on runs. Submissions that exceed this runtime may not receive a result. &lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
name / email&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Singing_Voice_Separation&amp;diff=11198</id>
		<title>2015:Singing Voice Separation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Singing_Voice_Separation&amp;diff=11198"/>
		<updated>2015-08-10T08:24:03Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: /* Evaluation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
The singing voice separation task solicits competing entries to blindly separate the singer's voice from pop music recordings. The entries are evaluated using standard metrics (see Evaluation below).&lt;br /&gt;
&lt;br /&gt;
=== Task specific mailing list ===&lt;br /&gt;
&lt;br /&gt;
All discussions take place on the MIREX &amp;quot;EvalFest&amp;quot; list. If you have a question or comment, simply include the task name in the subject heading.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
&lt;br /&gt;
A collection of 100 clips of recorded pop music (vocals plus music) is used to evaluate the singing voice separation algorithms. These are the hidden parts of the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset. If your algorithm is a supervised one, you are welcome to use the public part of the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset. Alternatively, or in addition, you can train with 30-second segments from the SiSEC [https://sisec.inria.fr/professionally-produced-music-recordings/ MUS] challenge.&lt;br /&gt;
&lt;br /&gt;
Collection statistics:&lt;br /&gt;
&lt;br /&gt;
# Size of collection: 100 clips&lt;br /&gt;
# Audio details: 16-bit, mono, 44.1kHz, WAV&lt;br /&gt;
# Duration of each clip: 30 seconds&lt;br /&gt;
&lt;br /&gt;
== Evaluation ==&lt;br /&gt;
&lt;br /&gt;
For evaluation we use [http://hal.inria.fr/inria-00630985/PDF/vincent_SigPro11.pdf Vincent ''et al.'''s (2012)] Source to Distortion Ratio (SDR), Source to Interferences Ratio (SIR), and Sources to Artifacts Ratio (SAR), as implemented by [http://bass-db.gforge.inria.fr/bss_eval/bss_eval_sources.m bss_eval_sources.m] in [http://bass-db.gforge.inria.fr/bss_eval/ BSS Eval Version 3.0] (but with the permutation part removed, as the ability to classify signals is part of the singing voice separation challenge). Everything will be normalized to enable a fairer evaluation. More specifically, their function will be invoked as follows:&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; trueVoice = wavread('trueVoice.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueKaraoke = wavread('trueKaraoke.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueMixed = trueVoice + trueKaraoke;&lt;br /&gt;
 &amp;gt;&amp;gt; [estimatedVoice, estimatedKaraoke] = wrapper_function_calling_your_separation_algorithm(trueMixed);&lt;br /&gt;
 &amp;gt;&amp;gt; [SDR, SIR, SAR] = bss_eval_sources([estimatedVoice estimatedKaraoke]' / norm(estimatedVoice + estimatedKaraoke), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; [NSDR, NSIR, NSAR] = bss_eval_sources([trueMixed trueMixed]' / norm(trueMixed + trueMixed), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; NSDR = SDR - NSDR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSIR = SIR - NSIR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSAR = SAR - NSAR;&lt;br /&gt;
&lt;br /&gt;
The final scores will be determined by the mean over all 100 clips (note that GSIR and GSAR are not normalized):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GNSDR=\frac{\sum_{i=1}^{100}NSDR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSIR=\frac{\sum_{i=1}^{100}SIR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSAR=\frac{\sum_{i=1}^{100}SAR_i}{100}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition, sd, min, max and median will also be reported.&lt;br /&gt;
&lt;br /&gt;
== Submission format ==&lt;br /&gt;
&lt;br /&gt;
Participants are required to submit an entry that takes in an input filename (full native pathname ending in *.wav) and an output directory as arguments. The entries must write their separated outputs to *-voice.wav and *-music.wav under the output directory. For example:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir)&lt;br /&gt;
 [~, name, ext] = fileparts(infile);&lt;br /&gt;
 your_algorithm(infile, fullfile(outdir, [name '-voice' ext]), fullfile(outdir, [name '-music' ext]));&lt;br /&gt;
 &lt;br /&gt;
 function your_algorithm(infile, voiceoutfile, musicoutfile)&lt;br /&gt;
 mixed = wavread(infile);&lt;br /&gt;
 &lt;br /&gt;
 % Insert your algorithm here&lt;br /&gt;
 &lt;br /&gt;
 wavwrite(voice, 44100, voiceoutfile);&lt;br /&gt;
 wavwrite(music, 44100, musicoutfile);&lt;br /&gt;
&lt;br /&gt;
If scratch space is required, please use the three-argument format instead:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir, tmpdir)&lt;br /&gt;
&lt;br /&gt;
Following the convention of other MIREX tasks, an extended abstract is also required (see MIREX 2015 Submission Instructions below). For supervised submissions, please provide training details in the extended abstract (e.g. datasets used).&lt;br /&gt;
&lt;br /&gt;
== Packaging submissions ==&lt;br /&gt;
All submissions should be statically linked to all libraries (the presence of dynamically linked libraries cannot be guaranteed).&lt;br /&gt;
# Be sure to follow the [[2006:Best Coding Practices for MIREX | Best Coding Practices for MIREX]].&lt;br /&gt;
# Be sure to follow the [[MIREX 2015 Submission Instructions]]. For example, under '''Very Important Things to Note''', Clause 6 states that if you plan to submit more than one algorithm or algorithm variant to a given task, EACH algorithm or variant needs its own complete submission to be made including the README and binary bundle upload. Each package will be given its own unique identifier. Tell us in the README the priority of a given algorithm in case we have to limit a task to only one or two algorithms/variants per submitter/team. [Note: our current limit is two entries per team.]&lt;br /&gt;
&lt;br /&gt;
All submissions should include a README file including the following information:&lt;br /&gt;
# Command line calling format for all executables and an example formatted set of commands&lt;br /&gt;
# Number of threads/cores used or whether this should be specified on the command line&lt;br /&gt;
# Expected memory footprint&lt;br /&gt;
# Expected runtime&lt;br /&gt;
# Approximately how much scratch disk space will the submission need to store any feature/cache files?&lt;br /&gt;
# Any required environments/architectures (and versions), e.g. python, java, bash, matlab.&lt;br /&gt;
# Any special notice regarding running your algorithm&lt;br /&gt;
&lt;br /&gt;
Note that the information that you place in the README file is '''extremely''' important in ensuring that your submission is evaluated properly.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified.&lt;br /&gt;
&lt;br /&gt;
A hard limit of 24 hours will be imposed on runs. Submissions that exceed this runtime may not receive a result. &lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
name / email&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Singing_Voice_Separation&amp;diff=11197</id>
		<title>2015:Singing Voice Separation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Singing_Voice_Separation&amp;diff=11197"/>
		<updated>2015-08-10T08:23:48Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: /* Data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
The singing voice separation task solicits competing entries to blindly separate the singer's voice from pop music recordings. The entries are evaluated using standard metrics (see Evaluation below).&lt;br /&gt;
&lt;br /&gt;
=== Task specific mailing list ===&lt;br /&gt;
&lt;br /&gt;
All discussions take place on the MIREX &amp;quot;EvalFest&amp;quot; list. If you have a question or comment, simply include the task name in the subject heading.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
&lt;br /&gt;
A collection of 100 clips of recorded pop music (vocals plus music) is used to evaluate the singing voice separation algorithms. These are the hidden parts of the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset. If your algorithm is a supervised one, you are welcome to use the public part of the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset. Alternatively, or in addition, you can train with 30-second segments from the SiSEC [https://sisec.inria.fr/professionally-produced-music-recordings/ MUS] challenge.&lt;br /&gt;
&lt;br /&gt;
Collection statistics:&lt;br /&gt;
&lt;br /&gt;
# Size of collection: 100 clips&lt;br /&gt;
# Audio details: 16-bit, mono, 44.1kHz, WAV&lt;br /&gt;
# Duration of each clip: 30 seconds&lt;br /&gt;
&lt;br /&gt;
== Evaluation ==&lt;br /&gt;
&lt;br /&gt;
For evaluation we use [http://hal.inria.fr/inria-00630985/PDF/vincent_SigPro11.pdf Vincent ''et al.'''s (2012)] Source to Distortion Ratio (SDR), Source to Interferences Ratio (SIR), and Sources to Artifacts Ratio (SAR), as implemented by [http://bass-db.gforge.inria.fr/bss_eval/bss_eval_sources.m bss_eval_sources.m] in [http://bass-db.gforge.inria.fr/bss_eval/ BSS Eval Version 3.0] (NEW: but with the perm part removed, as the ability to classify signals is part of the singing voice separation challenge). Everything will be normalized to enable a fairer evaluation. More specifically, their function will be invoked as follows:&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; trueVoice = wavread('trueVoice.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueKaraoke = wavread('trueKaraoke.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueMixed = trueVoice + trueKaraoke;&lt;br /&gt;
 &amp;gt;&amp;gt; [estimatedVoice, estimatedKaraoke] = wrapper_function_calling_your_separation_algorithm(trueMixed);&lt;br /&gt;
 &amp;gt;&amp;gt; [SDR, SIR, SAR] = bss_eval_sources([estimatedVoice estimatedKaraoke]' / norm(estimatedVoice + estimatedKaraoke), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; [NSDR, NSIR, NSAR] = bss_eval_sources([trueMixed trueMixed]' / norm(trueMixed + trueMixed), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; NSDR = SDR - NSDR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSIR = SIR - NSIR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSAR = SAR - NSAR;&lt;br /&gt;
&lt;br /&gt;
The final scores will be determined by the mean over all 100 clips (note that GSIR and GSAR are not normalized):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GNSDR=\frac{\sum_{i=1}^{100}NSDR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSIR=\frac{\sum_{i=1}^{100}SIR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSAR=\frac{\sum_{i=1}^{100}SAR_i}{100}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition, sd, min, max and median will also be reported.&lt;br /&gt;
&lt;br /&gt;
== Submission format ==&lt;br /&gt;
&lt;br /&gt;
Participants are required to submit an entry that takes in an input filename (full native pathname ending in *.wav) and an output directory as arguments. The entries must write their separated outputs to *-voice.wav and *-music.wav under the output directory. For example:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir)&lt;br /&gt;
 [~, name, ext] = fileparts(infile);&lt;br /&gt;
 your_algorithm(infile, fullfile(outdir, [name '-voice' ext]), fullfile(outdir, [name '-music' ext]));&lt;br /&gt;
 &lt;br /&gt;
 function your_algorithm(infile, voiceoutfile, musicoutfile)&lt;br /&gt;
 mixed = wavread(infile);&lt;br /&gt;
 &lt;br /&gt;
 % Insert your algorithm here&lt;br /&gt;
 &lt;br /&gt;
 wavwrite(voice, 44100, voiceoutfile);&lt;br /&gt;
 wavwrite(music, 44100, musicoutfile);&lt;br /&gt;
&lt;br /&gt;
If scratch space is required, please use the three-argument format instead:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir, tmpdir)&lt;br /&gt;
&lt;br /&gt;
Following the convention of other MIREX tasks, an extended abstract is also required (see MIREX 2015 Submission Instructions below). For supervised submissions, please provide training details in the extended abstract (e.g. datasets used).&lt;br /&gt;
&lt;br /&gt;
== Packaging submissions ==&lt;br /&gt;
All submissions should be statically linked to all libraries (the presence of dynamically linked libraries cannot be guaranteed).&lt;br /&gt;
# Be sure to follow the [[2006:Best Coding Practices for MIREX | Best Coding Practices for MIREX]].&lt;br /&gt;
# Be sure to follow the [[MIREX 2015 Submission Instructions]]. For example, under '''Very Important Things to Note''', Clause 6 states that if you plan to submit more than one algorithm or algorithm variant to a given task, EACH algorithm or variant needs its own complete submission to be made including the README and binary bundle upload. Each package will be given its own unique identifier. Tell us in the README the priority of a given algorithm in case we have to limit a task to only one or two algorithms/variants per submitter/team. [Note: our current limit is two entries per team.]&lt;br /&gt;
&lt;br /&gt;
All submissions should include a README file including the following information:&lt;br /&gt;
# Command line calling format for all executables and an example formatted set of commands&lt;br /&gt;
# Number of threads/cores used or whether this should be specified on the command line&lt;br /&gt;
# Expected memory footprint&lt;br /&gt;
# Expected runtime&lt;br /&gt;
# Approximately how much scratch disk space will the submission need to store any feature/cache files?&lt;br /&gt;
# Any required environments/architectures (and versions), e.g. python, java, bash, matlab.&lt;br /&gt;
# Any special notes on running your algorithm&lt;br /&gt;
&lt;br /&gt;
Note that the information that you place in the README file is '''extremely''' important in ensuring that your submission is evaluated properly.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified.&lt;br /&gt;
&lt;br /&gt;
A hard limit of 24 hours will be imposed on runs. Submissions that exceed this runtime may not receive a result. &lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
name / email&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Singing_Voice_Separation&amp;diff=11196</id>
		<title>2015:Singing Voice Separation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Singing_Voice_Separation&amp;diff=11196"/>
		<updated>2015-08-10T08:17:22Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: /* Submission format */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
The singing voice separation task solicits competing entries to blindly separate the singer's voice from pop music recordings. The entries are evaluated using standard metrics (see Evaluation below).&lt;br /&gt;
&lt;br /&gt;
=== Task specific mailing list ===&lt;br /&gt;
&lt;br /&gt;
All discussions take place on the MIREX &amp;quot;EvalFest&amp;quot; list. If you have a question or comment, simply include the task name in the subject heading.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
&lt;br /&gt;
A collection of 100 clips of recorded pop music (vocals plus music) is used to evaluate the singing voice separation algorithms. These are the hidden parts of the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset. For training purposes, you are welcome to use the public part of the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset. Alternatively (or in addition), you can train with 30-second segments from the SiSEC [https://sisec.inria.fr/professionally-produced-music-recordings/ MUS] challenge.&lt;br /&gt;
&lt;br /&gt;
Collection statistics:&lt;br /&gt;
&lt;br /&gt;
# Size of collection: 100 clips&lt;br /&gt;
# Audio details: 16-bit, mono, 44.1kHz, WAV&lt;br /&gt;
# Duration of each clip: 30 seconds&lt;br /&gt;
&lt;br /&gt;
== Evaluation ==&lt;br /&gt;
&lt;br /&gt;
For evaluation we use [http://hal.inria.fr/inria-00630985/PDF/vincent_SigPro11.pdf Vincent ''et al.'''s (2012)] Source to Distortion Ratio (SDR), Source to Interferences Ratio (SIR), and Sources to Artifacts Ratio (SAR), as implemented by [http://bass-db.gforge.inria.fr/bss_eval/bss_eval_sources.m bss_eval_sources.m] in [http://bass-db.gforge.inria.fr/bss_eval/ BSS Eval Version 3.0] (NEW: but with the perm part removed, as the ability to classify signals is part of the singing voice separation challenge). Everything will be normalized to enable a fairer evaluation. More specifically, their function will be invoked as follows:&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; trueVoice = wavread('trueVoice.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueKaraoke = wavread('trueKaraoke.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueMixed = trueVoice + trueKaraoke;&lt;br /&gt;
 &amp;gt;&amp;gt; [estimatedVoice, estimatedKaraoke] = wrapper_function_calling_your_separation_algorithm(trueMixed);&lt;br /&gt;
 &amp;gt;&amp;gt; [SDR, SIR, SAR] = bss_eval_sources([estimatedVoice estimatedKaraoke]' / norm(estimatedVoice + estimatedKaraoke), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; [NSDR, NSIR, NSAR] = bss_eval_sources([trueMixed trueMixed]' / norm(trueMixed + trueMixed), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; NSDR = SDR - NSDR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSIR = SIR - NSIR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSAR = SAR - NSAR;&lt;br /&gt;
&lt;br /&gt;
The final scores will be determined by the mean over all 100 clips (note that GSIR and GSAR are not normalized):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GNSDR=\frac{\sum_{i=1}^{100}NSDR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSIR=\frac{\sum_{i=1}^{100}SIR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSAR=\frac{\sum_{i=1}^{100}SAR_i}{100}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition, sd, min, max and median will also be reported.&lt;br /&gt;
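As a rough illustration of the aggregation step (this is not the official evaluation code, and the per-clip scores below are made-up numbers), the reported statistics could be computed as follows:&lt;br /&gt;

```python
import statistics

def summarize(nsdr_per_clip):
    """Aggregate per-clip NSDR scores into the reported statistics.

    GNSDR is the plain mean over all clips; the standard deviation,
    minimum, maximum and median are reported alongside it.
    """
    return {
        "GNSDR": statistics.mean(nsdr_per_clip),
        "sd": statistics.stdev(nsdr_per_clip),
        "min": min(nsdr_per_clip),
        "max": max(nsdr_per_clip),
        "median": statistics.median(nsdr_per_clip),
    }

stats = summarize([4.0, 2.0, 6.0])  # toy scores for three clips
```

The same aggregation applies per source (voice and accompaniment), with GSIR and GSAR computed from the unnormalized SIR and SAR values.&lt;br /&gt;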
&lt;br /&gt;
== Submission format ==&lt;br /&gt;
&lt;br /&gt;
Participants are required to submit an entry that takes an input filename (full native pathname ending in *.wav) and an output directory as arguments. Entries must write the separated voice and accompaniment to *-voice.wav and *-music.wav under the output directory. For example:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir)&lt;br /&gt;
 [~, name, ext] = fileparts(infile);&lt;br /&gt;
 your_algorithm(infile, fullfile(outdir, [name '-voice' ext]), fullfile(outdir, [name '-music' ext]));&lt;br /&gt;
 &lt;br /&gt;
 function your_algorithm(infile, voiceoutfile, musicoutfile)&lt;br /&gt;
 mixed = wavread(infile);&lt;br /&gt;
 &lt;br /&gt;
 % Insert your algorithm here&lt;br /&gt;
 &lt;br /&gt;
 wavwrite(voice, 44100, voiceoutfile);&lt;br /&gt;
 wavwrite(music, 44100, musicoutfile);&lt;br /&gt;
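The path handling in the wrapper above can be sketched outside MATLAB as well (an illustration only; the evaluation itself runs the MATLAB entry, and the file names here are hypothetical):&lt;br /&gt;

```python
import os

def output_paths(infile, outdir):
    """Mirror the fileparts/fullfile logic of the MATLAB wrapper:
    clips/song01.wav maps onto outdir/song01-voice.wav and
    outdir/song01-music.wav."""
    name, ext = os.path.splitext(os.path.basename(infile))
    voice = os.path.join(outdir, name + "-voice" + ext)
    music = os.path.join(outdir, name + "-music" + ext)
    return voice, music

paths = output_paths("clips/song01.wav", "out")
```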
&lt;br /&gt;
If scratch space is required, please use the three-argument format instead:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir, tmpdir)&lt;br /&gt;
&lt;br /&gt;
Following the convention of other MIREX tasks, an extended abstract is also required (see MIREX 2015 Submission Instructions below). For supervised submissions, please provide training details in the extended abstract (e.g. datasets used).&lt;br /&gt;
&lt;br /&gt;
== Packaging submissions ==&lt;br /&gt;
All submissions should statically link all libraries (the presence of dynamically linked libraries cannot be guaranteed).&lt;br /&gt;
# Be sure to follow the [[2006:Best Coding Practices for MIREX | Best Coding Practices for MIREX]].&lt;br /&gt;
# Be sure to follow the [[MIREX 2015 Submission Instructions]]. For example, under '''Very Important Things to Note''', Clause 6 states that if you plan to submit more than one algorithm or algorithm variant to a given task, EACH algorithm or variant needs its own complete submission to be made including the README and binary bundle upload. Each package will be given its own unique identifier. Tell us in the README the priority of a given algorithm in case we have to limit a task to only one or two algorithms/variants per submitter/team. [Note: our current limit is two entries per team.]&lt;br /&gt;
&lt;br /&gt;
All submissions should include a README file containing the following information:&lt;br /&gt;
# Command line calling format for all executables and an example formatted set of commands&lt;br /&gt;
# Number of threads/cores used or whether this should be specified on the command line&lt;br /&gt;
# Expected memory footprint&lt;br /&gt;
# Expected runtime&lt;br /&gt;
# Approximate scratch disk space needed to store any feature/cache files&lt;br /&gt;
# Any required environments/architectures (and versions), e.g. python, java, bash, matlab.&lt;br /&gt;
# Any special notes on running your algorithm&lt;br /&gt;
&lt;br /&gt;
Note that the information that you place in the README file is '''extremely''' important in ensuring that your submission is evaluated properly.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified.&lt;br /&gt;
&lt;br /&gt;
A hard limit of 24 hours will be imposed on runs. Submissions that exceed this runtime may not receive a result. &lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
name / email&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Singing_Voice_Separation&amp;diff=11195</id>
		<title>2015:Singing Voice Separation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Singing_Voice_Separation&amp;diff=11195"/>
		<updated>2015-08-10T08:10:46Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: /* Data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
The singing voice separation task solicits competing entries to blindly separate the singer's voice from pop music recordings. The entries are evaluated using standard metrics (see Evaluation below).&lt;br /&gt;
&lt;br /&gt;
=== Task specific mailing list ===&lt;br /&gt;
&lt;br /&gt;
All discussions take place on the MIREX &amp;quot;EvalFest&amp;quot; list. If you have a question or comment, simply include the task name in the subject heading.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
&lt;br /&gt;
A collection of 100 clips of recorded pop music (vocals plus music) is used to evaluate the singing voice separation algorithms. These are the hidden parts of the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset. For training purposes, you are welcome to use the public part of the [http://mac.citi.sinica.edu.tw/ikala/ iKala] dataset. Alternatively (or in addition), you can train with 30-second segments from the SiSEC [https://sisec.inria.fr/professionally-produced-music-recordings/ MUS] challenge.&lt;br /&gt;
&lt;br /&gt;
Collection statistics:&lt;br /&gt;
&lt;br /&gt;
# Size of collection: 100 clips&lt;br /&gt;
# Audio details: 16-bit, mono, 44.1kHz, WAV&lt;br /&gt;
# Duration of each clip: 30 seconds&lt;br /&gt;
&lt;br /&gt;
== Evaluation ==&lt;br /&gt;
&lt;br /&gt;
For evaluation we use [http://hal.inria.fr/inria-00630985/PDF/vincent_SigPro11.pdf Vincent ''et al.'''s (2012)] Source to Distortion Ratio (SDR), Source to Interferences Ratio (SIR), and Sources to Artifacts Ratio (SAR), as implemented by [http://bass-db.gforge.inria.fr/bss_eval/bss_eval_sources.m bss_eval_sources.m] in [http://bass-db.gforge.inria.fr/bss_eval/ BSS Eval Version 3.0] (NEW: but with the perm part removed, as the ability to classify signals is part of the singing voice separation challenge). Everything will be normalized to enable a fairer evaluation. More specifically, their function will be invoked as follows:&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; trueVoice = wavread('trueVoice.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueKaraoke = wavread('trueKaraoke.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueMixed = trueVoice + trueKaraoke;&lt;br /&gt;
 &amp;gt;&amp;gt; [estimatedVoice, estimatedKaraoke] = wrapper_function_calling_your_separation_algorithm(trueMixed);&lt;br /&gt;
 &amp;gt;&amp;gt; [SDR, SIR, SAR] = bss_eval_sources([estimatedVoice estimatedKaraoke]' / norm(estimatedVoice + estimatedKaraoke), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; [NSDR, NSIR, NSAR] = bss_eval_sources([trueMixed trueMixed]' / norm(trueMixed + trueMixed), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; NSDR = SDR - NSDR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSIR = SIR - NSIR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSAR = SAR - NSAR;&lt;br /&gt;
&lt;br /&gt;
The final scores will be determined by the mean over all 100 clips (note that GSIR and GSAR are not normalized):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GNSDR=\frac{\sum_{i=1}^{100}NSDR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSIR=\frac{\sum_{i=1}^{100}SIR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSAR=\frac{\sum_{i=1}^{100}SAR_i}{100}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition, sd, min, max and median will also be reported.&lt;br /&gt;
&lt;br /&gt;
== Submission format ==&lt;br /&gt;
&lt;br /&gt;
Participants are required to submit an entry that takes an input filename (full native pathname ending in *.wav) and an output directory as arguments. Entries must write the separated voice and accompaniment to *-voice.wav and *-music.wav under the output directory. For example:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir)&lt;br /&gt;
 [~, name, ext] = fileparts(infile);&lt;br /&gt;
 your_algorithm(infile, fullfile(outdir, [name '-voice' ext]), fullfile(outdir, [name '-music' ext]));&lt;br /&gt;
 &lt;br /&gt;
 function your_algorithm(infile, voiceoutfile, musicoutfile)&lt;br /&gt;
 mixed = wavread(infile);&lt;br /&gt;
 &lt;br /&gt;
 % Insert your algorithm here&lt;br /&gt;
 &lt;br /&gt;
 wavwrite(voice, 44100, voiceoutfile);&lt;br /&gt;
 wavwrite(music, 44100, musicoutfile);&lt;br /&gt;
&lt;br /&gt;
If scratch space is required, please use the three-argument format instead:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir, tmpdir)&lt;br /&gt;
&lt;br /&gt;
Following the convention of other MIREX tasks, an extended abstract is also required (see MIREX 2015 Submission Instructions below).&lt;br /&gt;
&lt;br /&gt;
== Packaging submissions ==&lt;br /&gt;
All submissions should statically link all libraries (the presence of dynamically linked libraries cannot be guaranteed).&lt;br /&gt;
# Be sure to follow the [[2006:Best Coding Practices for MIREX | Best Coding Practices for MIREX]].&lt;br /&gt;
# Be sure to follow the [[MIREX 2015 Submission Instructions]]. For example, under '''Very Important Things to Note''', Clause 6 states that if you plan to submit more than one algorithm or algorithm variant to a given task, EACH algorithm or variant needs its own complete submission to be made including the README and binary bundle upload. Each package will be given its own unique identifier. Tell us in the README the priority of a given algorithm in case we have to limit a task to only one or two algorithms/variants per submitter/team. [Note: our current limit is two entries per team.]&lt;br /&gt;
&lt;br /&gt;
All submissions should include a README file containing the following information:&lt;br /&gt;
# Command line calling format for all executables and an example formatted set of commands&lt;br /&gt;
# Number of threads/cores used or whether this should be specified on the command line&lt;br /&gt;
# Expected memory footprint&lt;br /&gt;
# Expected runtime&lt;br /&gt;
# Approximate scratch disk space needed to store any feature/cache files&lt;br /&gt;
# Any required environments/architectures (and versions), e.g. python, java, bash, matlab.&lt;br /&gt;
# Any special notes on running your algorithm&lt;br /&gt;
&lt;br /&gt;
Note that the information that you place in the README file is '''extremely''' important in ensuring that your submission is evaluated properly.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified.&lt;br /&gt;
&lt;br /&gt;
A hard limit of 24 hours will be imposed on runs. Submissions that exceed this runtime may not receive a result. &lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
name / email&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Singing_Voice_Separation&amp;diff=11194</id>
		<title>2015:Singing Voice Separation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Singing_Voice_Separation&amp;diff=11194"/>
		<updated>2015-08-10T08:02:51Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: /* Evaluation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
The singing voice separation task solicits competing entries to blindly separate the singer's voice from pop music recordings. The entries are evaluated using standard metrics (see Evaluation below).&lt;br /&gt;
&lt;br /&gt;
=== Task specific mailing list ===&lt;br /&gt;
&lt;br /&gt;
All discussions take place on the MIREX &amp;quot;EvalFest&amp;quot; list. If you have a question or comment, simply include the task name in the subject heading.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
&lt;br /&gt;
A collection of 100 clips of recorded pop music (vocals plus music) is used to evaluate the singing voice separation algorithms.&lt;br /&gt;
&lt;br /&gt;
Collection statistics:&lt;br /&gt;
&lt;br /&gt;
# Size of collection: 100 clips&lt;br /&gt;
# Audio details: 16-bit, mono, 44.1kHz, WAV&lt;br /&gt;
# Duration of each clip: 30 seconds&lt;br /&gt;
&lt;br /&gt;
== Evaluation ==&lt;br /&gt;
&lt;br /&gt;
For evaluation we use [http://hal.inria.fr/inria-00630985/PDF/vincent_SigPro11.pdf Vincent ''et al.'''s (2012)] Source to Distortion Ratio (SDR), Source to Interferences Ratio (SIR), and Sources to Artifacts Ratio (SAR), as implemented by [http://bass-db.gforge.inria.fr/bss_eval/bss_eval_sources.m bss_eval_sources.m] in [http://bass-db.gforge.inria.fr/bss_eval/ BSS Eval Version 3.0] (NEW: but with the perm part removed, as the ability to classify signals is part of the singing voice separation challenge). Everything will be normalized to enable a fairer evaluation. More specifically, their function will be invoked as follows:&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; trueVoice = wavread('trueVoice.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueKaraoke = wavread('trueKaraoke.wav');&lt;br /&gt;
 &amp;gt;&amp;gt; trueMixed = trueVoice + trueKaraoke;&lt;br /&gt;
 &amp;gt;&amp;gt; [estimatedVoice, estimatedKaraoke] = wrapper_function_calling_your_separation_algorithm(trueMixed);&lt;br /&gt;
 &amp;gt;&amp;gt; [SDR, SIR, SAR] = bss_eval_sources([estimatedVoice estimatedKaraoke]' / norm(estimatedVoice + estimatedKaraoke), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; [NSDR, NSIR, NSAR] = bss_eval_sources([trueMixed trueMixed]' / norm(trueMixed + trueMixed), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));&lt;br /&gt;
 &amp;gt;&amp;gt; NSDR = SDR - NSDR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSIR = SIR - NSIR;&lt;br /&gt;
 &amp;gt;&amp;gt; NSAR = SAR - NSAR;&lt;br /&gt;
&lt;br /&gt;
The final scores will be determined by the mean over all 100 clips (note that GSIR and GSAR are not normalized):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GNSDR=\frac{\sum_{i=1}^{100}NSDR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSIR=\frac{\sum_{i=1}^{100}SIR_i}{100}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;GSAR=\frac{\sum_{i=1}^{100}SAR_i}{100}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition, sd, min, max and median will also be reported.&lt;br /&gt;
&lt;br /&gt;
== Submission format ==&lt;br /&gt;
&lt;br /&gt;
Participants are required to submit an entry that takes an input filename (full native pathname ending in *.wav) and an output directory as arguments. Entries must write the separated voice and accompaniment to *-voice.wav and *-music.wav under the output directory. For example:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir)&lt;br /&gt;
 [~, name, ext] = fileparts(infile);&lt;br /&gt;
 your_algorithm(infile, fullfile(outdir, [name '-voice' ext]), fullfile(outdir, [name '-music' ext]));&lt;br /&gt;
 &lt;br /&gt;
 function your_algorithm(infile, voiceoutfile, musicoutfile)&lt;br /&gt;
 mixed = wavread(infile);&lt;br /&gt;
 &lt;br /&gt;
 % Insert your algorithm here&lt;br /&gt;
 &lt;br /&gt;
 wavwrite(voice, 44100, voiceoutfile);&lt;br /&gt;
 wavwrite(music, 44100, musicoutfile);&lt;br /&gt;
&lt;br /&gt;
If scratch space is required, please use the three-argument format instead:&lt;br /&gt;
&lt;br /&gt;
 function singing_voice_separation(infile, outdir, tmpdir)&lt;br /&gt;
&lt;br /&gt;
Following the convention of other MIREX tasks, an extended abstract is also required (see MIREX 2015 Submission Instructions below).&lt;br /&gt;
&lt;br /&gt;
== Packaging submissions ==&lt;br /&gt;
All submissions should statically link all libraries (the presence of dynamically linked libraries cannot be guaranteed).&lt;br /&gt;
# Be sure to follow the [[2006:Best Coding Practices for MIREX | Best Coding Practices for MIREX]].&lt;br /&gt;
# Be sure to follow the [[MIREX 2015 Submission Instructions]]. For example, under '''Very Important Things to Note''', Clause 6 states that if you plan to submit more than one algorithm or algorithm variant to a given task, EACH algorithm or variant needs its own complete submission to be made including the README and binary bundle upload. Each package will be given its own unique identifier. Tell us in the README the priority of a given algorithm in case we have to limit a task to only one or two algorithms/variants per submitter/team. [Note: our current limit is two entries per team.]&lt;br /&gt;
&lt;br /&gt;
All submissions should include a README file containing the following information:&lt;br /&gt;
# Command line calling format for all executables and an example formatted set of commands&lt;br /&gt;
# Number of threads/cores used or whether this should be specified on the command line&lt;br /&gt;
# Expected memory footprint&lt;br /&gt;
# Expected runtime&lt;br /&gt;
# Approximate scratch disk space needed to store any feature/cache files&lt;br /&gt;
# Any required environments/architectures (and versions), e.g. python, java, bash, matlab.&lt;br /&gt;
# Any special notes on running your algorithm&lt;br /&gt;
&lt;br /&gt;
Note that the information that you place in the README file is '''extremely''' important in ensuring that your submission is evaluated properly.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified.&lt;br /&gt;
&lt;br /&gt;
A hard limit of 24 hours will be imposed on runs. Submissions that exceed this runtime may not receive a result. &lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
name / email&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=User:Tak-Shing_Chan&amp;diff=10902</id>
		<title>User:Tak-Shing Chan</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=User:Tak-Shing_Chan&amp;diff=10902"/>
		<updated>2015-05-06T06:13:39Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Tak-Shing T. Chan received the Ph.D. degree from the University of London, London, UK in 2008. From 2006 to 2008, he was a Scientific Programmer at the University of Sheffield. In 2011, he worked as a Research Associate at the Hong Kong Polytechnic University. He is currently a Postdoctoral Fellow at the Academia Sinica, Taipei, Taiwan.&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Task_Captains&amp;diff=10833</id>
		<title>2015:Task Captains</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Task_Captains&amp;diff=10833"/>
		<updated>2015-03-27T02:13:34Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;As at ISMIR 2014, we are prepared to improve the distribution of tasks for the upcoming MIREX 2015. To do so, we need leaders to help us organize and run each task.&lt;br /&gt;
&lt;br /&gt;
To volunteer to lead one or more tasks, please add your name in the &amp;quot;Captains&amp;quot; column.&lt;br /&gt;
&lt;br /&gt;
What does it mean to lead a task?&lt;br /&gt;
* Update wiki pages as needed&lt;br /&gt;
* Communicate with submitters and troubleshoot submissions&lt;br /&gt;
* Execute and evaluate submissions&lt;br /&gt;
* Publish final results&lt;br /&gt;
&lt;br /&gt;
Due to the proprietary nature of much of the data, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to provide access to task organizers to manage and run submissions on the IMIRSEL systems.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: 20px&amp;quot;&lt;br /&gt;
!ID !! Task !! Captain(s)&lt;br /&gt;
|-&lt;br /&gt;
|abt&lt;br /&gt;
|[[2015:Audio Beat Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ace&lt;br /&gt;
|[[2015:Audio Chord Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|act&lt;br /&gt;
|[[2015:Audio Classification (Train/Test) Tasks]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|acs&lt;br /&gt;
|[[2015:Audio Cover Song Identification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ade&lt;br /&gt;
|[[2015:Audio Downbeat Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|akd&lt;br /&gt;
|[[2015:Audio Key Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ame&lt;br /&gt;
|[[2015:Audio Melody Extraction]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ams&lt;br /&gt;
|[[2015:Audio Music Similarity and Retrieval]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|aod&lt;br /&gt;
|[[2015:Audio Onset Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ate&lt;br /&gt;
|[[2015:Audio Tempo Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|atg&lt;br /&gt;
|[[2015:Audio Tag Classification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mf0&lt;br /&gt;
|[[2015:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qbsh&lt;br /&gt;
|[[2015:Query by Singing/Humming]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qbt&lt;br /&gt;
|[[2015:Query by Tapping]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|scofo&lt;br /&gt;
|[[2015:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sms&lt;br /&gt;
|[[2015:Symbolic Melodic Similarity]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|struct&lt;br /&gt;
|[[2015:Structural Segmentation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|drts&lt;br /&gt;
|[[2015:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|afp&lt;br /&gt;
|[[2015:Audio_Fingerprinting]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|svs&lt;br /&gt;
|[[2015:Singing_Voice_Separation]]&lt;br /&gt;
|Tak-Shing Chan, Li Su, Yi-Hsuan Yang&lt;br /&gt;
|-&lt;br /&gt;
|kgc&lt;br /&gt;
|[[2015:Audio K-POP Genre Classification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|kmc&lt;br /&gt;
|[[2015:Audio K-POP Mood Classification]]&lt;br /&gt;
|&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=User:Tak-Shing_Chan&amp;diff=10822</id>
		<title>User:Tak-Shing Chan</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=User:Tak-Shing_Chan&amp;diff=10822"/>
		<updated>2015-02-10T07:42:07Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Tak-Shing Chan received the Ph.D. degree in computing from the University of London in 2008. From 2006 to 2008, he was a Scientific Programmer at the University of Sheffield. In 2011, he worked as a Research Associate at the Hong Kong Polytechnic University. He is currently a Postdoctoral Fellow at the Academia Sinica. His research interests include sparse coding, signal processing, music cognition, and distributed systems.&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Singing_Voice_Separation_Results&amp;diff=10571</id>
		<title>2014:Singing Voice Separation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Singing_Voice_Separation_Results&amp;diff=10571"/>
		<updated>2014-10-17T10:19:54Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: /* Labels */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
=== Description ===&lt;br /&gt;
&lt;br /&gt;
These are the results for the 2014 running of the Singing Voice Separation task set. For more information about this task set please refer to the [[2014:Singing Voice Separation]] page.&lt;br /&gt;
&lt;br /&gt;
=== Legend ===&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! width=&amp;quot;80&amp;quot; | Submission code &lt;br /&gt;
	! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
	! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract PDF&lt;br /&gt;
	! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
	|-&lt;br /&gt;
	! GW1&lt;br /&gt;
	| Bayesian Singing-Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/GW1.pdf PDF] || Guan-Xiang Wang, Po-Kai Yang, Chung-Chien Hsu, Jen-Tzung Chien&lt;br /&gt;
        |-&lt;br /&gt;
	! HKHS1&lt;br /&gt;
	| Singing-Voice Separation using Deep Recurrent Neural Networks || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/HKHS1.pdf PDF] || Po-Sen Huang, Minje Kim, Mark Hasegawa-Johnson, Paris Smaragdis&lt;br /&gt;
	|-&lt;br /&gt;
	! HKHS2&lt;br /&gt;
	| Singing-Voice Separation using Deep Recurrent Neural Networks || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/HKHS2.pdf PDF] || Po-Sen Huang, Minje Kim, Mark Hasegawa-Johnson, Paris Smaragdis&lt;br /&gt;
        |-&lt;br /&gt;
	! HKHS3&lt;br /&gt;
	| Singing-Voice Separation using Deep Recurrent Neural Networks || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/HKHS3.pdf PDF] || Po-Sen Huang, Minje Kim, Mark Hasegawa-Johnson, Paris Smaragdis&lt;br /&gt;
	|-&lt;br /&gt;
	! IIY1&lt;br /&gt;
	| Singing Voice Separation and Vocal F0 Estimation based on Robust PCA and Subharmonic Summation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/IIY1.pdf PDF] || Yukara Ikemiya, Katsutoshi Itoyama, Kazuyoshi Yoshii&lt;br /&gt;
        |-&lt;br /&gt;
	! IIY2&lt;br /&gt;
	| Singing Voice Separation and Vocal F0 Estimation based on Robust PCA and Subharmonic Summation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/IIY2.pdf PDF] || Yukara Ikemiya, Katsutoshi Itoyama, Kazuyoshi Yoshii&lt;br /&gt;
        |-&lt;br /&gt;
	! JL1&lt;br /&gt;
	| Singing Voice Separation Based on Sparse Nature and Spectral/Temporal Discontinuity || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/JL1.pdf PDF] || Il-Young Jeong, Kyogu Lee&lt;br /&gt;
        |-&lt;br /&gt;
	! LFR1&lt;br /&gt;
	| Kernel Additive Modelling with light models || style=&amp;quot;text-align: center;&amp;quot; | - || Antoine Liutkus, Derry Fitzgerald, Zafar Rafii&lt;br /&gt;
        |-&lt;br /&gt;
	! RNA1&lt;br /&gt;
	| Singing Voice Separation using Adaptive Window Harmonic Sinusoidal Modeling || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/RNA1.pdf PDF] || Preeti Rao, Nagesh Nayak, Sharath Adavanne&lt;br /&gt;
        |-&lt;br /&gt;
	! RP1&lt;br /&gt;
	| REPET-SIM for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/RP1.pdf PDF] || Zafar Rafii, Bryan Pardo&lt;br /&gt;
        |-&lt;br /&gt;
	! YC1&lt;br /&gt;
	| MIREX 2014 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/YC1.pdf PDF] || Frederick Yen, Tai-Shih Chi&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
==== Evaluation Criteria ====&lt;br /&gt;
&lt;br /&gt;
'''GNSDR''' = Global Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''NSDR''' = Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SIR''' = Signal-to-Interference Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SAR''' = Signal-to-Artifacts Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
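A minimal sketch of how the NSDR and GNSDR figures above are typically obtained (assuming the usual definitions for this task: SDR from a projection of the estimate onto the reference, NSDR as the SDR improvement over the unprocessed mixture, and GNSDR as the length-weighted mean of per-song NSDRs; function names here are illustrative, not the official evaluation code):

```python
import numpy as np

def sdr(estimate, reference):
    # Signal-to-Distortion Ratio in dB: project the estimate onto the
    # reference to split it into a target component and a distortion term.
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference
    distortion = estimate - target
    return 10 * np.log10(np.dot(target, target) / np.dot(distortion, distortion))

def nsdr(estimated_voice, true_voice, mixture):
    # NSDR: how much the separated voice improves on simply using the mixture.
    return sdr(estimated_voice, true_voice) - sdr(mixture, true_voice)

def gnsdr(nsdrs, lengths):
    # GNSDR: NSDR averaged over all test songs, weighted by song length.
    return np.average(nsdrs, weights=lengths)
```

SIR and SAR follow the same pattern but decompose the distortion term further into interference (energy explained by the other source) and artifacts (the remainder).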
== Summary ==&lt;br /&gt;
&lt;br /&gt;
=== Summary Results ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;sortable&amp;quot; border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! Algorithm !! Voice GNSDR (dB) !! Music GNSDR (dB) !! Runtime (hh)&lt;br /&gt;
	|-&lt;br /&gt;
	| GW1 || 2.8861 || 5.2549 || 24&lt;br /&gt;
	|-&lt;br /&gt;
	| HKHS1 || -1.3988 || 0.3483 || 06&lt;br /&gt;
	|-&lt;br /&gt;
	| HKHS2 || -1.9413 || 0.5239 || 06&lt;br /&gt;
	|-&lt;br /&gt;
	| HKHS3 || -2.4807 || 0.1414 || 06&lt;br /&gt;
	|-&lt;br /&gt;
	| IIY1 || 4.2190 || 7.7893 || 02&lt;br /&gt;
	|-&lt;br /&gt;
	| IIY2 || 4.4764 || 7.8661 || 02&lt;br /&gt;
	|-&lt;br /&gt;
	| JL1 || 4.1564 || 5.6304 || 01&lt;br /&gt;
	|-&lt;br /&gt;
	| LFR1 || 0.6499 || 3.0867 || 03&lt;br /&gt;
	|-&lt;br /&gt;
	| RNA1 || 3.6915 || 7.3153 || 06&lt;br /&gt;
	|-&lt;br /&gt;
	| RP1 || 2.8602 || 5.0306 || 01&lt;br /&gt;
	|-&lt;br /&gt;
	| YC1 || -0.8202 || -3.1150 || 13&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
== NSDR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/nsdr-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/nsdr-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2014-svs-nsdr.png]]&lt;br /&gt;
&lt;br /&gt;
== SIR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/sir-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/sir-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2014-svs-sir.png]]&lt;br /&gt;
&lt;br /&gt;
== SAR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/sar-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/sar-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2014-svs-sar.png]]&lt;br /&gt;
&lt;br /&gt;
== Individual Spectrograms ==&lt;br /&gt;
&lt;br /&gt;
As the MIREX test set is private, we use three other songs with similar characteristics to demonstrate the algorithms.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
	|-&lt;br /&gt;
	| [[File:2014-svs-gw1.png|thumb|Spectrograms for GW1]]&lt;br /&gt;
	| [[File:2014-svs-hkhs1.png|thumb|Spectrograms for HKHS1]]&lt;br /&gt;
	| [[File:2014-svs-hkhs2.png|thumb|Spectrograms for HKHS2]]&lt;br /&gt;
	| [[File:2014-svs-hkhs3.png|thumb|Spectrograms for HKHS3]]&lt;br /&gt;
	|-&lt;br /&gt;
	| [[File:2014-svs-iiy1.png|thumb|Spectrograms for IIY1]]&lt;br /&gt;
	| [[File:2014-svs-iiy2.png|thumb|Spectrograms for IIY2]]&lt;br /&gt;
	| [[File:2014-svs-jl1.png|thumb|Spectrograms for JL1]]&lt;br /&gt;
	| [[File:2014-svs-lfr1.png|thumb|Spectrograms for LFR1]]&lt;br /&gt;
	|-&lt;br /&gt;
	| [[File:2014-svs-rna1.png|thumb|Spectrograms for RNA1]]&lt;br /&gt;
	| [[File:2014-svs-rp1.png|thumb|Spectrograms for RP1]]&lt;br /&gt;
	| [[File:2014-svs-yc1.png|thumb|Spectrograms for YC1]]&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
=== Labels ===&lt;br /&gt;
&lt;br /&gt;
'''a''' = input mixture ''x'' &amp;lt;br /&amp;gt;&lt;br /&gt;
'''b''' = ground truth voice for ''x'' &amp;lt;br /&amp;gt;&lt;br /&gt;
'''c''' = extracted voice from ''x'' &amp;lt;br /&amp;gt;&lt;br /&gt;
'''d''' = input mixture ''y'' &amp;lt;br /&amp;gt;&lt;br /&gt;
'''e''' = ground truth voice for ''y'' &amp;lt;br /&amp;gt;&lt;br /&gt;
'''f''' = extracted voice from ''y'' &amp;lt;br /&amp;gt;&lt;br /&gt;
'''g''' = input mixture ''z'' &amp;lt;br /&amp;gt;&lt;br /&gt;
'''h''' = ground truth voice for ''z'' &amp;lt;br /&amp;gt;&lt;br /&gt;
'''i''' = extracted voice from ''z'' &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Runtime Data ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/runtime.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Singing_Voice_Separation_Results&amp;diff=10570</id>
		<title>2014:Singing Voice Separation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Singing_Voice_Separation_Results&amp;diff=10570"/>
		<updated>2014-10-17T09:57:43Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: /* Legend */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
=== Description ===&lt;br /&gt;
&lt;br /&gt;
These are the results for the 2014 running of the Singing Voice Separation task set. For more information about this task set, please refer to the [[2014:Singing Voice Separation]] page.&lt;br /&gt;
&lt;br /&gt;
=== Legend ===&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! width=&amp;quot;80&amp;quot; | Submission code &lt;br /&gt;
	! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
	! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract PDF&lt;br /&gt;
	! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
	|-&lt;br /&gt;
	! GW1&lt;br /&gt;
	| Bayesian Singing-Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/GW1.pdf PDF] || Guan-Xiang Wang, Po-Kai Yang, Chung-Chien Hsu, Jen-Tzung Chien&lt;br /&gt;
        |-&lt;br /&gt;
	! HKHS1&lt;br /&gt;
	| Singing-Voice Separation using Deep Recurrent Neural Networks || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/HKHS1.pdf PDF] || Po-Sen Huang, Minje Kim, Mark Hasegawa-Johnson, Paris Smaragdis&lt;br /&gt;
	|-&lt;br /&gt;
	! HKHS2&lt;br /&gt;
	| Singing-Voice Separation using Deep Recurrent Neural Networks || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/HKHS2.pdf PDF] || Po-Sen Huang, Minje Kim, Mark Hasegawa-Johnson, Paris Smaragdis&lt;br /&gt;
        |-&lt;br /&gt;
	! HKHS3&lt;br /&gt;
	| Singing-Voice Separation using Deep Recurrent Neural Networks || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/HKHS3.pdf PDF] || Po-Sen Huang, Minje Kim, Mark Hasegawa-Johnson, Paris Smaragdis&lt;br /&gt;
	|-&lt;br /&gt;
	! IIY1&lt;br /&gt;
	| Singing Voice Separation and Vocal F0 Estimation based on Robust PCA and Subharmonic Summation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/IIY1.pdf PDF] || Yukara Ikemiya, Katsutoshi Itoyama, Kazuyoshi Yoshii&lt;br /&gt;
        |-&lt;br /&gt;
	! IIY2&lt;br /&gt;
	| Singing Voice Separation and Vocal F0 Estimation based on Robust PCA and Subharmonic Summation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/IIY2.pdf PDF] || Yukara Ikemiya, Katsutoshi Itoyama, Kazuyoshi Yoshii&lt;br /&gt;
        |-&lt;br /&gt;
	! JL1&lt;br /&gt;
	| Singing Voice Separation Based on Sparse Nature and Spectral/Temporal Discontinuity || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/JL1.pdf PDF] || Il-Young Jeong, Kyogu Lee&lt;br /&gt;
        |-&lt;br /&gt;
	! LFR1&lt;br /&gt;
	| Kernel Additive Modelling with light models || style=&amp;quot;text-align: center;&amp;quot; | - || Antoine Liutkus, Derry Fitzgerald, Zafar Rafii&lt;br /&gt;
        |-&lt;br /&gt;
	! RNA1&lt;br /&gt;
	| Singing Voice Separation using Adaptive Window Harmonic Sinusoidal Modeling || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/RNA1.pdf PDF] || Preeti Rao, Nagesh Nayak, Sharath Adavanne&lt;br /&gt;
        |-&lt;br /&gt;
	! RP1&lt;br /&gt;
	| REPET-SIM for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/RP1.pdf PDF] || Zafar Rafii, Bryan Pardo&lt;br /&gt;
        |-&lt;br /&gt;
	! YC1&lt;br /&gt;
	| MIREX 2014 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/YC1.pdf PDF] || Frederick Yen, Tai-Shih Chi&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
==== Evaluation Criteria ====&lt;br /&gt;
&lt;br /&gt;
'''GNSDR''' = Global Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''NSDR''' = Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SIR''' = Signal-to-Interference Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SAR''' = Signal-to-Artifacts Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
=== Summary Results ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;sortable&amp;quot; border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! Algorithm !! Voice GNSDR (dB) !! Music GNSDR (dB) !! Runtime (hh)&lt;br /&gt;
	|-&lt;br /&gt;
	| GW1 || 2.8861 || 5.2549 || 24&lt;br /&gt;
	|-&lt;br /&gt;
	| HKHS1 || -1.3988 || 0.3483 || 06&lt;br /&gt;
	|-&lt;br /&gt;
	| HKHS2 || -1.9413 || 0.5239 || 06&lt;br /&gt;
	|-&lt;br /&gt;
	| HKHS3 || -2.4807 || 0.1414 || 06&lt;br /&gt;
	|-&lt;br /&gt;
	| IIY1 || 4.2190 || 7.7893 || 02&lt;br /&gt;
	|-&lt;br /&gt;
	| IIY2 || 4.4764 || 7.8661 || 02&lt;br /&gt;
	|-&lt;br /&gt;
	| JL1 || 4.1564 || 5.6304 || 01&lt;br /&gt;
	|-&lt;br /&gt;
	| LFR1 || 0.6499 || 3.0867 || 03&lt;br /&gt;
	|-&lt;br /&gt;
	| RNA1 || 3.6915 || 7.3153 || 06&lt;br /&gt;
	|-&lt;br /&gt;
	| RP1 || 2.8602 || 5.0306 || 01&lt;br /&gt;
	|-&lt;br /&gt;
	| YC1 || -0.8202 || -3.1150 || 13&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
== NSDR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/nsdr-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/nsdr-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2014-svs-nsdr.png]]&lt;br /&gt;
&lt;br /&gt;
== SIR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/sir-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/sir-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2014-svs-sir.png]]&lt;br /&gt;
&lt;br /&gt;
== SAR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/sar-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/sar-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2014-svs-sar.png]]&lt;br /&gt;
&lt;br /&gt;
== Individual Spectrograms ==&lt;br /&gt;
&lt;br /&gt;
As the MIREX test set is private, we use three other songs with similar characteristics to demonstrate the algorithms.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
	|-&lt;br /&gt;
	| [[File:2014-svs-gw1.png|thumb|Spectrograms for GW1]]&lt;br /&gt;
	| [[File:2014-svs-hkhs1.png|thumb|Spectrograms for HKHS1]]&lt;br /&gt;
	| [[File:2014-svs-hkhs2.png|thumb|Spectrograms for HKHS2]]&lt;br /&gt;
	| [[File:2014-svs-hkhs3.png|thumb|Spectrograms for HKHS3]]&lt;br /&gt;
	|-&lt;br /&gt;
	| [[File:2014-svs-iiy1.png|thumb|Spectrograms for IIY1]]&lt;br /&gt;
	| [[File:2014-svs-iiy2.png|thumb|Spectrograms for IIY2]]&lt;br /&gt;
	| [[File:2014-svs-jl1.png|thumb|Spectrograms for JL1]]&lt;br /&gt;
	| [[File:2014-svs-lfr1.png|thumb|Spectrograms for LFR1]]&lt;br /&gt;
	|-&lt;br /&gt;
	| [[File:2014-svs-rna1.png|thumb|Spectrograms for RNA1]]&lt;br /&gt;
	| [[File:2014-svs-rp1.png|thumb|Spectrograms for RP1]]&lt;br /&gt;
	| [[File:2014-svs-yc1.png|thumb|Spectrograms for YC1]]&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
=== Labels ===&lt;br /&gt;
&lt;br /&gt;
'''a''' = input mixture x, '''b''' = ground truth voice for x, '''c''' = extracted voice from x, &amp;lt;br /&amp;gt;&lt;br /&gt;
'''d''' = input mixture y, '''e''' = ground truth voice for y, '''f''' = extracted voice from y, &amp;lt;br /&amp;gt;&lt;br /&gt;
'''g''' = input mixture z, '''h''' = ground truth voice for z, '''i''' = extracted voice from z &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Runtime Data ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/runtime.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Singing_Voice_Separation_Results&amp;diff=10569</id>
		<title>2014:Singing Voice Separation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Singing_Voice_Separation_Results&amp;diff=10569"/>
		<updated>2014-10-17T09:56:16Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: /* Legend */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
=== Description ===&lt;br /&gt;
&lt;br /&gt;
These are the results for the 2014 running of the Singing Voice Separation task set. For more information about this task set, please refer to the [[2014:Singing Voice Separation]] page.&lt;br /&gt;
&lt;br /&gt;
=== Legend ===&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! width=&amp;quot;80&amp;quot; | Submission code &lt;br /&gt;
	! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
	! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract PDF&lt;br /&gt;
	! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
	|-&lt;br /&gt;
	! GW1&lt;br /&gt;
	| Bayesian Singing-Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/GW1.pdf PDF] || Guan-Xiang Wang, Po-Kai Yang, Chung-Chien Hsu, Jen-Tzung Chien&lt;br /&gt;
        |-&lt;br /&gt;
	! HKHS1&lt;br /&gt;
	| Singing-Voice Separation using Deep Recurrent Neural Networks || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/HKHS1.pdf PDF] || Po-Sen Huang, Minje Kim, Mark Hasegawa-Johnson, Paris Smaragdis&lt;br /&gt;
	|-&lt;br /&gt;
	! HKHS2&lt;br /&gt;
	| Singing-Voice Separation using Deep Recurrent Neural Networks || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/HKHS2.pdf PDF] || Po-Sen Huang, Minje Kim, Mark Hasegawa-Johnson, Paris Smaragdis&lt;br /&gt;
        |-&lt;br /&gt;
	! HKHS3&lt;br /&gt;
	| Singing-Voice Separation using Deep Recurrent Neural Networks || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/HKHS3.pdf PDF] || Po-Sen Huang, Minje Kim, Mark Hasegawa-Johnson, Paris Smaragdis&lt;br /&gt;
	|-&lt;br /&gt;
	! IIY1&lt;br /&gt;
	| Singing Voice Separation and Vocal F0 Estimation based on Robust PCA and Subharmonic Summation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/IIY1.pdf PDF] || Yukara Ikemiya, Katsutoshi Itoyama, Kazuyoshi Yoshii&lt;br /&gt;
        |-&lt;br /&gt;
	! IIY2&lt;br /&gt;
	| Singing Voice Separation and Vocal F0 Estimation based on Robust PCA and Subharmonic Summation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/IIY2.pdf PDF] || Yukara Ikemiya, Katsutoshi Itoyama, Kazuyoshi Yoshii&lt;br /&gt;
        |-&lt;br /&gt;
	! JL1&lt;br /&gt;
	| Singing Voice Separation Based on Sparse Nature and Spectral/Temporal Discontinuity || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/JL1.pdf PDF] || Il-Young Jeong, Kyogu Lee&lt;br /&gt;
        |-&lt;br /&gt;
	! LFR1&lt;br /&gt;
	| Kernel Additive Modelling with light models || style=&amp;quot;text-align: center;&amp;quot; | - || Antoine Liutkus, Derry Fitzgerald, Zafar Rafii&lt;br /&gt;
        |-&lt;br /&gt;
	! RNA1&lt;br /&gt;
	| Singing Voice Separation using Adaptive Window Harmonic Sinusoidal Modeling || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/RNA1.pdf PDF] || Preeti Rao, Nagesh Nayak, Sharath Adavanne&lt;br /&gt;
        |-&lt;br /&gt;
	! RP1&lt;br /&gt;
	| REPET-SIM for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/RP1.pdf PDF] || Zafar Rafii, Bryan Pardo&lt;br /&gt;
        |-&lt;br /&gt;
	! YC1&lt;br /&gt;
	| MIREX 2014 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/YC1.pdf PDF] || Frederick Yen, Tai-Shih Chi&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
==== Evaluation Criteria ====&lt;br /&gt;
&lt;br /&gt;
'''GNSDR''' = Global Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''NSDR''' = Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SIR''' = Signal-to-Interference Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SAR''' = Signal-to-Artifacts Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
=== Summary Results ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;sortable&amp;quot; border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! Algorithm !! Voice GNSDR (dB) !! Music GNSDR (dB) !! Runtime (hh)&lt;br /&gt;
	|-&lt;br /&gt;
	| GW1 || 2.8861 || 5.2549 || 24&lt;br /&gt;
	|-&lt;br /&gt;
	| HKHS1 || -1.3988 || 0.3483 || 06&lt;br /&gt;
	|-&lt;br /&gt;
	| HKHS2 || -1.9413 || 0.5239 || 06&lt;br /&gt;
	|-&lt;br /&gt;
	| HKHS3 || -2.4807 || 0.1414 || 06&lt;br /&gt;
	|-&lt;br /&gt;
	| IIY1 || 4.2190 || 7.7893 || 02&lt;br /&gt;
	|-&lt;br /&gt;
	| IIY2 || 4.4764 || 7.8661 || 02&lt;br /&gt;
	|-&lt;br /&gt;
	| JL1 || 4.1564 || 5.6304 || 01&lt;br /&gt;
	|-&lt;br /&gt;
	| LFR1 || 0.6499 || 3.0867 || 03&lt;br /&gt;
	|-&lt;br /&gt;
	| RNA1 || 3.6915 || 7.3153 || 06&lt;br /&gt;
	|-&lt;br /&gt;
	| RP1 || 2.8602 || 5.0306 || 01&lt;br /&gt;
	|-&lt;br /&gt;
	| YC1 || -0.8202 || -3.1150 || 13&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
== NSDR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/nsdr-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/nsdr-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2014-svs-nsdr.png]]&lt;br /&gt;
&lt;br /&gt;
== SIR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/sir-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/sir-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2014-svs-sir.png]]&lt;br /&gt;
&lt;br /&gt;
== SAR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/sar-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/sar-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2014-svs-sar.png]]&lt;br /&gt;
&lt;br /&gt;
== Individual Spectrograms ==&lt;br /&gt;
&lt;br /&gt;
As the MIREX test set is private, we use three other songs with similar characteristics to demonstrate the algorithms.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
	|-&lt;br /&gt;
	| [[File:2014-svs-gw1.png|thumb|Spectrograms for GW1]]&lt;br /&gt;
	| [[File:2014-svs-hkhs1.png|thumb|Spectrograms for HKHS1]]&lt;br /&gt;
	| [[File:2014-svs-hkhs2.png|thumb|Spectrograms for HKHS2]]&lt;br /&gt;
	| [[File:2014-svs-hkhs3.png|thumb|Spectrograms for HKHS3]]&lt;br /&gt;
	|-&lt;br /&gt;
	| [[File:2014-svs-iiy1.png|thumb|Spectrograms for IIY1]]&lt;br /&gt;
	| [[File:2014-svs-iiy2.png|thumb|Spectrograms for IIY2]]&lt;br /&gt;
	| [[File:2014-svs-jl1.png|thumb|Spectrograms for JL1]]&lt;br /&gt;
	| [[File:2014-svs-lfr1.png|thumb|Spectrograms for LFR1]]&lt;br /&gt;
	|-&lt;br /&gt;
	| [[File:2014-svs-rna1.png|thumb|Spectrograms for RNA1]]&lt;br /&gt;
	| [[File:2014-svs-rp1.png|thumb|Spectrograms for RP1]]&lt;br /&gt;
	| [[File:2014-svs-yc1.png|thumb|Spectrograms for YC1]]&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
=== Labels ===&lt;br /&gt;
&lt;br /&gt;
'''a''' = input mixture x, '''b''' = ground truth voice for x, '''c''' = extracted voice from x, &amp;lt;br /&amp;gt;&lt;br /&gt;
'''d''' = input mixture y, '''e''' = ground truth voice for y, '''f''' = extracted voice from y, &amp;lt;br /&amp;gt;&lt;br /&gt;
'''g''' = input mixture z, '''h''' = ground truth voice for z, '''i''' = extracted voice from z &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Runtime Data ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/runtime.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Singing_Voice_Separation_Results&amp;diff=10567</id>
		<title>2014:Singing Voice Separation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Singing_Voice_Separation_Results&amp;diff=10567"/>
		<updated>2014-10-16T15:45:12Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
=== Description ===&lt;br /&gt;
&lt;br /&gt;
These are the results for the 2014 running of the Singing Voice Separation task set. For more information about this task set, please refer to the [[2014:Singing Voice Separation]] page.&lt;br /&gt;
&lt;br /&gt;
=== Legend ===&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! width=&amp;quot;80&amp;quot; | Submission code &lt;br /&gt;
	! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
	! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract PDF&lt;br /&gt;
	! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
	|-&lt;br /&gt;
	! GW1&lt;br /&gt;
	| Bayesian Singing-Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/GW1.pdf PDF] || Guan-Xiang Wang, Po-Kai Yang, Chung-Chien Hsu, Jen-Tzung Chien&lt;br /&gt;
        |-&lt;br /&gt;
	! HKHS1&lt;br /&gt;
	| Singing-Voice Separation using Deep Recurrent Neural Networks || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/HKHS1.pdf PDF] || Po-Sen Huang, Minje Kim, Mark Hasegawa-Johnson, Paris Smaragdis&lt;br /&gt;
	|-&lt;br /&gt;
	! HKHS2&lt;br /&gt;
	| Singing-Voice Separation using Deep Recurrent Neural Networks || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/HKHS2.pdf PDF] || Po-Sen Huang, Minje Kim, Mark Hasegawa-Johnson, Paris Smaragdis&lt;br /&gt;
        |-&lt;br /&gt;
	! HKHS3&lt;br /&gt;
	| Singing-Voice Separation using Deep Recurrent Neural Networks || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/HKHS3.pdf PDF] || Po-Sen Huang, Minje Kim, Mark Hasegawa-Johnson, Paris Smaragdis&lt;br /&gt;
	|-&lt;br /&gt;
	! IIY1&lt;br /&gt;
	| IIY1 || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/IIY1.pdf PDF] || Yukara Ikemiya&lt;br /&gt;
        |-&lt;br /&gt;
	! IIY2&lt;br /&gt;
	| IIY2 || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/IIY2.pdf PDF] || Yukara Ikemiya&lt;br /&gt;
        |-&lt;br /&gt;
	! JL1&lt;br /&gt;
	| MIREX 2014: Singing Voice Separation using Spectral/Temporal Discontinuity || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/JL1.pdf PDF] || Il-Young Jeong, Kyogu Lee&lt;br /&gt;
        |-&lt;br /&gt;
	! LFR1&lt;br /&gt;
	| LFR1 || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/LFR1.pdf PDF] || Antoine Liutkus, Derry Fitzgerald, Zafar Rafii&lt;br /&gt;
        |-&lt;br /&gt;
	! RNA1&lt;br /&gt;
	| Singing Voice Separation using Adaptive Window Harmonic Sinusoidal Modeling || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/RNA1.pdf PDF] || Preeti Rao, Nagesh Nayak, Sharath Adavanne&lt;br /&gt;
        |-&lt;br /&gt;
	! RP1&lt;br /&gt;
	| REPET-SIM for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/RP1.pdf PDF] || Zafar Rafii, Bryan Pardo&lt;br /&gt;
        |-&lt;br /&gt;
	! YC1&lt;br /&gt;
	| MIREX 2014 Submission for Singing Voice Separation || style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/YC1.pdf PDF] || Frederick Yen, Tai-Shih Chi&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
==== Evaluation Criteria ====&lt;br /&gt;
&lt;br /&gt;
'''GNSDR''' = Global Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''NSDR''' = Normalized Signal-to-Distortion Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SIR''' = Signal-to-Interference Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
'''SAR''' = Signal-to-Artifacts Ratio &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
=== Summary Results ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;sortable&amp;quot; border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! Algorithm !! Voice GNSDR (dB) !! Music GNSDR (dB) !! Runtime (hh)&lt;br /&gt;
	|-&lt;br /&gt;
	| GW1 || 2.8861 || 5.2549 || 24&lt;br /&gt;
	|-&lt;br /&gt;
	| HKHS1 || -1.3988 || 0.3483 || 06&lt;br /&gt;
	|-&lt;br /&gt;
	| HKHS2 || -1.9413 || 0.5239 || 06&lt;br /&gt;
	|-&lt;br /&gt;
	| HKHS3 || -2.4807 || 0.1414 || 06&lt;br /&gt;
	|-&lt;br /&gt;
	| IIY1 || 4.2190 || 7.7893 || 02&lt;br /&gt;
	|-&lt;br /&gt;
	| IIY2 || 4.4764 || 7.8661 || 02&lt;br /&gt;
	|-&lt;br /&gt;
	| JL1 || 4.1564 || 5.6304 || 01&lt;br /&gt;
	|-&lt;br /&gt;
	| LFR1 || 0.6499 || 3.0867 || 03&lt;br /&gt;
	|-&lt;br /&gt;
	| RNA1 || 3.6915 || 7.3153 || 06&lt;br /&gt;
	|-&lt;br /&gt;
	| RP1 || 2.8602 || 5.0306 || 01&lt;br /&gt;
	|-&lt;br /&gt;
	| YC1 || -0.8202 || -3.1150 || 13&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
== NSDR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/nsdr-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/nsdr-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2014-svs-nsdr.png]]&lt;br /&gt;
&lt;br /&gt;
== SIR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/sir-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/sir-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2014-svs-sir.png]]&lt;br /&gt;
&lt;br /&gt;
== SAR ==&lt;br /&gt;
&lt;br /&gt;
=== For the Singing Voice (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/sar-voice.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== For the Music Accompaniment (dB) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/sar-music.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Boxplots ===&lt;br /&gt;
&lt;br /&gt;
[[File:2014-svs-sar.png]]&lt;br /&gt;
&lt;br /&gt;
== Individual Spectrograms ==&lt;br /&gt;
&lt;br /&gt;
As the MIREX test set is private, we use three other songs with similar characteristics to demonstrate the algorithms.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
	|-&lt;br /&gt;
	| [[File:2014-svs-gw1.png|thumb|Spectrograms for GW1]]&lt;br /&gt;
	| [[File:2014-svs-hkhs1.png|thumb|Spectrograms for HKHS1]]&lt;br /&gt;
	| [[File:2014-svs-hkhs2.png|thumb|Spectrograms for HKHS2]]&lt;br /&gt;
	| [[File:2014-svs-hkhs3.png|thumb|Spectrograms for HKHS3]]&lt;br /&gt;
	|-&lt;br /&gt;
	| [[File:2014-svs-iiy1.png|thumb|Spectrograms for IIY1]]&lt;br /&gt;
	| [[File:2014-svs-iiy2.png|thumb|Spectrograms for IIY2]]&lt;br /&gt;
	| [[File:2014-svs-jl1.png|thumb|Spectrograms for JL1]]&lt;br /&gt;
	| [[File:2014-svs-lfr1.png|thumb|Spectrograms for LFR1]]&lt;br /&gt;
	|-&lt;br /&gt;
	| [[File:2014-svs-rna1.png|thumb|Spectrograms for RNA1]]&lt;br /&gt;
	| [[File:2014-svs-rp1.png|thumb|Spectrograms for RP1]]&lt;br /&gt;
	| [[File:2014-svs-yc1.png|thumb|Spectrograms for YC1]]&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
=== Labels ===&lt;br /&gt;
&lt;br /&gt;
'''a''' = input mixture x, '''b''' = ground truth voice for x, '''c''' = extracted voice from x, &amp;lt;br /&amp;gt;&lt;br /&gt;
'''d''' = input mixture y, '''e''' = ground truth voice for y, '''f''' = extracted voice from y, &amp;lt;br /&amp;gt;&lt;br /&gt;
'''g''' = input mixture z, '''h''' = ground truth voice for z, '''i''' = extracted voice from z &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Runtime Data ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/svs/runtime.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:2014-svs-yc1.png&amp;diff=10566</id>
		<title>File:2014-svs-yc1.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:2014-svs-yc1.png&amp;diff=10566"/>
		<updated>2014-10-16T15:02:34Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: YC1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;YC1&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:2014-svs-rp1.png&amp;diff=10565</id>
		<title>File:2014-svs-rp1.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:2014-svs-rp1.png&amp;diff=10565"/>
		<updated>2014-10-16T15:02:09Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: RP1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;RP1&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:2014-svs-rna1.png&amp;diff=10564</id>
		<title>File:2014-svs-rna1.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:2014-svs-rna1.png&amp;diff=10564"/>
		<updated>2014-10-16T15:00:26Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: RNA1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;RNA1&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:2014-svs-lfr1.png&amp;diff=10563</id>
		<title>File:2014-svs-lfr1.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:2014-svs-lfr1.png&amp;diff=10563"/>
		<updated>2014-10-16T14:59:47Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: LFR1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;LFR1&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:2014-svs-jl1.png&amp;diff=10562</id>
		<title>File:2014-svs-jl1.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:2014-svs-jl1.png&amp;diff=10562"/>
		<updated>2014-10-16T14:59:17Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: JL1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;JL1&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:2014-svs-iiy2.png&amp;diff=10561</id>
		<title>File:2014-svs-iiy2.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:2014-svs-iiy2.png&amp;diff=10561"/>
		<updated>2014-10-16T14:58:31Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: IIY2&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;IIY2&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:2014-svs-iiy1.png&amp;diff=10560</id>
		<title>File:2014-svs-iiy1.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:2014-svs-iiy1.png&amp;diff=10560"/>
		<updated>2014-10-16T14:57:44Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: IIY1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;IIY1&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:2014-svs-hkhs3.png&amp;diff=10559</id>
		<title>File:2014-svs-hkhs3.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:2014-svs-hkhs3.png&amp;diff=10559"/>
		<updated>2014-10-16T14:56:45Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: HKHS3&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;HKHS3&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:2014-svs-hkhs2.png&amp;diff=10558</id>
		<title>File:2014-svs-hkhs2.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:2014-svs-hkhs2.png&amp;diff=10558"/>
		<updated>2014-10-16T14:56:04Z</updated>

		<summary type="html">&lt;p&gt;Tak-Shing Chan: HKHS2&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;HKHS2&lt;/div&gt;</summary>
		<author><name>Tak-Shing Chan</name></author>
		
	</entry>
</feed>