<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Peter+Organisciak</id>
	<title>MIREX Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Peter+Organisciak"/>
	<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/wiki/Special:Contributions/Peter_Organisciak"/>
	<updated>2026-04-29T19:18:07Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.1</generator>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2016:MIREX2016_Results&amp;diff=11846</id>
		<title>2016:MIREX2016 Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2016:MIREX2016_Results&amp;diff=11846"/>
		<updated>2016-08-04T16:58:47Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: Struct results&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;*Train-Test Task Set&lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2016/results/act/composer_report/ Audio Classical Composer Identification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2016/results/act/latin_report/ Audio Latin Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2016/results/act/mood_report/index.html Audio Music Mood Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2016/results/act/mixed_report/ Audio Mixed Popular Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2016/results/act/kgenrek_report/ Audio KPOP Genre (Annotated by Korean Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2016/results/act/kgenrea_report/ Audio KPOP Genre (Annotated by American Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2016/results/act/kmoodk_report/ Audio KPOP Mood (Annotated by Korean Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2016/results/act/kmooda_report/ Audio KPOP Mood (Annotated by American Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Key Detection Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/akd/mrx_05 MIREX 2005 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/akd/gsteps GiantSteps Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Melody Extraction Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/ame/adc04/  ADC04 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/ame/mrx05/ MIREX05 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/ame/ind08/ INDIAN08 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/ame/mrx09_0db/ MIREX09 0dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/ame/mrx09_m5db/ MIREX09 -5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/ame/mrx09_p5db/ MIREX09 +5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/ame/orchset/ ORCHSET15 Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Query-by-Singing/Humming Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/qbsh/qbsh_task1_hidden/  Hidden Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/qbsh/qbsh_task1_jang/  Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/qbsh/qbsh_task1_thinkit/ ThinkIt Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/qbsh/qbsh_task1_ioacas/ IOACAS Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/qbsh/qbsh_task2_jang/ Subtask2 Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2016:Audio Downbeat Estimation Results|Audio Downbeat Estimation Results]]&lt;br /&gt;
* [[2016:Audio Fingerprinting Results|Audio Fingerprinting Results]]&lt;br /&gt;
&lt;br /&gt;
* Music Structure Segmentation Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/struct/mrx09/ MIREX09 dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/struct/mrx10_1/ RWC dataset - Quaero (MIREX10) Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/struct/mrx10_2/ RWC dataset - Original RWC Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/struct/salami/ SALAMI dataset] (partial) &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Beat Tracking Results &lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/abt/smc/ SMC Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/abt/maz/ MAZ Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/abt/mck/ MCK Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2016/results/aod/ Audio Onset Detection Results] &amp;amp;nbsp;&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2016/results/ate/ Audio Tempo Estimation Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2016:Audio_Chord_Estimation_Results | Audio Chord Estimation]]&lt;br /&gt;
** [[2016:Audio_Chord_Estimation_Results#Isophonics_2009 | Isophonics 2009 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2016:Audio_Chord_Estimation_Results#Billboard_2012 | Billboard 2012 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2016:Audio_Chord_Estimation_Results#Billboard_2013 | Billboard 2013 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2016:Audio_Chord_Estimation_Results#JayChou_2015 | JayChou 2015 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2016:Audio_Chord_Estimation_Results#RobbieWilliams_2016 | RobbieWilliams 2016 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Multiple Fundamental Frequency Estimation &amp;amp; Tracking Results&lt;br /&gt;
** [[2016:Multiple_Fundamental_Frequency_Estimation_%26_Tracking_Results_%2D_MIREX_Dataset | MIREX Dataset]] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2016:Set List Identification Results | Set List Identification Results]]&lt;br /&gt;
&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2016:Singing_Voice_Separation_Results Singing Voice Separation]&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Audio_Cover_Song_Identification_Results&amp;diff=11525</id>
		<title>2015:Audio Cover Song Identification Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Audio_Cover_Song_Identification_Results&amp;diff=11525"/>
		<updated>2015-10-21T04:06:14Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* General Legend */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Audio Cover Song (ACS) Identification was evaluated against two datasets: ''Mixed Collection'' and ''Sapp's Mazurka Collection''.&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection Information===&lt;br /&gt;
This is the &amp;quot;original&amp;quot; ACS collection. Within the 1000 pieces in the Audio Cover Song database, there are embedded 30 different &amp;quot;cover songs&amp;quot; each represented by 11 different &amp;quot;versions&amp;quot; for a total of 330 audio files (16bit, monophonic, 22.05khz, wav). The &amp;quot;cover songs&amp;quot; represent a variety of genres (e.g., classical, jazz, gospel, rock, folk-rock, etc.) and the variations span a variety of styles and orchestrations. &lt;br /&gt;
&lt;br /&gt;
Using each of these cover song files in turn as the &amp;quot;seed/query&amp;quot; file, we will examine the returned lists of items for the presence of the other 10 versions of the &amp;quot;seed/query&amp;quot; file.&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazurka Collection Information ===&lt;br /&gt;
In addition to our original ACS dataset, we used the  [http://www.mazurka.org.uk/ Mazurka.org dataset] put together by Craig Sapp. We randomly chose 11 versions from 49 mazurkas and ran it as a separate ACS subtask. Systems returned a distance matrix of 539x539 from which we located the ranks of each of the associated cover versions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
    |- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; | Sub code &lt;br /&gt;
    ! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract &lt;br /&gt;
    ! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
    |-&lt;br /&gt;
    ! CT1&lt;br /&gt;
    | 	MFCCShapeSSM ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CT1.pdf PDF] || [http://www.ctralie.com/ Christopher Tralie]&lt;br /&gt;
    |-&lt;br /&gt;
    ! YWW1&lt;br /&gt;
    | 	YWW ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/YWW1.pdf PDF] || [http://slam.iis.sinica.edu.tw/index.htm Ming-Chi Yen], [http://sovideo.iis.sinica.edu.tw/SLG/index.html Ju-Chiang Wang], [http://www.iis.sinica.edu.tw 	Hsin-Min Wang]&lt;br /&gt;
    |-&lt;br /&gt;
    ! CYWW1&lt;br /&gt;
    | 	CYWW ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CYWW1.pdf PDF] || [http://slam.iis.sinica.edu.tw/  Chuan-Yau Chan], [http://slam.iis.sinica.edu.tw/index.htm Ming-Chi Yen], [http://sovideo.iis.sinica.edu.tw/SLG/index.html Ju-Chiang Wang], [http://www.iis.sinica.edu.tw 	Hsin-Min Wang]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection===&lt;br /&gt;
&lt;br /&gt;
====Summary Results====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2015/acs/coversong1000.summary.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Number of Correct Covers at Rank X Returned in Top Ten==== &lt;br /&gt;
&amp;lt;csv&amp;gt;2015/acs/coversong1000.toptendist.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Average Performance per Query Group====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2015/acs/coversong1000.precision.groups.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazuraka Collection===&lt;br /&gt;
&lt;br /&gt;
====Summary results====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2015/acs/mazurkas.summary.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Number of Correct Covers at Rank X Returned in Top Ten====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/acs/mazurkas.toptendist.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Average Performance per Query Group====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2015/acs/mazurkas.precision.groups.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Individual Results Files==&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection===&lt;br /&gt;
'''Average Precision by Query'''&lt;br /&gt;
&lt;br /&gt;
'''CT1''' : [https://music-ir.org/mirex/results/2015/acs/CT1.coversong1000.avgprec.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2015/acs/CYWW1.coversong1000.avgprec.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Rank Lists'''&lt;br /&gt;
&lt;br /&gt;
'''CT1''' : [https://music-ir.org/mirex/results/2015/acs/CT1.coversong1000.ranklist.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang]&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2015/acs/CYWW1.coversong1000.ranklist.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazurka Collection===&lt;br /&gt;
&lt;br /&gt;
'''Average Precision by Query'''&lt;br /&gt;
&lt;br /&gt;
'''CT1''' : [https://music-ir.org/mirex/results/2015/acs/CT1.mazurkas.avgprec.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2015/acs/CYWW1.mazurkas.avgprec.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Rank Lists'''&lt;br /&gt;
&lt;br /&gt;
'''CT1''' : [https://music-ir.org/mirex/results/2015/acs/CT1.mazurkas.ranklist.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2015/acs/CYWW1.mazurkas.ranklist.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
[[Category: Results]]&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Audio_Cover_Song_Identification_Results&amp;diff=11524</id>
		<title>2015:Audio Cover Song Identification Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Audio_Cover_Song_Identification_Results&amp;diff=11524"/>
		<updated>2015-10-21T04:02:39Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* General Legend */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Audio Cover Song (ACS) Identification was evaluated against two datasets: ''Mixed Collection'' and ''Sapp's Mazurka Collection''.&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection Information===&lt;br /&gt;
This is the &amp;quot;original&amp;quot; ACS collection. Within the 1000 pieces in the Audio Cover Song database, there are embedded 30 different &amp;quot;cover songs&amp;quot; each represented by 11 different &amp;quot;versions&amp;quot; for a total of 330 audio files (16bit, monophonic, 22.05khz, wav). The &amp;quot;cover songs&amp;quot; represent a variety of genres (e.g., classical, jazz, gospel, rock, folk-rock, etc.) and the variations span a variety of styles and orchestrations. &lt;br /&gt;
&lt;br /&gt;
Using each of these cover song files in turn as the &amp;quot;seed/query&amp;quot; file, we will examine the returned lists of items for the presence of the other 10 versions of the &amp;quot;seed/query&amp;quot; file.&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazurka Collection Information ===&lt;br /&gt;
In addition to our original ACS dataset, we used the  [http://www.mazurka.org.uk/ Mazurka.org dataset] put together by Craig Sapp. We randomly chose 11 versions from 49 mazurkas and ran it as a separate ACS subtask. Systems returned a distance matrix of 539x539 from which we located the ranks of each of the associated cover versions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
    |- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; | Sub code &lt;br /&gt;
    ! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract &lt;br /&gt;
    ! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
    |-&lt;br /&gt;
    ! CT1&lt;br /&gt;
    | 	MFCCShapeSSM ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CT1.pdf PDF] || Christopher Tralie&lt;br /&gt;
    |-&lt;br /&gt;
    ! YWW1&lt;br /&gt;
    | 	YWW ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/YWW1.pdf PDF] || [http://slam.iis.sinica.edu.tw/index.htm Ming-Chi Yen], [http://sovideo.iis.sinica.edu.tw/SLG/index.html Ju-Chiang Wang], [http://www.iis.sinica.edu.tw 	Hsin-Min Wang]&lt;br /&gt;
    |-&lt;br /&gt;
    ! CYWW1&lt;br /&gt;
    | 	CYWW ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CYWW1.pdf PDF] || [http://slam.iis.sinica.edu.tw/  Chuan-Yau Chan], [http://slam.iis.sinica.edu.tw/index.htm Ming-Chi Yen], [http://sovideo.iis.sinica.edu.tw/SLG/index.html Ju-Chiang Wang], [http://www.iis.sinica.edu.tw 	Hsin-Min Wang]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection===&lt;br /&gt;
&lt;br /&gt;
====Summary Results====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2015/acs/coversong1000.summary.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Number of Correct Covers at Rank X Returned in Top Ten==== &lt;br /&gt;
&amp;lt;csv&amp;gt;2015/acs/coversong1000.toptendist.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Average Performance per Query Group====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2015/acs/coversong1000.precision.groups.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazuraka Collection===&lt;br /&gt;
&lt;br /&gt;
====Summary results====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2015/acs/mazurkas.summary.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Number of Correct Covers at Rank X Returned in Top Ten====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/acs/mazurkas.toptendist.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Average Performance per Query Group====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2015/acs/mazurkas.precision.groups.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Individual Results Files==&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection===&lt;br /&gt;
'''Average Precision by Query'''&lt;br /&gt;
&lt;br /&gt;
'''CT1''' : [https://music-ir.org/mirex/results/2015/acs/CT1.coversong1000.avgprec.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2015/acs/CYWW1.coversong1000.avgprec.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Rank Lists'''&lt;br /&gt;
&lt;br /&gt;
'''CT1''' : [https://music-ir.org/mirex/results/2015/acs/CT1.coversong1000.ranklist.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang]&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2015/acs/CYWW1.coversong1000.ranklist.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazurka Collection===&lt;br /&gt;
&lt;br /&gt;
'''Average Precision by Query'''&lt;br /&gt;
&lt;br /&gt;
'''CT1''' : [https://music-ir.org/mirex/results/2015/acs/CT1.mazurkas.avgprec.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2015/acs/CYWW1.mazurkas.avgprec.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Rank Lists'''&lt;br /&gt;
&lt;br /&gt;
'''CT1''' : [https://music-ir.org/mirex/results/2015/acs/CT1.mazurkas.ranklist.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2015/acs/CYWW1.mazurkas.ranklist.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
[[Category: Results]]&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Audio_Cover_Song_Identification_Results&amp;diff=11523</id>
		<title>2015:Audio Cover Song Identification Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Audio_Cover_Song_Identification_Results&amp;diff=11523"/>
		<updated>2015-10-21T04:02:16Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* General Legend */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Audio Cover Song (ACS) Identification was evaluated against two datasets: ''Mixed Collection'' and ''Sapp's Mazurka Collection''.&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection Information===&lt;br /&gt;
This is the &amp;quot;original&amp;quot; ACS collection. Within the 1000 pieces in the Audio Cover Song database, there are embedded 30 different &amp;quot;cover songs&amp;quot; each represented by 11 different &amp;quot;versions&amp;quot; for a total of 330 audio files (16bit, monophonic, 22.05khz, wav). The &amp;quot;cover songs&amp;quot; represent a variety of genres (e.g., classical, jazz, gospel, rock, folk-rock, etc.) and the variations span a variety of styles and orchestrations. &lt;br /&gt;
&lt;br /&gt;
Using each of these cover song files in turn as the &amp;quot;seed/query&amp;quot; file, we will examine the returned lists of items for the presence of the other 10 versions of the &amp;quot;seed/query&amp;quot; file.&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazurka Collection Information ===&lt;br /&gt;
In addition to our original ACS dataset, we used the  [http://www.mazurka.org.uk/ Mazurka.org dataset] put together by Craig Sapp. We randomly chose 11 versions from 49 mazurkas and ran it as a separate ACS subtask. Systems returned a distance matrix of 539x539 from which we located the ranks of each of the associated cover versions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
    |- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; | Sub code &lt;br /&gt;
    ! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract &lt;br /&gt;
    ! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
    |-&lt;br /&gt;
    ! CT1&lt;br /&gt;
    | 	MFCCShapeSSM ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CT1.pdf PDF] || Christopher Tralie&lt;br /&gt;
    |-&lt;br /&gt;
    ! YWW1&lt;br /&gt;
    | 	YWW ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/YWW1.pdf PDF] | [http://slam.iis.sinica.edu.tw/index.htm Ming-Chi Yen], [http://sovideo.iis.sinica.edu.tw/SLG/index.html Ju-Chiang Wang], [http://www.iis.sinica.edu.tw 	Hsin-Min Wang]&lt;br /&gt;
    |-&lt;br /&gt;
    ! CYWW1&lt;br /&gt;
    | 	CYWW ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CYWW1.pdf PDF] || [http://slam.iis.sinica.edu.tw/  Chuan-Yau Chan], [http://slam.iis.sinica.edu.tw/index.htm Ming-Chi Yen], [http://sovideo.iis.sinica.edu.tw/SLG/index.html Ju-Chiang Wang], [http://www.iis.sinica.edu.tw 	Hsin-Min Wang]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection===&lt;br /&gt;
&lt;br /&gt;
====Summary Results====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2015/acs/coversong1000.summary.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Number of Correct Covers at Rank X Returned in Top Ten==== &lt;br /&gt;
&amp;lt;csv&amp;gt;2015/acs/coversong1000.toptendist.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Average Performance per Query Group====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2015/acs/coversong1000.precision.groups.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazuraka Collection===&lt;br /&gt;
&lt;br /&gt;
====Summary results====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2015/acs/mazurkas.summary.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Number of Correct Covers at Rank X Returned in Top Ten====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/acs/mazurkas.toptendist.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Average Performance per Query Group====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2015/acs/mazurkas.precision.groups.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Individual Results Files==&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection===&lt;br /&gt;
'''Average Precision by Query'''&lt;br /&gt;
&lt;br /&gt;
'''CT1''' : [https://music-ir.org/mirex/results/2015/acs/CT1.coversong1000.avgprec.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2015/acs/CYWW1.coversong1000.avgprec.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Rank Lists'''&lt;br /&gt;
&lt;br /&gt;
'''CT1''' : [https://music-ir.org/mirex/results/2015/acs/CT1.coversong1000.ranklist.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang]&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2015/acs/CYWW1.coversong1000.ranklist.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazurka Collection===&lt;br /&gt;
&lt;br /&gt;
'''Average Precision by Query'''&lt;br /&gt;
&lt;br /&gt;
'''CT1''' : [https://music-ir.org/mirex/results/2015/acs/CT1.mazurkas.avgprec.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2015/acs/CYWW1.mazurkas.avgprec.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Rank Lists'''&lt;br /&gt;
&lt;br /&gt;
'''CT1''' : [https://music-ir.org/mirex/results/2015/acs/CT1.mazurkas.ranklist.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2015/acs/CYWW1.mazurkas.ranklist.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
[[Category: Results]]&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Audio_Cover_Song_Identification_Results&amp;diff=11522</id>
		<title>2015:Audio Cover Song Identification Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Audio_Cover_Song_Identification_Results&amp;diff=11522"/>
		<updated>2015-10-21T04:01:15Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Audio Cover Song (ACS) Identification was evaluated against two datasets: ''Mixed Collection'' and ''Sapp's Mazurka Collection''.&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection Information===&lt;br /&gt;
This is the &amp;quot;original&amp;quot; ACS collection. Within the 1000 pieces in the Audio Cover Song database, there are embedded 30 different &amp;quot;cover songs&amp;quot; each represented by 11 different &amp;quot;versions&amp;quot; for a total of 330 audio files (16bit, monophonic, 22.05khz, wav). The &amp;quot;cover songs&amp;quot; represent a variety of genres (e.g., classical, jazz, gospel, rock, folk-rock, etc.) and the variations span a variety of styles and orchestrations. &lt;br /&gt;
&lt;br /&gt;
Using each of these cover song files in turn as the &amp;quot;seed/query&amp;quot; file, we will examine the returned lists of items for the presence of the other 10 versions of the &amp;quot;seed/query&amp;quot; file.&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazurka Collection Information ===&lt;br /&gt;
In addition to our original ACS dataset, we used the  [http://www.mazurka.org.uk/ Mazurka.org dataset] put together by Craig Sapp. We randomly chose 11 versions from 49 mazurkas and ran it as a separate ACS subtask. Systems returned a distance matrix of 539x539 from which we located the ranks of each of the associated cover versions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
    |- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; | Sub code &lt;br /&gt;
    ! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract &lt;br /&gt;
    ! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
    |-&lt;br /&gt;
    ! CT1&lt;br /&gt;
    | 	MFCCShapeSSM ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CT1.pdf PDF] || Christopher Tralie&lt;br /&gt;
    |-&lt;br /&gt;
    ! YWW1&lt;br /&gt;
    | 	YWW ||  style=&amp;quot;text-align: center;&amp;quot; | [http://slam.iis.sinica.edu.tw/index.htm Ming-Chi Yen], [http://sovideo.iis.sinica.edu.tw/SLG/index.html Ju-Chiang Wang], [http://www.iis.sinica.edu.tw 	Hsin-Min Wang]&lt;br /&gt;
    |-&lt;br /&gt;
    ! CYWW1&lt;br /&gt;
    | 	CYWW ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CYWW1.pdf PDF] || [http://slam.iis.sinica.edu.tw/  Chuan-Yau Chan], [http://slam.iis.sinica.edu.tw/index.htm Ming-Chi Yen], [http://sovideo.iis.sinica.edu.tw/SLG/index.html Ju-Chiang Wang], [http://www.iis.sinica.edu.tw 	Hsin-Min Wang]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection===&lt;br /&gt;
&lt;br /&gt;
====Summary Results====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2015/acs/coversong1000.summary.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Number of Correct Covers at Rank X Returned in Top Ten==== &lt;br /&gt;
&amp;lt;csv&amp;gt;2015/acs/coversong1000.toptendist.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Average Performance per Query Group====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2015/acs/coversong1000.precision.groups.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazuraka Collection===&lt;br /&gt;
&lt;br /&gt;
====Summary results====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2015/acs/mazurkas.summary.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Number of Correct Covers at Rank X Returned in Top Ten====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/acs/mazurkas.toptendist.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Average Performance per Query Group====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2015/acs/mazurkas.precision.groups.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Individual Results Files==&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection===&lt;br /&gt;
'''Average Precision by Query'''&lt;br /&gt;
&lt;br /&gt;
'''CT1''' : [https://music-ir.org/mirex/results/2015/acs/CT1.coversong1000.avgprec.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2015/acs/CYWW1.coversong1000.avgprec.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Rank Lists'''&lt;br /&gt;
&lt;br /&gt;
'''CT1''' : [https://music-ir.org/mirex/results/2015/acs/CT1.coversong1000.ranklist.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang]&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2015/acs/CYWW1.coversong1000.ranklist.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazurka Collection===&lt;br /&gt;
&lt;br /&gt;
'''Average Precision by Query'''&lt;br /&gt;
&lt;br /&gt;
'''CT1''' : [https://music-ir.org/mirex/results/2015/acs/CT1.mazurkas.avgprec.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2015/acs/CYWW1.mazurkas.avgprec.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Rank Lists'''&lt;br /&gt;
&lt;br /&gt;
'''CT1''' : [https://music-ir.org/mirex/results/2015/acs/CT1.mazurkas.ranklist.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2015/acs/CYWW1.mazurkas.ranklist.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
[[Category: Results]]&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Audio_Cover_Song_Identification_Results&amp;diff=11521</id>
		<title>2015:Audio Cover Song Identification Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Audio_Cover_Song_Identification_Results&amp;diff=11521"/>
		<updated>2015-10-21T01:49:50Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: Created page with &amp;quot;== Introduction ==  Audio Cover Song (ACS) Identification was evaluated against two datasets: ''Mixed Collection'' and ''Sapp's Mazurka Collection''.  ===Mixed Collection Informa...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Audio Cover Song (ACS) Identification was evaluated against two datasets: ''Mixed Collection'' and ''Sapp's Mazurka Collection''.&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection Information===&lt;br /&gt;
This is the &amp;quot;original&amp;quot; ACS collection. Within the 1000 pieces in the Audio Cover Song database, there are embedded 30 different &amp;quot;cover songs&amp;quot; each represented by 11 different &amp;quot;versions&amp;quot; for a total of 330 audio files (16bit, monophonic, 22.05khz, wav). The &amp;quot;cover songs&amp;quot; represent a variety of genres (e.g., classical, jazz, gospel, rock, folk-rock, etc.) and the variations span a variety of styles and orchestrations. &lt;br /&gt;
&lt;br /&gt;
Using each of these cover song files in turn as the &amp;quot;seed/query&amp;quot; file, we will examine the returned lists of items for the presence of the other 10 versions of the &amp;quot;seed/query&amp;quot; file.&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazurka Collection Information ===&lt;br /&gt;
In addition to our original ACS dataset, we used the  [http://www.mazurka.org.uk/ Mazurka.org dataset] put together by Craig Sapp. We randomly chose 11 versions from 49 mazurkas and ran it as a separate ACS subtask. Systems returned a distance matrix of 539x539 from which we located the ranks of each of the associated cover versions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
    |- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; | Sub code &lt;br /&gt;
    ! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract &lt;br /&gt;
    ! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
    |-&lt;br /&gt;
    ! YCW1&lt;br /&gt;
    | 	MCY_COVER ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/YCW1.pdf PDF] || [http://slam.iis.sinica.edu.tw/index.htm Ming-Chi Yen], [http://slam.iis.sinica.edu.tw/  Chuan-Yau Chan], [http://www.iis.sinica.edu.tw 	Hsin-Min Wang]&lt;br /&gt;
    |-&lt;br /&gt;
    ! CYWW1&lt;br /&gt;
    | 	CYC_COVER ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CYWW1.pdf PDF] || [http://slam.iis.sinica.edu.tw/  Chuan-Yau Chan], [http://slam.iis.sinica.edu.tw/index.htm Ming-Chi Yen], [http://sovideo.iis.sinica.edu.tw/SLG/index.html Ju-Chiang Wang], [http://www.iis.sinica.edu.tw 	Hsin-Min Wang]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection===&lt;br /&gt;
&lt;br /&gt;
====Summary Results====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2015/acs/coversong1000.summary.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Number of Correct Covers at Rank X Returned in Top Ten==== &lt;br /&gt;
&amp;lt;csv&amp;gt;2015/acs/coversong1000.toptendist.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Average Performance per Query Group====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2015/acs/coversong1000.precision.groups.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazuraka Collection===&lt;br /&gt;
&lt;br /&gt;
====Summary results====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2015/acs/mazurkas.summary.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Number of Correct Covers at Rank X Returned in Top Ten====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/acs/mazurkas.toptendist.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Average Performance per Query Group====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2015/acs/mazurkas.precision.groups.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Individual Results Files==&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection===&lt;br /&gt;
'''Average Precision by Query'''&lt;br /&gt;
&lt;br /&gt;
'''CT1''' : [https://music-ir.org/mirex/results/2015/acs/CT1.coversong1000.avgprec.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2015/acs/CYWW1.coversong1000.avgprec.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Rank Lists'''&lt;br /&gt;
&lt;br /&gt;
'''CT1''' : [https://music-ir.org/mirex/results/2015/acs/CT1.coversong1000.ranklist.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang]&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2015/acs/CYWW1.coversong1000.ranklist.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazurka Collection===&lt;br /&gt;
&lt;br /&gt;
'''Average Precision by Query'''&lt;br /&gt;
&lt;br /&gt;
'''CT1''' : [https://music-ir.org/mirex/results/2015/acs/CT1.mazurkas.avgprec.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2015/acs/CYWW1.mazurkas.avgprec.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Rank Lists'''&lt;br /&gt;
&lt;br /&gt;
'''CT1''' : [https://music-ir.org/mirex/results/2015/acs/CT1.mazurkas.ranklist.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2015/acs/CYWW1.mazurkas.ranklist.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
[[Category: Results]]&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:MIREX2015_Results&amp;diff=11516</id>
		<title>2015:MIREX2015 Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:MIREX2015_Results&amp;diff=11516"/>
		<updated>2015-10-21T00:15:07Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* Other Tasks */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Results by Task ==&lt;br /&gt;
&lt;br /&gt;
==OVERALL RESULTS POSTERS &amp;lt;!--(First Version: Will need updating as last runs are completed)--&amp;gt;==&lt;br /&gt;
&lt;br /&gt;
[https://www.music-ir.org/mirex/results/2015/mirex_2015_poster.pdf MIREX 2015 Overall Results Posters (PDF)]&lt;br /&gt;
&lt;br /&gt;
===Train-Test Task Set===&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2015/results/act/composer_report/ Audio Classical Composer Identification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2015/results/act/latin_report/ Audio Latin Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2015/results/act/mood_report/index.html Audio Music Mood Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2015/results/act/mixed_report/ Audio Mixed Popular Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2015/results/act/kgenrek_report/ Audio KPOP Genre (Annotated by Korean Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2015/results/act/kgenrea_report/ Audio KPOP Genre (Annotated by American Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2015/results/act/kmoodk_report/ Audio KPOP Mood (Annotated by Korean Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2015/results/act/kmooda_report/ Audio KPOP Mood (Annotated by American Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
===Other Tasks===&lt;br /&gt;
&lt;br /&gt;
* Music Structure Segmentation Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/struct/mrx09/ MIREX09 dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/struct/mrx10_1/ RWC dataset - Quaero (MIREX10) Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/struct/mrx10_2/ RWC dataset - Original RWC Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/struct/sal/ SALAMI dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2015:Discovery of Repeated Themes &amp;amp; Sections Results | Discovery of Repeated Themes &amp;amp; Sections Results]]&lt;br /&gt;
&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2015/results/aod/ Audio Onset Detection Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2015:Audio Cover Song Identification Results|Audio Cover Song Identification Results]]&lt;br /&gt;
&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2015/results/ate/ Audio Tempo Estimation Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Key Detection Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/akd/mrx_05 MIREX 2015 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/akd/gsteps GiantSteps Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Beat Tracking Results &lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/abt/smc/ SMC Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/abt/maz/ MAZ Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/abt/mck/ MCK Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Chord Estimation&lt;br /&gt;
** [[2015:Audio_Chord_Estimation_Results#MIREX_Chord_2009 | MIREX Chord &amp;amp;rsquo;09 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2015:Audio_Chord_Estimation_Results#Billboard_2012 | Billboard &amp;amp;rsquo;12 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2015:Audio_Chord_Estimation_Results#Billboard_2013 | Billboard &amp;amp;rsquo;13 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Melody Extraction Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/ame/adc04/  ADC04 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/ame/mrx05/ MIREX05 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/ame/ind08/ INDIAN08 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/ame/mrx09_0db/ MIREX09 0dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/ame/mrx09_m5db/ MIREX09 -5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/ame/mrx09_p5db/ MIREX09 +5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/ame/orchset/ Orcheset15 Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Query-by-Singing/Humming Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/qbsh/qbsh_task1_hidden/  Hidden Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/qbsh/qbsh_task1_jang/  Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/qbsh/qbsh_task1_thinkit/ ThinkIt Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/qbsh/qbsh_task1_ioacas/ IOACAS Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/qbsh/qbsh_task2_jang/ Subtask2 Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Query-by-Tapping Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/qbt/qbt_task1_jang/ Subtask 1, Jang dataset]&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/qbt/qbt_task1_hsiao/ Subtask 1, Hsiao dataset]&lt;br /&gt;
** Subtask 1, QBT-Extended dataset&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/qbt/qbt_task2_jang/ Subtask 2, Jang dataset]&lt;br /&gt;
** Subtask 3, QBT-Extended dataset&lt;br /&gt;
&lt;br /&gt;
* [[2015:Audio Downbeat Estimation Results|Audio Downbeat Estimation Results]]&lt;br /&gt;
* [[2015:Audio Fingerprinting Results|Audio Fingerprinting Results]]&lt;br /&gt;
&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2015:Real-time_Audio_to_Score_Alignment_(a.k.a._Score_Following)_Results#Summary_Results Real-time Audio to Score Alignment (a.k.a. Score Following) Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2015:Audio_Music_Similarity_and_Retrieval_Results Audio Music Similarity and Retrieval Results]&amp;amp;nbsp;&lt;br /&gt;
* [[2015:Symbolic_Melodic_Similarity_Results | Symbolic Melodic Similarity Results]]&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2015:Singing_Voice_Separation_Results Singing Voice Separation]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Multiple Fundamental Frequency Estimation &amp;amp; Tracking Results&lt;br /&gt;
** [[2015:Multiple_Fundamental_Frequency_Estimation_%26_Tracking_Results_%2D_MIREX_Dataset | MIREX Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2015:Multiple_Fundamental_Frequency_Estimation_%26_Tracking_Results_%2D_Su_Dataset |Su Dataset]] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2015:Music/Speech_Classification_and_Detection_Results Music/Speech Classification and Detection]&lt;br /&gt;
&lt;br /&gt;
* [[2015:Set List Identification Results | Set List Identification Results]]&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:MIREX2014_Results&amp;diff=10607</id>
		<title>2014:MIREX2014 Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:MIREX2014_Results&amp;diff=10607"/>
		<updated>2014-10-20T16:03:43Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* Other Tasks */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Results by Task ==&lt;br /&gt;
&lt;br /&gt;
===Train-Test Task Set===&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/composer_report/ Audio Classical Composer Identification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/latin_report/ Audio Latin Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/mood_report/index.html Audio Music Mood Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/mixed_report/ Audio Mixed Popular Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/kgenrek_report/ Audio KPOP Genre (Annotated by Korean Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/kgenrea_report/ Audio KPOP Genre (Annotated by American Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/kmoodk_report/ Audio KPOP Mood (Annotated by Korean Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/kmooda_report/ Audio KPOP Mood (Annotated by American Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
===Other Tasks===&lt;br /&gt;
&lt;br /&gt;
* Music Structure Segmentation Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/mrx09/ MIREX09 dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/mrx10_1/ RWC dataset - Quaero (MIREX10) Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/mrx10_2/ RWC dataset - Original RWC Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/sal/ SALAMI dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2014/results/aod/ Audio Onset Detection Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2014:Audio Cover Song Identification Results|Audio Cover Song Identification Results]]&lt;br /&gt;
&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ate/ Audio Tempo Estimation Results] &amp;amp;nbsp;&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2014/results/akd/ Audio Key Detection Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Beat Tracking Results &lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/abt/smc/ SMC Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/abt/maz/ MAZ Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/abt/mck/ MCK Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Chord Estimation&lt;br /&gt;
** [[2014:Audio_Chord_Estimation_Results_MIREX_2009 | MIREX &amp;amp;rsquo;09 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2014:Audio_Chord_Estimation_Results_Billboard_2012 | Billboard &amp;amp;rsquo;12 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2014:Audio_Chord_Estimation_Results_Billboard_2013 | Billboard &amp;amp;rsquo;13 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Melody Extraction Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/adc04/  ADC04 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/mrx05/ MIREX05 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/ind08/ INDIAN08 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/mrx09_0db/ MIREX09 0dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/mrx09_m5db/ MIREX09 -5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/mrx09_p5db/ MIREX09 +5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Tag Classification Results&lt;br /&gt;
** Major Miner Tag dataset&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/atg/subtask1_report/bin/ Binary relevance (classification evaluation)] &amp;amp;nbsp;&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/atg/subtask1_report/aff/ Affinity estimation evaluation] &amp;amp;nbsp;&lt;br /&gt;
** Mood Tag dataset&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/atg/subtask2_report/bin/ Binary relevance (classification evaluation)] &amp;amp;nbsp;&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/atg/subtask2_report/aff/ Affinity estimation evaluation] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Query-by-Singing/Humming Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/qbsh/qbsh_task1_hidden/  Hidden Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/qbsh/qbsh_task1_jang/  Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/qbsh/qbsh_task1_thinkit/ ThinkIt Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/qbsh/qbsh_task1_ioacas/ IOACAS Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/qbsh/qbsh_task2_jang/ Subtask2 Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2014:Audio Downbeat Estimation Results|Audio Downbeat Estimation Results]]&lt;br /&gt;
&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2014:Real-time_Audio_to_Score_Alignment_(a.k.a._Score_Following)_Results#Summary_Results Real-time Audio to Score Alignment (a.k.a. Score Following) Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2014:Audio_Music_Similarity_and_Retrieval_Results Audio Music Similarity and Retrieval Results]&amp;amp;nbsp;&lt;br /&gt;
* [[2014:Symbolic_Melodic_Similarity_Results | Symbolic Melodic Similarity Results]]&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2014:Singing_Voice_Separation_Results#For_the_Music_Accompaniment_.28dB.29 Singing Voice Separation]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2014:Multiple_Fundamental_Frequency_Estimation_%26_Tracking_Results Multiple Fundamental Frequency Estimation &amp;amp; Tracking Results]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2014:Discovery of Repeated Themes &amp;amp; Sections Results | Discovery of Repeated Themes &amp;amp; Sections Results]]&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results_Billboard_2012&amp;diff=10606</id>
		<title>2014:Audio Chord Estimation Results Billboard 2012</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results_Billboard_2012&amp;diff=10606"/>
		<updated>2014-10-20T16:02:27Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of these new evaluations for an abridged version of the ''Billboard'' dataset from McGill University, including a representative sample of American popular music from the 1950s through the 1990s, as used for MIREX 2012.&lt;br /&gt;
&lt;br /&gt;
==Why evaluate differently?==&lt;br /&gt;
&lt;br /&gt;
* Researchers interested in automatic chord estimation have been dissatisfied with the traditional evaluation techniques used for this task at MIREX.&lt;br /&gt;
&lt;br /&gt;
* Numerous alternatives have been proposed in the literature (Harte, 2010; Mauch, 2010; Pauwels &amp;amp; Peeters, 2013). &lt;br /&gt;
&lt;br /&gt;
* At ISMIR 2010 in Utrecht, a group discussed alternatives and developed the [[The_Utrecht_Agreement_on_Chord_Evaluation | Utrecht Agreement]] for updating the task, but until this year, nobody had implemented any of the suggestions.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
&lt;br /&gt;
===More precise recall estimation===&lt;br /&gt;
&lt;br /&gt;
* MIREX typically uses ''chord symbol recall'' (CSR) to estimate how well the predicted chords match the ground truth: the total duration of segments where the predictions match the ground truth divided by the total duration of the song. &lt;br /&gt;
&lt;br /&gt;
* In previous years, MIREX has used an approximate CSR by sampling both the ground-truth and the automatic annotations every 10 ms.&lt;br /&gt;
&lt;br /&gt;
* Following Harte (2010), we view the ground-truth and estimated annotations instead as continuous segmentations of the audio because (1) this is more precise and also (2) more computationally efficient. &lt;br /&gt;
&lt;br /&gt;
* Moreover, because pieces of music come in a wide variety of lengths, we believe it is better to weight the CSR by the length of the song. This final number is referred to as the ''weighted chord symbol recall'' (WCSR).&lt;br /&gt;
&lt;br /&gt;
===Advanced chord vocabularies===&lt;br /&gt;
&lt;br /&gt;
* We computed WCSR with five different chord vocabulary mappings: &lt;br /&gt;
# Chord root note only;&lt;br /&gt;
# Major and minor;&lt;br /&gt;
# Seventh chords;&lt;br /&gt;
# Major and minor with inversions; and&lt;br /&gt;
# Seventh chords with inversions. &lt;br /&gt;
&lt;br /&gt;
* With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels. &lt;br /&gt;
&lt;br /&gt;
* A mapping exists if both the root notes and bass notes match, and the structure of the output label is the largest possible subset of the input label given the vocabulary. &lt;br /&gt;
&lt;br /&gt;
* For instance, in the major and minor case, G:7(#9) is mapped to G:maj because the interval set of G:maj, {1,3,5}, is a subset of the interval set of the G:7(#9), {1,3,5,b7,#9}. In the seventh-chord case, G:7(#9) is mapped to G:7 instead because the interval set of G:7 {1, 3, 5, b7} is also a subset of G:7(#9) but is larger than G:maj.&lt;br /&gt;
&lt;br /&gt;
* Our recommendations are motivated by the frequencies of chord qualities in the ''Billboard'' corpus of American popular music (Burgoyne et al., 2011).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Most Frequent Chord Qualities in the ''Billboard'' Corpus&lt;br /&gt;
|- &lt;br /&gt;
! Quality&lt;br /&gt;
! Freq.&lt;br /&gt;
! Cum. Freq.&lt;br /&gt;
|-&lt;br /&gt;
| maj&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
|-&lt;br /&gt;
| min&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 13&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 65&lt;br /&gt;
|-&lt;br /&gt;
| 7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 10&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 75&lt;br /&gt;
|-&lt;br /&gt;
| min7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 8&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 83&lt;br /&gt;
|-&lt;br /&gt;
| maj7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 3&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 86&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Evaluation of segmentation===&lt;br /&gt;
&lt;br /&gt;
* The chord transcription literature includes several other evaluation metrics, which mainly focus on the segmentation of the transcription.&lt;br /&gt;
&lt;br /&gt;
* We propose to include the directional Hamming distance in the evaluation. The directional Hamming distance is calculated by finding for each annotated segment the maximally overlapping segment in the other annotation, and then summing the differences (Abdallah et al., 2005; Mauch, 2010). &lt;br /&gt;
&lt;br /&gt;
* Depending on the order of application, the directional Hamming distance yields a measure of over- or under-segmentation. To keep the scaling consistent with WCSR values (1.0 is best and 0.0 is worst), we report 1 – over-segmentation and 1 – under-segmentation, as well as the harmonic mean of these values (cf. Harte, 2010).&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| CB3 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/CB3.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| JR2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/JR2.pdf PDF]&lt;br /&gt;
| Jean-Baptiste Rolland&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The table is sorted on WCSR for the major-minor vocabulary. Algorithms that conducted training are marked with an asterisk; all others were submitted pre-trained.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/billboard12.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
===Complete Results===&lt;br /&gt;
&lt;br /&gt;
More detailed about the performance of the algorithms, including per-song performance and the breakdown of the WCSR calculations, is available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/BillboardTest2012.zip BillboardTest2012.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The recognition output and the ground-truth files are available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/BillboardTest2012Output.zip BillboardTest2012Output.zip]&lt;br /&gt;
&lt;br /&gt;
We hope to generate a graphical comparison of all algorithms against the ground truth early in 2014.&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results_Billboard_2013&amp;diff=10605</id>
		<title>2014:Audio Chord Estimation Results Billboard 2013</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results_Billboard_2013&amp;diff=10605"/>
		<updated>2014-10-20T16:02:05Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of these new evaluations for a special subset of the ''Billboard'' dataset from McGill University that has never been made available to the public. Further subsets have been withheld to support the ACE task through MIREX 2015.&lt;br /&gt;
&lt;br /&gt;
==Why evaluate differently?==&lt;br /&gt;
&lt;br /&gt;
* Researchers interested in automatic chord estimation have been dissatisfied with the traditional evaluation techniques used for this task at MIREX.&lt;br /&gt;
&lt;br /&gt;
* Numerous alternatives have been proposed in the literature (Harte, 2010; Mauch, 2010; Pauwels &amp;amp; Peeters, 2013). &lt;br /&gt;
&lt;br /&gt;
* At ISMIR 2010 in Utrecht, a group discussed alternatives and developed the [[The_Utrecht_Agreement_on_Chord_Evaluation | Utrecht Agreement]] for updating the task, but until this year, nobody had implemented any of the suggestions.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
&lt;br /&gt;
===More precise recall estimation===&lt;br /&gt;
&lt;br /&gt;
* MIREX typically uses ''chord symbol recall'' (CSR) to estimate how well the predicted chords match the ground truth: the total duration of segments where the predictions match the ground truth divided by the total duration of the song. &lt;br /&gt;
&lt;br /&gt;
* In previous years, MIREX has used an approximate CSR by sampling both the ground-truth and the automatic annotations every 10 ms.&lt;br /&gt;
&lt;br /&gt;
* Following Harte (2010), we view the ground-truth and estimated annotations instead as continuous segmentations of the audio because (1) this is more precise and also (2) more computationally efficient. &lt;br /&gt;
&lt;br /&gt;
* Moreover, because pieces of music come in a wide variety of lengths, we believe it is better to weight the CSR by the length of the song. This final number is referred to as the ''weighted chord symbol recall'' (WCSR).&lt;br /&gt;
&lt;br /&gt;
===Advanced chord vocabularies===&lt;br /&gt;
&lt;br /&gt;
* We computed WCSR with five different chord vocabulary mappings: &lt;br /&gt;
# Chord root note only;&lt;br /&gt;
# Major and minor;&lt;br /&gt;
# Seventh chords;&lt;br /&gt;
# Major and minor with inversions; and&lt;br /&gt;
# Seventh chords with inversions. &lt;br /&gt;
&lt;br /&gt;
* With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels. &lt;br /&gt;
&lt;br /&gt;
* A mapping exists if both the root notes and bass notes match, and the structure of the output label is the largest possible subset of the input label given the vocabulary. &lt;br /&gt;
&lt;br /&gt;
* For instance, in the major and minor case, G:7(#9) is mapped to G:maj because the interval set of G:maj, {1,3,5}, is a subset of the interval set of the G:7(#9), {1,3,5,b7,#9}. In the seventh-chord case, G:7(#9) is mapped to G:7 instead because the interval set of G:7 {1, 3, 5, b7} is also a subset of G:7(#9) but is larger than G:maj.&lt;br /&gt;
&lt;br /&gt;
* Our recommendations are motivated by the frequencies of chord qualities in the ''Billboard'' corpus of American popular music (Burgoyne et al., 2011).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Most Frequent Chord Qualities in the ''Billboard'' Corpus&lt;br /&gt;
|- &lt;br /&gt;
! Quality&lt;br /&gt;
! Freq.&lt;br /&gt;
! Cum. Freq.&lt;br /&gt;
|-&lt;br /&gt;
| maj&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
|-&lt;br /&gt;
| min&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 13&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 65&lt;br /&gt;
|-&lt;br /&gt;
| 7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 10&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 75&lt;br /&gt;
|-&lt;br /&gt;
| min7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 8&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 83&lt;br /&gt;
|-&lt;br /&gt;
| maj7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 3&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 86&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Evaluation of segmentation===&lt;br /&gt;
&lt;br /&gt;
* The chord transcription literature includes several other evaluation metrics, which mainly focus on the segmentation of the transcription.&lt;br /&gt;
&lt;br /&gt;
* We propose to include the directional Hamming distance in the evaluation. The directional Hamming distance is calculated by finding for each annotated segment the maximally overlapping segment in the other annotation, and then summing the differences (Abdallah et al., 2005; Mauch, 2010). &lt;br /&gt;
&lt;br /&gt;
* Depending on the order of application, the directional Hamming distance yields a measure of over- or under-segmentation. To keep the scaling consistent with WCSR values (1.0 is best and 0.0 is worst), we report 1 – over-segmentation and 1 – under-segmentation, as well as the harmonic mean of these values (cf. Harte, 2010).&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| CB3 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/CB3.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| JR2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/JR2.pdf PDF]&lt;br /&gt;
| Jean-Baptiste Rolland&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The table is sorted on WCSR for the major-minor vocabulary. Algorithms that conducted training are marked with an asterisk; all others were submitted pre-trained.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/billboard13.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
===Complete Results===&lt;br /&gt;
&lt;br /&gt;
More detailed about the performance of the algorithms, including per-song performance and the breakdown of the WCSR calculations, is available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/BillboardTest2013.zip BillboardTest2013.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The recognition output and the ground-truth files are available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/BillboardTest2013Output.zip BillboardTest2013Output.zip]&lt;br /&gt;
&lt;br /&gt;
We hope to generate a graphical comparison of all algorithms against the ground truth early in 2014.&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results&amp;diff=10604</id>
		<title>2014:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results&amp;diff=10604"/>
		<updated>2014-10-20T16:01:38Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of these new evaluations for the Isophonics dataset, a.k.a. the MIREX 2009 dataset. It comprises the collected Beatles, Queen, and Zweieck datasets from Queen Mary, University of London, and has been used for audio chord estimation in MIREX for many years.&lt;br /&gt;
&lt;br /&gt;
==Why evaluate differently?==&lt;br /&gt;
&lt;br /&gt;
* Researchers interested in automatic chord estimation have been dissatisfied with the traditional evaluation techniques used for this task at MIREX.&lt;br /&gt;
&lt;br /&gt;
* Numerous alternatives have been proposed in the literature (Harte, 2010; Mauch, 2010; Pauwels &amp;amp; Peeters, 2013). &lt;br /&gt;
&lt;br /&gt;
* At ISMIR 2010 in Utrecht, a group discussed alternatives and developed the [[The_Utrecht_Agreement_on_Chord_Evaluation | Utrecht Agreement]] for updating the task, but until this year, nobody had implemented any of the suggestions.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
&lt;br /&gt;
===More precise recall estimation===&lt;br /&gt;
&lt;br /&gt;
* MIREX typically uses ''chord symbol recall'' (CSR) to estimate how well the predicted chords match the ground truth: the total duration of segments where the predictions match the ground truth divided by the total duration of the song. &lt;br /&gt;
&lt;br /&gt;
* In previous years, MIREX has used an approximate CSR by sampling both the ground-truth and the automatic annotations every 10 ms.&lt;br /&gt;
&lt;br /&gt;
* Following Harte (2010), we view the ground-truth and estimated annotations instead as continuous segmentations of the audio because (1) this is more precise and also (2) more computationally efficient. &lt;br /&gt;
&lt;br /&gt;
* Moreover, because pieces of music come in a wide variety of lengths, we believe it is better to weight the CSR by the length of the song. This final number is referred to as the ''weighted chord symbol recall'' (WCSR).&lt;br /&gt;
&lt;br /&gt;
===Advanced chord vocabularies===&lt;br /&gt;
&lt;br /&gt;
* We computed WCSR with five different chord vocabulary mappings: &lt;br /&gt;
# Chord root note only;&lt;br /&gt;
# Major and minor;&lt;br /&gt;
# Seventh chords;&lt;br /&gt;
# Major and minor with inversions; and&lt;br /&gt;
# Seventh chords with inversions. &lt;br /&gt;
&lt;br /&gt;
* With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels. &lt;br /&gt;
&lt;br /&gt;
* A mapping exists if both the root notes and bass notes match, and the structure of the output label is the largest possible subset of the input label given the vocabulary. &lt;br /&gt;
&lt;br /&gt;
* For instance, in the major and minor case, G:7(#9) is mapped to G:maj because the interval set of G:maj, {1,3,5}, is a subset of the interval set of the G:7(#9), {1,3,5,b7,#9}. In the seventh-chord case, G:7(#9) is mapped to G:7 instead because the interval set of G:7 {1, 3, 5, b7} is also a subset of G:7(#9) but is larger than G:maj.&lt;br /&gt;
&lt;br /&gt;
* Our recommendations are motivated by the frequencies of chord qualities in the ''Billboard'' corpus of American popular music (Burgoyne et al., 2011).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Most Frequent Chord Qualities in the ''Billboard'' Corpus&lt;br /&gt;
|- &lt;br /&gt;
! Quality&lt;br /&gt;
! Freq.&lt;br /&gt;
! Cum. Freq.&lt;br /&gt;
|-&lt;br /&gt;
| maj&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
|-&lt;br /&gt;
| min&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 13&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 65&lt;br /&gt;
|-&lt;br /&gt;
| 7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 10&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 75&lt;br /&gt;
|-&lt;br /&gt;
| min7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 8&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 83&lt;br /&gt;
|-&lt;br /&gt;
| maj7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 3&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 86&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Evaluation of segmentation===&lt;br /&gt;
&lt;br /&gt;
* The chord transcription literature includes several other evaluation metrics, which mainly focus on the segmentation of the transcription.&lt;br /&gt;
&lt;br /&gt;
* We propose to include the directional Hamming distance in the evaluation. The directional Hamming distance is calculated by finding for each annotated segment the maximally overlapping segment in the other annotation, and then summing the differences (Abdallah et al., 2005; Mauch, 2010). &lt;br /&gt;
&lt;br /&gt;
* Depending on the order of application, the directional Hamming distance yields a measure of over- or under-segmentation. To keep the scaling consistent with WCSR values (1.0 is best and 0.0 is worst), we report 1 – over-segmentation and 1 – under-segmentation, as well as the harmonic mean of these values (cf. Harte, 2010).&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| CB3 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/CB3.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| JR2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/JR2.pdf PDF]&lt;br /&gt;
| Jean-Baptiste Rolland&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The table is sorted on WCSR for the major-minor vocabulary. Algorithms that conducted training are marked with an asterisk; all others were submitted pre-trained.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/mirex09.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
===Complete Results===&lt;br /&gt;
&lt;br /&gt;
More detailed about the performance of the algorithms, including per-song performance and the breakdown of the WCSR calculations, is available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/MirexChord2009.zip MirexChord2009.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The recognition output and the ground-truth files are available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/MirexChord2009Output.zip MirexChord2009Output.zip]&lt;br /&gt;
&lt;br /&gt;
We hope to generate a graphical comparison of all algorithms against the ground truth early in 2014.&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results&amp;diff=10603</id>
		<title>2014:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results&amp;diff=10603"/>
		<updated>2014-10-20T15:59:44Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of these new evaluations for the Isophonics dataset, a.k.a. the MIREX 2009 dataset. It comprises the collected Beatles, Queen, and Zweieck datasets from Queen Mary, University of London, and has been used for audio chord estimation in MIREX for many years.&lt;br /&gt;
&lt;br /&gt;
==Why evaluate differently?==&lt;br /&gt;
&lt;br /&gt;
* Researchers interested in automatic chord estimation have been dissatisfied with the traditional evaluation techniques used for this task at MIREX.&lt;br /&gt;
&lt;br /&gt;
* Numerous alternatives have been proposed in the literature (Harte, 2010; Mauch, 2010; Pauwels &amp;amp; Peeters, 2013). &lt;br /&gt;
&lt;br /&gt;
* At ISMIR 2010 in Utrecht, a group discussed alternatives and developed the [[The_Utrecht_Agreement_on_Chord_Evaluation | Utrecht Agreement]] for updating the task, but until this year, nobody had implemented any of the suggestions.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
&lt;br /&gt;
===More precise recall estimation===&lt;br /&gt;
&lt;br /&gt;
* MIREX typically uses ''chord symbol recall'' (CSR) to estimate how well the predicted chords match the ground truth: the total duration of segments where the predictions match the ground truth divided by the total duration of the song. &lt;br /&gt;
&lt;br /&gt;
* In previous years, MIREX has used an approximate CSR by sampling both the ground-truth and the automatic annotations every 10 ms.&lt;br /&gt;
&lt;br /&gt;
* Following Harte (2010), we view the ground-truth and estimated annotations instead as continuous segmentations of the audio because (1) this is more precise and also (2) more computationally efficient. &lt;br /&gt;
&lt;br /&gt;
* Moreover, because pieces of music come in a wide variety of lengths, we believe it is better to weight the CSR by the length of the song. This final number is referred to as the ''weighted chord symbol recall'' (WCSR).&lt;br /&gt;
&lt;br /&gt;
===Advanced chord vocabularies===&lt;br /&gt;
&lt;br /&gt;
* We computed WCSR with five different chord vocabulary mappings: &lt;br /&gt;
# Chord root note only;&lt;br /&gt;
# Major and minor;&lt;br /&gt;
# Seventh chords;&lt;br /&gt;
# Major and minor with inversions; and&lt;br /&gt;
# Seventh chords with inversions. &lt;br /&gt;
&lt;br /&gt;
* With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels. &lt;br /&gt;
&lt;br /&gt;
* A mapping exists if both the root notes and bass notes match, and the structure of the output label is the largest possible subset of the input label given the vocabulary. &lt;br /&gt;
&lt;br /&gt;
* For instance, in the major and minor case, G:7(#9) is mapped to G:maj because the interval set of G:maj, {1,3,5}, is a subset of the interval set of the G:7(#9), {1,3,5,b7,#9}. In the seventh-chord case, G:7(#9) is mapped to G:7 instead because the interval set of G:7 {1, 3, 5, b7} is also a subset of G:7(#9) but is larger than G:maj.&lt;br /&gt;
&lt;br /&gt;
* Our recommendations are motivated by the frequencies of chord qualities in the ''Billboard'' corpus of American popular music (Burgoyne et al., 2011).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Most Frequent Chord Qualities in the ''Billboard'' Corpus&lt;br /&gt;
|- &lt;br /&gt;
! Quality&lt;br /&gt;
! Freq.&lt;br /&gt;
! Cum. Freq.&lt;br /&gt;
|-&lt;br /&gt;
| maj&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
|-&lt;br /&gt;
| min&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 13&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 65&lt;br /&gt;
|-&lt;br /&gt;
| 7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 10&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 75&lt;br /&gt;
|-&lt;br /&gt;
| min7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 8&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 83&lt;br /&gt;
|-&lt;br /&gt;
| maj7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 3&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 86&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Evaluation of segmentation===&lt;br /&gt;
&lt;br /&gt;
* The chord transcription literature includes several other evaluation metrics, which mainly focus on the segmentation of the transcription.&lt;br /&gt;
&lt;br /&gt;
* We propose to include the directional Hamming distance in the evaluation. The directional Hamming distance is calculated by finding for each annotated segment the maximally overlapping segment in the other annotation, and then summing the differences (Abdallah et al., 2005; Mauch, 2010). &lt;br /&gt;
&lt;br /&gt;
* Depending on the order of application, the directional Hamming distance yields a measure of over- or under-segmentation. To keep the scaling consistent with WCSR values (1.0 is best and 0.0 is worst), we report 1 – over-segmentation and 1 – under-segmentation, as well as the harmonic mean of these values (cf. Harte, 2010).&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| CB3 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/CB3.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| JR2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/JR2.pdf PDF]&lt;br /&gt;
| Jean-Baptiste Rolland&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The table is sorted on WCSR for the major-minor vocabulary. Algorithms that conducted training are marked with an asterisk; all others were submitted pre-trained.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/mirex09.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
===Complete Results===&lt;br /&gt;
&lt;br /&gt;
More detailed about the performance of the algorithms, including per-song performance and the breakdown of the WCSR calculations, is available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/MirexChord2009.zip MirexChord2009.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The recognition output and the ground-truth files are available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/MirexChord2009Output.zip MirexChord2009Output.zip]&lt;br /&gt;
&lt;br /&gt;
We hope to generate a graphical comparison of all algorithms against the ground truth early in 2014.&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results_Billboard_2013&amp;diff=10602</id>
		<title>2014:Audio Chord Estimation Results Billboard 2013</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results_Billboard_2013&amp;diff=10602"/>
		<updated>2014-10-20T15:59:28Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of these new evaluations for a special subset of the ''Billboard'' dataset from McGill University that has never been made available to the public. Further subsets have been withheld to support the ACE task through MIREX 2015.&lt;br /&gt;
&lt;br /&gt;
==Why evaluate differently?==&lt;br /&gt;
&lt;br /&gt;
* Researchers interested in automatic chord estimation have been dissatisfied with the traditional evaluation techniques used for this task at MIREX.&lt;br /&gt;
&lt;br /&gt;
* Numerous alternatives have been proposed in the literature (Harte, 2010; Mauch, 2010; Pauwels &amp;amp; Peeters, 2013). &lt;br /&gt;
&lt;br /&gt;
* At ISMIR 2010 in Utrecht, a group discussed alternatives and developed the [[The_Utrecht_Agreement_on_Chord_Evaluation | Utrecht Agreement]] for updating the task, but until this year, nobody had implemented any of the suggestions.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
&lt;br /&gt;
===More precise recall estimation===&lt;br /&gt;
&lt;br /&gt;
* MIREX typically uses ''chord symbol recall'' (CSR) to estimate how well the predicted chords match the ground truth: the total duration of segments where the predictions match the ground truth divided by the total duration of the song. &lt;br /&gt;
&lt;br /&gt;
* In previous years, MIREX has used an approximate CSR by sampling both the ground-truth and the automatic annotations every 10 ms.&lt;br /&gt;
&lt;br /&gt;
* Following Harte (2010), we view the ground-truth and estimated annotations instead as continuous segmentations of the audio because (1) this is more precise and also (2) more computationally efficient. &lt;br /&gt;
&lt;br /&gt;
* Moreover, because pieces of music come in a wide variety of lengths, we believe it is better to weight the CSR by the length of the song. This final number is referred to as the ''weighted chord symbol recall'' (WCSR).&lt;br /&gt;
&lt;br /&gt;
===Advanced chord vocabularies===&lt;br /&gt;
&lt;br /&gt;
* We computed WCSR with five different chord vocabulary mappings: &lt;br /&gt;
# Chord root note only;&lt;br /&gt;
# Major and minor;&lt;br /&gt;
# Seventh chords;&lt;br /&gt;
# Major and minor with inversions; and&lt;br /&gt;
# Seventh chords with inversions. &lt;br /&gt;
&lt;br /&gt;
* With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels. &lt;br /&gt;
&lt;br /&gt;
* A mapping exists if both the root notes and bass notes match, and the structure of the output label is the largest possible subset of the input label given the vocabulary. &lt;br /&gt;
&lt;br /&gt;
* For instance, in the major and minor case, G:7(#9) is mapped to G:maj because the interval set of G:maj, {1,3,5}, is a subset of the interval set of the G:7(#9), {1,3,5,b7,#9}. In the seventh-chord case, G:7(#9) is mapped to G:7 instead because the interval set of G:7 {1, 3, 5, b7} is also a subset of G:7(#9) but is larger than G:maj.&lt;br /&gt;
&lt;br /&gt;
* Our recommendations are motivated by the frequencies of chord qualities in the ''Billboard'' corpus of American popular music (Burgoyne et al., 2011).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Most Frequent Chord Qualities in the ''Billboard'' Corpus&lt;br /&gt;
|- &lt;br /&gt;
! Quality&lt;br /&gt;
! Freq.&lt;br /&gt;
! Cum. Freq.&lt;br /&gt;
|-&lt;br /&gt;
| maj&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
|-&lt;br /&gt;
| min&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 13&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 65&lt;br /&gt;
|-&lt;br /&gt;
| 7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 10&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 75&lt;br /&gt;
|-&lt;br /&gt;
| min7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 8&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 83&lt;br /&gt;
|-&lt;br /&gt;
| maj7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 3&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 86&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Evaluation of segmentation===&lt;br /&gt;
&lt;br /&gt;
* The chord transcription literature includes several other evaluation metrics, which mainly focus on the segmentation of the transcription.&lt;br /&gt;
&lt;br /&gt;
* We propose to include the directional Hamming distance in the evaluation. The directional Hamming distance is calculated by finding for each annotated segment the maximally overlapping segment in the other annotation, and then summing the differences (Abdallah et al., 2005; Mauch, 2010). &lt;br /&gt;
&lt;br /&gt;
* Depending on the order of application, the directional Hamming distance yields a measure of over- or under-segmentation. To keep the scaling consistent with WCSR values (1.0 is best and 0.0 is worst), we report 1 – over-segmentation and 1 – under-segmentation, as well as the harmonic mean of these values (cf. Harte, 2010).&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| CB3 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/CB3.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| JR2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/JR2.pdf PDF]&lt;br /&gt;
| Jean-Baptiste Rolland&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The table is sorted on WCSR for the major-minor vocabulary. Algorithms that conducted training are marked with an asterisk; all others were submitted pre-trained.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/billboard13.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
===Complete Results===&lt;br /&gt;
&lt;br /&gt;
More detailed about the performance of the algorithms, including per-song performance and the breakdown of the WCSR calculations, is available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/BillboardTest2013.zip BillboardTest2013.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The recognition output and the ground-truth files are available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/BillboardTest2013Output.zip BillboardTest2013Output.zip]&lt;br /&gt;
&lt;br /&gt;
We hope to generate a graphical comparison of all algorithms against the ground truth early in 2014.&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results_Billboard_2012&amp;diff=10601</id>
		<title>2014:Audio Chord Estimation Results Billboard 2012</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results_Billboard_2012&amp;diff=10601"/>
		<updated>2014-10-20T15:58:39Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* Submissions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of these new evaluations for an abridged version of the ''Billboard'' dataset from McGill University, including a representative sample of American popular music from the 1950s through the 1990s, as used for MIREX 2012.&lt;br /&gt;
&lt;br /&gt;
==Why evaluate differently?==&lt;br /&gt;
&lt;br /&gt;
* Researchers interested in automatic chord estimation have been dissatisfied with the traditional evaluation techniques used for this task at MIREX.&lt;br /&gt;
&lt;br /&gt;
* Numerous alternatives have been proposed in the literature (Harte, 2010; Mauch, 2010; Pauwels &amp;amp; Peeters, 2013). &lt;br /&gt;
&lt;br /&gt;
* At ISMIR 2010 in Utrecht, a group discussed alternatives and developed the [[The_Utrecht_Agreement_on_Chord_Evaluation | Utrecht Agreement]] for updating the task, but until this year, nobody had implemented any of the suggestions.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
&lt;br /&gt;
===More precise recall estimation===&lt;br /&gt;
&lt;br /&gt;
* MIREX typically uses ''chord symbol recall'' (CSR) to estimate how well the predicted chords match the ground truth: the total duration of segments where the predictions match the ground truth divided by the total duration of the song. &lt;br /&gt;
&lt;br /&gt;
* In previous years, MIREX has used an approximate CSR by sampling both the ground-truth and the automatic annotations every 10 ms.&lt;br /&gt;
&lt;br /&gt;
* Following Harte (2010), we view the ground-truth and estimated annotations instead as continuous segmentations of the audio because (1) this is more precise and also (2) more computationally efficient. &lt;br /&gt;
&lt;br /&gt;
* Moreover, because pieces of music come in a wide variety of lengths, we believe it is better to weight the CSR by the length of the song. This final number is referred to as the ''weighted chord symbol recall'' (WCSR).&lt;br /&gt;
&lt;br /&gt;
===Advanced chord vocabularies===&lt;br /&gt;
&lt;br /&gt;
* We computed WCSR with five different chord vocabulary mappings: &lt;br /&gt;
# Chord root note only;&lt;br /&gt;
# Major and minor;&lt;br /&gt;
# Seventh chords;&lt;br /&gt;
# Major and minor with inversions; and&lt;br /&gt;
# Seventh chords with inversions. &lt;br /&gt;
&lt;br /&gt;
* With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels. &lt;br /&gt;
&lt;br /&gt;
* A mapping exists if both the root notes and bass notes match, and the structure of the output label is the largest possible subset of the input label given the vocabulary. &lt;br /&gt;
&lt;br /&gt;
* For instance, in the major and minor case, G:7(#9) is mapped to G:maj because the interval set of G:maj, {1,3,5}, is a subset of the interval set of the G:7(#9), {1,3,5,b7,#9}. In the seventh-chord case, G:7(#9) is mapped to G:7 instead because the interval set of G:7 {1, 3, 5, b7} is also a subset of G:7(#9) but is larger than G:maj.&lt;br /&gt;
&lt;br /&gt;
* Our recommendations are motivated by the frequencies of chord qualities in the ''Billboard'' corpus of American popular music (Burgoyne et al., 2011).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Most Frequent Chord Qualities in the ''Billboard'' Corpus&lt;br /&gt;
|- &lt;br /&gt;
! Quality&lt;br /&gt;
! Freq.&lt;br /&gt;
! Cum. Freq.&lt;br /&gt;
|-&lt;br /&gt;
| maj&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
|-&lt;br /&gt;
| min&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 13&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 65&lt;br /&gt;
|-&lt;br /&gt;
| 7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 10&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 75&lt;br /&gt;
|-&lt;br /&gt;
| min7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 8&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 83&lt;br /&gt;
|-&lt;br /&gt;
| maj7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 3&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 86&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Evaluation of segmentation===&lt;br /&gt;
&lt;br /&gt;
* The chord transcription literature includes several other evaluation metrics, which mainly focus on the segmentation of the transcription.&lt;br /&gt;
&lt;br /&gt;
* We propose to include the directional Hamming distance in the evaluation. The directional Hamming distance is calculated by finding for each annotated segment the maximally overlapping segment in the other annotation, and then summing the differences (Abdallah et al., 2005; Mauch, 2010). &lt;br /&gt;
&lt;br /&gt;
* Depending on the order of application, the directional Hamming distance yields a measure of over- or under-segmentation. To keep the scaling consistent with WCSR values (1.0 is best and 0.0 is worst), we report 1 – over-segmentation and 1 – under-segmentation, as well as the harmonic mean of these values (cf. Harte, 2010).&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| CB3 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/CB3.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| JR2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/JR2.pdf PDF]&lt;br /&gt;
| Jean-Baptiste Rolland&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The table is sorted on WCSR for the major-minor vocabulary. Algorithms that conducted training are marked with an asterisk; all others were submitted pre-trained.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/billboard12.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
===Complete Results===&lt;br /&gt;
&lt;br /&gt;
More detailed about the performance of the algorithms, including per-song performance and the breakdown of the WCSR calculations, is available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/BillboardTest2012.zip BillboardTest2012.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The recognition output and the ground-truth files are available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/BillboardTest2012Output.zip BillboardTest2012Output.zip]&lt;br /&gt;
&lt;br /&gt;
We hope to generate a graphical comparison of all algorithms against the ground truth early in 2014.&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results_Billboard_2013&amp;diff=10600</id>
		<title>2014:Audio Chord Estimation Results Billboard 2013</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results_Billboard_2013&amp;diff=10600"/>
		<updated>2014-10-20T15:58:31Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* Submissions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This year, we have started a new evaluation battery for audio chord estimation. This page contains the results of these new evaluations for a special subset of the ''Billboard'' dataset from McGill University that has never been made available to the public. Further subsets have been withheld to support the ACE task through MIREX 2015.&lt;br /&gt;
&lt;br /&gt;
==Why evaluate differently?==&lt;br /&gt;
&lt;br /&gt;
* Researchers interested in automatic chord estimation have been dissatisfied with the traditional evaluation techniques used for this task at MIREX.&lt;br /&gt;
&lt;br /&gt;
* Numerous alternatives have been proposed in the literature (Harte, 2010; Mauch, 2010; Pauwels &amp;amp; Peeters, 2013). &lt;br /&gt;
&lt;br /&gt;
* At ISMIR 2010 in Utrecht, a group discussed alternatives and developed the [[The_Utrecht_Agreement_on_Chord_Evaluation | Utrecht Agreement]] for updating the task, but until this year, nobody had implemented any of the suggestions.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
&lt;br /&gt;
===More precise recall estimation===&lt;br /&gt;
&lt;br /&gt;
* MIREX typically uses ''chord symbol recall'' (CSR) to estimate how well the predicted chords match the ground truth: the total duration of segments where the predictions match the ground truth divided by the total duration of the song. &lt;br /&gt;
&lt;br /&gt;
* In previous years, MIREX has used an approximate CSR by sampling both the ground-truth and the automatic annotations every 10 ms.&lt;br /&gt;
&lt;br /&gt;
* Following Harte (2010), we view the ground-truth and estimated annotations instead as continuous segmentations of the audio because (1) this is more precise and also (2) more computationally efficient. &lt;br /&gt;
&lt;br /&gt;
* Moreover, because pieces of music come in a wide variety of lengths, we believe it is better to weight the CSR by the length of the song. This final number is referred to as the ''weighted chord symbol recall'' (WCSR).&lt;br /&gt;
&lt;br /&gt;
===Advanced chord vocabularies===&lt;br /&gt;
&lt;br /&gt;
* We computed WCSR with five different chord vocabulary mappings: &lt;br /&gt;
# Chord root note only;&lt;br /&gt;
# Major and minor;&lt;br /&gt;
# Seventh chords;&lt;br /&gt;
# Major and minor with inversions; and&lt;br /&gt;
# Seventh chords with inversions. &lt;br /&gt;
&lt;br /&gt;
* With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels. &lt;br /&gt;
&lt;br /&gt;
* A mapping exists if both the root notes and bass notes match, and the structure of the output label is the largest possible subset of the input label given the vocabulary. &lt;br /&gt;
&lt;br /&gt;
* For instance, in the major and minor case, G:7(#9) is mapped to G:maj because the interval set of G:maj, {1,3,5}, is a subset of the interval set of the G:7(#9), {1,3,5,b7,#9}. In the seventh-chord case, G:7(#9) is mapped to G:7 instead because the interval set of G:7 {1, 3, 5, b7} is also a subset of G:7(#9) but is larger than G:maj.&lt;br /&gt;
&lt;br /&gt;
* Our recommendations are motivated by the frequencies of chord qualities in the ''Billboard'' corpus of American popular music (Burgoyne et al., 2011).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Most Frequent Chord Qualities in the ''Billboard'' Corpus&lt;br /&gt;
|- &lt;br /&gt;
! Quality&lt;br /&gt;
! Freq.&lt;br /&gt;
! Cum. Freq.&lt;br /&gt;
|-&lt;br /&gt;
| maj&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
|-&lt;br /&gt;
| min&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 13&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 65&lt;br /&gt;
|-&lt;br /&gt;
| 7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 10&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 75&lt;br /&gt;
|-&lt;br /&gt;
| min7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 8&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 83&lt;br /&gt;
|-&lt;br /&gt;
| maj7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 3&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 86&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Evaluation of segmentation===&lt;br /&gt;
&lt;br /&gt;
* The chord transcription literature includes several other evaluation metrics, which mainly focus on the segmentation of the transcription.&lt;br /&gt;
&lt;br /&gt;
* We propose to include the directional Hamming distance in the evaluation. The directional Hamming distance is calculated by finding for each annotated segment the maximally overlapping segment in the other annotation, and then summing the differences (Abdallah et al., 2005; Mauch, 2010). &lt;br /&gt;
&lt;br /&gt;
* Depending on the order of application, the directional Hamming distance yields a measure of over- or under-segmentation. To keep the scaling consistent with WCSR values (1.0 is best and 0.0 is worst), we report 1 – over-segmentation and 1 – under-segmentation, as well as the harmonic mean of these values (cf. Harte, 2010).&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| CB3 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/CB3.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| JR2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/JR2.pdf PDF]&lt;br /&gt;
| Jean-Baptiste Rolland&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The table is sorted on WCSR for the major-minor vocabulary. Algorithms that conducted training are marked with an asterisk; all others were submitted pre-trained.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/billboard13.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
===Complete Results===&lt;br /&gt;
&lt;br /&gt;
More detailed about the performance of the algorithms, including per-song performance and the breakdown of the WCSR calculations, is available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/BillboardTest2013.zip BillboardTest2013.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The recognition output and the ground-truth files are available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/BillboardTest2013Output.zip BillboardTest2013Output.zip]&lt;br /&gt;
&lt;br /&gt;
We hope to generate a graphical comparison of all algorithms against the ground truth early in 2014.&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results&amp;diff=10599</id>
		<title>2014:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results&amp;diff=10599"/>
		<updated>2014-10-20T15:58:23Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* Submissions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This year, we have started a new evaluation battery for audio chord estimation. This page contains the results of these new evaluations for the Isophonics dataset, a.k.a. the MIREX 2009 dataset. It comprises the collected Beatles, Queen, and Zweieck datasets from Queen Mary, University of London, and has been used for audio chord estimation in MIREX for many years.&lt;br /&gt;
&lt;br /&gt;
==Why evaluate differently?==&lt;br /&gt;
&lt;br /&gt;
* Researchers interested in automatic chord estimation have been dissatisfied with the traditional evaluation techniques used for this task at MIREX.&lt;br /&gt;
&lt;br /&gt;
* Numerous alternatives have been proposed in the literature (Harte, 2010; Mauch, 2010; Pauwels &amp;amp; Peeters, 2013). &lt;br /&gt;
&lt;br /&gt;
* At ISMIR 2010 in Utrecht, a group discussed alternatives and developed the [[The_Utrecht_Agreement_on_Chord_Evaluation | Utrecht Agreement]] for updating the task, but until this year, nobody had implemented any of the suggestions.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
&lt;br /&gt;
===More precise recall estimation===&lt;br /&gt;
&lt;br /&gt;
* MIREX typically uses ''chord symbol recall'' (CSR) to estimate how well the predicted chords match the ground truth: the total duration of segments where the predictions match the ground truth divided by the total duration of the song. &lt;br /&gt;
&lt;br /&gt;
* In previous years, MIREX has used an approximate CSR by sampling both the ground-truth and the automatic annotations every 10 ms.&lt;br /&gt;
&lt;br /&gt;
* Following Harte (2010), we view the ground-truth and estimated annotations instead as continuous segmentations of the audio because (1) this is more precise and also (2) more computationally efficient. &lt;br /&gt;
&lt;br /&gt;
* Moreover, because pieces of music come in a wide variety of lengths, we believe it is better to weight the CSR by the length of the song. This final number is referred to as the ''weighted chord symbol recall'' (WCSR).&lt;br /&gt;
&lt;br /&gt;
===Advanced chord vocabularies===&lt;br /&gt;
&lt;br /&gt;
* We computed WCSR with five different chord vocabulary mappings: &lt;br /&gt;
# Chord root note only;&lt;br /&gt;
# Major and minor;&lt;br /&gt;
# Seventh chords;&lt;br /&gt;
# Major and minor with inversions; and&lt;br /&gt;
# Seventh chords with inversions. &lt;br /&gt;
&lt;br /&gt;
* With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels. &lt;br /&gt;
&lt;br /&gt;
* A mapping exists if both the root notes and bass notes match, and the structure of the output label is the largest possible subset of the input label given the vocabulary. &lt;br /&gt;
&lt;br /&gt;
* For instance, in the major and minor case, G:7(#9) is mapped to G:maj because the interval set of G:maj, {1,3,5}, is a subset of the interval set of the G:7(#9), {1,3,5,b7,#9}. In the seventh-chord case, G:7(#9) is mapped to G:7 instead because the interval set of G:7 {1, 3, 5, b7} is also a subset of G:7(#9) but is larger than G:maj.&lt;br /&gt;
&lt;br /&gt;
* Our recommendations are motivated by the frequencies of chord qualities in the ''Billboard'' corpus of American popular music (Burgoyne et al., 2011).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Most Frequent Chord Qualities in the ''Billboard'' Corpus&lt;br /&gt;
|- &lt;br /&gt;
! Quality&lt;br /&gt;
! Freq.&lt;br /&gt;
! Cum. Freq.&lt;br /&gt;
|-&lt;br /&gt;
| maj&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
|-&lt;br /&gt;
| min&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 13&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 65&lt;br /&gt;
|-&lt;br /&gt;
| 7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 10&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 75&lt;br /&gt;
|-&lt;br /&gt;
| min7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 8&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 83&lt;br /&gt;
|-&lt;br /&gt;
| maj7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 3&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 86&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Evaluation of segmentation===&lt;br /&gt;
&lt;br /&gt;
* The chord transcription literature includes several other evaluation metrics, which mainly focus on the segmentation of the transcription.&lt;br /&gt;
&lt;br /&gt;
* We propose to include the directional Hamming distance in the evaluation. The directional Hamming distance is calculated by finding for each annotated segment the maximally overlapping segment in the other annotation, and then summing the differences (Abdallah et al., 2005; Mauch, 2010). &lt;br /&gt;
&lt;br /&gt;
* Depending on the order of application, the directional Hamming distance yields a measure of over- or under-segmentation. To keep the scaling consistent with WCSR values (1.0 is best and 0.0 is worst), we report 1 – over-segmentation and 1 – under-segmentation, as well as the harmonic mean of these values (cf. Harte, 2010).&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| CB3 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/CB3.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| JR2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/JR2.pdf PDF]&lt;br /&gt;
| Jean-Baptiste Rolland&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The table is sorted on WCSR for the major-minor vocabulary. Algorithms that conducted training are marked with an asterisk; all others were submitted pre-trained.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/mirex09.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
===Complete Results===&lt;br /&gt;
&lt;br /&gt;
More detailed about the performance of the algorithms, including per-song performance and the breakdown of the WCSR calculations, is available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/MirexChord2009.zip MirexChord2009.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The recognition output and the ground-truth files are available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/MirexChord2009Output.zip MirexChord2009Output.zip]&lt;br /&gt;
&lt;br /&gt;
We hope to generate a graphical comparison of all algorithms against the ground truth early in 2014.&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results&amp;diff=10596</id>
		<title>2014:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results&amp;diff=10596"/>
		<updated>2014-10-20T15:52:26Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* Summary */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This year, we have started a new evaluation battery for audio chord estimation. This page contains the results of these new evaluations for the Isophonics dataset, a.k.a. the MIREX 2009 dataset. It comprises the collected Beatles, Queen, and Zweieck datasets from Queen Mary, University of London, and has been used for audio chord estimation in MIREX for many years.&lt;br /&gt;
&lt;br /&gt;
==Why evaluate differently?==&lt;br /&gt;
&lt;br /&gt;
* Researchers interested in automatic chord estimation have been dissatisfied with the traditional evaluation techniques used for this task at MIREX.&lt;br /&gt;
&lt;br /&gt;
* Numerous alternatives have been proposed in the literature (Harte, 2010; Mauch, 2010; Pauwels &amp;amp; Peeters, 2013). &lt;br /&gt;
&lt;br /&gt;
* At ISMIR 2010 in Utrecht, a group discussed alternatives and developed the [[The_Utrecht_Agreement_on_Chord_Evaluation | Utrecht Agreement]] for updating the task, but until this year, nobody had implemented any of the suggestions.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
&lt;br /&gt;
===More precise recall estimation===&lt;br /&gt;
&lt;br /&gt;
* MIREX typically uses ''chord symbol recall'' (CSR) to estimate how well the predicted chords match the ground truth: the total duration of segments where the predictions match the ground truth divided by the total duration of the song. &lt;br /&gt;
&lt;br /&gt;
* In previous years, MIREX has used an approximate CSR by sampling both the ground-truth and the automatic annotations every 10 ms.&lt;br /&gt;
&lt;br /&gt;
* Following Harte (2010), we view the ground-truth and estimated annotations instead as continuous segmentations of the audio because (1) this is more precise and also (2) more computationally efficient. &lt;br /&gt;
&lt;br /&gt;
* Moreover, because pieces of music come in a wide variety of lengths, we believe it is better to weight the CSR by the length of the song. This final number is referred to as the ''weighted chord symbol recall'' (WCSR).&lt;br /&gt;
&lt;br /&gt;
===Advanced chord vocabularies===&lt;br /&gt;
&lt;br /&gt;
* We computed WCSR with five different chord vocabulary mappings: &lt;br /&gt;
# Chord root note only;&lt;br /&gt;
# Major and minor;&lt;br /&gt;
# Seventh chords;&lt;br /&gt;
# Major and minor with inversions; and&lt;br /&gt;
# Seventh chords with inversions. &lt;br /&gt;
&lt;br /&gt;
* With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels. &lt;br /&gt;
&lt;br /&gt;
* A mapping exists if both the root notes and bass notes match, and the structure of the output label is the largest possible subset of the input label given the vocabulary. &lt;br /&gt;
&lt;br /&gt;
* For instance, in the major and minor case, G:7(#9) is mapped to G:maj because the interval set of G:maj, {1,3,5}, is a subset of the interval set of the G:7(#9), {1,3,5,b7,#9}. In the seventh-chord case, G:7(#9) is mapped to G:7 instead because the interval set of G:7 {1, 3, 5, b7} is also a subset of G:7(#9) but is larger than G:maj.&lt;br /&gt;
&lt;br /&gt;
* Our recommendations are motivated by the frequencies of chord qualities in the ''Billboard'' corpus of American popular music (Burgoyne et al., 2011).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Most Frequent Chord Qualities in the ''Billboard'' Corpus&lt;br /&gt;
|- &lt;br /&gt;
! Quality&lt;br /&gt;
! Freq.&lt;br /&gt;
! Cum. Freq.&lt;br /&gt;
|-&lt;br /&gt;
| maj&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
|-&lt;br /&gt;
| min&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 13&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 65&lt;br /&gt;
|-&lt;br /&gt;
| 7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 10&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 75&lt;br /&gt;
|-&lt;br /&gt;
| min7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 8&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 83&lt;br /&gt;
|-&lt;br /&gt;
| maj7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 3&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 86&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Evaluation of segmentation===&lt;br /&gt;
&lt;br /&gt;
* The chord transcription literature includes several other evaluation metrics, which mainly focus on the segmentation of the transcription.&lt;br /&gt;
&lt;br /&gt;
* We propose to include the directional Hamming distance in the evaluation. The directional Hamming distance is calculated by finding for each annotated segment the maximally overlapping segment in the other annotation, and then summing the differences (Abdallah et al., 2005; Mauch, 2010). &lt;br /&gt;
&lt;br /&gt;
* Depending on the order of application, the directional Hamming distance yields a measure of over- or under-segmentation. To keep the scaling consistent with WCSR values (1.0 is best and 0.0 is worst), we report 1 – over-segmentation and 1 – under-segmentation, as well as the harmonic mean of these values (cf. Harte, 2010).&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CB3&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/CB3.pdf PDF]&lt;br /&gt;
| Taemin Cho &amp;amp; Juan P. Bello&lt;br /&gt;
|-&lt;br /&gt;
| CB4&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/CB4.pdf PDF]&lt;br /&gt;
| Taemin Cho &amp;amp; Juan P. Bello&lt;br /&gt;
|-&lt;br /&gt;
| CF2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/CF2.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch, Matthew E. P. Davies, Simon Dixon, Christian Landone, Katy Noland, Mark Levy, Massimiliano Zanoni, Dan Stowell &amp;amp; Luís A. Figueira&lt;br /&gt;
|-&lt;br /&gt;
| KO1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/KO1.pdf PDF]&lt;br /&gt;
| Maksim Khadkevich &amp;amp; Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| KO2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/KO2.pdf PDF]&lt;br /&gt;
| Maksim Khadkevich &amp;amp; Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| NG1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NG1.pdf PDF]&lt;br /&gt;
| Nikolay Glazyrin&lt;br /&gt;
|-&lt;br /&gt;
| NG2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NG2.pdf PDF]&lt;br /&gt;
| Nikolay Glazyrin&lt;br /&gt;
|-&lt;br /&gt;
| NMSD1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NMSD1.pdf PDF]&lt;br /&gt;
| Yizhao Ni, Matt Mcvicar, Raul Santos-Rodriguez &amp;amp; Tijl De Bie&lt;br /&gt;
|-&lt;br /&gt;
| NMSD2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NMSD2.pdf PDF]&lt;br /&gt;
| Yizhao Ni, Matt Mcvicar, Raul Santos-Rodriguez &amp;amp; Tijl De Bie&lt;br /&gt;
|-&lt;br /&gt;
| PP3&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/PP3.pdf PDF] &lt;br /&gt;
| Johan Pauwels &amp;amp; Geoffroy Peeters&lt;br /&gt;
|-&lt;br /&gt;
| PP4&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/PP4.pdf PDF] &lt;br /&gt;
| Johan Pauwels &amp;amp; Geoffroy Peeters&lt;br /&gt;
|-&lt;br /&gt;
| SB8&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/SB8.pdf PDF] &lt;br /&gt;
| Nikolaas Steenbergen &amp;amp; John Ashley Burgoyne&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The table is sorted on WCSR for the major-minor vocabulary. Algorithms that conducted training are marked with an asterisk; all others were submitted pre-trained.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/mirex09.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
===Complete Results===&lt;br /&gt;
&lt;br /&gt;
More detailed about the performance of the algorithms, including per-song performance and the breakdown of the WCSR calculations, is available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/MirexChord2009.zip MirexChord2009.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The recognition output and the ground-truth files are available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/MirexChord2009Output.zip MirexChord2009Output.zip]&lt;br /&gt;
&lt;br /&gt;
We hope to generate a graphical comparison of all algorithms against the ground truth early in 2014.&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results_Billboard_2013&amp;diff=10595</id>
		<title>2014:Audio Chord Estimation Results Billboard 2013</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results_Billboard_2013&amp;diff=10595"/>
		<updated>2014-10-20T15:50:21Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* Summary */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This year, we have started a new evaluation battery for audio chord estimation. This page contains the results of these new evaluations for a special subset of the ''Billboard'' dataset from McGill University that has never been made available to the public. Further subsets have been withheld to support the ACE task through MIREX 2015.&lt;br /&gt;
&lt;br /&gt;
==Why evaluate differently?==&lt;br /&gt;
&lt;br /&gt;
* Researchers interested in automatic chord estimation have been dissatisfied with the traditional evaluation techniques used for this task at MIREX.&lt;br /&gt;
&lt;br /&gt;
* Numerous alternatives have been proposed in the literature (Harte, 2010; Mauch, 2010; Pauwels &amp;amp; Peeters, 2013). &lt;br /&gt;
&lt;br /&gt;
* At ISMIR 2010 in Utrecht, a group discussed alternatives and developed the [[The_Utrecht_Agreement_on_Chord_Evaluation | Utrecht Agreement]] for updating the task, but until this year, nobody had implemented any of the suggestions.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
&lt;br /&gt;
===More precise recall estimation===&lt;br /&gt;
&lt;br /&gt;
* MIREX typically uses ''chord symbol recall'' (CSR) to estimate how well the predicted chords match the ground truth: the total duration of segments where the predictions match the ground truth divided by the total duration of the song. &lt;br /&gt;
&lt;br /&gt;
* In previous years, MIREX has used an approximate CSR by sampling both the ground-truth and the automatic annotations every 10 ms.&lt;br /&gt;
&lt;br /&gt;
* Following Harte (2010), we view the ground-truth and estimated annotations instead as continuous segmentations of the audio because (1) this is more precise and also (2) more computationally efficient. &lt;br /&gt;
&lt;br /&gt;
* Moreover, because pieces of music come in a wide variety of lengths, we believe it is better to weight the CSR by the length of the song. This final number is referred to as the ''weighted chord symbol recall'' (WCSR).&lt;br /&gt;
&lt;br /&gt;
===Advanced chord vocabularies===&lt;br /&gt;
&lt;br /&gt;
* We computed WCSR with five different chord vocabulary mappings: &lt;br /&gt;
# Chord root note only;&lt;br /&gt;
# Major and minor;&lt;br /&gt;
# Seventh chords;&lt;br /&gt;
# Major and minor with inversions; and&lt;br /&gt;
# Seventh chords with inversions. &lt;br /&gt;
&lt;br /&gt;
* With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels. &lt;br /&gt;
&lt;br /&gt;
* A mapping exists if both the root notes and bass notes match, and the structure of the output label is the largest possible subset of the input label given the vocabulary. &lt;br /&gt;
&lt;br /&gt;
* For instance, in the major and minor case, G:7(#9) is mapped to G:maj because the interval set of G:maj, {1,3,5}, is a subset of the interval set of the G:7(#9), {1,3,5,b7,#9}. In the seventh-chord case, G:7(#9) is mapped to G:7 instead because the interval set of G:7 {1, 3, 5, b7} is also a subset of G:7(#9) but is larger than G:maj.&lt;br /&gt;
&lt;br /&gt;
* Our recommendations are motivated by the frequencies of chord qualities in the ''Billboard'' corpus of American popular music (Burgoyne et al., 2011).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Most Frequent Chord Qualities in the ''Billboard'' Corpus&lt;br /&gt;
|- &lt;br /&gt;
! Quality&lt;br /&gt;
! Freq.&lt;br /&gt;
! Cum. Freq.&lt;br /&gt;
|-&lt;br /&gt;
| maj&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
|-&lt;br /&gt;
| min&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 13&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 65&lt;br /&gt;
|-&lt;br /&gt;
| 7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 10&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 75&lt;br /&gt;
|-&lt;br /&gt;
| min7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 8&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 83&lt;br /&gt;
|-&lt;br /&gt;
| maj7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 3&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 86&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Evaluation of segmentation===&lt;br /&gt;
&lt;br /&gt;
* The chord transcription literature includes several other evaluation metrics, which mainly focus on the segmentation of the transcription.&lt;br /&gt;
&lt;br /&gt;
* We propose to include the directional Hamming distance in the evaluation. The directional Hamming distance is calculated by finding for each annotated segment the maximally overlapping segment in the other annotation, and then summing the differences (Abdallah et al., 2005; Mauch, 2010). &lt;br /&gt;
&lt;br /&gt;
* Depending on the order of application, the directional Hamming distance yields a measure of over- or under-segmentation. To keep the scaling consistent with WCSR values (1.0 is best and 0.0 is worst), we report 1 – over-segmentation and 1 – under-segmentation, as well as the harmonic mean of these values (cf. Harte, 2010).&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CB3&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/CB3.pdf PDF]&lt;br /&gt;
| Taemin Cho &amp;amp; Juan P. Bello&lt;br /&gt;
|-&lt;br /&gt;
| CB4&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/CB4.pdf PDF]&lt;br /&gt;
| Taemin Cho &amp;amp; Juan P. Bello&lt;br /&gt;
|-&lt;br /&gt;
| CF2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/CF2.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch, Matthew E. P. Davies, Simon Dixon, Christian Landone, Katy Noland, Mark Levy, Massimiliano Zanoni, Dan Stowell &amp;amp; Luís A. Figueira&lt;br /&gt;
|-&lt;br /&gt;
| KO1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/KO1.pdf PDF]&lt;br /&gt;
| Maksim Khadkevich &amp;amp; Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| KO2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/KO2.pdf PDF]&lt;br /&gt;
| Maksim Khadkevich &amp;amp; Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| NG1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NG1.pdf PDF]&lt;br /&gt;
| Nikolay Glazyrin&lt;br /&gt;
|-&lt;br /&gt;
| NG2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NG2.pdf PDF]&lt;br /&gt;
| Nikolay Glazyrin&lt;br /&gt;
|-&lt;br /&gt;
| NMSD1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NMSD1.pdf PDF]&lt;br /&gt;
| Yizhao Ni, Matt Mcvicar, Raul Santos-Rodriguez &amp;amp; Tijl De Bie&lt;br /&gt;
|-&lt;br /&gt;
| NMSD2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NMSD2.pdf PDF]&lt;br /&gt;
| Yizhao Ni, Matt Mcvicar, Raul Santos-Rodriguez &amp;amp; Tijl De Bie&lt;br /&gt;
|-&lt;br /&gt;
| PP3&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/PP3.pdf PDF] &lt;br /&gt;
| Johan Pauwels &amp;amp; Geoffroy Peeters&lt;br /&gt;
|-&lt;br /&gt;
| PP4&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/PP4.pdf PDF] &lt;br /&gt;
| Johan Pauwels &amp;amp; Geoffroy Peeters&lt;br /&gt;
|-&lt;br /&gt;
| SB8&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/SB8.pdf PDF] &lt;br /&gt;
| Nikolaas Steenbergen &amp;amp; John Ashley Burgoyne&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The table is sorted on WCSR for the major-minor vocabulary. Algorithms that conducted training are marked with an asterisk; all others were submitted pre-trained.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/billboard13.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
===Complete Results===&lt;br /&gt;
&lt;br /&gt;
More detailed about the performance of the algorithms, including per-song performance and the breakdown of the WCSR calculations, is available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/BillboardTest2013.zip BillboardTest2013.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The recognition output and the ground-truth files are available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/BillboardTest2013Output.zip BillboardTest2013Output.zip]&lt;br /&gt;
&lt;br /&gt;
We hope to generate a graphical comparison of all algorithms against the ground truth early in 2014.&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results_Billboard_2012&amp;diff=10594</id>
		<title>2014:Audio Chord Estimation Results Billboard 2012</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results_Billboard_2012&amp;diff=10594"/>
		<updated>2014-10-20T15:49:56Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* Summary */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of these new evaluations for an abridged version of the ''Billboard'' dataset from McGill University, including a representative sample of American popular music from the 1950s through the 1990s, as used for MIREX 2012.&lt;br /&gt;
&lt;br /&gt;
==Why evaluate differently?==&lt;br /&gt;
&lt;br /&gt;
* Researchers interested in automatic chord estimation have been dissatisfied with the traditional evaluation techniques used for this task at MIREX.&lt;br /&gt;
&lt;br /&gt;
* Numerous alternatives have been proposed in the literature (Harte, 2010; Mauch, 2010; Pauwels &amp;amp; Peeters, 2013). &lt;br /&gt;
&lt;br /&gt;
* At ISMIR 2010 in Utrecht, a group discussed alternatives and developed the [[The_Utrecht_Agreement_on_Chord_Evaluation | Utrecht Agreement]] for updating the task, but until this year, nobody had implemented any of the suggestions.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
&lt;br /&gt;
===More precise recall estimation===&lt;br /&gt;
&lt;br /&gt;
* MIREX typically uses ''chord symbol recall'' (CSR) to estimate how well the predicted chords match the ground truth: the total duration of segments where the predictions match the ground truth divided by the total duration of the song. &lt;br /&gt;
&lt;br /&gt;
* In previous years, MIREX has used an approximate CSR by sampling both the ground-truth and the automatic annotations every 10 ms.&lt;br /&gt;
&lt;br /&gt;
* Following Harte (2010), we view the ground-truth and estimated annotations instead as continuous segmentations of the audio because (1) this is more precise and also (2) more computationally efficient. &lt;br /&gt;
&lt;br /&gt;
* Moreover, because pieces of music come in a wide variety of lengths, we believe it is better to weight the CSR by the length of the song. This final number is referred to as the ''weighted chord symbol recall'' (WCSR).&lt;br /&gt;
&lt;br /&gt;
===Advanced chord vocabularies===&lt;br /&gt;
&lt;br /&gt;
* We computed WCSR with five different chord vocabulary mappings: &lt;br /&gt;
# Chord root note only;&lt;br /&gt;
# Major and minor;&lt;br /&gt;
# Seventh chords;&lt;br /&gt;
# Major and minor with inversions; and&lt;br /&gt;
# Seventh chords with inversions. &lt;br /&gt;
&lt;br /&gt;
* With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels. &lt;br /&gt;
&lt;br /&gt;
* A mapping exists if both the root notes and bass notes match, and the structure of the output label is the largest possible subset of the input label given the vocabulary. &lt;br /&gt;
&lt;br /&gt;
* For instance, in the major and minor case, G:7(#9) is mapped to G:maj because the interval set of G:maj, {1,3,5}, is a subset of the interval set of the G:7(#9), {1,3,5,b7,#9}. In the seventh-chord case, G:7(#9) is mapped to G:7 instead because the interval set of G:7 {1, 3, 5, b7} is also a subset of G:7(#9) but is larger than G:maj.&lt;br /&gt;
&lt;br /&gt;
* Our recommendations are motivated by the frequencies of chord qualities in the ''Billboard'' corpus of American popular music (Burgoyne et al., 2011).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Most Frequent Chord Qualities in the ''Billboard'' Corpus&lt;br /&gt;
|- &lt;br /&gt;
! Quality&lt;br /&gt;
! Freq.&lt;br /&gt;
! Cum. Freq.&lt;br /&gt;
|-&lt;br /&gt;
| maj&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
|-&lt;br /&gt;
| min&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 13&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 65&lt;br /&gt;
|-&lt;br /&gt;
| 7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 10&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 75&lt;br /&gt;
|-&lt;br /&gt;
| min7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 8&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 83&lt;br /&gt;
|-&lt;br /&gt;
| maj7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 3&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 86&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Evaluation of segmentation===&lt;br /&gt;
&lt;br /&gt;
* The chord transcription literature includes several other evaluation metrics, which mainly focus on the segmentation of the transcription.&lt;br /&gt;
&lt;br /&gt;
* We propose to include the directional Hamming distance in the evaluation. The directional Hamming distance is calculated by finding for each annotated segment the maximally overlapping segment in the other annotation, and then summing the differences (Abdallah et al., 2005; Mauch, 2010). &lt;br /&gt;
&lt;br /&gt;
* Depending on the order of application, the directional Hamming distance yields a measure of over- or under-segmentation. To keep the scaling consistent with WCSR values (1.0 is best and 0.0 is worst), we report 1 – over-segmentation and 1 – under-segmentation, as well as the harmonic mean of these values (cf. Harte, 2010).&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CB3&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/CB3.pdf PDF]&lt;br /&gt;
| Taemin Cho &amp;amp; Juan P. Bello&lt;br /&gt;
|-&lt;br /&gt;
| CB4&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/CB4.pdf PDF]&lt;br /&gt;
| Taemin Cho &amp;amp; Juan P. Bello&lt;br /&gt;
|-&lt;br /&gt;
| CF2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/CF2.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch, Matthew E. P. Davies, Simon Dixon, Christian Landone, Katy Noland, Mark Levy, Massimiliano Zanoni, Dan Stowell &amp;amp; Luís A. Figueira&lt;br /&gt;
|-&lt;br /&gt;
| KO1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/KO1.pdf PDF]&lt;br /&gt;
| Maksim Khadkevich &amp;amp; Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| KO2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/KO2.pdf PDF]&lt;br /&gt;
| Maksim Khadkevich &amp;amp; Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| NG1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NG1.pdf PDF]&lt;br /&gt;
| Nikolay Glazyrin&lt;br /&gt;
|-&lt;br /&gt;
| NG2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NG2.pdf PDF]&lt;br /&gt;
| Nikolay Glazyrin&lt;br /&gt;
|-&lt;br /&gt;
| NMSD1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NMSD1.pdf PDF]&lt;br /&gt;
| Yizhao Ni, Matt Mcvicar, Raul Santos-Rodriguez &amp;amp; Tijl De Bie&lt;br /&gt;
|-&lt;br /&gt;
| NMSD2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NMSD2.pdf PDF]&lt;br /&gt;
| Yizhao Ni, Matt Mcvicar, Raul Santos-Rodriguez &amp;amp; Tijl De Bie&lt;br /&gt;
|-&lt;br /&gt;
| PP3&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/PP3.pdf PDF] &lt;br /&gt;
| Johan Pauwels &amp;amp; Geoffroy Peeters&lt;br /&gt;
|-&lt;br /&gt;
| PP4&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/PP4.pdf PDF] &lt;br /&gt;
| Johan Pauwels &amp;amp; Geoffroy Peeters&lt;br /&gt;
|-&lt;br /&gt;
| SB8&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/SB8.pdf PDF] &lt;br /&gt;
| Nikolaas Steenbergen &amp;amp; John Ashley Burgoyne&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The table is sorted on WCSR for the major-minor vocabulary. Algorithms that conducted training are marked with an asterisk; all others were submitted pre-trained.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/billboard12.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
===Complete Results===&lt;br /&gt;
&lt;br /&gt;
More detailed about the performance of the algorithms, including per-song performance and the breakdown of the WCSR calculations, is available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/BillboardTest2012.zip BillboardTest2012.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The recognition output and the ground-truth files are available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/BillboardTest2012Output.zip BillboardTest2012Output.zip]&lt;br /&gt;
&lt;br /&gt;
We hope to generate a graphical comparison of all algorithms against the ground truth early in 2014.&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results_Billboard_2012&amp;diff=10593</id>
		<title>2014:Audio Chord Estimation Results Billboard 2012</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results_Billboard_2012&amp;diff=10593"/>
		<updated>2014-10-20T15:40:53Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of these new evaluations for an abridged version of the ''Billboard'' dataset from McGill University, including a representative sample of American popular music from the 1950s through the 1990s, as used for MIREX 2012.&lt;br /&gt;
&lt;br /&gt;
==Why evaluate differently?==&lt;br /&gt;
&lt;br /&gt;
* Researchers interested in automatic chord estimation have been dissatisfied with the traditional evaluation techniques used for this task at MIREX.&lt;br /&gt;
&lt;br /&gt;
* Numerous alternatives have been proposed in the literature (Harte, 2010; Mauch, 2010; Pauwels &amp;amp; Peeters, 2013). &lt;br /&gt;
&lt;br /&gt;
* At ISMIR 2010 in Utrecht, a group discussed alternatives and developed the [[The_Utrecht_Agreement_on_Chord_Evaluation | Utrecht Agreement]] for updating the task, but until this year, nobody had implemented any of the suggestions.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
&lt;br /&gt;
===More precise recall estimation===&lt;br /&gt;
&lt;br /&gt;
* MIREX typically uses ''chord symbol recall'' (CSR) to estimate how well the predicted chords match the ground truth: the total duration of segments where the predictions match the ground truth divided by the total duration of the song. &lt;br /&gt;
&lt;br /&gt;
* In previous years, MIREX has used an approximate CSR by sampling both the ground-truth and the automatic annotations every 10 ms.&lt;br /&gt;
&lt;br /&gt;
* Following Harte (2010), we view the ground-truth and estimated annotations instead as continuous segmentations of the audio because (1) this is more precise and also (2) more computationally efficient. &lt;br /&gt;
&lt;br /&gt;
* Moreover, because pieces of music come in a wide variety of lengths, we believe it is better to weight the CSR by the length of the song. This final number is referred to as the ''weighted chord symbol recall'' (WCSR).&lt;br /&gt;
&lt;br /&gt;
===Advanced chord vocabularies===&lt;br /&gt;
&lt;br /&gt;
* We computed WCSR with five different chord vocabulary mappings: &lt;br /&gt;
# Chord root note only;&lt;br /&gt;
# Major and minor;&lt;br /&gt;
# Seventh chords;&lt;br /&gt;
# Major and minor with inversions; and&lt;br /&gt;
# Seventh chords with inversions. &lt;br /&gt;
&lt;br /&gt;
* With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels. &lt;br /&gt;
&lt;br /&gt;
* A mapping exists if both the root notes and bass notes match, and the structure of the output label is the largest possible subset of the input label given the vocabulary. &lt;br /&gt;
&lt;br /&gt;
* For instance, in the major and minor case, G:7(#9) is mapped to G:maj because the interval set of G:maj, {1,3,5}, is a subset of the interval set of the G:7(#9), {1,3,5,b7,#9}. In the seventh-chord case, G:7(#9) is mapped to G:7 instead because the interval set of G:7 {1, 3, 5, b7} is also a subset of G:7(#9) but is larger than G:maj.&lt;br /&gt;
&lt;br /&gt;
* Our recommendations are motivated by the frequencies of chord qualities in the ''Billboard'' corpus of American popular music (Burgoyne et al., 2011).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Most Frequent Chord Qualities in the ''Billboard'' Corpus&lt;br /&gt;
|- &lt;br /&gt;
! Quality&lt;br /&gt;
! Freq.&lt;br /&gt;
! Cum. Freq.&lt;br /&gt;
|-&lt;br /&gt;
| maj&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
|-&lt;br /&gt;
| min&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 13&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 65&lt;br /&gt;
|-&lt;br /&gt;
| 7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 10&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 75&lt;br /&gt;
|-&lt;br /&gt;
| min7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 8&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 83&lt;br /&gt;
|-&lt;br /&gt;
| maj7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 3&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 86&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Evaluation of segmentation===&lt;br /&gt;
&lt;br /&gt;
* The chord transcription literature includes several other evaluation metrics, which mainly focus on the segmentation of the transcription.&lt;br /&gt;
&lt;br /&gt;
* We propose to include the directional Hamming distance in the evaluation. The directional Hamming distance is calculated by finding for each annotated segment the maximally overlapping segment in the other annotation, and then summing the differences (Abdallah et al., 2005; Mauch, 2010). &lt;br /&gt;
&lt;br /&gt;
* Depending on the order of application, the directional Hamming distance yields a measure of over- or under-segmentation. To keep the scaling consistent with WCSR values (1.0 is best and 0.0 is worst), we report 1 – over-segmentation and 1 – under-segmentation, as well as the harmonic mean of these values (cf. Harte, 2010).&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CB3&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/CB3.pdf PDF]&lt;br /&gt;
| Taemin Cho &amp;amp; Juan P. Bello&lt;br /&gt;
|-&lt;br /&gt;
| CB4&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/CB4.pdf PDF]&lt;br /&gt;
| Taemin Cho &amp;amp; Juan P. Bello&lt;br /&gt;
|-&lt;br /&gt;
| CF2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/CF2.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch, Matthew E. P. Davies, Simon Dixon, Christian Landone, Katy Noland, Mark Levy, Massimiliano Zanoni, Dan Stowell &amp;amp; Luís A. Figueira&lt;br /&gt;
|-&lt;br /&gt;
| KO1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/KO1.pdf PDF]&lt;br /&gt;
| Maksim Khadkevich &amp;amp; Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| KO2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/KO2.pdf PDF]&lt;br /&gt;
| Maksim Khadkevich &amp;amp; Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| NG1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NG1.pdf PDF]&lt;br /&gt;
| Nikolay Glazyrin&lt;br /&gt;
|-&lt;br /&gt;
| NG2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NG2.pdf PDF]&lt;br /&gt;
| Nikolay Glazyrin&lt;br /&gt;
|-&lt;br /&gt;
| NMSD1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NMSD1.pdf PDF]&lt;br /&gt;
| Yizhao Ni, Matt Mcvicar, Raul Santos-Rodriguez &amp;amp; Tijl De Bie&lt;br /&gt;
|-&lt;br /&gt;
| NMSD2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NMSD2.pdf PDF]&lt;br /&gt;
| Yizhao Ni, Matt Mcvicar, Raul Santos-Rodriguez &amp;amp; Tijl De Bie&lt;br /&gt;
|-&lt;br /&gt;
| PP3&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/PP3.pdf PDF] &lt;br /&gt;
| Johan Pauwels &amp;amp; Geoffroy Peeters&lt;br /&gt;
|-&lt;br /&gt;
| PP4&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/PP4.pdf PDF] &lt;br /&gt;
| Johan Pauwels &amp;amp; Geoffroy Peeters&lt;br /&gt;
|-&lt;br /&gt;
| SB8&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/SB8.pdf PDF] &lt;br /&gt;
| Nikolaas Steenbergen &amp;amp; John Ashley Burgoyne&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The table is sorted on WCSR for the major-minor vocabulary. Algorithms that conducted training are marked with an asterisk; all others were submitted pre-trained.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2013/ace/billboard12.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
===Complete Results===&lt;br /&gt;
&lt;br /&gt;
More detailed about the performance of the algorithms, including per-song performance and the breakdown of the WCSR calculations, is available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/BillboardTest2012.zip BillboardTest2012.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The recognition output and the ground-truth files are available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/BillboardTest2012Output.zip BillboardTest2012Output.zip]&lt;br /&gt;
&lt;br /&gt;
We hope to generate a graphical comparison of all algorithms against the ground truth early in 2014.&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:MIREX2014_Results&amp;diff=10591</id>
		<title>2014:MIREX2014 Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:MIREX2014_Results&amp;diff=10591"/>
		<updated>2014-10-20T14:57:02Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* Results by Task */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Results by Task ==&lt;br /&gt;
&lt;br /&gt;
===Train-Test Task Set===&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/composer_report/ Audio Classical Composer Identification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/latin_report/ Audio Latin Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/mood_report/index.html Audio Music Mood Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/mixed_report/ Audio Mixed Popular Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/kgenrek_report/ Audio KPOP Genre (Annotated by Korean Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/kgenrea_report/ Audio KPOP Genre (Annotated by American Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/kmoodk_report/ Audio KPOP Mood (Annotated by Korean Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/kmooda_report/ Audio KPOP Mood (Annotated by American Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
===Other Tasks===&lt;br /&gt;
&lt;br /&gt;
* Music Structure Segmentation Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/mrx09/ MIREX09 dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/mrx10_1/ RWC dataset - Quaero (MIREX10) Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/mrx10_2/ RWC dataset - Original RWC Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/sal/ SALAMI dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2014/results/aod/ Audio Onset Detection Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2014:Audio Cover Song Identification Results|Audio Cover Song Identification Results]]&lt;br /&gt;
&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ate/ Audio Tempo Estimation Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Beat Tracking Results &lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/abt/smc/ SMC Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/abt/maz/ MAZ Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/abt/mck/ MCK Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Melody Extraction Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/adc04/  ADC04 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/mrx05/ MIREX05 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/ind08/ INDIAN08 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/mrx09_0db/ MIREX09 0dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/mrx09_m5db/ MIREX09 -5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/mrx09_p5db/ MIREX09 +5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Tag Classification Results&lt;br /&gt;
** Major Miner Tag dataset&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/atg/subtask1_report/bin/ Binary relevance (classification evaluation)] &amp;amp;nbsp;&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/atg/subtask1_report/aff/ Affinity estimation evaluation] &amp;amp;nbsp;&lt;br /&gt;
** Mood Tag dataset&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/atg/subtask2_report/bin/ Binary relevance (classification evaluation)] &amp;amp;nbsp;&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/atg/subtask2_report/aff/ Affinity estimation evaluation] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2014:Audio Downbeat Estimation Results|Audio Downbeat Estimation Results]]&lt;br /&gt;
&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2014:Real-time_Audio_to_Score_Alignment_(a.k.a._Score_Following)_Results#Summary_Results Real-time Audio to Score Alignment (a.k.a. Score Following) Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2014:Audio_Music_Similarity_and_Retrieval_Results Audio Music Similarity and Retrieval Results]&amp;amp;nbsp;&lt;br /&gt;
* [[2014:Symbolic_Melodic_Similarity_Results | Symbolic Melodic Similarity Results]]&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2014:Singing_Voice_Separation_Results#For_the_Music_Accompaniment_.28dB.29 Singing Voice Separation]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2014:Multiple_Fundamental_Frequency_Estimation_%26_Tracking_Results Multiple Fundamental Frequency Estimation &amp;amp; Tracking Results]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2014:Discovery of Repeated Themes &amp;amp; Sections Results | Discovery of Repeated Themes &amp;amp; Sections Results]]&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Cover_Song_Identification_Results&amp;diff=10503</id>
		<title>2014:Audio Cover Song Identification Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Cover_Song_Identification_Results&amp;diff=10503"/>
		<updated>2014-10-08T17:35:26Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* Sapp's Mazurka Collection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Audio Cover Song (ACS) Identification was evaluated against two datasets: ''Mixed Collection'' and ''Sapp's Mazurka Collection''.&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection Information===&lt;br /&gt;
This is the &amp;quot;original&amp;quot; ACS collection. Within the 1000 pieces in the Audio Cover Song database, there are embedded 30 different &amp;quot;cover songs&amp;quot; each represented by 11 different &amp;quot;versions&amp;quot; for a total of 330 audio files (16bit, monophonic, 22.05khz, wav). The &amp;quot;cover songs&amp;quot; represent a variety of genres (e.g., classical, jazz, gospel, rock, folk-rock, etc.) and the variations span a variety of styles and orchestrations. &lt;br /&gt;
&lt;br /&gt;
Using each of these cover song files in turn as the &amp;quot;seed/query&amp;quot; file, we will examine the returned lists of items for the presence of the other 10 versions of the &amp;quot;seed/query&amp;quot; file.&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazurka Collection Information ===&lt;br /&gt;
In addition to our original ACS dataset, we used the  [http://www.mazurka.org.uk/ Mazurka.org dataset] put together by Craig Sapp. We randomly chose 11 versions from 49 mazurkas and ran it as a separate ACS subtask. Systems returned a distance matrix of 539x539 from which we located the ranks of each of the associated cover versions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
    |- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; | Sub code &lt;br /&gt;
    ! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract &lt;br /&gt;
    ! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
    |-&lt;br /&gt;
    !NC2&lt;br /&gt;
    | 	NC2 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/NC2.pdf PDF] || Ning Chen&lt;br /&gt;
    |-&lt;br /&gt;
    ! NC3&lt;br /&gt;
    | 	NC3 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/NC2.pdf PDF] || Ning Chen&lt;br /&gt;
    |-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection===&lt;br /&gt;
&lt;br /&gt;
====Summary Results====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2014/acs/coversong1000.summary.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Number of Correct Covers at Rank X Returned in Top Ten==== &lt;br /&gt;
&amp;lt;csv&amp;gt;2014/acs/coversong1000.toptendist.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Average Performance per Query Group====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2014/acs/coversong1000.precision.groups.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazuraka Collection===&lt;br /&gt;
&lt;br /&gt;
====Summary results====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2014/acs/mazurkas.summary.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Number of Correct Covers at Rank X Returned in Top Ten====&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/acs/mazurkas.toptendist.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Average Performance per Query Group====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2014/acs/mazurkas.precision.groups.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Individual Results Files==&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection===&lt;br /&gt;
'''Average Precision by Query'''&lt;br /&gt;
&lt;br /&gt;
'''NC2''' : [https://music-ir.org/mirex/results/2014/acs/NC2.coversong1000.avgprec.txt Ning Chen] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NC3''' : [https://music-ir.org/mirex/results/2014/acs/NC3.coversong1000.avgprec.txt Ning Chen] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Rank Lists'''&lt;br /&gt;
&lt;br /&gt;
'''NC2''' : [https://music-ir.org/mirex/results/2014/acs/NC2.coversong1000.ranklist.txt Ning Chen]&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NC3''' : [https://music-ir.org/mirex/results/2014/acs/NC3.coversong1000.ranklist.txt Ning Chen] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection===&lt;br /&gt;
'''Average Precision by Query'''&lt;br /&gt;
&lt;br /&gt;
'''NC2''' : [https://music-ir.org/mirex/results/2014/acs/NC2.mazurkas.avgprec.txt Ning Chen] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NC3''' : [https://music-ir.org/mirex/results/2014/acs/NC3.mazurkas.avgprec.txt Ning Chen] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Rank Lists'''&lt;br /&gt;
&lt;br /&gt;
'''NC2''' : [https://music-ir.org/mirex/results/2014/acs/NC2.mazurkas.ranklist.txt Ning Chen]&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NC3''' : [https://music-ir.org/mirex/results/2014/acs/NC3.mazurkas.ranklist.txt Ning Chen] &amp;lt;br /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Cover_Song_Identification_Results&amp;diff=10502</id>
		<title>2014:Audio Cover Song Identification Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Cover_Song_Identification_Results&amp;diff=10502"/>
		<updated>2014-10-08T17:28:46Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* Mixed Collection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Audio Cover Song (ACS) Identification was evaluated against two datasets: ''Mixed Collection'' and ''Sapp's Mazurka Collection''.&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection Information===&lt;br /&gt;
This is the &amp;quot;original&amp;quot; ACS collection. Within the 1000 pieces in the Audio Cover Song database, there are embedded 30 different &amp;quot;cover songs&amp;quot; each represented by 11 different &amp;quot;versions&amp;quot; for a total of 330 audio files (16bit, monophonic, 22.05khz, wav). The &amp;quot;cover songs&amp;quot; represent a variety of genres (e.g., classical, jazz, gospel, rock, folk-rock, etc.) and the variations span a variety of styles and orchestrations. &lt;br /&gt;
&lt;br /&gt;
Using each of these cover song files in turn as the &amp;quot;seed/query&amp;quot; file, we will examine the returned lists of items for the presence of the other 10 versions of the &amp;quot;seed/query&amp;quot; file.&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazurka Collection Information ===&lt;br /&gt;
In addition to our original ACS dataset, we used the  [http://www.mazurka.org.uk/ Mazurka.org dataset] put together by Craig Sapp. We randomly chose 11 versions from 49 mazurkas and ran it as a separate ACS subtask. Systems returned a distance matrix of 539x539 from which we located the ranks of each of the associated cover versions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
    |- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; | Sub code &lt;br /&gt;
    ! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract &lt;br /&gt;
    ! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
    |-&lt;br /&gt;
    !NC2&lt;br /&gt;
    | 	NC2 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/NC2.pdf PDF] || Ning Chen&lt;br /&gt;
    |-&lt;br /&gt;
    ! NC3&lt;br /&gt;
    | 	NC3 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/NC2.pdf PDF] || Ning Chen&lt;br /&gt;
    |-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection===&lt;br /&gt;
&lt;br /&gt;
====Summary Results====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2014/acs/coversong1000.summary.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Number of Correct Covers at Rank X Returned in Top Ten==== &lt;br /&gt;
&amp;lt;csv&amp;gt;2014/acs/coversong1000.toptendist.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Average Performance per Query Group====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2014/acs/coversong1000.precision.groups.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazuraka Collection===&lt;br /&gt;
&lt;br /&gt;
====Summary results====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2014/acs/mazurkas.summary.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Number of Correct Covers at Rank X Returned in Top Ten====&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/acs/mazurkas.toptendist.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Average Performance per Query Group====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2014/acs/mazurkas.precision.groups.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Individual Results Files==&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection===&lt;br /&gt;
'''Average Precision by Query'''&lt;br /&gt;
&lt;br /&gt;
'''NC2''' : [https://music-ir.org/mirex/results/2014/acs/NC2.coversong1000.avgprec.txt Ning Chen] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NC3''' : [https://music-ir.org/mirex/results/2014/acs/NC3.coversong1000.avgprec.txt Ning Chen] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Rank Lists'''&lt;br /&gt;
&lt;br /&gt;
'''NC2''' : [https://music-ir.org/mirex/results/2014/acs/NC2.coversong1000.ranklist.txt Ning Chen]&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NC3''' : [https://music-ir.org/mirex/results/2014/acs/NC3.coversong1000.ranklist.txt Ning Chen] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazurka Collection===&lt;br /&gt;
&lt;br /&gt;
'''Average Precision by Query'''&lt;br /&gt;
&lt;br /&gt;
'''YCW1''' : [https://music-ir.org/mirex/results/2013/acs/YCW1.mazurkas.avgprec.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2013/acs/CYWW1.mazurkas.avgprec.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Rank Lists'''&lt;br /&gt;
&lt;br /&gt;
'''YCW1''' : [https://music-ir.org/mirex/results/2013/acs/YCW1.mazurkas.ranklist.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2013/acs/CYWW1.mazurkas.ranklist.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
[[Category: Results]]&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Cover_Song_Identification_Results&amp;diff=10501</id>
		<title>2014:Audio Cover Song Identification Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Cover_Song_Identification_Results&amp;diff=10501"/>
		<updated>2014-10-08T17:26:45Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* Sapp's Mazuraka Collection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Audio Cover Song (ACS) Identification was evaluated against two datasets: ''Mixed Collection'' and ''Sapp's Mazurka Collection''.&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection Information===&lt;br /&gt;
This is the &amp;quot;original&amp;quot; ACS collection. Within the 1000 pieces in the Audio Cover Song database, there are embedded 30 different &amp;quot;cover songs&amp;quot; each represented by 11 different &amp;quot;versions&amp;quot; for a total of 330 audio files (16bit, monophonic, 22.05khz, wav). The &amp;quot;cover songs&amp;quot; represent a variety of genres (e.g., classical, jazz, gospel, rock, folk-rock, etc.) and the variations span a variety of styles and orchestrations. &lt;br /&gt;
&lt;br /&gt;
Using each of these cover song files in turn as the &amp;quot;seed/query&amp;quot; file, we will examine the returned lists of items for the presence of the other 10 versions of the &amp;quot;seed/query&amp;quot; file.&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazurka Collection Information ===&lt;br /&gt;
In addition to our original ACS dataset, we used the  [http://www.mazurka.org.uk/ Mazurka.org dataset] put together by Craig Sapp. We randomly chose 11 versions from 49 mazurkas and ran it as a separate ACS subtask. Systems returned a distance matrix of 539x539 from which we located the ranks of each of the associated cover versions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
    |- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; | Sub code &lt;br /&gt;
    ! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract &lt;br /&gt;
    ! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
    |-&lt;br /&gt;
    !NC2&lt;br /&gt;
    | 	NC2 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/NC2.pdf PDF] || Ning Chen&lt;br /&gt;
    |-&lt;br /&gt;
    ! NC3&lt;br /&gt;
    | 	NC3 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/NC2.pdf PDF] || Ning Chen&lt;br /&gt;
    |-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection===&lt;br /&gt;
&lt;br /&gt;
====Summary Results====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2014/acs/coversong1000.summary.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Number of Correct Covers at Rank X Returned in Top Ten==== &lt;br /&gt;
&amp;lt;csv&amp;gt;2014/acs/coversong1000.toptendist.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Average Performance per Query Group====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2014/acs/coversong1000.precision.groups.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazuraka Collection===&lt;br /&gt;
&lt;br /&gt;
====Summary results====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2014/acs/mazurkas.summary.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Number of Correct Covers at Rank X Returned in Top Ten====&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/acs/mazurkas.toptendist.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Average Performance per Query Group====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2014/acs/mazurkas.precision.groups.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Individual Results Files==&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection===&lt;br /&gt;
'''Average Precision by Query'''&lt;br /&gt;
&lt;br /&gt;
'''YCW1''' : [https://music-ir.org/mirex/results/2013/acs/CYWW1.coversong1000.avgprec.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2013/acs/YCW1.coversong1000.avgprec.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Rank Lists'''&lt;br /&gt;
&lt;br /&gt;
'''YCW1''' : [https://music-ir.org/mirex/results/2013/acs/YCW1.coversong1000.ranklist.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang]&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2013/acs/CYWW1.coversong1000.ranklist.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazurka Collection===&lt;br /&gt;
&lt;br /&gt;
'''Average Precision by Query'''&lt;br /&gt;
&lt;br /&gt;
'''YCW1''' : [https://music-ir.org/mirex/results/2013/acs/YCW1.mazurkas.avgprec.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2013/acs/CYWW1.mazurkas.avgprec.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Rank Lists'''&lt;br /&gt;
&lt;br /&gt;
'''YCW1''' : [https://music-ir.org/mirex/results/2013/acs/YCW1.mazurkas.ranklist.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2013/acs/CYWW1.mazurkas.ranklist.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
[[Category: Results]]&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Cover_Song_Identification_Results&amp;diff=10500</id>
		<title>2014:Audio Cover Song Identification Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Cover_Song_Identification_Results&amp;diff=10500"/>
		<updated>2014-10-08T17:26:08Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* Mixed Collection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Audio Cover Song (ACS) Identification was evaluated against two datasets: ''Mixed Collection'' and ''Sapp's Mazurka Collection''.&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection Information===&lt;br /&gt;
This is the &amp;quot;original&amp;quot; ACS collection. Within the 1000 pieces in the Audio Cover Song database, there are embedded 30 different &amp;quot;cover songs&amp;quot; each represented by 11 different &amp;quot;versions&amp;quot; for a total of 330 audio files (16bit, monophonic, 22.05khz, wav). The &amp;quot;cover songs&amp;quot; represent a variety of genres (e.g., classical, jazz, gospel, rock, folk-rock, etc.) and the variations span a variety of styles and orchestrations. &lt;br /&gt;
&lt;br /&gt;
Using each of these cover song files in turn as the &amp;quot;seed/query&amp;quot; file, we will examine the returned lists of items for the presence of the other 10 versions of the &amp;quot;seed/query&amp;quot; file.&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazurka Collection Information ===&lt;br /&gt;
In addition to our original ACS dataset, we used the  [http://www.mazurka.org.uk/ Mazurka.org dataset] put together by Craig Sapp. We randomly chose 11 versions from 49 mazurkas and ran it as a separate ACS subtask. Systems returned a distance matrix of 539x539 from which we located the ranks of each of the associated cover versions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
    |- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; | Sub code &lt;br /&gt;
    ! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract &lt;br /&gt;
    ! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
    |-&lt;br /&gt;
    !NC2&lt;br /&gt;
    | 	NC2 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/NC2.pdf PDF] || Ning Chen&lt;br /&gt;
    |-&lt;br /&gt;
    ! NC3&lt;br /&gt;
    | 	NC3 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/NC2.pdf PDF] || Ning Chen&lt;br /&gt;
    |-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection===&lt;br /&gt;
&lt;br /&gt;
====Summary Results====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2014/acs/coversong1000.summary.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Number of Correct Covers at Rank X Returned in Top Ten==== &lt;br /&gt;
&amp;lt;csv&amp;gt;2014/acs/coversong1000.toptendist.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Average Performance per Query Group====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2014/acs/coversong1000.precision.groups.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazuraka Collection===&lt;br /&gt;
&lt;br /&gt;
====Summary results====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2013/acs/mazurkas.summary.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Number of Correct Covers at Rank X Returned in Top Ten====&lt;br /&gt;
&amp;lt;csv&amp;gt;2013/acs/mazurkas.toptendist.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Average Performance per Query Group====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2013/acs/mazurkas.precision.groups.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Individual Results Files==&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection===&lt;br /&gt;
'''Average Precision by Query'''&lt;br /&gt;
&lt;br /&gt;
'''YCW1''' : [https://music-ir.org/mirex/results/2013/acs/CYWW1.coversong1000.avgprec.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2013/acs/YCW1.coversong1000.avgprec.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Rank Lists'''&lt;br /&gt;
&lt;br /&gt;
'''YCW1''' : [https://music-ir.org/mirex/results/2013/acs/YCW1.coversong1000.ranklist.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang]&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2013/acs/CYWW1.coversong1000.ranklist.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazurka Collection===&lt;br /&gt;
&lt;br /&gt;
'''Average Precision by Query'''&lt;br /&gt;
&lt;br /&gt;
'''YCW1''' : [https://music-ir.org/mirex/results/2013/acs/YCW1.mazurkas.avgprec.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2013/acs/CYWW1.mazurkas.avgprec.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Rank Lists'''&lt;br /&gt;
&lt;br /&gt;
'''YCW1''' : [https://music-ir.org/mirex/results/2013/acs/YCW1.mazurkas.ranklist.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2013/acs/CYWW1.mazurkas.ranklist.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
[[Category: Results]]&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Cover_Song_Identification_Results&amp;diff=10499</id>
		<title>2014:Audio Cover Song Identification Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Cover_Song_Identification_Results&amp;diff=10499"/>
		<updated>2014-10-08T16:38:38Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* General Legend */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Audio Cover Song (ACS) Identification was evaluated against two datasets: ''Mixed Collection'' and ''Sapp's Mazurka Collection''.&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection Information===&lt;br /&gt;
This is the &amp;quot;original&amp;quot; ACS collection. Within the 1000 pieces in the Audio Cover Song database, there are embedded 30 different &amp;quot;cover songs&amp;quot; each represented by 11 different &amp;quot;versions&amp;quot; for a total of 330 audio files (16bit, monophonic, 22.05khz, wav). The &amp;quot;cover songs&amp;quot; represent a variety of genres (e.g., classical, jazz, gospel, rock, folk-rock, etc.) and the variations span a variety of styles and orchestrations. &lt;br /&gt;
&lt;br /&gt;
Using each of these cover song files in turn as the &amp;quot;seed/query&amp;quot; file, we will examine the returned lists of items for the presence of the other 10 versions of the &amp;quot;seed/query&amp;quot; file.&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazurka Collection Information ===&lt;br /&gt;
In addition to our original ACS dataset, we used the  [http://www.mazurka.org.uk/ Mazurka.org dataset] put together by Craig Sapp. We randomly chose 11 versions from 49 mazurkas and ran it as a separate ACS subtask. Systems returned a distance matrix of 539x539 from which we located the ranks of each of the associated cover versions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
    |- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; | Sub code &lt;br /&gt;
    ! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract &lt;br /&gt;
    ! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
    |-&lt;br /&gt;
    !NC2&lt;br /&gt;
    | 	NC2 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/NC2.pdf PDF] || Ning Chen&lt;br /&gt;
    |-&lt;br /&gt;
    ! NC3&lt;br /&gt;
    | 	NC3 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/NC2.pdf PDF] || Ning Chen&lt;br /&gt;
    |-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection===&lt;br /&gt;
&lt;br /&gt;
====Summary Results====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2013/acs/coversong1000.summary.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Number of Correct Covers at Rank X Returned in Top Ten==== &lt;br /&gt;
&amp;lt;csv&amp;gt;2013/acs/coversong1000.toptendist.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Average Performance per Query Group====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2013/acs/coversong1000.precision.groups.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazuraka Collection===&lt;br /&gt;
&lt;br /&gt;
====Summary results====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2013/acs/mazurkas.summary.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Number of Correct Covers at Rank X Returned in Top Ten====&lt;br /&gt;
&amp;lt;csv&amp;gt;2013/acs/mazurkas.toptendist.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Average Performance per Query Group====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2013/acs/mazurkas.precision.groups.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Individual Results Files==&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection===&lt;br /&gt;
'''Average Precision by Query'''&lt;br /&gt;
&lt;br /&gt;
'''YCW1''' : [https://music-ir.org/mirex/results/2013/acs/CYWW1.coversong1000.avgprec.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2013/acs/YCW1.coversong1000.avgprec.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Rank Lists'''&lt;br /&gt;
&lt;br /&gt;
'''YCW1''' : [https://music-ir.org/mirex/results/2013/acs/YCW1.coversong1000.ranklist.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang]&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2013/acs/CYWW1.coversong1000.ranklist.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazurka Collection===&lt;br /&gt;
&lt;br /&gt;
'''Average Precision by Query'''&lt;br /&gt;
&lt;br /&gt;
'''YCW1''' : [https://music-ir.org/mirex/results/2013/acs/YCW1.mazurkas.avgprec.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2013/acs/CYWW1.mazurkas.avgprec.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Rank Lists'''&lt;br /&gt;
&lt;br /&gt;
'''YCW1''' : [https://music-ir.org/mirex/results/2013/acs/YCW1.mazurkas.ranklist.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2013/acs/CYWW1.mazurkas.ranklist.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
[[Category: Results]]&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Cover_Song_Identification_Results&amp;diff=10498</id>
		<title>2014:Audio Cover Song Identification Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Cover_Song_Identification_Results&amp;diff=10498"/>
		<updated>2014-10-08T16:37:54Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Audio Cover Song (ACS) Identification was evaluated against two datasets: ''Mixed Collection'' and ''Sapp's Mazurka Collection''.&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection Information===&lt;br /&gt;
This is the &amp;quot;original&amp;quot; ACS collection. Within the 1000 pieces in the Audio Cover Song database, there are embedded 30 different &amp;quot;cover songs&amp;quot; each represented by 11 different &amp;quot;versions&amp;quot; for a total of 330 audio files (16bit, monophonic, 22.05khz, wav). The &amp;quot;cover songs&amp;quot; represent a variety of genres (e.g., classical, jazz, gospel, rock, folk-rock, etc.) and the variations span a variety of styles and orchestrations. &lt;br /&gt;
&lt;br /&gt;
Using each of these cover song files in turn as the &amp;quot;seed/query&amp;quot; file, we will examine the returned lists of items for the presence of the other 10 versions of the &amp;quot;seed/query&amp;quot; file.&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazurka Collection Information ===&lt;br /&gt;
In addition to our original ACS dataset, we used the  [http://www.mazurka.org.uk/ Mazurka.org dataset] put together by Craig Sapp. We randomly chose 11 versions from 49 mazurkas and ran it as a separate ACS subtask. Systems returned a distance matrix of 539x539 from which we located the ranks of each of the associated cover versions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
    |- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; | Sub code &lt;br /&gt;
    ! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract &lt;br /&gt;
    ! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
    |-&lt;br /&gt;
    !NC2&lt;br /&gt;
    | 	NC2 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/NC2.pdf PDF] || Ning Chen&lt;br /&gt;
    ! NC3&lt;br /&gt;
    | 	NC3 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/NC2.pdf PDF] || Ning Chen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection===&lt;br /&gt;
&lt;br /&gt;
====Summary Results====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2013/acs/coversong1000.summary.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Number of Correct Covers at Rank X Returned in Top Ten==== &lt;br /&gt;
&amp;lt;csv&amp;gt;2013/acs/coversong1000.toptendist.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Average Performance per Query Group====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2013/acs/coversong1000.precision.groups.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazuraka Collection===&lt;br /&gt;
&lt;br /&gt;
====Summary results====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2013/acs/mazurkas.summary.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Number of Correct Covers at Rank X Returned in Top Ten====&lt;br /&gt;
&amp;lt;csv&amp;gt;2013/acs/mazurkas.toptendist.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Average Performance per Query Group====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2013/acs/mazurkas.precision.groups.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Individual Results Files==&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection===&lt;br /&gt;
'''Average Precision by Query'''&lt;br /&gt;
&lt;br /&gt;
'''YCW1''' : [https://music-ir.org/mirex/results/2013/acs/CYWW1.coversong1000.avgprec.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2013/acs/YCW1.coversong1000.avgprec.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Rank Lists'''&lt;br /&gt;
&lt;br /&gt;
'''YCW1''' : [https://music-ir.org/mirex/results/2013/acs/YCW1.coversong1000.ranklist.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang]&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2013/acs/CYWW1.coversong1000.ranklist.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazurka Collection===&lt;br /&gt;
&lt;br /&gt;
'''Average Precision by Query'''&lt;br /&gt;
&lt;br /&gt;
'''YCW1''' : [https://music-ir.org/mirex/results/2013/acs/YCW1.mazurkas.avgprec.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2013/acs/CYWW1.mazurkas.avgprec.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Rank Lists'''&lt;br /&gt;
&lt;br /&gt;
'''YCW1''' : [https://music-ir.org/mirex/results/2013/acs/YCW1.mazurkas.ranklist.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2013/acs/CYWW1.mazurkas.ranklist.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
[[Category: Results]]&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:MIREX2014_Results&amp;diff=10497</id>
		<title>2014:MIREX2014 Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:MIREX2014_Results&amp;diff=10497"/>
		<updated>2014-10-08T16:30:03Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* Other Tasks */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Results by Task ==&lt;br /&gt;
&lt;br /&gt;
===Other Tasks===&lt;br /&gt;
&lt;br /&gt;
* Music Structure Segmentation Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/mrx09/ MIREX09 dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/mrx10_1/ RWC dataset - Quaero (MIREX10) Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/mrx10_2/ RWC dataset - Original RWC Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/sal/ SALAMI dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2014/results/aod/ Audio Onset Detection Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2014:Audio Cover Song Identification Results]]&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:MIREX2014_Results&amp;diff=10476</id>
		<title>2014:MIREX2014 Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:MIREX2014_Results&amp;diff=10476"/>
		<updated>2014-09-24T18:20:36Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Results by Task ==&lt;br /&gt;
&lt;br /&gt;
===Other Tasks===&lt;br /&gt;
&lt;br /&gt;
* Music Structure Segmentation Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/mrx09/ MIREX09 dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/mrx10_1/ RWC dataset - Quaero (MIREX10) Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/mrx10_2/ RWC dataset - Original RWC Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/sal/ SALAMI dataset] &amp;amp;nbsp;&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=MIREX_2014_Submission_Instructions&amp;diff=10364</id>
		<title>MIREX 2014 Submission Instructions</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=MIREX_2014_Submission_Instructions&amp;diff=10364"/>
		<updated>2014-08-14T22:56:51Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* Submission System URL */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Some Reminders==&lt;br /&gt;
* Be sure to read through the rest of this page&lt;br /&gt;
* Be sure to read through the [[2014:Main_Page| 2014 MIREX Home]] page&lt;br /&gt;
* Be sure to read through the task pages for which you are submitting&lt;br /&gt;
* Be sure to follow the [[2009:Best Coding Practices for MIREX | Best Coding Practices for MIREX]]&lt;br /&gt;
&lt;br /&gt;
==Begin with the Video Tutorial==&lt;br /&gt;
Go watch the [https://www.music-ir.org//mirex/2010/submission_tutorial/ MIREX 2012 Submission System Video Tutorial]&lt;br /&gt;
&lt;br /&gt;
==Basic Steps==&lt;br /&gt;
&lt;br /&gt;
# Tell us about yourself by creating an identity profile. If you are participating under multiple affiliations, repeat this step for each affiliation.&lt;br /&gt;
# Create a submission record. You'll need to add all your contributors. This is easiest if they have also registered and completed step 1, but you can create profiles for them when you create your submission.&lt;br /&gt;
# Upload your submission via SFTP to the dropbox. Specific instructions are given after you complete your submission.&lt;br /&gt;
# Upload your abstract via the webform.&lt;br /&gt;
&lt;br /&gt;
==Very Important Things to Note==&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;i&amp;gt;NOTA BENE&amp;lt;/i&amp;gt;: We are REQUIRING that &amp;lt;b&amp;gt;EACH&amp;lt;/b&amp;gt; person involved in a MIREX 2014 submission MUST create an identity for themselves on the submission system. Identities are important to us as they help us better manage the submissions. Even if a colleague of yours is going to do the actual submitting, you still need to create an identity for yourself in the system.&lt;br /&gt;
# When you create your personal identity in the system, review your input &amp;lt;b&amp;gt;carefully&amp;lt;/b&amp;gt; for errors! Once your personal identity is created and the &amp;quot;submit&amp;quot; button is pressed, it is not possible for you to edit your identity information.&lt;br /&gt;
# If you are submitting on behalf of a team you will need to make sure that the identity for each team member is associated with your submission. Your first job is to find out if they have already created identities in the system by using the search tool. If they have, simply click on the identity to add them. &lt;br /&gt;
# If you cannot find an identity for one or more of your colleagues, the best way to proceed is get them to create an identity for themselves on the system. This way, they are responsible for the accuracy of their information.&lt;br /&gt;
# If your colleague, for some reason, cannot create an identity for themselves, you will need to create an identity for them. Do your best to create as accurate an identity for them as possible. &lt;br /&gt;
# &amp;lt;b&amp;gt;If you plan to submit more than one algorithm or algorithm variant to a given task, &amp;lt;i&amp;gt;EACH&amp;lt;/i&amp;gt; algorithm or variant needs its own complete submission to be made including the README and binary bundle upload&amp;lt;/b&amp;gt;. Each package will be given its own unique identifier. Tell us in the README the priority of a given algorithm in case we have to limit a task to only one or two algorithms/variants per submitter/team.&lt;br /&gt;
&lt;br /&gt;
==Getting Help==&lt;br /&gt;
If things do not work or if you make a major mistake or if you are simply confused, please contact the MIREX team at mirex [at] imirsel.org.&lt;br /&gt;
&lt;br /&gt;
==Participant Identity Information Fields==&lt;br /&gt;
(* = required field)&lt;br /&gt;
*First name*:&lt;br /&gt;
*Last name*:&lt;br /&gt;
*Organization*:&lt;br /&gt;
*Department:&lt;br /&gt;
*Unit/Lab:&lt;br /&gt;
*URL*:&lt;br /&gt;
*Title*:&lt;br /&gt;
*From (year)*: To:&lt;br /&gt;
*Email:&lt;br /&gt;
*Street Address:&lt;br /&gt;
*Street Address 2:&lt;br /&gt;
*Street Address 3:&lt;br /&gt;
*City:&lt;br /&gt;
*State, Region:&lt;br /&gt;
*Postal Code:&lt;br /&gt;
*Country:&lt;br /&gt;
&lt;br /&gt;
==Extended Abstract Details==&lt;br /&gt;
The extended abstracts provide the outside world with a general understanding of what each submission is trying to accomplish. The extended abstracts need NOT be cutting edge/never-before-published materials. The extended abstracts will be revised by the authors after the data has been collected (to allow for commentary on results data); however, we at MIREX still need the first-pass drafts at submission time to help us understand what is happening in the submission. Like last year we will post the final versions of the extended abstracts as part of the MIREX 2014 results page. &lt;br /&gt;
&lt;br /&gt;
The MIREX 2014 extended abstracts:&lt;br /&gt;
# Are two to four pages long.&lt;br /&gt;
# Must conform to the guidelines in the following templates: [https://www.music-ir.org/mirex/templates/2010/MIREX2010_tex_template.zip LaTeX template] [https://www.music-ir.org/mirex/templates/2010/MIREX2010_doc_template.zip Word template] &lt;br /&gt;
# Must be submitted in PDF format.&lt;br /&gt;
# Should include, if exists, references to other publications about your work (yes, self-reference is encouraged!)&lt;br /&gt;
# Should have the same general look and feel as these examples from last year:&lt;br /&gt;
&lt;br /&gt;
==Submission System URL==&lt;br /&gt;
The MIREX 2014 Submission System can be found at: https://www.music-ir.org/mirex/sub/ .&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=MIREX_2014_Submission_Instructions&amp;diff=10363</id>
		<title>MIREX 2014 Submission Instructions</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=MIREX_2014_Submission_Instructions&amp;diff=10363"/>
		<updated>2014-08-14T22:56:43Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* Submission System URL */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Some Reminders==&lt;br /&gt;
* Be sure to read through the rest of this page&lt;br /&gt;
* Be sure to read through the [[2014:Main_Page| 2014 MIREX Home]] page&lt;br /&gt;
* Be sure to read through the task pages for which you are submitting&lt;br /&gt;
* Be sure to follow the [[2009:Best Coding Practices for MIREX | Best Coding Practices for MIREX]]&lt;br /&gt;
&lt;br /&gt;
==Begin with the Video Tutorial==&lt;br /&gt;
Go watch the [https://www.music-ir.org//mirex/2010/submission_tutorial/ MIREX 2012 Submission System Video Tutorial]&lt;br /&gt;
&lt;br /&gt;
==Basic Steps==&lt;br /&gt;
&lt;br /&gt;
# Tell us about yourself by creating an identity profile. If you are participating under multiple affiliations, repeat this step for each affiliation.&lt;br /&gt;
# Create a submission record. You'll need to add all your contributors. This is easiest if they have also registered and completed step 1, but you can create profiles for them when you create your submission.&lt;br /&gt;
# Upload your submission via SFTP to the dropbox. Specific instructions are given after you complete your submission.&lt;br /&gt;
# Upload your abstract via the webform.&lt;br /&gt;
&lt;br /&gt;
==Very Important Things to Note==&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;i&amp;gt;NOTA BENE&amp;lt;/i&amp;gt;: We are REQUIRING that &amp;lt;b&amp;gt;EACH&amp;lt;/b&amp;gt; person involved in a MIREX 2014 submission MUST create an identity for themselves on the submission system. Identities are important to us as they help us better manage the submissions. Even if a colleague of yours is going to do the actual submitting, you still need to create an identity for yourself in the system.&lt;br /&gt;
# When you create your personal identity in the system, review your input &amp;lt;b&amp;gt;carefully&amp;lt;/b&amp;gt; for errors! Once your personal identity is created and the &amp;quot;submit&amp;quot; button is pressed, it is not possible for you to edit your identity information.&lt;br /&gt;
# If you are submitting on behalf of a team you will need to make sure that the identity for each team member is associated with your submission. Your first job is to find out if they have already created identities in the system by using the search tool. If they have, simply click on the identity to add them. &lt;br /&gt;
# If you cannot find an identity for one or more of your colleagues, the best way to proceed is get them to create an identity for themselves on the system. This way, they are responsible for the accuracy of their information.&lt;br /&gt;
# If your colleague, for some reason, cannot create an identity for themselves, you will need to create an identity for them. Do your best to create as accurate an identity for them as possible. &lt;br /&gt;
# &amp;lt;b&amp;gt;If you plan to submit more than one algorithm or algorithm variant to a given task, &amp;lt;i&amp;gt;EACH&amp;lt;/i&amp;gt; algorithm or variant needs its own complete submission to be made including the README and binary bundle upload&amp;lt;/b&amp;gt;. Each package will be given its own unique identifier. Tell us in the README the priority of a given algorithm in case we have to limit a task to only one or two algorithms/variants per submitter/team.&lt;br /&gt;
&lt;br /&gt;
==Getting Help==&lt;br /&gt;
If things do not work or if you make a major mistake or if you are simply confused, please contact the MIREX team at mirex [at] imirsel.org.&lt;br /&gt;
&lt;br /&gt;
==Participant Identity Information Fields==&lt;br /&gt;
(* = required field)&lt;br /&gt;
*First name*:&lt;br /&gt;
*Last name*:&lt;br /&gt;
*Organization*:&lt;br /&gt;
*Department:&lt;br /&gt;
*Unit/Lab:&lt;br /&gt;
*URL*:&lt;br /&gt;
*Title*:&lt;br /&gt;
*From (year)*: To:&lt;br /&gt;
*Email:&lt;br /&gt;
*Street Address:&lt;br /&gt;
*Street Address 2:&lt;br /&gt;
*Street Address 3:&lt;br /&gt;
*City:&lt;br /&gt;
*State, Region:&lt;br /&gt;
*Postal Code:&lt;br /&gt;
*Country:&lt;br /&gt;
&lt;br /&gt;
==Extended Abstract Details==&lt;br /&gt;
The extended abstracts provide the outside world with a general understanding of what each submission is trying to accomplish. The extended abstracts need NOT be cutting edge/never-before-published materials. The extended abstracts will be revised by the authors after the data has been collected (to allow for commentary on results data); however, we at MIREX still need the first-pass drafts at submission time to help us understand what is happening in the submission. Like last year we will post the final versions of the extended abstracts as part of the MIREX 2014 results page. &lt;br /&gt;
&lt;br /&gt;
The MIREX 2014 extended abstracts:&lt;br /&gt;
# Are two to four pages long.&lt;br /&gt;
# Must conform to the guidelines in the following templates: [https://www.music-ir.org/mirex/templates/2010/MIREX2010_tex_template.zip LaTeX template] [https://www.music-ir.org/mirex/templates/2010/MIREX2010_doc_template.zip Word template] &lt;br /&gt;
# Must be submitted in PDF format.&lt;br /&gt;
# Should include, if exists, references to other publications about your work (yes, self-reference is encouraged!)&lt;br /&gt;
# Should have the same general look and feel as these examples from last year:&lt;br /&gt;
&lt;br /&gt;
==Submission System URL==&lt;br /&gt;
The MIREX 2014 Submission System can be found at: https://www.music-ir.org/mirex/sub/ . &amp;lt;br/&amp;gt;&lt;br /&gt;
We will let you know once the submission system is ready. Thanks!&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:GC14UX&amp;diff=10180</id>
		<title>2014:GC14UX</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:GC14UX&amp;diff=10180"/>
		<updated>2014-07-01T02:24:00Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: /* Task */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Grand Challenge on User Experience 2014}}&lt;br /&gt;
=Welcome to GC14UX=&lt;br /&gt;
Grand Challenge on User Experience 2014&lt;br /&gt;
=Purpose=&lt;br /&gt;
Holistic evaluation of user experience in interacting with user-serving MIR systems.&lt;br /&gt;
=Goals=&lt;br /&gt;
# to inspire the development of complete MIR systems.&lt;br /&gt;
# to promote the notion of user experience as a first-class research objective in the MIR community.&lt;br /&gt;
=Dataset=&lt;br /&gt;
A set of music 10,000 music audio tracks is provided for the GC14UX. It will be a subset of tracks drawn from the [http://www.jamendo.com/en/welcome Jamendo collection's] CC-BY licensed works.&lt;br /&gt;
&lt;br /&gt;
The Jamendo collection contains music in a variety of genres and moods, but is mostly unknown to most listeners. This will mitigate against the possible user experience bias induced by the differential presence (or absence) of popular or known music within the participating systems. &lt;br /&gt;
&lt;br /&gt;
As of May 20, 2014, the Jamendo collection contains 14742 tracks with the [http://creativecommons.org/licenses/by/3.0/ CC-BY license]. The CC-BY license allows others to distribute, modify, optimize and use your work as a basis, even commercially, as long as you give credit for the original creation. This is one of the most permissive licenses possible.&lt;br /&gt;
&lt;br /&gt;
The 10,000 tracks in GC14UX will be sampled (w.r.t. maximizing music variety) from the Jamendo collection with CC-BY license and made available for participants (system developers) to download to build their systems. &lt;br /&gt;
=Participating Systems=&lt;br /&gt;
Unlike conventional MIREX tasks, participants are not asked to submit their systems. Instead, the systems will be hosted by their developers. All participating systems need to be constructed as websites accessible to users through normal web browsers. Participating teams will submit the URLs to their systems to the GC14UX team.&lt;br /&gt;
&lt;br /&gt;
==Potential Participants==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! (Cool) Team Name&lt;br /&gt;
! Name(s)&lt;br /&gt;
! Email(s)&lt;br /&gt;
|-&lt;br /&gt;
| The MIR UX Master&lt;br /&gt;
| Dr. MIR&lt;br /&gt;
| mir@domain.com&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
=Participating Systems=&lt;br /&gt;
Unlike conventional MIREX tasks, participants are not asked to submit their systems. Instead, the systems will be hosted by their developers. All participating systems need to be constructed as websites accessible to users through normal web browsers. Participating teams will submit the URLs to their systems to the GC14UX team.&lt;br /&gt;
&lt;br /&gt;
To ensure a consistent experience, evaluators will see participating systems in fixed size window: '''1024x768'''. Please test your system for this screen size.&lt;br /&gt;
=Evaluation=&lt;br /&gt;
==Task==&lt;br /&gt;
&lt;br /&gt;
Evaluators are given the following task:&lt;br /&gt;
&lt;br /&gt;
''You are creating a short video about a memorable occasion that happened to you recently, and you need to find some songs to use as background music.''&lt;br /&gt;
&lt;br /&gt;
==Criteria==&lt;br /&gt;
&lt;br /&gt;
''Note that the evaluation criteria or its descriptions may change in the months leading up to the submission deadline, as we test it and work to improve it.''&lt;br /&gt;
&lt;br /&gt;
* '''Overall satisfaction''': Overall, how pleasurable do you find the experience of using this system?&lt;br /&gt;
Extremely unsatisfactory / Unsatisfactory / Slightly unsatisfactory / Neutral / Slightly satisfactory / Satisfactory / Extremely satisfactory&lt;br /&gt;
&lt;br /&gt;
* '''Learnability''': How easy was it to figure out how to use the system? &lt;br /&gt;
Extremely difficult / Difficult / Slightly difficult / Neutral / Slightly easy / Easy / Extremely easy&lt;br /&gt;
&lt;br /&gt;
* '''Robustness''': How good is the system’s ability to warn you when you’re about to make a mistake and allow you to recover?&lt;br /&gt;
Extremely Poor / Poor / Slightly Poor / Neutral / Slightly Good / Good / Extremely Good ||| Not Applicable&lt;br /&gt;
&lt;br /&gt;
* '''Affordances''': How well does the system allow you to perform what you want to do?&lt;br /&gt;
&lt;br /&gt;
* '''Presentation''': How well does the system communicate what’s going on? (How well do you feel the system informs you of its status? Can you clearly understand the labels and words used in the system? How visible are all of your options and menus when you use this system?)&lt;br /&gt;
&lt;br /&gt;
* '''Aesthetics''': How good is the design? (Is it aesthetically pleasing?)&lt;br /&gt;
Extremely Poor / Poor / Slightly Poor / Neutral / Slightly Good / Good / Extremely Good&lt;br /&gt;
&lt;br /&gt;
* '''Feedback''' (Optional): An open-ended question is provided but is optional for users to give feedback if they wish to do so. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This simplicity is because: 1) the GC14UX is all about how users perceive their experiences of the systems. We intend to capture the user perceptions in a minimally intrusive manner and not to burden the users/evaluators with too many questions or required data inputs. 2) more data capturing opportunities will distract from the real user experience. &lt;br /&gt;
&lt;br /&gt;
==Evaluation mechanism==&lt;br /&gt;
The GC14UX team will provide a set of evaluation forms which wrap around the participating system. In other words, the evaluation system will offer forms for scoring the participating system, and embed the system within an iframe.&lt;br /&gt;
&lt;br /&gt;
==Evaluators==&lt;br /&gt;
Evaluators will be users aged 18 and above. For this round, evaluators will be drawn primarily from the MIR community through solicitations via the ISMIR-community mailing list. The evaluation webforms developed by the GC14UX team will ensure all participating systems will get equal number of evaluators. &lt;br /&gt;
&lt;br /&gt;
==Evaluation results==&lt;br /&gt;
Statistics of the scores given by all evaluators will be reported: mean, average deviation. Meaningful text comments from the evaluators will also be reported. &lt;br /&gt;
&lt;br /&gt;
=Wireframes=&lt;br /&gt;
[[File:GCUX wireframe evaluation.png|900px]]&lt;br /&gt;
&lt;br /&gt;
=Important Dates=&lt;br /&gt;
&lt;br /&gt;
*July 1: announce the GC&lt;br /&gt;
*Sep. 21st: deadline for system submission  &lt;br /&gt;
*Sep. 28th: start the evaluation&lt;br /&gt;
*Oct. 20th: close the evaluation system&lt;br /&gt;
*Oct. 27th: announce the results&lt;br /&gt;
*Oct. 31st: MIREX and GC session in ISMIR2014&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:GC14UX&amp;diff=10179</id>
		<title>2014:GC14UX</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:GC14UX&amp;diff=10179"/>
		<updated>2014-07-01T02:15:47Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Grand Challenge on User Experience 2014}}&lt;br /&gt;
=Welcome to GC14UX=&lt;br /&gt;
Grand Challenge on User Experience 2014&lt;br /&gt;
=Purpose=&lt;br /&gt;
Holistic evaluation of user experience in interacting with user-serving MIR systems.&lt;br /&gt;
=Goals=&lt;br /&gt;
# to inspire the development of complete MIR systems.&lt;br /&gt;
# to promote the notion of user experience as a first-class research objective in the MIR community.&lt;br /&gt;
=Dataset=&lt;br /&gt;
A set of music 10,000 music audio tracks is provided for the GC14UX. It will be a subset of tracks drawn from the [http://www.jamendo.com/en/welcome Jamendo collection's] CC-BY licensed works.&lt;br /&gt;
&lt;br /&gt;
The Jamendo collection contains music in a variety of genres and moods, but is mostly unknown to most listeners. This will mitigate against the possible user experience bias induced by the differential presence (or absence) of popular or known music within the participating systems. &lt;br /&gt;
&lt;br /&gt;
As of May 20, 2014, the Jamendo collection contains 14742 tracks with the [http://creativecommons.org/licenses/by/3.0/ CC-BY license]. The CC-BY license allows others to distribute, modify, optimize and use your work as a basis, even commercially, as long as you give credit for the original creation. This is one of the most permissive licenses possible.&lt;br /&gt;
&lt;br /&gt;
The 10,000 tracks in GC14UX will be sampled (w.r.t. maximizing music variety) from the Jamendo collection with CC-BY license and made available for participants (system developers) to download to build their systems. &lt;br /&gt;
=Participating Systems=&lt;br /&gt;
Unlike conventional MIREX tasks, participants are not asked to submit their systems. Instead, the systems will be hosted by their developers. All participating systems need to be constructed as websites accessible to users through normal web browsers. Participating teams will submit the URLs to their systems to the GC14UX team.&lt;br /&gt;
&lt;br /&gt;
==Potential Participants==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! (Cool) Team Name&lt;br /&gt;
! Name(s)&lt;br /&gt;
! Email(s)&lt;br /&gt;
|-&lt;br /&gt;
| The MIR UX Master&lt;br /&gt;
| Dr. MIR&lt;br /&gt;
| mir@domain.com&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
=Participating Systems=&lt;br /&gt;
Unlike conventional MIREX tasks, participants are not asked to submit their systems. Instead, the systems will be hosted by their developers. All participating systems need to be constructed as websites accessible to users through normal web browsers. Participating teams will submit the URLs to their systems to the GC14UX team.&lt;br /&gt;
&lt;br /&gt;
To ensure a consistent experience, evaluators will see participating systems in fixed size window: '''1024x768'''. Please test your system for this screen size.&lt;br /&gt;
=Evaluation=&lt;br /&gt;
==Task==&lt;br /&gt;
&lt;br /&gt;
Evaluators are given the following task:&lt;br /&gt;
&lt;br /&gt;
''You are creating a short video about what you did this summer and you need to find some songs to use as background music.''&lt;br /&gt;
&lt;br /&gt;
==Criteria==&lt;br /&gt;
&lt;br /&gt;
''Note that the evaluation criteria or its descriptions may change in the months leading up to the submission deadline, as we test it and work to improve it.''&lt;br /&gt;
&lt;br /&gt;
* '''Overall satisfaction''': Overall, how pleasurable do you find the experience of using this system?&lt;br /&gt;
Extremely unsatisfactory / Unsatisfactory / Slightly unsatisfactory / Neutral / Slightly satisfactory / Satisfactory / Extremely satisfactory&lt;br /&gt;
&lt;br /&gt;
* '''Learnability''': How easy was it to figure out how to use the system? &lt;br /&gt;
Extremely difficult / Difficult / Slightly difficult / Neutral / Slightly easy / Easy / Extremely easy&lt;br /&gt;
&lt;br /&gt;
* '''Robustness''': How good is the system’s ability to warn you when you’re about to make a mistake and allow you to recover?&lt;br /&gt;
Extremely Poor / Poor / Slightly Poor / Neutral / Slightly Good / Good / Extremely Good ||| Not Applicable&lt;br /&gt;
&lt;br /&gt;
* '''Affordances''': How well does the system allow you to perform what you want to do?&lt;br /&gt;
&lt;br /&gt;
* '''Presentation''': How well does the system communicate what’s going on? (How well do you feel the system informs you of its status? Can you clearly understand the labels and words used in the system? How visible are all of your options and menus when you use this system?)&lt;br /&gt;
&lt;br /&gt;
* '''Aesthetics''': How good is the design? (Is it aesthetically pleasing?)&lt;br /&gt;
Extremely Poor / Poor / Slightly Poor / Neutral / Slightly Good / Good / Extremely Good&lt;br /&gt;
&lt;br /&gt;
* '''Feedback''' (Optional): An open-ended question is provided but is optional for users to give feedback if they wish to do so. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This simplicity is because: 1) the GC14UX is all about how users perceive their experiences of the systems. We intend to capture the user perceptions in a minimally intrusive manner and not to burden the users/evaluators with too many questions or required data inputs. 2) more data capturing opportunities will distract from the real user experience. &lt;br /&gt;
&lt;br /&gt;
==Evaluation mechanism==&lt;br /&gt;
The GC14UX team will provide a set of evaluation forms which wrap around the participating system. In other words, the evaluation system will offer forms for scoring the participating system, and embed the system within an iframe.&lt;br /&gt;
&lt;br /&gt;
==Evaluators==&lt;br /&gt;
Evaluators will be users aged 18 and above. For this round, evaluators will be drawn primarily from the MIR community through solicitations via the ISMIR-community mailing list. The evaluation webforms developed by the GC14UX team will ensure all participating systems will get equal number of evaluators. &lt;br /&gt;
&lt;br /&gt;
==Evaluation results==&lt;br /&gt;
Statistics of the scores given by all evaluators will be reported: mean, average deviation. Meaningful text comments from the evaluators will also be reported. &lt;br /&gt;
&lt;br /&gt;
=Wireframes=&lt;br /&gt;
[[File:GCUX wireframe evaluation.png|900px]]&lt;br /&gt;
&lt;br /&gt;
=Important Dates=&lt;br /&gt;
&lt;br /&gt;
*July 1: announce the GC&lt;br /&gt;
*Sep. 21st: deadline for system submission  &lt;br /&gt;
*Sep. 28th: start the evaluation&lt;br /&gt;
*Oct. 20th: close the evaluation system&lt;br /&gt;
*Oct. 27th: announce the results&lt;br /&gt;
*Oct. 31st: MIREX and GC session in ISMIR2014&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:GC14UX&amp;diff=10177</id>
		<title>2014:GC14UX</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:GC14UX&amp;diff=10177"/>
		<updated>2014-07-01T02:14:20Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: Update to newest version&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Grand Challenge on User Experience 2014}}&lt;br /&gt;
=Welcome to GC14UX=&lt;br /&gt;
Grand Challenge on User Experience 2014&lt;br /&gt;
=Purpose=&lt;br /&gt;
Holistic evaluation of user experience in interacting with user-serving MIR systems.&lt;br /&gt;
=Goals=&lt;br /&gt;
# to inspire the development of complete MIR systems.&lt;br /&gt;
# to promote the notion of user experience as a first-class research objective in the MIR community.&lt;br /&gt;
=Dataset=&lt;br /&gt;
A set of music 10,000 music audio tracks is provided for the GC14UX. It will be a subset of tracks drawn from the [http://www.jamendo.com/en/welcome Jamendo collection's] CC-BY licensed works.&lt;br /&gt;
&lt;br /&gt;
The Jamendo collection contains music in a variety of genres and moods, but is mostly unknown to most listeners. This will mitigate against the possible user experience bias induced by the differential presence (or absence) of popular or known music within the participating systems. &lt;br /&gt;
&lt;br /&gt;
As of May 20, 2014, the Jamendo collection contains 14742 tracks with the [http://creativecommons.org/licenses/by/3.0/ CC-BY license]. The CC-BY license allows others to distribute, modify, optimize and use your work as a basis, even commercially, as long as you give credit for the original creation. This is one of the most permissive licenses possible.&lt;br /&gt;
&lt;br /&gt;
The 10,000 tracks in GC14UX will be sampled (w.r.t. maximizing music variety) from the Jamendo collection with CC-BY license and made available for participants (system developers) to download to build their systems. &lt;br /&gt;
=Participating Systems=&lt;br /&gt;
Unlike conventional MIREX tasks, participants are not asked to submit their systems. Instead, the systems will be hosted by their developers. All participating systems need to be constructed as websites accessible to users through normal web browsers. Participating teams will submit the URLs to their systems to the GC14UX team.&lt;br /&gt;
&lt;br /&gt;
==Potential Participants==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! (Cool) Team Name&lt;br /&gt;
! Name(s)&lt;br /&gt;
! Email(s)&lt;br /&gt;
|-&lt;br /&gt;
| The MIR UX Master&lt;br /&gt;
| Dr. MIR&lt;br /&gt;
| mir@domain.com&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=Participating Systems=&lt;br /&gt;
Unlike conventional MIREX tasks, participants are not asked to submit their systems. Instead, the systems will be hosted by their developers. All participating systems need to be constructed as websites accessible to users through normal web browsers. Participating teams will submit the URLs to their systems to the GC14UX team.&lt;br /&gt;
&lt;br /&gt;
To ensure a consistent experience, evaluators will see participating systems in fixed size window: '''1024x768'''. Please test your system for this screen size.&lt;br /&gt;
 &lt;br /&gt;
=Evaluation=&lt;br /&gt;
==Task==&lt;br /&gt;
&lt;br /&gt;
Evaluators are given the following task:&lt;br /&gt;
&lt;br /&gt;
''You are creating a short video about what you did this summer and you need to find some songs to use as background music.''&lt;br /&gt;
&lt;br /&gt;
==Criteria==&lt;br /&gt;
&lt;br /&gt;
''Note that the evaluation criteria or its descriptions may change in the months leading up to the submission deadline, as we test it and work to improve it.''&lt;br /&gt;
&lt;br /&gt;
* '''Overall satisfaction''': Overall, how pleasurable do you find the experience of using this system?&lt;br /&gt;
Extremely unsatisfactory / Unsatisfactory / Slightly unsatisfactory / Neutral / Slightly satisfactory / Satisfactory / Extremely satisfactory&lt;br /&gt;
&lt;br /&gt;
* '''Learnability''': How easy was it to figure out how to use the system? &lt;br /&gt;
Extremely difficult / Difficult / Slightly difficult / Neutral / Slightly easy / Easy / Extremely easy&lt;br /&gt;
&lt;br /&gt;
* '''Robustness''': How good is the system’s ability to warn you when you’re about to make a mistake and allow you to recover?&lt;br /&gt;
Extremely Poor / Poor / Slightly Poor / Neutral / Slightly Good / Good / Extremely Good ||| Not Applicable&lt;br /&gt;
&lt;br /&gt;
* ‘’’Affordances’’’: How well does the system allow you to perform what you want to do?&lt;br /&gt;
&lt;br /&gt;
* '''Presentation''': How well does the system communicate what’s going on? (How well do you feel the system informs you of its status? Can you clearly understand the labels and words used in the system? How visible are all of your options and menus when you use this system?)&lt;br /&gt;
&lt;br /&gt;
* '''Aesthetics''': How good is the design? (Is it aesthetically pleasing?)&lt;br /&gt;
Extremely Poor / Poor / Slightly Poor / Neutral / Slightly Good / Good / Extremely Good&lt;br /&gt;
&lt;br /&gt;
* '''Feedback''' (Optional): An open-ended question is provided but is optional for users to give feedback if they wish to do so. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This simplicity is because: 1) the GC14UX is all about how users perceive their experiences of the systems. We intend to capture the user perceptions in a minimally intrusive manner and not to burden the users/evaluators with too many questions or required data inputs. 2) more data capturing opportunities will distract from the real user experience. &lt;br /&gt;
&lt;br /&gt;
==Evaluation mechanism==&lt;br /&gt;
The GC14UX team will provide a set of evaluation forms which wrap around the participating system. In other words, the evaluation system will offer forms for scoring the participating system, and embed the system within an iframe.&lt;br /&gt;
&lt;br /&gt;
==Evaluators==&lt;br /&gt;
Evaluators will be users aged 18 and above. For this round, evaluators will be drawn primarily from the MIR community through solicitations via the ISMIR-community mailing list. The evaluation webforms developed by the GC14UX team will ensure all participating systems will get equal number of evaluators. &lt;br /&gt;
&lt;br /&gt;
==Evaluation results==&lt;br /&gt;
Statistics of the scores given by all evaluators will be reported: mean, average deviation. Meaningful text comments from the evaluators will also be reported. &lt;br /&gt;
&lt;br /&gt;
=Wireframes=&lt;br /&gt;
[[File:GCUX wireframe evaluation.png|900px]]&lt;br /&gt;
&lt;br /&gt;
=Important Dates=&lt;br /&gt;
&lt;br /&gt;
*July 1: announce the GC&lt;br /&gt;
*Sep. 21st: deadline for system submission  &lt;br /&gt;
*Sep. 28th: start the evaluation&lt;br /&gt;
*Oct. 20th: close the evaluation system&lt;br /&gt;
*Oct. 27th: announce the results&lt;br /&gt;
*Oct. 31st: MIREX and GC session in ISMIR2014&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:GCUX_wireframe_2014_06_16.png&amp;diff=10124</id>
		<title>File:GCUX wireframe 2014 06 16.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:GCUX_wireframe_2014_06_16.png&amp;diff=10124"/>
		<updated>2014-06-25T19:36:47Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: uploaded a new version of &amp;amp;quot;File:GCUX wireframe 2014 06 16.png&amp;amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:MIREX2013_Results&amp;diff=9863</id>
		<title>2013:MIREX2013 Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:MIREX2013_Results&amp;diff=9863"/>
		<updated>2013-11-04T22:47:44Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==OVERALL RESULTS POSTERS &amp;lt;!--(First Version: Will need updating as last runs are completed)--&amp;gt;==&lt;br /&gt;
&lt;br /&gt;
This page is under construction. &lt;br /&gt;
&lt;br /&gt;
[https://www.music-ir.org/mirex/results/2013/mirex_2013_poster.pdf MIREX 2013 Overall Results Posters (PDF)]&lt;br /&gt;
&lt;br /&gt;
==Results by Task ==&lt;br /&gt;
&lt;br /&gt;
===Train-Test Task Set===&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2013/results/act/composer_report/ Audio Classical Composer Identification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2013/results/act/latin_report/ Audio Latin Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2013/results/act/mood_report/index.html Audio Music Mood Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2013/results/act/mixed_report/ Audio Mixed Popular Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
===Other Tasks===&lt;br /&gt;
&lt;br /&gt;
* Audio Beat Tracking Results &lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/abt/dav/ DAV Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/abt/maz/ MAZ Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/abt/mck/ MCK Dataset] &amp;amp;nbsp;&lt;br /&gt;
* Audio Chord Detection Results (will add more results soon)&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ace/mrx09/index.html MIREX Dataset]  &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ace/bill/index.html McGill Dataset]  &amp;amp;nbsp;&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2013/results/akd/ Audio Key Detection Results] &amp;amp;nbsp;&lt;br /&gt;
* Audio Melody Extraction Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ame/adc04/  ADC04 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ame/mrx05/ MIREX05 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ame/ind08/ INDIAN08 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ame/mrx09_0db/ MIREX09 0dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ame/mrx09_m5db/ MIREX09 -5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ame/mrx09_p5db/ MIREX09 +5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
* [[2013:Audio_Music_Similarity_and_Retrieval_Results | Audio Music Similarity and Retrieval Results]] &lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2013/results/aod/ Audio Onset Detection Results] &amp;amp;nbsp;&lt;br /&gt;
* Audio Tag Classification Results&lt;br /&gt;
** Major Miner Tag dataset&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/atg/subtask1_report/bin/ Binary relevance (classification evaluation)] &amp;amp;nbsp;&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/atg/subtask1_report/aff/ Affinity estimation evaluation] &amp;amp;nbsp;&lt;br /&gt;
** Mood Tag dataset&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/atg/subtask2_report/bin/ Binary relevance (classification evaluation)] &amp;amp;nbsp;&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/atg/subtask2_report/aff/ Affinity estimation evaluation] &amp;amp;nbsp;&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ate/ Audio Tempo Estimation Results] &amp;amp;nbsp;&lt;br /&gt;
* [[2013:Multiple_Fundamental_Frequency_Estimation_&amp;amp;_Tracking_Results | Multiple Fundamental Frequency Estimation &amp;amp; Tracking Results]]&lt;br /&gt;
* Music Structure Segmentation Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/struct/mrx09/ MIREX09 dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/struct/mrx10_1/ RWC dataset - Quaero (MIREX10) Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/struct/mrx10_2/ RWC dataset - Original RWC Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/struct/sal/ SALAMI dataset] &amp;amp;nbsp;&lt;br /&gt;
* Query-by-Singing/Humming Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbsh/qbsh_task1_hidden/  Hidden Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbsh/qbsh_task1a_jang/  Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbsh/qbsh_task1b_thinkit/ ThinkIt Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbsh/qbsh_task1c_ioacas/ IOACAS Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbsh/qbsh_task2_jang/ Subtask2 Dataset] &amp;amp;nbsp;&lt;br /&gt;
* Query-by-Tapping Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbt/qbt_task1_jang/  Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbt/qbt_task1_hsiao/ HSIAO Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbt/qbt_task2_jang/ Subtask2 Dataset] &amp;amp;nbsp;&lt;br /&gt;
*[[2013:Real-time_Audio_to_Score_Alignment_(a.k.a._Score_Following)_Results | Real-time Audio to Score Alignment (a.k.a. Score Following) Results ]]&lt;br /&gt;
* [[2013:Symbolic_Melodic_Similarity_Results | Symbolic Melodic Similarity Results]]&lt;br /&gt;
* [[2013:Discovery of Repeated Themes &amp;amp; Sections Results | Discovery of Repeated Themes &amp;amp; Sections Results]]&lt;br /&gt;
* [[2013:Audio Cover Song Identification Results]]&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Audio_Cover_Song_Identification_Results&amp;diff=9862</id>
		<title>2013:Audio Cover Song Identification Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Audio_Cover_Song_Identification_Results&amp;diff=9862"/>
		<updated>2013-11-04T22:35:54Z</updated>

		<summary type="html">&lt;p&gt;Peter Organisciak: Created page with &amp;quot;== Introduction ==  Audio Cover Song (ACS) Identification was evaluated against two datasets: ''Mixed Collection'' and ''Sapp's Mazurka Collection''.  ===Mixed Collection Informa...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Audio Cover Song (ACS) Identification was evaluated against two datasets: ''Mixed Collection'' and ''Sapp's Mazurka Collection''.&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection Information===&lt;br /&gt;
This is the &amp;quot;original&amp;quot; ACS collection. Within the 1000 pieces in the Audio Cover Song database, there are embedded 30 different &amp;quot;cover songs&amp;quot; each represented by 11 different &amp;quot;versions&amp;quot; for a total of 330 audio files (16bit, monophonic, 22.05khz, wav). The &amp;quot;cover songs&amp;quot; represent a variety of genres (e.g., classical, jazz, gospel, rock, folk-rock, etc.) and the variations span a variety of styles and orchestrations. &lt;br /&gt;
&lt;br /&gt;
Using each of these cover song files in turn as the &amp;quot;seed/query&amp;quot; file, we will examine the returned lists of items for the presence of the other 10 versions of the &amp;quot;seed/query&amp;quot; file.&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazurka Collection Information ===&lt;br /&gt;
In addition to our original ACS dataset, we used the  [http://www.mazurka.org.uk/ Mazurka.org dataset] put together by Craig Sapp. We randomly chose 11 versions from 49 mazurkas and ran it as a separate ACS subtask. Systems returned a distance matrix of 539x539 from which we located the ranks of each of the associated cover versions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
    |- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; | Sub code &lt;br /&gt;
    ! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract &lt;br /&gt;
    ! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
    |-&lt;br /&gt;
    ! YCW1&lt;br /&gt;
    | 	MCY_COVER ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/YCW1.pdf PDF] || [http://slam.iis.sinica.edu.tw/index.htm Ming-Chi Yen], [http://slam.iis.sinica.edu.tw/  Chuan-Yau Chan], [http://www.iis.sinica.edu.tw 	Hsin-Min Wang]&lt;br /&gt;
    |-&lt;br /&gt;
    ! CYWW1&lt;br /&gt;
    | 	CYC_COVER ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/CYWW1.pdf PDF] || [http://slam.iis.sinica.edu.tw/  Chuan-Yau Chan], [http://slam.iis.sinica.edu.tw/index.htm Ming-Chi Yen], [http://sovideo.iis.sinica.edu.tw/SLG/index.html Ju-Chiang Wang], [http://www.iis.sinica.edu.tw 	Hsin-Min Wang]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection===&lt;br /&gt;
&lt;br /&gt;
====Summary Results====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2013/acs/coversong1000.summary.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Number of Correct Covers at Rank X Returned in Top Ten==== &lt;br /&gt;
&amp;lt;csv&amp;gt;2013/acs/coversong1000.toptendist.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Average Performance per Query Group====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2013/acs/coversong1000.precision.groups.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazuraka Collection===&lt;br /&gt;
&lt;br /&gt;
====Summary results====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2013/acs/mazurkas.summary.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Number of Correct Covers at Rank X Returned in Top Ten====&lt;br /&gt;
&amp;lt;csv&amp;gt;2013/acs/mazurkas.toptendist.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Average Performance per Query Group====&lt;br /&gt;
&amp;lt;csv p=2&amp;gt;2013/acs/mazurkas.precision.groups.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Individual Results Files==&lt;br /&gt;
&lt;br /&gt;
===Mixed Collection===&lt;br /&gt;
'''Average Precision by Query'''&lt;br /&gt;
&lt;br /&gt;
'''YCW1''' : [https://music-ir.org/mirex/results/2013/acs/CYWW1.coversong1000.avgprec.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2013/acs/YCW1.coversong1000.avgprec.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Rank Lists'''&lt;br /&gt;
&lt;br /&gt;
'''YCW1''' : [https://music-ir.org/mirex/results/2013/acs/YCW1.coversong1000.ranklist.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang]&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2013/acs/CYWW1.coversong1000.ranklist.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sapp's Mazurka Collection===&lt;br /&gt;
&lt;br /&gt;
'''Average Precision by Query'''&lt;br /&gt;
&lt;br /&gt;
'''YCW1''' : [https://music-ir.org/mirex/results/2013/acs/YCW1.mazurkas.avgprec.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2013/acs/CYWW1.mazurkas.avgprec.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Rank Lists'''&lt;br /&gt;
&lt;br /&gt;
'''YCW1''' : [https://music-ir.org/mirex/results/2013/acs/YCW1.mazurkas.ranklist.txt Ming-Chi Yen, Chuan-Yau Chan, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''CYWW1''' : [https://music-ir.org/mirex/results/2013/acs/CYWW1.mazurkas.ranklist.txt Chuan-Yau Chan, Ming-Chi Yen, Ju-Chiang Wang, Hsin-Min Wang] &amp;lt;br /&amp;gt;&lt;br /&gt;
[[Category: Results]]&lt;/div&gt;</summary>
		<author><name>Peter Organisciak</name></author>
		
	</entry>
</feed>