<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=JohanPauwels</id>
	<title>MIREX Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=JohanPauwels"/>
	<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/wiki/Special:Contributions/JohanPauwels"/>
	<updated>2026-05-12T00:09:50Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.1</generator>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2019:MIREX2019_Results&amp;diff=13286</id>
		<title>2019:MIREX2019 Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2019:MIREX2019_Results&amp;diff=13286"/>
		<updated>2020-10-14T10:37:46Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: Add direct link to ACE CASD results&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Overall Results Poster==&lt;br /&gt;
Coming soon&lt;br /&gt;
&lt;br /&gt;
==Results by Task (More results are coming) ==&lt;br /&gt;
* Multiple Fundamental Frequency Estimation &amp;amp; Tracking Results&lt;br /&gt;
** [[2019:Multiple_Fundamental_Frequency_Estimation_%26_Tracking_Results_%2D_MIREX_Dataset | MIREX Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2019:Multiple_Fundamental_Frequency_Estimation_%26_Tracking_Results_%2D_Su_Dataset | Su Dataset]] &amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2019:Music_Detection_Results Music Detection Results] &amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2019:Patterns_for_Prediction_Results Patterns for Prediction Results] &amp;amp;nbsp;&lt;br /&gt;
* Audio Melody Extraction Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2019/results/ame/adc04/  ADC04 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2019/results/ame/mrx05/ MIREX05 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2019/results/ame/ind08/ INDIAN08 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2019/results/ame/mrx09_0db/ MIREX09 0dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2019/results/ame/mrx09_m5db/ MIREX09 -5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2019/results/ame/mrx09_p5db/ MIREX09 +5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2019/results/ame/orchset/ ORCHSET15 Dataset] &amp;amp;nbsp;&lt;br /&gt;
* [[2019:Automatic Lyrics-to-Audio Alignment Results | Automatic Lyrics-to-Audio Alignment Results]] &amp;amp;nbsp;&lt;br /&gt;
* [[2019:Drum_Transcription_Results | Drum Transcription]] &amp;amp;nbsp;&lt;br /&gt;
* [[2019:Audio Fingerprinting Results|Audio Fingerprinting Results]]&lt;br /&gt;
* [[2019:Audio_Key_Detection_Results | Audio Key Detection]] &amp;amp;nbsp;&lt;br /&gt;
* [[2019:Audio_Chord_Estimation_Results | Audio Chord Estimation]]&lt;br /&gt;
** [[2019:Audio_Chord_Estimation_Results#Isophonics2009 | Isophonics2009 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2019:Audio_Chord_Estimation_Results#Billboard2012 | Billboard2012 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2019:Audio_Chord_Estimation_Results#Billboard_013 | Billboard2013 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2019:Audio_Chord_Estimation_Results#JayChou29 | JayChou29 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2019:Audio_Chord_Estimation_Results#RobbieWilliams | RobbieWilliams Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2019:Audio_Chord_Estimation_Results#RWC-Popular | RWC-Popular Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2019:Audio_Chord_Estimation_Results#USPOP2002Chords | USPOP2002Chords Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2019:Audio_Chord_Estimation_Results#CASD-Annotator1 | CASD Dataset]] &amp;amp;nbsp;&lt;br /&gt;
*Train-Test Task Set&lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2019/results/act/composer_report/ Audio Classical Composer Identification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2019/results/act/latin_report/ Audio Latin Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2019/results/act/mood_report/index.html Audio Music Mood Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2019/results/act/mixed_report/ Audio Mixed Popular Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2019/results/act/kgenrek_report/ Audio KPOP Genre (Annotated by Korean Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2019/results/act/kgenrea_report/ Audio KPOP Genre (Annotated by American Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2019/results/act/kmoodk_report/ Audio KPOP Mood (Annotated by Korean Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2019/results/act/kmooda_report/ Audio KPOP Mood (Annotated by American Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Beat Tracking Results &lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2019/results/abt/smc/ SMC Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2019/results/abt/maz/ MAZ Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2019:Audio Cover Song Identification Results|Audio Cover Song Identification Results]]&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2019:Audio_Chord_Estimation_Results&amp;diff=13285</id>
		<title>2019:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2019:Audio_Chord_Estimation_Results&amp;diff=13285"/>
		<updated>2020-10-14T10:27:04Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: Update year count&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the 2019 edition of the MIREX automatic chord estimation tasks. This edition was the seventh one since the reorganization of the evaluation procedure in 2013. The results can therefore be directly compared to those of the last six years. Chord labels are evaluated according to five different chord vocabularies and the segmentation is also assessed. Additional information about the used measures can be found on the page of the [[2013:Audio_Chord_Estimation_Results_MIREX_2009 | 2013 edition]]. &lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
* All datasets and evaluation procedures are the same as last year's MIREX.&lt;br /&gt;
&lt;br /&gt;
==Software==&lt;br /&gt;
All software used for the evaluation has been made open-source. The evaluation framework is described by [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6637748 Pauwels and Peeters (2013)]. The corresponding binaries and code repository can be found on [https://github.com/jpauwels/MusOOEvaluator GitHub] and the used measures are available as presets. The raw algorithmic output provided below makes it possible to calculate the additional measures from the paper (separate results for tetrads, etc.), in addition to those presented below. More help can be found in the [https://github.com/jpauwels/MusOOEvaluator/blob/master/README.md readme].&lt;br /&gt;
&lt;br /&gt;
The statistical comparison between the different submissions is explained in [http://www.terasoft.com.tw/conf/ismir2014/proceedings/T095_250_Paper.pdf Burgoyne et al. (2014)]. The software is available at [https://bitbucket.org/jaburgoyne/mirexace BitBucket]. It uses the detailed results provided below as input.&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CLSYJ1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2019/CLSYJ1.pdf PDF]&lt;br /&gt;
| [http://mirlab.org/users/eden.chien/ i Chien], [http://mirlab.org/users/hubert.lee/ Song Rong Lee], [http://mirlab.org/ Yeh Ssuhung], [http://mirlab.org/users/kenshincs/index.htm Tzu-Chun Yeh], [http://mirlab.org/jang Jyh-Shing Roger Jang]&lt;br /&gt;
|-&lt;br /&gt;
| CM1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2019/CM1.pdf PDF]&lt;br /&gt;
| [https://code.soundsoftware.ac.uk/users/3 Chris Cannam], [http://matthiasmauch.net Matthias Mauch]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best).&lt;br /&gt;
&lt;br /&gt;
=====Isophonics2009=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-Isophonics2009.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard2012=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-Billboard2012.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard2013=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-Billboard2013.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====JayChou29=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-JayChou29.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====RobbieWilliams=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-RobbieWilliams.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====RWC-Popular=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-RWC-Popular.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====USPOP2002Chords=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-USPOP2002Chords.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====CASD-Annotator1=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-CASD-Annotator1.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====CASD-Annotator2=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-CASD-Annotator2.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====CASD-Annotator3=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-CASD-Annotator3.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====CASD-Annotator4=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-CASD-Annotator4.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
An analysis of the statistical difference between all submissions can be found on the following pages:&lt;br /&gt;
&lt;br /&gt;
* [[2019:Audio_Chord_Estimation_Statistical_Analysis_Isophonics2009 | Isophonics2009 Dataset]]&lt;br /&gt;
* [[2019:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012 | Billboard2012 Dataset]]&lt;br /&gt;
* [[2019:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013 | Billboard2013 Dataset]]&lt;br /&gt;
* [[2019:Audio_Chord_Estimation_Statistical_Analysis_JayChou29 | JayChou29 Dataset]]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Detailed Results===&lt;br /&gt;
&lt;br /&gt;
More details about the performance of the algorithms, including per-song performance and supplementary statistics, are available from [https://github.com/ismir-mirex/ace-results/tree/master/2019 this repository].&lt;br /&gt;
.&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The raw output of the algorithms are available in [https://github.com/ismir-mirex/ace-output/tree/master/2019 this repository]. They can be used to experiment with alternative evaluation measures and statistics.&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2018:Audio_Chord_Estimation_Results&amp;diff=13284</id>
		<title>2018:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2018:Audio_Chord_Estimation_Results&amp;diff=13284"/>
		<updated>2020-10-14T10:26:07Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: Add link to Github repositories&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the 2018 edition of the MIREX automatic chord estimation tasks. This edition was the sixth one since the reorganization of the evaluation procedure in 2013. The results can therefore be directly compared to those of the last five years. Chord labels are evaluated according to five different chord vocabularies and the segmentation is also assessed. Additional information about the used measures can be found on the page of the [[2013:Audio_Chord_Estimation_Results_MIREX_2009 | 2013 edition]]. &lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
* All datasets and evaluation procedures are the same as last year's MIREX.&lt;br /&gt;
&lt;br /&gt;
==Software==&lt;br /&gt;
All software used for the evaluation has been made open-source. The evaluation framework is described by [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6637748 Pauwels and Peeters (2013)]. The corresponding binaries and code repository can be found on [https://github.com/jpauwels/MusOOEvaluator GitHub] and the used measures are available as presets. The raw algorithmic output provided below makes it possible to calculate the additional measures from the paper (separate results for tetrads, etc.), in addition to those presented below. More help can be found in the [https://github.com/jpauwels/MusOOEvaluator/blob/master/README.md readme].&lt;br /&gt;
&lt;br /&gt;
The statistical comparison between the different submissions is explained in [http://www.terasoft.com.tw/conf/ismir2014/proceedings/T095_250_Paper.pdf Burgoyne et al. (2014)]. The software is available at [https://bitbucket.org/jaburgoyne/mirexace BitBucket]. It uses the detailed results provided below as input.&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CM1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2018/CM1.pdf PDF]&lt;br /&gt;
| [https://code.soundsoftware.ac.uk/users/3 Chris Cannam], [http://c4dm.eecs.qmul.ac.uk/ Matthias Mauch]&lt;br /&gt;
|-&lt;br /&gt;
| JLCX1, JLCX2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2018/JLCX1.pdf PDF]&lt;br /&gt;
| [https://github.com/instr3/ Junyan Jiang],  [https://github.com/RetroCirce Ke Chen], [http://www.cs.fudan.edu.cn/ Wei Li], [http://www.cs.cmu.edu/~gxia Guangyu Xia]&lt;br /&gt;
|-&lt;br /&gt;
| SG1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2018/SG1.pdf PDF]&lt;br /&gt;
| [https://www.fsit.services Franz Strasser], [http://www.jku.at/ Stefan Gaser] &lt;br /&gt;
|-&lt;br /&gt;
| FK2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [PDF]&lt;br /&gt;
| [http://www.cp.jku.at Florian Krebs], [http://www.cp.jku.at Filip Korzeniowski], [http://www.ofai.at Sebastian Böck]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best).&lt;br /&gt;
&lt;br /&gt;
=====Isophonics2009=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2018/ace/2018-Isophonics2009.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard2012=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2018/ace/2018-Billboard2012.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard2013=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2018/ace/2018-Billboard2013.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====JayChou29=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2018/ace/2018-JayChou29.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====RobbieWilliams=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2018/ace/2018-RobbieWilliams.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====RWC-Popular=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2018/ace/2018-RWC-Popular.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====USPOP2002Chords=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2018/ace/2018-USPOP2002Chords.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
An analysis of the statistical difference between all submissions can be found on the following pages:&lt;br /&gt;
&lt;br /&gt;
* [[2018:Audio_Chord_Estimation_Statistical_Analysis_Isophonics2009 | Isophonics2009 Dataset]]&lt;br /&gt;
* [[2018:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012 | Billboard2012 Dataset]]&lt;br /&gt;
* [[2018:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013 | Billboard2013 Dataset]]&lt;br /&gt;
* [[2018:Audio_Chord_Estimation_Statistical_Analysis_JayChou29 | JayChou29 Dataset]]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Detailed Results===&lt;br /&gt;
&lt;br /&gt;
More details about the performance of the algorithms, including per-song performance and supplementary statistics, are available from [https://github.com/ismir-mirex/ace-results/tree/master/2018 this repository].&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The raw output of the algorithms are available in [https://github.com/ismir-mirex/ace-output/tree/master/2018 this repository]. They can be used to experiment with alternative evaluation measures and statistics.&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2019:Audio_Chord_Estimation_Results&amp;diff=13283</id>
		<title>2019:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2019:Audio_Chord_Estimation_Results&amp;diff=13283"/>
		<updated>2020-10-14T10:23:20Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: Undo overzealous correction&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the 2019 edition of the MIREX automatic chord estimation tasks. This edition was the sixth one since the reorganization of the evaluation procedure in 2013. The results can therefore be directly compared to those of the last five years. Chord labels are evaluated according to five different chord vocabularies and the segmentation is also assessed. Additional information about the used measures can be found on the page of the [[2013:Audio_Chord_Estimation_Results_MIREX_2009 | 2013 edition]]. &lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
* All datasets and evaluation procedures are the same as last year's MIREX.&lt;br /&gt;
&lt;br /&gt;
==Software==&lt;br /&gt;
All software used for the evaluation has been made open-source. The evaluation framework is described by [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6637748 Pauwels and Peeters (2013)]. The corresponding binaries and code repository can be found on [https://github.com/jpauwels/MusOOEvaluator GitHub] and the used measures are available as presets. The raw algorithmic output provided below makes it possible to calculate the additional measures from the paper (separate results for tetrads, etc.), in addition to those presented below. More help can be found in the [https://github.com/jpauwels/MusOOEvaluator/blob/master/README.md readme].&lt;br /&gt;
&lt;br /&gt;
The statistical comparison between the different submissions is explained in [http://www.terasoft.com.tw/conf/ismir2014/proceedings/T095_250_Paper.pdf Burgoyne et al. (2014)]. The software is available at [https://bitbucket.org/jaburgoyne/mirexace BitBucket]. It uses the detailed results provided below as input.&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CLSYJ1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2019/CLSYJ1.pdf PDF]&lt;br /&gt;
| [http://mirlab.org/users/eden.chien/ i Chien], [http://mirlab.org/users/hubert.lee/ Song Rong Lee], [http://mirlab.org/ Yeh Ssuhung], [http://mirlab.org/users/kenshincs/index.htm Tzu-Chun Yeh], [http://mirlab.org/jang Jyh-Shing Roger Jang]&lt;br /&gt;
|-&lt;br /&gt;
| CM1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2019/CM1.pdf PDF]&lt;br /&gt;
| [https://code.soundsoftware.ac.uk/users/3 Chris Cannam], [http://matthiasmauch.net Matthias Mauch]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best).&lt;br /&gt;
&lt;br /&gt;
=====Isophonics2009=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-Isophonics2009.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard2012=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-Billboard2012.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard2013=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-Billboard2013.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====JayChou29=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-JayChou29.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====RobbieWilliams=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-RobbieWilliams.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====RWC-Popular=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-RWC-Popular.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====USPOP2002Chords=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-USPOP2002Chords.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====CASD-Annotator1=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-CASD-Annotator1.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====CASD-Annotator2=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-CASD-Annotator2.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====CASD-Annotator3=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-CASD-Annotator3.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====CASD-Annotator4=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-CASD-Annotator4.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
An analysis of the statistical difference between all submissions can be found on the following pages:&lt;br /&gt;
&lt;br /&gt;
* [[2019:Audio_Chord_Estimation_Statistical_Analysis_Isophonics2009 | Isophonics2009 Dataset]]&lt;br /&gt;
* [[2019:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012 | Billboard2012 Dataset]]&lt;br /&gt;
* [[2019:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013 | Billboard2013 Dataset]]&lt;br /&gt;
* [[2019:Audio_Chord_Estimation_Statistical_Analysis_JayChou29 | JayChou29 Dataset]]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Detailed Results===&lt;br /&gt;
&lt;br /&gt;
More details about the performance of the algorithms, including per-song performance and supplementary statistics, are available from [https://github.com/ismir-mirex/ace-results/tree/master/2019 this repository].&lt;br /&gt;
.&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The raw output of the algorithms are available in [https://github.com/ismir-mirex/ace-output/tree/master/2019 this repository]. They can be used to experiment with alternative evaluation measures and statistics.&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2019:Audio_Chord_Estimation_Results&amp;diff=13282</id>
		<title>2019:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2019:Audio_Chord_Estimation_Results&amp;diff=13282"/>
		<updated>2020-10-14T10:20:48Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: Remove outdated reference&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the 2019 edition of the MIREX automatic chord estimation tasks. This edition was the sixth one since the reorganization of the evaluation procedure in 2013. The results can therefore be directly compared to those of the last five years. Chord labels are evaluated according to five different chord vocabularies and the segmentation is also assessed. Additional information about the used measures can be found on the page of the [[2013:Audio_Chord_Estimation_Results_MIREX_2013 | 2013 edition]]. &lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
* All datasets and evaluation procedures are the same as last year's MIREX.&lt;br /&gt;
&lt;br /&gt;
==Software==&lt;br /&gt;
All software used for the evaluation has been made open-source. The evaluation framework is described by [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6637748 Pauwels and Peeters (2013)]. The corresponding binaries and code repository can be found on [https://github.com/jpauwels/MusOOEvaluator GitHub] and the used measures are available as presets. The raw algorithmic output provided below makes it possible to calculate the additional measures from the paper (separate results for tetrads, etc.), in addition to those presented below. More help can be found in the [https://github.com/jpauwels/MusOOEvaluator/blob/master/README.md readme].&lt;br /&gt;
&lt;br /&gt;
The statistical comparison between the different submissions is explained in [http://www.terasoft.com.tw/conf/ismir2014/proceedings/T095_250_Paper.pdf Burgoyne et al. (2014)]. The software is available at [https://bitbucket.org/jaburgoyne/mirexace BitBucket]. It uses the detailed results provided below as input.&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CLSYJ1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2019/CLSYJ1.pdf PDF]&lt;br /&gt;
| [http://mirlab.org/users/eden.chien/ i Chien], [http://mirlab.org/users/hubert.lee/ Song Rong Lee], [http://mirlab.org/ Yeh Ssuhung], [http://mirlab.org/users/kenshincs/index.htm Tzu-Chun Yeh], [http://mirlab.org/jang Jyh-Shing Roger Jang]&lt;br /&gt;
|-&lt;br /&gt;
| CM1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2019/CM1.pdf PDF]&lt;br /&gt;
| [https://code.soundsoftware.ac.uk/users/3 Chris Cannam], [http://matthiasmauch.net Matthias Mauch]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best).&lt;br /&gt;
&lt;br /&gt;
=====Isophonics2009=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-Isophonics2009.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard2012=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-Billboard2012.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard2013=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-Billboard2013.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====JayChou29=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-JayChou29.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====RobbieWilliams=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-RobbieWilliams.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====RWC-Popular=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-RWC-Popular.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====USPOP2002Chords=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-USPOP2002Chords.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====CASD-Annotator1=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-CASD-Annotator1.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====CASD-Annotator2=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-CASD-Annotator2.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====CASD-Annotator3=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-CASD-Annotator3.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====CASD-Annotator4=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-CASD-Annotator4.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
An analysis of the statistical difference between all submissions can be found on the following pages:&lt;br /&gt;
&lt;br /&gt;
* [[2019:Audio_Chord_Estimation_Statistical_Analysis_Isophonics2009 | Isophonics2009 Dataset]]&lt;br /&gt;
* [[2019:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012 | Billboard2012 Dataset]]&lt;br /&gt;
* [[2019:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013 | Billboard2013 Dataset]]&lt;br /&gt;
* [[2019:Audio_Chord_Estimation_Statistical_Analysis_JayChou29 | JayChou29 Dataset]]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Detailed Results===&lt;br /&gt;
&lt;br /&gt;
More details about the performance of the algorithms, including per-song performance and supplementary statistics, are available from [https://github.com/ismir-mirex/ace-results/tree/master/2019 this repository].&lt;br /&gt;
.&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The raw output of the algorithms are available in [https://github.com/ismir-mirex/ace-output/tree/master/2019 this repository]. They can be used to experiment with alternative evaluation measures and statistics.&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2017:Audio_Chord_Estimation_Results&amp;diff=13281</id>
		<title>2017:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2017:Audio_Chord_Estimation_Results&amp;diff=13281"/>
		<updated>2020-10-14T10:18:43Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: Remove outdated reference&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the 2017 edition of the MIREX automatic chord estimation tasks. This edition was the fifth one since the reorganization of the evaluation procedure in 2013. The results can therefore be directly compared to those of the last four years. Chord labels are evaluated according to five different chord vocabularies and the segmentation is also assessed. Additional information about the used measures can be found on the page of the [[2013:Audio_Chord_Estimation_Results_MIREX_2009 | 2013 edition]]. &lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
* This year the algorithms have additionally been evaluated on the &amp;quot;RWC-Popular&amp;quot; and &amp;quot;USPOP2002Chords&amp;quot; dataset annotated at the [http://steinhardt.nyu.edu/marl/ Music and Audio Research Lab] of NYU, whose annotations are [https://github.com/tmc323/Chord-Annotations publicly available]. The [https://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-p.html RWC-Popular dataset] contains 100 pop songs recorded specifically for music information retrieval research. The USPOP2002Chords set is the 195 file subset of the [https://labrosa.ee.columbia.edu/projects/musicsim/uspop2002.html USPOP2002 dataset] that have been annotated with chord sequences.&lt;br /&gt;
&lt;br /&gt;
==Software==&lt;br /&gt;
All software used for the evaluation has been made open-source. The evaluation framework is described by [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6637748 Pauwels and Peeters (2013)]. The corresponding binaries and code repository can be found on [https://github.com/jpauwels/MusOOEvaluator GitHub] and the used measures are available as presets. The raw algorithmic output provided below makes it possible to calculate the additional measures from the paper (separate results for tetrads, etc.), in addition to those presented below. More help can be found in the [https://github.com/jpauwels/MusOOEvaluator/blob/master/README.md readme].&lt;br /&gt;
&lt;br /&gt;
The statistical comparison between the different submissions is explained in [http://www.terasoft.com.tw/conf/ismir2014/proceedings/T095_250_Paper.pdf Burgoyne et al. (2014)]. The software is available at [https://bitbucket.org/jaburgoyne/mirexace BitBucket]. It uses the detailed results provided below as input.&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CM2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/CM2.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| JLW1, JLW2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/JLW1.pdf PDF]&lt;br /&gt;
| Junyan Jiang, Wei Li, Yiming Wu&lt;br /&gt;
|-&lt;br /&gt;
| KBK1, KBK2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/KBK1.pdf PDF]&lt;br /&gt;
| Filip Korzeniowski, Sebastian Böck, Florian Krebs&lt;br /&gt;
|-&lt;br /&gt;
| WL1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/WL1.pdf PDF]&lt;br /&gt;
| Yiming Wu, Wei Li&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best).&lt;br /&gt;
&lt;br /&gt;
=====Isophonics2009=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/ace/2017-Isophonics2009.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard2012=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/ace/2017-Billboard2012.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard2013=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/ace/2017-Billboard2013.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====JayChou29=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/ace/2017-JayChou29.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====RobbieWilliams=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/ace/2017-RobbieWilliams.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====RWC-Popular=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/ace/2017-RWC-Popular.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====USPOP2002Chords=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/ace/2017-USPOP2002Chords.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
An analysis of the statistical difference between all submissions can be found on the following pages:&lt;br /&gt;
&lt;br /&gt;
* [[2017:Audio_Chord_Estimation_Statistical_Analysis_Isophonics2009 | Isophonics2009 Dataset]]&lt;br /&gt;
* [[2017:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012 | Billboard2012 Dataset]]&lt;br /&gt;
* [[2017:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013 | Billboard2013 Dataset]]&lt;br /&gt;
* [[2017:Audio_Chord_Estimation_Statistical_Analysis_JayChou29 | JayChou29 Dataset]]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Detailed Results===&lt;br /&gt;
&lt;br /&gt;
More details about the performance of the algorithms, including per-song performance and supplementary statistics, are available from [https://github.com/ismir-mirex/ace-results/tree/master/2017 this repository].&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The raw output of the algorithms are available in [https://github.com/ismir-mirex/ace-output/tree/master/2017 this repository]. They can be used to experiment with alternative evaluation measures and statistics.&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2019:Audio_Chord_Estimation_Results&amp;diff=13280</id>
		<title>2019:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2019:Audio_Chord_Estimation_Results&amp;diff=13280"/>
		<updated>2020-10-14T10:11:45Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: Add link to Github repositories&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the 2019 edition of the MIREX automatic chord estimation tasks. This edition was the sixth one since the reorganization of the evaluation procedure in 2013. The results can therefore be directly compared to those of the last five years. Chord labels are evaluated according to five different chord vocabularies and the segmentation is also assessed. Additional information about the used measures can be found on the page of the [[2013:Audio_Chord_Estimation_Results_MIREX_2009 | 2013 edition]]. &lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
* All datasets and evaluation procedures are the same as last year's MIREX.&lt;br /&gt;
&lt;br /&gt;
==Software==&lt;br /&gt;
All software used for the evaluation has been made open-source. The evaluation framework is described by [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6637748 Pauwels and Peeters (2013)]. The corresponding binaries and code repository can be found on [https://github.com/jpauwels/MusOOEvaluator GitHub] and the used measures are available as presets. The raw algorithmic output will be provided later &amp;lt;!--provided below makes it possible to calculate the additional measures from the paper (separate results for tetrads, etc.), in addition to those presented below.--&amp;gt; More help can be found in the [https://github.com/jpauwels/MusOOEvaluator/blob/master/README.md readme].&lt;br /&gt;
&lt;br /&gt;
The statistical comparison between the different submissions is explained in [http://www.terasoft.com.tw/conf/ismir2014/proceedings/T095_250_Paper.pdf Burgoyne et al. (2014)]. The software is available at [https://bitbucket.org/jaburgoyne/mirexace BitBucket]. It uses the detailed results provided below as input.&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CLSYJ1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2019/CLSYJ1.pdf PDF]&lt;br /&gt;
| [http://mirlab.org/users/eden.chien/ i Chien], [http://mirlab.org/users/hubert.lee/ Song Rong Lee], [http://mirlab.org/ Yeh Ssuhung], [http://mirlab.org/users/kenshincs/index.htm Tzu-Chun Yeh], [http://mirlab.org/jang Jyh-Shing Roger Jang]&lt;br /&gt;
|-&lt;br /&gt;
| CM1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2019/CM1.pdf PDF]&lt;br /&gt;
| [https://code.soundsoftware.ac.uk/users/3 Chris Cannam], [http://matthiasmauch.net Matthias Mauch]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best).&lt;br /&gt;
&lt;br /&gt;
=====Isophonics2009=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-Isophonics2009.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard2012=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-Billboard2012.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard2013=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-Billboard2013.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====JayChou29=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-JayChou29.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====RobbieWilliams=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-RobbieWilliams.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====RWC-Popular=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-RWC-Popular.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====USPOP2002Chords=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-USPOP2002Chords.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====CASD-Annotator1=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-CASD-Annotator1.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====CASD-Annotator2=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-CASD-Annotator2.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====CASD-Annotator3=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-CASD-Annotator3.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====CASD-Annotator4=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-CASD-Annotator4.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
An analysis of the statistical difference between all submissions can be found on the following pages:&lt;br /&gt;
&lt;br /&gt;
* [[2019:Audio_Chord_Estimation_Statistical_Analysis_Isophonics2009 | Isophonics2009 Dataset]]&lt;br /&gt;
* [[2019:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012 | Billboard2012 Dataset]]&lt;br /&gt;
* [[2019:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013 | Billboard2013 Dataset]]&lt;br /&gt;
* [[2019:Audio_Chord_Estimation_Statistical_Analysis_JayChou29 | JayChou29 Dataset]]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Detailed Results===&lt;br /&gt;
&lt;br /&gt;
More details about the performance of the algorithms, including per-song performance and supplementary statistics, are available from [https://github.com/ismir-mirex/ace-results/tree/master/2019 this repository].&lt;br /&gt;
.&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The raw output of the algorithms are available in [https://github.com/ismir-mirex/ace-output/tree/master/2019 this repository]. They can be used to experiment with alternative evaluation measures and statistics.&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2019:Audio_Chord_Estimation_Results&amp;diff=13279</id>
		<title>2019:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2019:Audio_Chord_Estimation_Results&amp;diff=13279"/>
		<updated>2020-10-14T10:03:00Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: Make CASD results visible&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the 2019 edition of the MIREX automatic chord estimation tasks. This edition was the sixth one since the reorganization of the evaluation procedure in 2013. The results can therefore be directly compared to those of the last five years. Chord labels are evaluated according to five different chord vocabularies and the segmentation is also assessed. Additional information about the used measures can be found on the page of the [[2013:Audio_Chord_Estimation_Results_MIREX_2009 | 2013 edition]]. &lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
* All datasets and evaluation procedures are the same as last year's MIREX.&lt;br /&gt;
&lt;br /&gt;
==Software==&lt;br /&gt;
All software used for the evaluation has been made open-source. The evaluation framework is described by [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6637748 Pauwels and Peeters (2013)]. The corresponding binaries and code repository can be found on [https://github.com/jpauwels/MusOOEvaluator GitHub] and the used measures are available as presets. The raw algorithmic output will be provided later &amp;lt;!--provided below makes it possible to calculate the additional measures from the paper (separate results for tetrads, etc.), in addition to those presented below.--&amp;gt; More help can be found in the [https://github.com/jpauwels/MusOOEvaluator/blob/master/README.md readme].&lt;br /&gt;
&lt;br /&gt;
The statistical comparison between the different submissions is explained in [http://www.terasoft.com.tw/conf/ismir2014/proceedings/T095_250_Paper.pdf Burgoyne et al. (2014)]. The software is available at [https://bitbucket.org/jaburgoyne/mirexace BitBucket]. It uses the detailed results provided below as input.&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CLSYJ1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2019/CLSYJ1.pdf PDF]&lt;br /&gt;
| [http://mirlab.org/users/eden.chien/ i Chien], [http://mirlab.org/users/hubert.lee/ Song Rong Lee], [http://mirlab.org/ Yeh Ssuhung], [http://mirlab.org/users/kenshincs/index.htm Tzu-Chun Yeh], [http://mirlab.org/jang Jyh-Shing Roger Jang]&lt;br /&gt;
|-&lt;br /&gt;
| CM1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2019/CM1.pdf PDF]&lt;br /&gt;
| [https://code.soundsoftware.ac.uk/users/3 Chris Cannam], [http://matthiasmauch.net Matthias Mauch]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best).&lt;br /&gt;
&lt;br /&gt;
=====Isophonics2009=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-Isophonics2009.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard2012=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-Billboard2012.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard2013=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-Billboard2013.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====JayChou29=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-JayChou29.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====RobbieWilliams=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-RobbieWilliams.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====RWC-Popular=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-RWC-Popular.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====USPOP2002Chords=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-USPOP2002Chords.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====CASD-Annotator1=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-CASD-Annotator1.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====CASD-Annotator2=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-CASD-Annotator2.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====CASD-Annotator3=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-CASD-Annotator3.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====CASD-Annotator4=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2019/ace/2019-CASD-Annotator4.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
An analysis of the statistical difference between all submissions can be found on the following pages:&lt;br /&gt;
&lt;br /&gt;
* [[2019:Audio_Chord_Estimation_Statistical_Analysis_Isophonics2009 | Isophonics2009 Dataset]]&lt;br /&gt;
* [[2019:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012 | Billboard2012 Dataset]]&lt;br /&gt;
* [[2019:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013 | Billboard2013 Dataset]]&lt;br /&gt;
* [[2019:Audio_Chord_Estimation_Statistical_Analysis_JayChou29 | JayChou29 Dataset]]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Detailed Results===&lt;br /&gt;
&lt;br /&gt;
More details about the performance of the algorithms, including per-song performance and supplementary statistics, are available from ?.&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The raw output of the algorithms are available in ?. They can be used to experiment with alternative evaluation measures and statistics.&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2018:Audio_Chord_Estimation_Results&amp;diff=12830</id>
		<title>2018:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2018:Audio_Chord_Estimation_Results&amp;diff=12830"/>
		<updated>2018-09-22T09:09:57Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: Add note about comparison with last year&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the 2018 edition of the MIREX automatic chord estimation tasks. This edition was the sixth one since the reorganization of the evaluation procedure in 2013. The results can therefore be directly compared to those of the last five years. Chord labels are evaluated according to five different chord vocabularies and the segmentation is also assessed. Additional information about the used measures can be found on the page of the [[2013:Audio_Chord_Estimation_Results_MIREX_2009 | 2013 edition]]. &lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
* All datasets and evaluation procedures are the same as last year's MIREX.&lt;br /&gt;
&lt;br /&gt;
==Software==&lt;br /&gt;
All software used for the evaluation has been made open-source. The evaluation framework is described by [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6637748 Pauwels and Peeters (2013)]. The corresponding binaries and code repository can be found on [https://github.com/jpauwels/MusOOEvaluator GitHub] and the used measures are available as presets. The raw algorithmic output will be provided later &amp;lt;!--provided below makes it possible to calculate the additional measures from the paper (separate results for tetrads, etc.), in addition to those presented below.--&amp;gt; More help can be found in the [https://github.com/jpauwels/MusOOEvaluator/blob/master/README.md readme].&lt;br /&gt;
&lt;br /&gt;
The statistical comparison between the different submissions is explained in [http://www.terasoft.com.tw/conf/ismir2014/proceedings/T095_250_Paper.pdf Burgoyne et al. (2014)]. The software is available at [https://bitbucket.org/jaburgoyne/mirexace BitBucket]. It uses the detailed results provided below as input.&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CM1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2018/CM1.pdf PDF]&lt;br /&gt;
| [https://code.soundsoftware.ac.uk/users/3 Chris Cannam], [http://c4dm.eecs.qmul.ac.uk/ Matthias Mauch]&lt;br /&gt;
|-&lt;br /&gt;
| JLCX1, JLCX2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2018/JLCX1.pdf PDF]&lt;br /&gt;
| [https://github.com/instr3/ Junyan Jiang],  [https://github.com/RetroCirce Ke Chen], [http://www.cs.fudan.edu.cn/ Wei Li], [http://www.cs.cmu.edu/~gxia Guangyu Xia]&lt;br /&gt;
|-&lt;br /&gt;
| SG1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2018/SG1.pdf PDF]&lt;br /&gt;
| [https://www.fsit.services Franz Strasser], [http://www.jku.at/ Stefan Gaser] &lt;br /&gt;
|-&lt;br /&gt;
| FK2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/WL1.pdf PDF]&lt;br /&gt;
| [http://www.cp.jku.at Florian Krebs], [http://www.cp.jku.at Filip Korzeniowski], [http://www.ofai.at Sebastian Böck]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best).&lt;br /&gt;
&lt;br /&gt;
=====Isophonics2009=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2018/ace/2018-Isophonics2009.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard2012=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2018/ace/2018-Billboard2012.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard2013=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2018/ace/2018-Billboard2013.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====JayChou29=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2018/ace/2018-JayChou29.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====RobbieWilliams=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2018/ace/2018-RobbieWilliams.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====RWC-Popular=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2018/ace/2018-RWC-Popular.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====USPOP2002Chords=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2018/ace/2018-USPOP2002Chords.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
An analysis of the statistical difference between all submissions can be found on the following pages:&lt;br /&gt;
&lt;br /&gt;
* [[2018:Audio_Chord_Estimation_Statistical_Analysis_Isophonics2009 | Isophonics2009 Dataset]]&lt;br /&gt;
* [[2018:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012 | Billboard2012 Dataset]]&lt;br /&gt;
* [[2018:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013 | Billboard2013 Dataset]]&lt;br /&gt;
* [[2018:Audio_Chord_Estimation_Statistical_Analysis_JayChou29 | JayChou29 Dataset]]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Detailed Results===&lt;br /&gt;
&lt;br /&gt;
More details about the performance of the algorithms, including per-song performance and supplementary statistics, are available from ?.&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The raw output of the algorithms are available in ?. They can be used to experiment with alternative evaluation measures and statistics.&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2018:Audio_Key_Detection_Results&amp;diff=12829</id>
		<title>2018:Audio Key Detection Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2018:Audio_Key_Detection_Results&amp;diff=12829"/>
		<updated>2018-09-22T09:05:44Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: Add note about comparison with last year&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the 2018 edition of the MIREX automatic key detection estimation task.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
* Nothing has changed, which means the results can be directly compared to last year's.&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CG1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2018/CG1.pdf PDF]&lt;br /&gt;
| [http://www.uab.cat David Castells-Rufas], [http://www.uab.cat/enginyeria/ Adria Galin]&lt;br /&gt;
|-&lt;br /&gt;
| GC1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2018/GC1.pdf PDF]&lt;br /&gt;
| [http://www.uab.cat/enginyeria/ Adria Galin], [http://www.uab.cat David Castells-Rufas]&lt;br /&gt;
|-&lt;br /&gt;
| CN1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2018/CN1.pdf PDF]&lt;br /&gt;
| [https://code.soundsoftware.ac.uk/users/3 Chris Cannam], [http://c4dm.eecs.qmul.ac.uk/ Katy Noland] &lt;br /&gt;
|-&lt;br /&gt;
| NA1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2018/NA1.pdf PDF]&lt;br /&gt;
| [http://ddmal.music.mcgill.ca/ Nestor Napoles Lopez] , [https://music.gatech.edu/ Claire Arthur]&lt;br /&gt;
|-&lt;br /&gt;
| FK1, FK3&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2018/FK1.pdf PDF]&lt;br /&gt;
| [http://www.cp.jku.at Filip Korzeniowski]&lt;br /&gt;
|-&lt;br /&gt;
| OM1-OM3&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2018/OM1.pdf PDF]&lt;br /&gt;
| [http://JamesOwers.github.io James Owers], [http://homepages.inf.ed.ac.uk/amcleod8 Andrew McLeod]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best).&lt;br /&gt;
&lt;br /&gt;
====MIREX2005Key====&lt;br /&gt;
&amp;lt;csv&amp;gt;2018/akd/2018-MIREX2005Key.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
====GiantStepsKey====&lt;br /&gt;
&amp;lt;csv&amp;gt;2018/akd/2018-GiantStepsKey.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
====PresegmentedKeyIsophonics====&lt;br /&gt;
&amp;lt;csv&amp;gt;2018/akd/2018-PresegmentedKeyIsophonics.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
====PresegmentedKeyRobbieWilliams====&lt;br /&gt;
&amp;lt;csv&amp;gt;2018/akd/2018-PresegmentedKeyRobbieWilliams.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
====Billboard2012Key====&lt;br /&gt;
&amp;lt;csv&amp;gt;2018/akd/2018-Billboard2012Key.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2017:Audio_Chord_Estimation_Results&amp;diff=12368</id>
		<title>2017:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2017:Audio_Chord_Estimation_Results&amp;diff=12368"/>
		<updated>2017-11-28T17:01:01Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: Add links to repositories&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the 2017 edition of the MIREX automatic chord estimation tasks. This edition was the fifth one since the reorganization of the evaluation procedure in 2013. The results can therefore be directly compared to those of the last four years. Chord labels are evaluated according to five different chord vocabularies and the segmentation is also assessed. Additional information about the used measures can be found on the page of the [[2013:Audio_Chord_Estimation_Results_MIREX_2009 | 2013 edition]]. &lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
* This year the algorithms have additionally been evaluated on the &amp;quot;RWC-Popular&amp;quot; and &amp;quot;USPOP2002Chords&amp;quot; dataset annotated at the [http://steinhardt.nyu.edu/marl/ Music and Audio Research Lab] of NYU, whose annotations are [https://github.com/tmc323/Chord-Annotations publicly available]. The [https://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-p.html RWC-Popular dataset] contains 100 pop songs recorded specifically for music information retrieval research. The USPOP2002Chords set is the 195 file subset of the [https://labrosa.ee.columbia.edu/projects/musicsim/uspop2002.html USPOP2002 dataset] that have been annotated with chord sequences.&lt;br /&gt;
&lt;br /&gt;
==Software==&lt;br /&gt;
All software used for the evaluation has been made open-source. The evaluation framework is described by [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6637748 Pauwels and Peeters (2013)]. The corresponding binaries and code repository can be found on [https://github.com/jpauwels/MusOOEvaluator GitHub] and the used measures are available as presets. The raw algorithmic output will be provided later &amp;lt;!--provided below makes it possible to calculate the additional measures from the paper (separate results for tetrads, etc.), in addition to those presented below.--&amp;gt; More help can be found in the [https://github.com/jpauwels/MusOOEvaluator/blob/master/README.md readme].&lt;br /&gt;
&lt;br /&gt;
The statistical comparison between the different submissions is explained in [http://www.terasoft.com.tw/conf/ismir2014/proceedings/T095_250_Paper.pdf Burgoyne et al. (2014)]. The software is available at [https://bitbucket.org/jaburgoyne/mirexace BitBucket]. It uses the detailed results provided below as input.&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CM2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/CM2.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| JLW1, JLW2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/JLW1.pdf PDF]&lt;br /&gt;
| Junyan Jiang, Wei Li, Yiming Wu&lt;br /&gt;
|-&lt;br /&gt;
| KBK1, KBK2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/KBK1.pdf PDF]&lt;br /&gt;
| Filip Korzeniowski, Sebastian Böck, Florian Krebs&lt;br /&gt;
|-&lt;br /&gt;
| WL1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/WL1.pdf PDF]&lt;br /&gt;
| Yiming Wu, Wei Li&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best).&lt;br /&gt;
&lt;br /&gt;
=====Isophonics2009=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/ace/2017-Isophonics2009.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard2012=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/ace/2017-Billboard2012.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard2013=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/ace/2017-Billboard2013.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====JayChou29=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/ace/2017-JayChou29.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====RobbieWilliams=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/ace/2017-RobbieWilliams.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====RWC-Popular=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/ace/2017-RWC-Popular.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====USPOP2002Chords=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/ace/2017-USPOP2002Chords.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
An analysis of the statistical difference between all submissions can be found on the following pages:&lt;br /&gt;
&lt;br /&gt;
* [[2017:Audio_Chord_Estimation_Statistical_Analysis_Isophonics2009 | Isophonics2009 Dataset]]&lt;br /&gt;
* [[2017:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012 | Billboard2012 Dataset]]&lt;br /&gt;
* [[2017:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013 | Billboard2013 Dataset]]&lt;br /&gt;
* [[2017:Audio_Chord_Estimation_Statistical_Analysis_JayChou29 | JayChou29 Dataset]]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Detailed Results===&lt;br /&gt;
&lt;br /&gt;
More details about the performance of the algorithms, including per-song performance and supplementary statistics, are available from [https://github.com/ismir-mirex/ace-results/tree/master/2017 this repository].&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The raw output of the algorithms are available in [https://github.com/ismir-mirex/ace-output/tree/master/2017 this repository]. They can be used to experiment with alternative evaluation measures and statistics.&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2017:Audio_Key_Detection_Results&amp;diff=12367</id>
		<title>2017:Audio Key Detection Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2017:Audio_Key_Detection_Results&amp;diff=12367"/>
		<updated>2017-11-21T17:28:06Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the 2017 edition of the MIREX automatic key detection estimation task.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
* The NEMA system was retired this year, since a bug has been found in the calculation of the results. Keys with tonics related by a fifth and the same mode (a.k.a. adjacent keys) are supposed to get a score of 0.5, but only ascending fifths (going from ground-truth to estimation) were counted, not descending ones. It has been brought to my attention that the description of the measure on the wiki has been ambiguous for years, and probably the NEMA implementer got confused by this. However, the intention has always been to count ascending and descending fifth (or fourth) relationships between the tonics (in my humble opinion).&lt;br /&gt;
* New datasets: &amp;quot;PresegmentedKeyIsophonics&amp;quot; and &amp;quot;PresegmentedKeyRobbieWilliams&amp;quot; use the local key annotations for the [http://isophonics.net/content/reference-annotations Isophonics set] and the [http://ispg.deib.polimi.it/mir-software.html Robbie Williams set], but have been split into separate files according to the local key annotations. The segments annotated with major and minor modes have been retained and were presented to the submissions. Therefore their results are slightly optimistic in the sense that the segments are guaranteed to contain just a single key, which is not the case for real-world songs. Keep also in mind that some files are strongly correlated (different segments or even repeated chorusses of the same song). Any statistical analysis of the results (e.g. pairwise significance tests) that relies on independence between files is consequently invalid.&lt;br /&gt;
* New dataset: &amp;quot;Billboard2012Key&amp;quot; is the subset of the Billboard2012 chord dataset for which it was possible to derive the key automatically from the chord annotations (using the procedure outlined by Korzeniowski &amp;amp; Widmer in their [https://arxiv.org/abs/1706.02921 2017 EUSIPCO paper]). The annotations are [http://www.cp.jku.at/people/korzeniowski/bb.zip freely available]&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| BD1, BD2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/BD1.pdf PDF]&lt;br /&gt;
| Gilberto Bernardes, Matthew Davies&lt;br /&gt;
|-&lt;br /&gt;
| CN1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/CN1.pdf PDF]&lt;br /&gt;
| Chris Cannam, Katy Noland&lt;br /&gt;
|-&lt;br /&gt;
| FK1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/FK1.pdf PDF]&lt;br /&gt;
| Filip Korzeniowski&lt;br /&gt;
|-&lt;br /&gt;
| HS1-HS3&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/HS1.pdf PDF]&lt;br /&gt;
| Hendrik Schreiber&lt;br /&gt;
|-&lt;br /&gt;
| PRGR5&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/PRGR5.pdf PDF]&lt;br /&gt;
| Adam Pluta, Marcin Gawrysz&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best).&lt;br /&gt;
&lt;br /&gt;
Note: until the table display problems are resolved, you can download the results from my [http://eecs.qmul.ac.uk/~johan/akd17results.tar.gz personal website]. --Johan&lt;br /&gt;
&lt;br /&gt;
=====MIREX2005Key=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/akd/2017-MIREX2005Key.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====GiantStepsKey=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/akd/2017-GiantStepsKey.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====PresegmentedKeyIsophonics=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/akd/2017-PresegmentedKeyIsophonics.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====PresegmentedKeyRobbieWilliams=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/akd/2017-PresegmentedKeyRobbieWilliams.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard2012Key=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/akd/2017-Billboard2012Key.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The submission PRGR5 is currently not able to complete the task without crashing, but hopefully this can still be remedied. These tables will be updated as soon as this is the case.&lt;br /&gt;
&lt;br /&gt;
==Note==&lt;br /&gt;
This page will be further updated with more detailed info and extended results (extra statistics, per-file results, confusion matrices) once I get back to a country where the wifi is better and Google's services aren't blocked (which includes the captcha's for this bloody wiki). That will be around November 10th. --Johan&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2017:Audio_Key_Detection_Results&amp;diff=12366</id>
		<title>2017:Audio Key Detection Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2017:Audio_Key_Detection_Results&amp;diff=12366"/>
		<updated>2017-11-21T17:27:37Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: Temporarily host result csvs at QMUL&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the 2017 edition of the MIREX automatic key detection estimation task.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
* The NEMA system was retired this year, since a bug has been found in the calculation of the results. Keys with tonics related by a fifth and the same mode (a.k.a. adjacent keys) are supposed to get a score of 0.5, but only ascending fifths (going from ground-truth to estimation) were counted, not descending ones. It has been brought to my attention that the description of the measure on the wiki has been ambiguous for years, and probably the NEMA implementer got confused by this. However, the intention has always been to count ascending and descending fifth (or fourth) relationships between the tonics (in my humble opinion).&lt;br /&gt;
* New datasets: &amp;quot;PresegmentedKeyIsophonics&amp;quot; and &amp;quot;PresegmentedKeyRobbieWilliams&amp;quot; use the local key annotations for the [http://isophonics.net/content/reference-annotations Isophonics set] and the [http://ispg.deib.polimi.it/mir-software.html Robbie Williams set], but have been split into separate files according to the local key annotations. The segments annotated with major and minor modes have been retained and were presented to the submissions. Therefore their results are slightly optimistic in the sense that the segments are guaranteed to contain just a single key, which is not the case for real-world songs. Keep also in mind that some files are strongly correlated (different segments or even repeated chorusses of the same song). Any statistical analysis of the results (e.g. pairwise significance tests) that relies on independence between files is consequently invalid.&lt;br /&gt;
* New dataset: &amp;quot;Billboard2012Key&amp;quot; is the subset of the Billboard2012 chord dataset for which it was possible to derive the key automatically from the chord annotations (using the procedure outlined by Korzeniowski &amp;amp; Widmer in their [https://arxiv.org/abs/1706.02921 2017 EUSIPCO paper]). The annotations are [http://www.cp.jku.at/people/korzeniowski/bb.zip freely available]&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| BD1, BD2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/BD1.pdf PDF]&lt;br /&gt;
| Gilberto Bernardes, Matthew Davies&lt;br /&gt;
|-&lt;br /&gt;
| CN1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/CN1.pdf PDF]&lt;br /&gt;
| Chris Cannam, Katy Noland&lt;br /&gt;
|-&lt;br /&gt;
| FK1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/FK1.pdf PDF]&lt;br /&gt;
| Filip Korzeniowski&lt;br /&gt;
|-&lt;br /&gt;
| HS1-HS3&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/HS1.pdf PDF]&lt;br /&gt;
| Hendrik Schreiber&lt;br /&gt;
|-&lt;br /&gt;
| PRGR5&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/PRGR5.pdf PDF]&lt;br /&gt;
| Adam Pluta, Marcin Gawrysz&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best).&lt;br /&gt;
&lt;br /&gt;
Note: until the table display problems are resolved, you can download the results from my [http://eecs.qmul.ac.uk/~johan/akd2017results.tar.gz personal website]. --Johan&lt;br /&gt;
&lt;br /&gt;
=====MIREX2005Key=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/akd/2017-MIREX2005Key.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====GiantStepsKey=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/akd/2017-GiantStepsKey.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====PresegmentedKeyIsophonics=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/akd/2017-PresegmentedKeyIsophonics.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====PresegmentedKeyRobbieWilliams=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/akd/2017-PresegmentedKeyRobbieWilliams.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard2012Key=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/akd/2017-Billboard2012Key.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The submission PRGR5 is currently not able to complete the task without crashing, but hopefully this can still be remedied. These tables will be updated as soon as this is the case.&lt;br /&gt;
&lt;br /&gt;
==Note==&lt;br /&gt;
This page will be further updated with more detailed info and extended results (extra statistics, per-file results, confusion matrices) once I get back to a country where the wifi is better and Google's services aren't blocked (which includes the captcha's for this bloody wiki). That will be around November 10th. --Johan&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2017:Audio_Key_Detection_Results&amp;diff=12365</id>
		<title>2017:Audio Key Detection Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2017:Audio_Key_Detection_Results&amp;diff=12365"/>
		<updated>2017-11-02T03:09:23Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: First addition of AKD results&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the 2017 edition of the MIREX automatic key detection estimation task.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
* The NEMA system was retired this year, since a bug has been found in the calculation of the results. Keys with tonics related by a fifth and the same mode (a.k.a. adjacent keys) are supposed to get a score of 0.5, but only ascending fifths (going from ground-truth to estimation) were counted, not descending ones. It has been brought to my attention that the description of the measure on the wiki has been ambiguous for years, and probably the NEMA implementer got confused by this. However, the intention has always been to count ascending and descending fifth (or fourth) relationships between the tonics (in my humble opinion).&lt;br /&gt;
* New datasets: &amp;quot;PresegmentedKeyIsophonics&amp;quot; and &amp;quot;PresegmentedKeyRobbieWilliams&amp;quot; use the local key annotations for the [http://isophonics.net/content/reference-annotations Isophonics set] and the [http://ispg.deib.polimi.it/mir-software.html Robbie Williams set], but have been split into separate files according to the local key annotations. The segments annotated with major and minor modes have been retained and were presented to the submissions. Therefore their results are slightly optimistic in the sense that the segments are guaranteed to contain just a single key, which is not the case for real-world songs. Keep also in mind that some files are strongly correlated (different segments or even repeated chorusses of the same song). Any statistical analysis of the results (e.g. pairwise significance tests) that relies on independence between files is consequently invalid.&lt;br /&gt;
* New dataset: &amp;quot;Billboard2012Key&amp;quot; is the subset of the Billboard2012 chord dataset for which it was possible to derive the key automatically from the chord annotations (using the procedure outlined by Korzeniowski &amp;amp; Widmer in their [https://arxiv.org/abs/1706.02921 2017 EUSIPCO paper]). The annotations are [http://www.cp.jku.at/people/korzeniowski/bb.zip freely available]&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| BD1, BD2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/BD1.pdf PDF]&lt;br /&gt;
| Gilberto Bernardes, Matthew Davies&lt;br /&gt;
|-&lt;br /&gt;
| CN1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/CN1.pdf PDF]&lt;br /&gt;
| Chris Cannam, Katy Noland&lt;br /&gt;
|-&lt;br /&gt;
| FK1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/FK1.pdf PDF]&lt;br /&gt;
| Filip Korzeniowski&lt;br /&gt;
|-&lt;br /&gt;
| HS1-HS3&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/HS1.pdf PDF]&lt;br /&gt;
| Hendrik Schreiber&lt;br /&gt;
|-&lt;br /&gt;
| PRGR5&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/PRGR5.pdf PDF]&lt;br /&gt;
| Adam Pluta, Marcin Gawrysz&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best).&lt;br /&gt;
&lt;br /&gt;
=====MIREX2005Key=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/akd/2017-MIREX2005Key.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====GiantStepsKey=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/akd/2017-GiantStepsKey.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====PresegmentedKeyIsophonics=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/akd/2017-PresegmentedKeyIsophonics.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====PresegmentedKeyRobbieWilliams=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/akd/2017-PresegmentedKeyRobbieWilliams.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard2012Key=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/akd/2017-Billboard2012Key.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The submission PRGR5 is currently not able to complete the task without crashing, but hopefully this can still be remedied. These tables will be updated as soon as this is the case.&lt;br /&gt;
&lt;br /&gt;
==Note==&lt;br /&gt;
This page will be further updated with more detailed info and extended results (extra statistics, per-file results, confusion matrices) once I get back to a country where the wifi is better and Google's services aren't blocked (which includes the captcha's for this bloody wiki). That will be around November 10th. --Johan&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2017:MIREX2017_Results&amp;diff=12364</id>
		<title>2017:MIREX2017 Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2017:MIREX2017_Results&amp;diff=12364"/>
		<updated>2017-11-02T03:08:24Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: Add AKD results&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Overall Results Poster==&lt;br /&gt;
Coming soon&lt;br /&gt;
&lt;br /&gt;
==Results by Task (More results are coming) ==&lt;br /&gt;
* [[2017:Audio Fingerprinting Results|Audio Fingerprinting Results]]&lt;br /&gt;
* [[2017:Set List Identification Results | Set List Identification Results]]&lt;br /&gt;
&lt;br /&gt;
*Train-Test Task Set&lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2017/results/act/composer_report/ Audio Classical Composer Identification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2017/results/act/latin_report/ Audio Latin Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2017/results/act/mood_report/index.html Audio Music Mood Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2017/results/act/mixed_report/ Audio Mixed Popular Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2017/results/act/kgenrek_report/ Audio KPOP Genre (Annotated by Korean Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2017/results/act/kgenrea_report/ Audio KPOP Genre (Annotated by American Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2017/results/act/kmoodk_report/ Audio KPOP Mood (Annotated by Korean Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2017/results/act/kmooda_report/ Audio KPOP Mood (Annotated by American Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2017/results/aod/ Audio Onset Detection Results] &amp;amp;nbsp;&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2017/results/ate/ Audio Tempo Estimation Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2017:Automatic Lyrics-to-Audio Alignment Results | Automatic Lyrics-to-Audio Alignment Results]]&lt;br /&gt;
* [[2017:Discovery of Repeated Themes &amp;amp; Sections Results|Discovery of Repeated Themes &amp;amp; Sections Results]]&lt;br /&gt;
&lt;br /&gt;
* Music Structure Segmentation Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2017/results/struct/mrx09/ MIREX09 dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2017/results/struct/mrx10_1/ RWC dataset - Quaero (MIREX10) Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2017/results/struct/mrx10_2/ RWC dataset - Original RWC Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2017/results/struct/salami/ SALAMI dataset] (partial) &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Beat Tracking Results &lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2017/results/abt/smc/ SMC Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2017/results/abt/maz/ MAZ Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2017/results/abt/mck/ MCK Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2017:Audio_Chord_Estimation_Results | Audio Chord Estimation]]&lt;br /&gt;
** [[2017:Audio_Chord_Estimation_Results#Isophonics2009 | Isophonics2009 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2017:Audio_Chord_Estimation_Results#Billboard2012 | Billboard2012 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2017:Audio_Chord_Estimation_Results#Billboard_013 | Billboard2013 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2017:Audio_Chord_Estimation_Results#JayChou29 | JayChou29 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2017:Audio_Chord_Estimation_Results#RobbieWilliams | RobbieWilliams Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2017:Audio_Chord_Estimation_Results#RWC-Popular | RWC-Popular Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2017:Audio_Chord_Estimation_Results#USPOP2002Chords | USPOP2002Chords Dataset]] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Multiple Fundamental Frequency Estimation &amp;amp; Tracking Results&lt;br /&gt;
** [[2017:Multiple_Fundamental_Frequency_Estimation_%26_Tracking_Results_%2D_MIREX_Dataset | MIREX Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2017:Multiple_Fundamental_Frequency_Estimation_%26_Tracking_Results_%2D_Su_Dataset | Su Dataset]] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2017:Drum_Transcription_Results | Drum Transcription]] &amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2017:Real-time_Audio_to_Score_Alignment_(a.k.a._Score_Following)_Results Real-time Audio to Score Alignment (a.k.a. Score Following) Results] &amp;amp;nbsp;&lt;br /&gt;
* [[2017:Audio_Key_Detection_Results | Audio Key Detection]] &amp;amp;nbsp;&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2017:Audio_Chord_Estimation_Results&amp;diff=12267</id>
		<title>2017:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2017:Audio_Chord_Estimation_Results&amp;diff=12267"/>
		<updated>2017-10-16T17:33:44Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: Creation of 2017 ACE results&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the 2017 edition of the MIREX automatic chord estimation tasks. This edition was the fifth one since the reorganization of the evaluation procedure in 2013. The results can therefore be directly compared to those of the last four years. Chord labels are evaluated according to five different chord vocabularies and the segmentation is also assessed. Additional information about the used measures can be found on the page of the [[2013:Audio_Chord_Estimation_Results_MIREX_2009 | 2013 edition]]. &lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
* This year the algorithms have additionally been evaluated on the &amp;quot;RWC-Popular&amp;quot; and &amp;quot;USPOP2002Chords&amp;quot; dataset annotated at the [http://steinhardt.nyu.edu/marl/ Music and Audio Research Lab] of NYU, whose annotations are [https://github.com/tmc323/Chord-Annotations publicly available]. The [https://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-p.html RWC-Popular dataset] contains 100 pop songs recorded specifically for music information retrieval research. The USPOP2002Chords set is the 195 file subset of the [https://labrosa.ee.columbia.edu/projects/musicsim/uspop2002.html USPOP2002 dataset] that have been annotated with chord sequences.&lt;br /&gt;
&lt;br /&gt;
==Software==&lt;br /&gt;
All software used for the evaluation has been made open-source. The evaluation framework is described by [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6637748 Pauwels and Peeters (2013)]. The corresponding binaries and code repository can be found on [https://github.com/jpauwels/MusOOEvaluator GitHub] and the used measures are available as presets. The raw algorithmic output will be provided later &amp;lt;!--provided below makes it possible to calculate the additional measures from the paper (separate results for tetrads, etc.), in addition to those presented below.--&amp;gt; More help can be found in the [https://github.com/jpauwels/MusOOEvaluator/blob/master/README.md readme].&lt;br /&gt;
&lt;br /&gt;
The statistical comparison between the different submissions is explained in [http://www.terasoft.com.tw/conf/ismir2014/proceedings/T095_250_Paper.pdf Burgoyne et al. (2014)]. The software is available at [https://bitbucket.org/jaburgoyne/mirexace BitBucket]. It uses the detailed results provided below as input.&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CM2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/CM2.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| JLW1, JLW2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/JLW1.pdf PDF]&lt;br /&gt;
| Junyan Jiang, Wei Li, Yiming Wu&lt;br /&gt;
|-&lt;br /&gt;
| KBK1, KBK2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/KBK1.pdf PDF]&lt;br /&gt;
| Filip Korzeniowski, Sebastian Böck, Florian Krebs&lt;br /&gt;
|-&lt;br /&gt;
| WL1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2017/WL1.pdf PDF]&lt;br /&gt;
| Yiming Wu, Wei Li&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best).&lt;br /&gt;
&lt;br /&gt;
=====Isophonics2009=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/ace/2017-Isophonics2009.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard2012=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/ace/2017-Billboard2012.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard2013=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/ace/2017-Billboard2013.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====JayChou29=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/ace/2017-JayChou29.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====RobbieWilliams=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/ace/2017-RobbieWilliams.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====RWC-Popular=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/ace/2017-RWC-Popular.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====USPOP2002Chords=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2017/ace/2017-USPOP2002Chords.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
An analysis of the statistical difference between all submissions can be found on the following pages:&lt;br /&gt;
&lt;br /&gt;
* [[2017:Audio_Chord_Estimation_Statistical_Analysis_Isophonics2009 | Isophonics2009 Dataset]]&lt;br /&gt;
* [[2017:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012 | Billboard2012 Dataset]]&lt;br /&gt;
* [[2017:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013 | Billboard2013 Dataset]]&lt;br /&gt;
* [[2017:Audio_Chord_Estimation_Statistical_Analysis_JayChou29 | JayChou29 Dataset]]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Detailed Results===&lt;br /&gt;
&lt;br /&gt;
Coming later&lt;br /&gt;
&amp;lt;!--More details about the performance of the algorithms, including per-song performance, confusion matrices and supplementary statistics, are available in this [https://music-ir.org/mirex/results/2016/ace/detailled-results-2016.zip zip-file].--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
Coming later&lt;br /&gt;
&amp;lt;!--The raw output of the algorithms are available on [https://github.com/ismir-mirex/ace-output/tree/master/2016 GitHub]. They can be used to experiment with alternative evaluation measures and statistics.--&amp;gt;&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2017:MIREX2017_Results&amp;diff=12205</id>
		<title>2017:MIREX2017 Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2017:MIREX2017_Results&amp;diff=12205"/>
		<updated>2017-10-12T11:43:09Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: /* Results by Task (More results are coming) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Overall Results Poster==&lt;br /&gt;
Coming soon&lt;br /&gt;
&lt;br /&gt;
==Results by Task (More results are coming) ==&lt;br /&gt;
* [[2017:Audio Fingerprinting Results|Audio Fingerprinting Results]]&lt;br /&gt;
* [[2017:Set List Identification Results | Set List Identification Results]]&lt;br /&gt;
&lt;br /&gt;
*Train-Test Task Set&lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2017/results/act/composer_report/ Audio Classical Composer Identification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2017/results/act/latin_report/ Audio Latin Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2017/results/act/mood_report/index.html Audio Music Mood Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2017/results/act/mixed_report/ Audio Mixed Popular Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2017/results/act/kgenrek_report/ Audio KPOP Genre (Annotated by Korean Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2017/results/act/kgenrea_report/ Audio KPOP Genre (Annotated by American Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2017/results/act/kmoodk_report/ Audio KPOP Mood (Annotated by Korean Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
** [https://www.music-ir.org/nema_out/mirex2017/results/act/kmooda_report/ Audio KPOP Mood (Annotated by American Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2017/results/aod/ Audio Onset Detection Results] &amp;amp;nbsp;&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2017/results/ate/ Audio Tempo Estimation Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2017:Discovery of Repeated Themes &amp;amp; Sections Results|Discovery of Repeated Themes &amp;amp; Sections Results]]&lt;br /&gt;
&lt;br /&gt;
* Music Structure Segmentation Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2017/results/struct/mrx09/ MIREX09 dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2017/results/struct/mrx10_1/ RWC dataset - Quaero (MIREX10) Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2017/results/struct/mrx10_2/ RWC dataset - Original RWC Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2017/results/struct/salami/ SALAMI dataset] (partial) &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Beat Tracking Results &lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2017/results/abt/smc/ SMC Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2017/results/abt/maz/ MAZ Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2017/results/abt/mck/ MCK Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2017:Audio_Chord_Estimation_Results | Audio Chord Estimation]]&lt;br /&gt;
** [[2017:Audio_Chord_Estimation_Results#Isophonics2009 | Isophonics2009 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2017:Audio_Chord_Estimation_Results#Billboard2012 | Billboard2012 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2017:Audio_Chord_Estimation_Results#Billboard_013 | Billboard2013 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2017:Audio_Chord_Estimation_Results#JayChou29 | JayChou29 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2017:Audio_Chord_Estimation_Results#RobbieWilliams | RobbieWilliams Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2017:Audio_Chord_Estimation_Results#RWC-Popular | RWC-Popular Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2017:Audio_Chord_Estimation_Results#USPOP2002Chords | USPOP2002Chords Dataset]] &amp;amp;nbsp;&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Audio_Chord_Estimation_Results_Billboard_2013&amp;diff=11958</id>
		<title>2013:Audio Chord Estimation Results Billboard 2013</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Audio_Chord_Estimation_Results_Billboard_2013&amp;diff=11958"/>
		<updated>2016-08-31T10:37:09Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: Add significant figures&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This year, we have started a new evaluation battery for audio chord estimation. This page contains the results of these new evaluations for a special subset of the ''Billboard'' dataset from McGill University that has never been made available to the public. Further subsets have been withheld to support the ACE task through MIREX 2015.&lt;br /&gt;
&lt;br /&gt;
==Why evaluate differently?==&lt;br /&gt;
&lt;br /&gt;
* Researchers interested in automatic chord estimation have been dissatisfied with the traditional evaluation techniques used for this task at MIREX.&lt;br /&gt;
&lt;br /&gt;
* Numerous alternatives have been proposed in the literature (Harte, 2010; Mauch, 2010; Pauwels &amp;amp; Peeters, 2013). &lt;br /&gt;
&lt;br /&gt;
* At ISMIR 2010 in Utrecht, a group discussed alternatives and developed the [[The_Utrecht_Agreement_on_Chord_Evaluation | Utrecht Agreement]] for updating the task, but until this year, nobody had implemented any of the suggestions.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
&lt;br /&gt;
===More precise recall estimation===&lt;br /&gt;
&lt;br /&gt;
* MIREX typically uses ''chord symbol recall'' (CSR) to estimate how well the predicted chords match the ground truth: the total duration of segments where the predictions match the ground truth divided by the total duration of the song. &lt;br /&gt;
&lt;br /&gt;
* In previous years, MIREX has used an approximate CSR by sampling both the ground-truth and the automatic annotations every 10 ms.&lt;br /&gt;
&lt;br /&gt;
* Following Harte (2010), we view the ground-truth and estimated annotations instead as continuous segmentations of the audio because (1) this is more precise and also (2) more computationally efficient. &lt;br /&gt;
&lt;br /&gt;
* Moreover, because pieces of music come in a wide variety of lengths, we believe it is better to weight the CSR by the length of the song. This final number is referred to as the ''weighted chord symbol recall'' (WCSR).&lt;br /&gt;
&lt;br /&gt;
===Advanced chord vocabularies===&lt;br /&gt;
&lt;br /&gt;
* We computed WCSR with five different chord vocabulary mappings: &lt;br /&gt;
# Chord root note only;&lt;br /&gt;
# Major and minor;&lt;br /&gt;
# Seventh chords;&lt;br /&gt;
# Major and minor with inversions; and&lt;br /&gt;
# Seventh chords with inversions. &lt;br /&gt;
&lt;br /&gt;
* With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels. &lt;br /&gt;
&lt;br /&gt;
* A mapping exists if both the root notes and bass notes match, and the structure of the output label is the largest possible subset of the input label given the vocabulary. &lt;br /&gt;
&lt;br /&gt;
* For instance, in the major and minor case, G:7(#9) is mapped to G:maj because the interval set of G:maj, {1,3,5}, is a subset of the interval set of the G:7(#9), {1,3,5,b7,#9}. In the seventh-chord case, G:7(#9) is mapped to G:7 instead because the interval set of G:7 {1, 3, 5, b7} is also a subset of G:7(#9) but is larger than G:maj.&lt;br /&gt;
&lt;br /&gt;
* Our recommendations are motivated by the frequencies of chord qualities in the ''Billboard'' corpus of American popular music (Burgoyne et al., 2011).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Most Frequent Chord Qualities in the ''Billboard'' Corpus&lt;br /&gt;
|- &lt;br /&gt;
! Quality&lt;br /&gt;
! Freq.&lt;br /&gt;
! Cum. Freq.&lt;br /&gt;
|-&lt;br /&gt;
| maj&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
|-&lt;br /&gt;
| min&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 13&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 65&lt;br /&gt;
|-&lt;br /&gt;
| 7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 10&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 75&lt;br /&gt;
|-&lt;br /&gt;
| min7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 8&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 83&lt;br /&gt;
|-&lt;br /&gt;
| maj7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 3&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 86&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Evaluation of segmentation===&lt;br /&gt;
&lt;br /&gt;
* The chord transcription literature includes several other evaluation metrics, which mainly focus on the segmentation of the transcription.&lt;br /&gt;
&lt;br /&gt;
* We propose to include the directional Hamming distance in the evaluation. The directional Hamming distance is calculated by finding for each annotated segment the maximally overlapping segment in the other annotation, and then summing the differences (Abdallah et al., 2005; Mauch, 2010). &lt;br /&gt;
&lt;br /&gt;
* Depending on the order of application, the directional Hamming distance yields a measure of over- or under-segmentation. To keep the scaling consistent with WCSR values (1.0 is best and 0.0 is worst), we report 1 – over-segmentation and 1 – under-segmentation, as well as the harmonic mean of these values (cf. Harte, 2010).&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CB3&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/CB3.pdf PDF]&lt;br /&gt;
| Taemin Cho &amp;amp; Juan P. Bello&lt;br /&gt;
|-&lt;br /&gt;
| CB4&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/CB4.pdf PDF]&lt;br /&gt;
| Taemin Cho &amp;amp; Juan P. Bello&lt;br /&gt;
|-&lt;br /&gt;
| CF2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/CF2.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch, Matthew E. P. Davies, Simon Dixon, Christian Landone, Katy Noland, Mark Levy, Massimiliano Zanoni, Dan Stowell &amp;amp; Luís A. Figueira&lt;br /&gt;
|-&lt;br /&gt;
| KO1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/KO1.pdf PDF]&lt;br /&gt;
| Maksim Khadkevich &amp;amp; Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| KO2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/KO2.pdf PDF]&lt;br /&gt;
| Maksim Khadkevich &amp;amp; Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| NG1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NG1.pdf PDF]&lt;br /&gt;
| Nikolay Glazyrin&lt;br /&gt;
|-&lt;br /&gt;
| NG2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NG2.pdf PDF]&lt;br /&gt;
| Nikolay Glazyrin&lt;br /&gt;
|-&lt;br /&gt;
| NMSD1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NMSD1.pdf PDF]&lt;br /&gt;
| Yizhao Ni, Matt Mcvicar, Raul Santos-Rodriguez &amp;amp; Tijl De Bie&lt;br /&gt;
|-&lt;br /&gt;
| NMSD2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NMSD2.pdf PDF]&lt;br /&gt;
| Yizhao Ni, Matt Mcvicar, Raul Santos-Rodriguez &amp;amp; Tijl De Bie&lt;br /&gt;
|-&lt;br /&gt;
| PP3&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/PP3.pdf PDF] &lt;br /&gt;
| Johan Pauwels &amp;amp; Geoffroy Peeters&lt;br /&gt;
|-&lt;br /&gt;
| PP4&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/PP4.pdf PDF] &lt;br /&gt;
| Johan Pauwels &amp;amp; Geoffroy Peeters&lt;br /&gt;
|-&lt;br /&gt;
| SB8&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/SB8.pdf PDF] &lt;br /&gt;
| Nikolaas Steenbergen &amp;amp; John Ashley Burgoyne&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The table is sorted on WCSR for the major-minor vocabulary. Algorithms that conducted training are marked with an asterisk; all others were submitted pre-trained.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2013/ace/2013-Billboard2013.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
===Complete Results===&lt;br /&gt;
&lt;br /&gt;
More detailed about the performance of the algorithms, including per-song performance and the breakdown of the WCSR calculations, is available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/BillboardTest2013.zip BillboardTest2013.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The recognition output and the ground-truth files are available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/BillboardTest2013Output.zip BillboardTest2013Output.zip]&lt;br /&gt;
&lt;br /&gt;
We hope to generate a graphical comparison of all algorithms against the ground truth early in 2014.&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Audio_Chord_Estimation_Results_Billboard_2012&amp;diff=11957</id>
		<title>2013:Audio Chord Estimation Results Billboard 2012</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Audio_Chord_Estimation_Results_Billboard_2012&amp;diff=11957"/>
		<updated>2016-08-31T10:36:27Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: Add significant figures&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This year, we have started a new evaluation battery for audio chord estimation. This page contains the results of these new evaluations for an abridged version of the ''Billboard'' dataset from McGill University, including a representative sample of American popular music from the 1950s through the 1990s, as used for MIREX 2012.&lt;br /&gt;
&lt;br /&gt;
==Why evaluate differently?==&lt;br /&gt;
&lt;br /&gt;
* Researchers interested in automatic chord estimation have been dissatisfied with the traditional evaluation techniques used for this task at MIREX.&lt;br /&gt;
&lt;br /&gt;
* Numerous alternatives have been proposed in the literature (Harte, 2010; Mauch, 2010; Pauwels &amp;amp; Peeters, 2013). &lt;br /&gt;
&lt;br /&gt;
* At ISMIR 2010 in Utrecht, a group discussed alternatives and developed the [[The_Utrecht_Agreement_on_Chord_Evaluation | Utrecht Agreement]] for updating the task, but until this year, nobody had implemented any of the suggestions.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
&lt;br /&gt;
===More precise recall estimation===&lt;br /&gt;
&lt;br /&gt;
* MIREX typically uses ''chord symbol recall'' (CSR) to estimate how well the predicted chords match the ground truth: the total duration of segments where the predictions match the ground truth divided by the total duration of the song. &lt;br /&gt;
&lt;br /&gt;
* In previous years, MIREX has used an approximate CSR by sampling both the ground-truth and the automatic annotations every 10 ms.&lt;br /&gt;
&lt;br /&gt;
* Following Harte (2010), we view the ground-truth and estimated annotations instead as continuous segmentations of the audio because (1) this is more precise and also (2) more computationally efficient. &lt;br /&gt;
&lt;br /&gt;
* Moreover, because pieces of music come in a wide variety of lengths, we believe it is better to weight the CSR by the length of the song. This final number is referred to as the ''weighted chord symbol recall'' (WCSR).&lt;br /&gt;
&lt;br /&gt;
===Advanced chord vocabularies===&lt;br /&gt;
&lt;br /&gt;
* We computed WCSR with five different chord vocabulary mappings: &lt;br /&gt;
# Chord root note only;&lt;br /&gt;
# Major and minor;&lt;br /&gt;
# Seventh chords;&lt;br /&gt;
# Major and minor with inversions; and&lt;br /&gt;
# Seventh chords with inversions. &lt;br /&gt;
&lt;br /&gt;
* With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels. &lt;br /&gt;
&lt;br /&gt;
* A mapping exists if both the root notes and bass notes match, and the structure of the output label is the largest possible subset of the input label given the vocabulary. &lt;br /&gt;
&lt;br /&gt;
* For instance, in the major and minor case, G:7(#9) is mapped to G:maj because the interval set of G:maj, {1,3,5}, is a subset of the interval set of the G:7(#9), {1,3,5,b7,#9}. In the seventh-chord case, G:7(#9) is mapped to G:7 instead because the interval set of G:7 {1, 3, 5, b7} is also a subset of G:7(#9) but is larger than G:maj.&lt;br /&gt;
&lt;br /&gt;
* Our recommendations are motivated by the frequencies of chord qualities in the ''Billboard'' corpus of American popular music (Burgoyne et al., 2011).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Most Frequent Chord Qualities in the ''Billboard'' Corpus&lt;br /&gt;
|- &lt;br /&gt;
! Quality&lt;br /&gt;
! Freq.&lt;br /&gt;
! Cum. Freq.&lt;br /&gt;
|-&lt;br /&gt;
| maj&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
|-&lt;br /&gt;
| min&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 13&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 65&lt;br /&gt;
|-&lt;br /&gt;
| 7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 10&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 75&lt;br /&gt;
|-&lt;br /&gt;
| min7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 8&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 83&lt;br /&gt;
|-&lt;br /&gt;
| maj7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 3&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 86&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Evaluation of segmentation===&lt;br /&gt;
&lt;br /&gt;
* The chord transcription literature includes several other evaluation metrics, which mainly focus on the segmentation of the transcription.&lt;br /&gt;
&lt;br /&gt;
* We propose to include the directional Hamming distance in the evaluation. The directional Hamming distance is calculated by finding for each annotated segment the maximally overlapping segment in the other annotation, and then summing the differences (Abdallah et al., 2005; Mauch, 2010). &lt;br /&gt;
&lt;br /&gt;
* Depending on the order of application, the directional Hamming distance yields a measure of over- or under-segmentation. To keep the scaling consistent with WCSR values (1.0 is best and 0.0 is worst), we report 1 – over-segmentation and 1 – under-segmentation, as well as the harmonic mean of these values (cf. Harte, 2010).&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CB3&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/CB3.pdf PDF]&lt;br /&gt;
| Taemin Cho &amp;amp; Juan P. Bello&lt;br /&gt;
|-&lt;br /&gt;
| CB4&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/CB4.pdf PDF]&lt;br /&gt;
| Taemin Cho &amp;amp; Juan P. Bello&lt;br /&gt;
|-&lt;br /&gt;
| CF2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/CF2.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch, Matthew E. P. Davies, Simon Dixon, Christian Landone, Katy Noland, Mark Levy, Massimiliano Zanoni, Dan Stowell &amp;amp; Luís A. Figueira&lt;br /&gt;
|-&lt;br /&gt;
| KO1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/KO1.pdf PDF]&lt;br /&gt;
| Maksim Khadkevich &amp;amp; Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| KO2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/KO2.pdf PDF]&lt;br /&gt;
| Maksim Khadkevich &amp;amp; Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| NG1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NG1.pdf PDF]&lt;br /&gt;
| Nikolay Glazyrin&lt;br /&gt;
|-&lt;br /&gt;
| NG2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NG2.pdf PDF]&lt;br /&gt;
| Nikolay Glazyrin&lt;br /&gt;
|-&lt;br /&gt;
| NMSD1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NMSD1.pdf PDF]&lt;br /&gt;
| Yizhao Ni, Matt Mcvicar, Raul Santos-Rodriguez &amp;amp; Tijl De Bie&lt;br /&gt;
|-&lt;br /&gt;
| NMSD2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NMSD2.pdf PDF]&lt;br /&gt;
| Yizhao Ni, Matt Mcvicar, Raul Santos-Rodriguez &amp;amp; Tijl De Bie&lt;br /&gt;
|-&lt;br /&gt;
| PP3&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/PP3.pdf PDF] &lt;br /&gt;
| Johan Pauwels &amp;amp; Geoffroy Peeters&lt;br /&gt;
|-&lt;br /&gt;
| PP4&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/PP4.pdf PDF] &lt;br /&gt;
| Johan Pauwels &amp;amp; Geoffroy Peeters&lt;br /&gt;
|-&lt;br /&gt;
| SB8&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/SB8.pdf PDF] &lt;br /&gt;
| Nikolaas Steenbergen &amp;amp; John Ashley Burgoyne&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The table is sorted on WCSR for the major-minor vocabulary. Algorithms that conducted training are marked with an asterisk; all others were submitted pre-trained.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2013/ace/2013-Billboard2012.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
===Complete Results===&lt;br /&gt;
&lt;br /&gt;
More detailed about the performance of the algorithms, including per-song performance and the breakdown of the WCSR calculations, is available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/BillboardTest2012.zip BillboardTest2012.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The recognition output and the ground-truth files are available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/BillboardTest2012Output.zip BillboardTest2012Output.zip]&lt;br /&gt;
&lt;br /&gt;
We hope to generate a graphical comparison of all algorithms against the ground truth early in 2014.&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Audio_Chord_Estimation_Results_MIREX_2009&amp;diff=11956</id>
		<title>2013:Audio Chord Estimation Results MIREX 2009</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Audio_Chord_Estimation_Results_MIREX_2009&amp;diff=11956"/>
		<updated>2016-08-31T10:35:33Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: Add significant figures&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This year, we have started a new evaluation battery for audio chord estimation. This page contains the results of these new evaluations for the Isophonics dataset, a.k.a. the MIREX 2009 dataset. It comprises the collected Beatles, Queen, and Zweieck datasets from Queen Mary, University of London, and has been used for audio chord estimation in MIREX for many years.&lt;br /&gt;
&lt;br /&gt;
==Why evaluate differently?==&lt;br /&gt;
&lt;br /&gt;
* Researchers interested in automatic chord estimation have been dissatisfied with the traditional evaluation techniques used for this task at MIREX.&lt;br /&gt;
&lt;br /&gt;
* Numerous alternatives have been proposed in the literature (Harte, 2010; Mauch, 2010; Pauwels &amp;amp; Peeters, 2013). &lt;br /&gt;
&lt;br /&gt;
* At ISMIR 2010 in Utrecht, a group discussed alternatives and developed the [[The_Utrecht_Agreement_on_Chord_Evaluation | Utrecht Agreement]] for updating the task, but until this year, nobody had implemented any of the suggestions.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
&lt;br /&gt;
===More precise recall estimation===&lt;br /&gt;
&lt;br /&gt;
* MIREX typically uses ''chord symbol recall'' (CSR) to estimate how well the predicted chords match the ground truth: the total duration of segments where the predictions match the ground truth divided by the total duration of the song. &lt;br /&gt;
&lt;br /&gt;
* In previous years, MIREX has used an approximate CSR by sampling both the ground-truth and the automatic annotations every 10 ms.&lt;br /&gt;
&lt;br /&gt;
* Following Harte (2010), we view the ground-truth and estimated annotations instead as continuous segmentations of the audio because (1) this is more precise and also (2) more computationally efficient. &lt;br /&gt;
&lt;br /&gt;
* Moreover, because pieces of music come in a wide variety of lengths, we believe it is better to weight the CSR by the length of the song. This final number is referred to as the ''weighted chord symbol recall'' (WCSR).&lt;br /&gt;
&lt;br /&gt;
===Advanced chord vocabularies===&lt;br /&gt;
&lt;br /&gt;
* We computed WCSR with five different chord vocabulary mappings: &lt;br /&gt;
# Chord root note only;&lt;br /&gt;
# Major and minor;&lt;br /&gt;
# Seventh chords;&lt;br /&gt;
# Major and minor with inversions; and&lt;br /&gt;
# Seventh chords with inversions. &lt;br /&gt;
&lt;br /&gt;
* With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels. &lt;br /&gt;
&lt;br /&gt;
* A mapping exists if both the root notes and bass notes match, and the structure of the output label is the largest possible subset of the input label given the vocabulary. &lt;br /&gt;
&lt;br /&gt;
* For instance, in the major and minor case, G:7(#9) is mapped to G:maj because the interval set of G:maj, {1,3,5}, is a subset of the interval set of the G:7(#9), {1,3,5,b7,#9}. In the seventh-chord case, G:7(#9) is mapped to G:7 instead because the interval set of G:7 {1, 3, 5, b7} is also a subset of G:7(#9) but is larger than G:maj.&lt;br /&gt;
&lt;br /&gt;
* Our recommendations are motivated by the frequencies of chord qualities in the ''Billboard'' corpus of American popular music (Burgoyne et al., 2011).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Most Frequent Chord Qualities in the ''Billboard'' Corpus&lt;br /&gt;
|- &lt;br /&gt;
! Quality&lt;br /&gt;
! Freq.&lt;br /&gt;
! Cum. Freq.&lt;br /&gt;
|-&lt;br /&gt;
| maj&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
|-&lt;br /&gt;
| min&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 13&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 65&lt;br /&gt;
|-&lt;br /&gt;
| 7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 10&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 75&lt;br /&gt;
|-&lt;br /&gt;
| min7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 8&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 83&lt;br /&gt;
|-&lt;br /&gt;
| maj7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 3&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 86&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Evaluation of segmentation===&lt;br /&gt;
&lt;br /&gt;
* The chord transcription literature includes several other evaluation metrics, which mainly focus on the segmentation of the transcription.&lt;br /&gt;
&lt;br /&gt;
* We propose to include the directional Hamming distance in the evaluation. The directional Hamming distance is calculated by finding for each annotated segment the maximally overlapping segment in the other annotation, and then summing the differences (Abdallah et al., 2005; Mauch, 2010). &lt;br /&gt;
&lt;br /&gt;
* Depending on the order of application, the directional Hamming distance yields a measure of over- or under-segmentation. To keep the scaling consistent with WCSR values (1.0 is best and 0.0 is worst), we report 1 – over-segmentation and 1 – under-segmentation, as well as the harmonic mean of these values (cf. Harte, 2010).&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CB3&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/CB3.pdf PDF]&lt;br /&gt;
| Taemin Cho &amp;amp; Juan P. Bello&lt;br /&gt;
|-&lt;br /&gt;
| CB4&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/CB4.pdf PDF]&lt;br /&gt;
| Taemin Cho &amp;amp; Juan P. Bello&lt;br /&gt;
|-&lt;br /&gt;
| CF2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/CF2.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch, Matthew E. P. Davies, Simon Dixon, Christian Landone, Katy Noland, Mark Levy, Massimiliano Zanoni, Dan Stowell &amp;amp; Luís A. Figueira&lt;br /&gt;
|-&lt;br /&gt;
| KO1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/KO1.pdf PDF]&lt;br /&gt;
| Maksim Khadkevich &amp;amp; Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| KO2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/KO2.pdf PDF]&lt;br /&gt;
| Maksim Khadkevich &amp;amp; Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| NG1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NG1.pdf PDF]&lt;br /&gt;
| Nikolay Glazyrin&lt;br /&gt;
|-&lt;br /&gt;
| NG2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NG2.pdf PDF]&lt;br /&gt;
| Nikolay Glazyrin&lt;br /&gt;
|-&lt;br /&gt;
| NMSD1&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NMSD1.pdf PDF]&lt;br /&gt;
| Yizhao Ni, Matt Mcvicar, Raul Santos-Rodriguez &amp;amp; Tijl De Bie&lt;br /&gt;
|-&lt;br /&gt;
| NMSD2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/NMSD2.pdf PDF]&lt;br /&gt;
| Yizhao Ni, Matt Mcvicar, Raul Santos-Rodriguez &amp;amp; Tijl De Bie&lt;br /&gt;
|-&lt;br /&gt;
| PP3&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/PP3.pdf PDF] &lt;br /&gt;
| Johan Pauwels &amp;amp; Geoffroy Peeters&lt;br /&gt;
|-&lt;br /&gt;
| PP4&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/PP4.pdf PDF] &lt;br /&gt;
| Johan Pauwels &amp;amp; Geoffroy Peeters&lt;br /&gt;
|-&lt;br /&gt;
| SB8&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2013/SB8.pdf PDF] &lt;br /&gt;
| Nikolaas Steenbergen &amp;amp; John Ashley Burgoyne&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The table is sorted on WCSR for the major-minor vocabulary. Algorithms that conducted training are marked with an asterisk; all others were submitted pre-trained.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2013/ace/2013-Isophonics2009.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
===Complete Results===&lt;br /&gt;
&lt;br /&gt;
More detailed about the performance of the algorithms, including per-song performance and the breakdown of the WCSR calculations, is available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/MirexChord2009.zip MirexChord2009.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The recognition output and the ground-truth files are available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/MirexChord2009Output.zip MirexChord2009Output.zip]&lt;br /&gt;
&lt;br /&gt;
We hope to generate a graphical comparison of all algorithms against the ground truth early in 2014.&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2016:Audio_Chord_Estimation_Results&amp;diff=11955</id>
		<title>2016:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2016:Audio_Chord_Estimation_Results&amp;diff=11955"/>
		<updated>2016-08-30T12:29:32Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: /* Detailed Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the 2016 edition of the MIREX automatic chord estimation tasks. This edition was the fourth one since the reorganization of the evaluation procedure in 2013. The results can therefore be directly compared to those of the last three years. Chord labels are evaluated according to five different chord vocabularies and the segmentation is also assessed. Additional information about the used measures can be found on the page of the [[2013:Audio_Chord_Estimation_Results_MIREX_2009 | 2013 edition]]. &lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
* This year the algorithms have been evaluated on the &amp;quot;Robbie Williams&amp;quot; dataset annotated at the [http://ispg.deib.polimi.it/ Image and Sound Processing Group] of Politecnico di Milano, which is [http://ispg.deib.polimi.it/mir-software.html publicly available]. A detailed description of this set can be found in [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&amp;amp;arnumber=6623838 DiGiorgi et al. (2013)].&lt;br /&gt;
&lt;br /&gt;
==Software==&lt;br /&gt;
All software used for the evaluation has been made open-source. The evaluation framework is described by [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6637748 Pauwels and Peeters (2013)]. The corresponding binaries and code repository can be found on [https://github.com/jpauwels/MusOOEvaluator GitHub] and the used measures are available as presets. The raw algorithmic output provided below makes it possible to calculate the additional measures from the paper (separate results for tetrads, etc.), in addition to those presented below. More help can be found in the [https://github.com/jpauwels/MusOOEvaluator/blob/master/README.md readme].&lt;br /&gt;
&lt;br /&gt;
The statistical comparison between the different submissions is explained in [http://www.terasoft.com.tw/conf/ismir2014/proceedings/T095_250_Paper.pdf Burgoyne et al. (2014)]. The software is available at [https://bitbucket.org/jaburgoyne/mirexace BitBucket]. It uses the detailed results provided below as input.&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CM1 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/CM1.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| DK1-DK4&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/DK1.pdf PDF]&lt;br /&gt;
| Junqi Deng, Yu-Kwong Kwok&lt;br /&gt;
|-&lt;br /&gt;
| FK2, FK4&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/FK2.pdf PDF]&lt;br /&gt;
| Filip Korzeniowski&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/KO1.pdf PDF]&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). &amp;lt;!--The table is sorted on WCSR for the major-minor vocabulary.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Isophonics 2009=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/ace/2016-Isophonics2009.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2012=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/ace/2016-Billboard2012.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2013=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/ace/2016-Billboard2013.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====JayChou 2015=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/ace/2016-JayChou2015.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====RobbieWilliams 2016=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/ace/2016-RobbieWilliams2016.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
In progress&amp;lt;!--An analysis of the statistical difference between all submissions can be found on the following pages:&lt;br /&gt;
&lt;br /&gt;
* [[2016:Audio_Chord_Estimation_Statistical_Analysis_Isophonics2009 | Isophonics 2009 Dataset]]&lt;br /&gt;
* [[2016:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012 | Billboard 2012 Dataset]]&lt;br /&gt;
* [[2016:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013 | Billboard 2013 Dataset]]&lt;br /&gt;
* [[2016:Audio_Chord_Estimation_Statistical_Analysis_JayChou2015 | JayChou 2015 Dataset]]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Detailed Results===&lt;br /&gt;
&lt;br /&gt;
More details about the performance of the algorithms, including per-song performance, confusion matrices and supplementary statistics, are available in this [https://music-ir.org/mirex/results/2016/ace/detailled-results-2016.zip zip-file].&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The raw output of the algorithms are available on [https://github.com/ismir-mirex/ace-output/tree/master/2016 GitHub]. They can be used to experiment with alternative evaluation measures and statistics.&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2016:Audio_Chord_Estimation_Results&amp;diff=11954</id>
		<title>2016:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2016:Audio_Chord_Estimation_Results&amp;diff=11954"/>
		<updated>2016-08-30T12:24:19Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: Add algorithmic output link&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the 2016 edition of the MIREX automatic chord estimation tasks. This edition was the fourth one since the reorganization of the evaluation procedure in 2013. The results can therefore be directly compared to those of the last three years. Chord labels are evaluated according to five different chord vocabularies and the segmentation is also assessed. Additional information about the used measures can be found on the page of the [[2013:Audio_Chord_Estimation_Results_MIREX_2009 | 2013 edition]]. &lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
* This year the algorithms have been evaluated on the &amp;quot;Robbie Williams&amp;quot; dataset annotated at the [http://ispg.deib.polimi.it/ Image and Sound Processing Group] of Politecnico di Milano, which is [http://ispg.deib.polimi.it/mir-software.html publicly available]. A detailed description of this set can be found in [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&amp;amp;arnumber=6623838 DiGiorgi et al. (2013)].&lt;br /&gt;
&lt;br /&gt;
==Software==&lt;br /&gt;
All software used for the evaluation has been made open-source. The evaluation framework is described by [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6637748 Pauwels and Peeters (2013)]. The corresponding binaries and code repository can be found on [https://github.com/jpauwels/MusOOEvaluator GitHub] and the used measures are available as presets. The raw algorithmic output provided below makes it possible to calculate the additional measures from the paper (separate results for tetrads, etc.), in addition to those presented below. More help can be found in the [https://github.com/jpauwels/MusOOEvaluator/blob/master/README.md readme].&lt;br /&gt;
&lt;br /&gt;
The statistical comparison between the different submissions is explained in [http://www.terasoft.com.tw/conf/ismir2014/proceedings/T095_250_Paper.pdf Burgoyne et al. (2014)]. The software is available at [https://bitbucket.org/jaburgoyne/mirexace BitBucket]. It uses the detailed results provided below as input.&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CM1 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/CM1.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| DK1-DK4&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/DK1.pdf PDF]&lt;br /&gt;
| Junqi Deng, Yu-Kwong Kwok&lt;br /&gt;
|-&lt;br /&gt;
| FK2, FK4&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/FK2.pdf PDF]&lt;br /&gt;
| Filip Korzeniowski&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/KO1.pdf PDF]&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). &amp;lt;!--The table is sorted on WCSR for the major-minor vocabulary.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Isophonics 2009=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/ace/2016-Isophonics2009.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2012=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/ace/2016-Billboard2012.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2013=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/ace/2016-Billboard2013.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====JayChou 2015=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/ace/2016-JayChou2015.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====RobbieWilliams 2016=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/ace/2016-RobbieWilliams2016.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
In progress&amp;lt;!--An analysis of the statistical difference between all submissions can be found on the following pages:&lt;br /&gt;
&lt;br /&gt;
* [[2016:Audio_Chord_Estimation_Statistical_Analysis_Isophonics2009 | Isophonics 2009 Dataset]]&lt;br /&gt;
* [[2016:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012 | Billboard 2012 Dataset]]&lt;br /&gt;
* [[2016:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013 | Billboard 2013 Dataset]]&lt;br /&gt;
* [[2016:Audio_Chord_Estimation_Statistical_Analysis_JayChou2015 | JayChou 2015 Dataset]]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Detailed Results===&lt;br /&gt;
&lt;br /&gt;
In progress&amp;lt;!--More details about the performance of the algorithms, including per-song performance and supplementary statistics, are available from these archives:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Isophonics2009Results.zip Isophonics2009Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2012Results.zip Billboard2012Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2013Results.zip Billboard2013Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/JayChou2015Results.zip JayChou2015Results.zip]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The raw output of the algorithms are available on [https://github.com/ismir-mirex/ace-output/tree/master/2016 GitHub]. They can be used to experiment with alternative evaluation measures and statistics.&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2016:Task_Captains&amp;diff=11927</id>
		<title>2016:Task Captains</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2016:Task_Captains&amp;diff=11927"/>
		<updated>2016-08-10T20:12:43Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: Update email address&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Like ISMIR 2015, we are prepared to improve the distribution of tasks for the upcoming MIREX 2016.  To do so, we really need leaders to help us organize and run each task.&lt;br /&gt;
&lt;br /&gt;
To volunteer to lead one or more tasks, please add your name in the &amp;quot;Captains&amp;quot; column.&lt;br /&gt;
&lt;br /&gt;
What does it mean to lead a task?&lt;br /&gt;
* Update wiki pages as needed&lt;br /&gt;
* Communicate with submitters and troubleshooting submissions&lt;br /&gt;
* Execution and evaluation of submissions&lt;br /&gt;
* Publishing final results&lt;br /&gt;
&lt;br /&gt;
Due to the proprietary nature of much of the data, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to provide access to task organizers to manage and run submissions on the IMIRSEL systems.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: 20px&amp;quot;&lt;br /&gt;
!ID !! Task !! Captain(s)&lt;br /&gt;
|-&lt;br /&gt;
|abt&lt;br /&gt;
|[[2016:Audio Beat Tracking]]&lt;br /&gt;
|Sebastian Böck (sebastian.boeck@jku.at), Florian Krebs (florian.krebs@jku.at)&lt;br /&gt;
|-&lt;br /&gt;
|ace&lt;br /&gt;
|[[2016:Audio Chord Estimation]]&lt;br /&gt;
|Johan Pauwels (j.pauwels@qmul.ac.uk)&lt;br /&gt;
|-&lt;br /&gt;
|act&lt;br /&gt;
|[[2016:Audio Classification (Train/Test) Tasks]]&lt;br /&gt;
|IMIRSEL (mirproject@lists.lis.illinois.edu)&lt;br /&gt;
|-&lt;br /&gt;
|acs&lt;br /&gt;
|[[2016:Audio Cover Song Identification]]&lt;br /&gt;
|Chris Tralie (chris.tralie@gmail.com)&lt;br /&gt;
|-&lt;br /&gt;
|ade&lt;br /&gt;
|[[2016:Audio Downbeat Estimation]]&lt;br /&gt;
|Florian Krebs (florian.krebs@jku.at), Sebastian Böck (sebastian.boeck@jku.at)&lt;br /&gt;
|-&lt;br /&gt;
|akd&lt;br /&gt;
|[[2016:Audio Key Detection]]&lt;br /&gt;
|Johan Pauwels (j.pauwels@qmul.ac.uk)&lt;br /&gt;
|-&lt;br /&gt;
|ame&lt;br /&gt;
|[[2016:Audio Melody Extraction]]&lt;br /&gt;
|KETI (dalwon@keti.re.kr)&lt;br /&gt;
|-&lt;br /&gt;
|ams&lt;br /&gt;
|[[2016:Audio Music Similarity and Retrieval]]&lt;br /&gt;
|IMIRSEL (mirproject@lists.lis.illinois.edu)&lt;br /&gt;
|-&lt;br /&gt;
|aod&lt;br /&gt;
|[[2016:Audio Onset Detection]]&lt;br /&gt;
|Sebastian Böck (sebastian.boeck@jku.at)&lt;br /&gt;
|-&lt;br /&gt;
|ate&lt;br /&gt;
|[[2016:Audio Tempo Estimation]]&lt;br /&gt;
|Aggelos Gkiokas (agkiokas@ilsp.gr)&lt;br /&gt;
|-&lt;br /&gt;
|mf0&lt;br /&gt;
|[[2016:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
|Yun Hao (yunhao2@illinois.edu)&lt;br /&gt;
|-&lt;br /&gt;
|qbsh&lt;br /&gt;
|[[2016:Query by Singing/Humming]]&lt;br /&gt;
|KETI (dalwon@keti.re.kr)&lt;br /&gt;
|-&lt;br /&gt;
|scofo&lt;br /&gt;
|[[2016:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
|Julio Carabias (julio.carabias@upf.edu)&lt;br /&gt;
|-&lt;br /&gt;
|struct&lt;br /&gt;
|[[2016:Structural Segmentation]]&lt;br /&gt;
|Piotr Organisciak (organis2@illinois.edu)&lt;br /&gt;
|-&lt;br /&gt;
|drts&lt;br /&gt;
|[[2016:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
|Tom Collins (tom.collins@jku.at)&lt;br /&gt;
|-&lt;br /&gt;
|sli&lt;br /&gt;
|[[2016:Set List Identification ]]&lt;br /&gt;
|Ming-Chi Yen (ymchiqq@gmail.com)&lt;br /&gt;
|-&lt;br /&gt;
|aod&lt;br /&gt;
|[[2016:Audio Offset Detection ]]&lt;br /&gt;
|David Heise (HeiseD@lincolnu.edu)&lt;br /&gt;
|-&lt;br /&gt;
|afp&lt;br /&gt;
|[[2016:Audio_Fingerprinting]]&lt;br /&gt;
|Chung-Che Wang (geniusturtle@mirlab.org)&lt;br /&gt;
|-&lt;br /&gt;
|svs&lt;br /&gt;
|[[2016:Singing Voice Separation]]&lt;br /&gt;
|Tak-Shing Chan (takshingchan@gmail.com)&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2016:Audio_Chord_Estimation_Results&amp;diff=11863</id>
		<title>2016:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2016:Audio_Chord_Estimation_Results&amp;diff=11863"/>
		<updated>2016-08-06T21:00:58Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: typo fix&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the 2016 edition of the MIREX automatic chord estimation tasks. This edition was the fourth one since the reorganization of the evaluation procedure in 2013. The results can therefore be directly compared to those of the last three years. Chord labels are evaluated according to five different chord vocabularies and the segmentation is also assessed. Additional information about the used measures can be found on the page of the [[2013:Audio_Chord_Estimation_Results_MIREX_2009 | 2013 edition]]. &lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
* This year the algorithms have been evaluated on the &amp;quot;Robbie Williams&amp;quot; dataset annotated at the [http://ispg.deib.polimi.it/ Image and Sound Processing Group] of Politecnico di Milano, which is [http://ispg.deib.polimi.it/mir-software.html publicly available]. A detailed description of this set can be found in [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&amp;amp;arnumber=6623838 DiGiorgi et al. (2013)].&lt;br /&gt;
&lt;br /&gt;
==Software==&lt;br /&gt;
All software used for the evaluation has been made open-source. The evaluation framework is described by [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6637748 Pauwels and Peeters (2013)]. The corresponding binaries and code repository can be found on [https://github.com/jpauwels/MusOOEvaluator GitHub] and the used measures are available as presets. The raw algorithmic output provided below makes it possible to calculate the additional measures from the paper (separate results for tetrads, etc.), in addition to those presented below. More help can be found in the [https://github.com/jpauwels/MusOOEvaluator/blob/master/README.md readme].&lt;br /&gt;
&lt;br /&gt;
The statistical comparison between the different submissions is explained in [http://www.terasoft.com.tw/conf/ismir2014/proceedings/T095_250_Paper.pdf Burgoyne et al. (2014)]. The software is available at [https://bitbucket.org/jaburgoyne/mirexace BitBucket]. It uses the detailed results provided below as input.&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CM1 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/CM1.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| DK1-DK4&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/DK1.pdf PDF]&lt;br /&gt;
| Junqi Deng, Yu-Kwong Kwok&lt;br /&gt;
|-&lt;br /&gt;
| FK2, FK4&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/FK2.pdf PDF]&lt;br /&gt;
| Filip Korzeniowski&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/KO1.pdf PDF]&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). &amp;lt;!--The table is sorted on WCSR for the major-minor vocabulary.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Isophonics 2009=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/ace/2016-Isophonics2009.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2012=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/ace/2016-Billboard2012.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2013=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/ace/2016-Billboard2013.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====JayChou 2015=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/ace/2016-JayChou2015.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====RobbieWilliams 2016=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/ace/2016-RobbieWilliams2016.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
In progress&amp;lt;!--An analysis of the statistical difference between all submissions can be found on the following pages:&lt;br /&gt;
&lt;br /&gt;
* [[2016:Audio_Chord_Estimation_Statistical_Analysis_Isophonics2009 | Isophonics 2009 Dataset]]&lt;br /&gt;
* [[2016:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012 | Billboard 2012 Dataset]]&lt;br /&gt;
* [[2016:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013 | Billboard 2013 Dataset]]&lt;br /&gt;
* [[2016:Audio_Chord_Estimation_Statistical_Analysis_JayChou2015 | JayChou 2015 Dataset]]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Detailed Results===&lt;br /&gt;
&lt;br /&gt;
In progress&amp;lt;!--More details about the performance of the algorithms, including per-song performance and supplementary statistics, are available from these archives:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Isophonics2009Results.zip Isophonics2009Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2012Results.zip Billboard2012Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2013Results.zip Billboard2013Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/JayChou2015Results.zip JayChou2015Results.zip]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
In progress &amp;lt;!--The raw output of the algorithms are available in the archives below. This can be used to experiment with alternative evaluation measures and statistics.&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Isophonics2009Output.zip Isophonics2009Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2012Output.zip Billboard2012Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2013Output.zip Billboard2013Output.zip]--&amp;gt;&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2016:Audio_Chord_Estimation_Results&amp;diff=11793</id>
		<title>2016:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2016:Audio_Chord_Estimation_Results&amp;diff=11793"/>
		<updated>2016-07-30T16:42:20Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: Correct external link formatting&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the 2016 edition of the MIREX automatic chord estimation tasks. This edition was the fourth one since the reorganization of the evaluation procedure in 2013. The results can therefore be directly compared to those of the last three years. Chord labels are evaluated according to five different chord vocabularies and the segmentation is also assessed. Additional information about the used measures can be found on the page of the [[2013:Audio_Chord_Estimation_Results_MIREX_2009 | 2013 edition]]. &lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
* This year the algorithms have been evaluated on the &amp;quot;Robbie Williams&amp;quot; dataset annotated at the [http://ispg.deib.polimi.it/ Image and Sound Processing Group] of Politecnico di Milano, which is [http://ispg.deib.polimi.it/mir-software.html publicly available]. A detailed description of this set can be found in [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&amp;amp;arnumber=6623838 DiGiorgi et al. (2013)].&lt;br /&gt;
&lt;br /&gt;
==Software==&lt;br /&gt;
All software used for the evaluation has been made open-source. The evaluation framework is described by [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6637748 Pauwels and Peeters (2013)]. The corresponding binaries and code repository can be found on [https://github.com/jpauwels/MusOOEvaluator GitHub] and the used measures are available as presets. The raw algorithmic output provided below makes it possible to calculate the additional measures from the paper (separate results for tetrads, etc.), in addition to those presented below. More help can be found in the [https://github.com/jpauwels/MusOOEvaluator/blob/master/README.md readme].&lt;br /&gt;
&lt;br /&gt;
The statistical comparison between the different submissions is explained in [http://www.terasoft.com.tw/conf/ismir2014/proceedings/T095_250_Paper.pdf Burgoyne et al. (2014)]. The software is available at [https://bitbucket.org/jaburgoyne/mirexace BitBucket]. It uses the detailed results provided below as input.&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CM1 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/CM1.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| DK1-DK4&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/DK1.pdf PDF]&lt;br /&gt;
| Junqi Deng, Yu-Kwong Kwok&lt;br /&gt;
|-&lt;br /&gt;
| FK2, FK4&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/FK2.pdf PDF]&lt;br /&gt;
| Filip Korzeniowski&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/KO1.pdf PDF]&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). &amp;lt;!--The table is sorted on WCSR for the major-minor vocabulary.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Isophonics 2009=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/ace/2016-Isophonics2009.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2012=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/ace/2016-Billboard2012.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Bllboard 2013=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/ace/2016-Billboard2013.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====JayChou 2015=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/ace/2016-JayChou2015.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====RobbieWilliams 2016=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/ace/2016-RobbieWilliams2016.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
In progress&amp;lt;!--An analysis of the statistical difference between all submissions can be found on the following pages:&lt;br /&gt;
&lt;br /&gt;
* [[2016:Audio_Chord_Estimation_Statistical_Analysis_Isophonics2009 | Isophonics 2009 Dataset]]&lt;br /&gt;
* [[2016:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012 | Billboard 2012 Dataset]]&lt;br /&gt;
* [[2016:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013 | Billboard 2013 Dataset]]&lt;br /&gt;
* [[2016:Audio_Chord_Estimation_Statistical_Analysis_JayChou2015 | JayChou 2015 Dataset]]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Detailed Results===&lt;br /&gt;
&lt;br /&gt;
In progress&amp;lt;!--More details about the performance of the algorithms, including per-song performance and supplementary statistics, are available from these archives:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Isophonics2009Results.zip Isophonics2009Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2012Results.zip Billboard2012Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2013Results.zip Billboard2013Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/JayChou2015Results.zip JayChou2015Results.zip]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
In progress &amp;lt;!--The raw output of the algorithms are available in the archives below. This can be used to experiment with alternative evaluation measures and statistics.&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Isophonics2009Output.zip Isophonics2009Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2012Output.zip Billboard2012Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2013Output.zip Billboard2013Output.zip]--&amp;gt;&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2016:Audio_Chord_Estimation_Results&amp;diff=11792</id>
		<title>2016:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2016:Audio_Chord_Estimation_Results&amp;diff=11792"/>
		<updated>2016-07-30T16:36:59Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: Create ACE results page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the 2016 edition of the MIREX automatic chord estimation tasks. This edition was the fourth one since the reorganization of the evaluation procedure in 2013. The results can therefore be directly compared to those of the last three years. Chord labels are evaluated according to five different chord vocabularies and the segmentation is also assessed. Additional information about the used measures can be found on the page of the [[2013:Audio_Chord_Estimation_Results_MIREX_2009 | 2013 edition]]. &lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
* This year the algorithms have been evaluated on the &amp;quot;Robbie Williams&amp;quot; dataset annotated at the [http://ispg.deib.polimi.it/ | Image and Sound Processing Group] of Politecnico di Milano, which is [http://ispg.deib.polimi.it/mir-software.html | publicly available]. A detailed description of this set can be found in [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&amp;amp;arnumber=6623838 | DiGiorgi et al. (2013)].&lt;br /&gt;
&lt;br /&gt;
==Software==&lt;br /&gt;
All software used for the evaluation has been made open-source. The evaluation framework is described by [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6637748 Pauwels and Peeters (2013)]. The corresponding code repository can be found on [https://github.com/jpauwels/MusOOEvaluator GitHub] and the used measures are available as presets. The raw algorithmic output provided below makes it possible to calculate the additional measures from the paper (separate results for tetrads, etc.), in addition to those presented below. More help can be found in the [https://github.com/jpauwels/MusOOEvaluator/blob/master/README.md readme].&lt;br /&gt;
&lt;br /&gt;
The statistical comparison between the different submissions is explained in [http://www.terasoft.com.tw/conf/ismir2014/proceedings/T095_250_Paper.pdf Burgoyne et al. (2014)]. The software is available at [https://bitbucket.org/jaburgoyne/mirexace BitBucket]. It uses the detailed results provided below as input.&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CM1 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/CM1.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| DK1-DK4&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/DK1.pdf PDF]&lt;br /&gt;
| Junqi Deng, Yu-Kwong Kwok&lt;br /&gt;
|-&lt;br /&gt;
| FK2, FK4&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/FK2.pdf PDF]&lt;br /&gt;
| Filip Korzeniowski&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/KO1.pdf PDF]&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). &amp;lt;!--The table is sorted on WCSR for the major-minor vocabulary.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Isophonics 2009=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/ace/2016-Isophonics2009.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2012=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/ace/2016-Billboard2012.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Bllboard 2013=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/ace/2016-Billboard2013.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====JayChou 2015=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/ace/2016-JayChou2015.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====RobbieWilliams 2016=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/ace/2016-RobbieWilliams2016.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
In progress&amp;lt;!--An analysis of the statistical difference between all submissions can be found on the following pages:&lt;br /&gt;
&lt;br /&gt;
* [[2016:Audio_Chord_Estimation_Statistical_Analysis_Isophonics2009 | Isophonics 2009 Dataset]]&lt;br /&gt;
* [[2016:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012 | Billboard 2012 Dataset]]&lt;br /&gt;
* [[2016:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013 | Billboard 2013 Dataset]]&lt;br /&gt;
* [[2016:Audio_Chord_Estimation_Statistical_Analysis_JayChou2015 | JayChou 2015 Dataset]]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Detailed Results===&lt;br /&gt;
&lt;br /&gt;
In progress&amp;lt;!--More details about the performance of the algorithms, including per-song performance and supplementary statistics, are available from these archives:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Isophonics2009Results.zip Isophonics2009Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2012Results.zip Billboard2012Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2013Results.zip Billboard2013Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/JayChou2015Results.zip JayChou2015Results.zip]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
In progress &amp;lt;!--The raw output of the algorithms are available in the archives below. This can be used to experiment with alternative evaluation measures and statistics.&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Isophonics2009Output.zip Isophonics2009Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2012Output.zip Billboard2012Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2013Output.zip Billboard2013Output.zip]--&amp;gt;&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2016:MIREX2016_Results&amp;diff=11791</id>
		<title>2016:MIREX2016 Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2016:MIREX2016_Results&amp;diff=11791"/>
		<updated>2016-07-30T16:16:39Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: Added link to main ACE page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* Audio Key Detection Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/akd/mrx_05 MIREX 2005 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/akd/gsteps GiantSteps Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Melody Extraction Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/ame/adc04/  ADC04 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/ame/mrx05/ MIREX05 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/ame/ind08/ INDIAN08 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/ame/mrx09_0db/ MIREX09 0dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/ame/mrx09_m5db/ MIREX09 -5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/ame/mrx09_p5db/ MIREX09 +5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/ame/orchset/ ORCHSET15 Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Query-by-Singing/Humming Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/qbsh/qbsh_task1_hidden/  Hidden Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/qbsh/qbsh_task1_jang/  Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/qbsh/qbsh_task1_thinkit/ ThinkIt Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/qbsh/qbsh_task1_ioacas/ IOACAS Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/qbsh/qbsh_task2_jang/ Subtask2 Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2016:Audio Downbeat Estimation Results|Audio Downbeat Estimation Results]]&lt;br /&gt;
* [[2016:Audio Fingerprinting Results|Audio Fingerprinting Results]]&lt;br /&gt;
&lt;br /&gt;
* Audio Beat Tracking Results &lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/abt/smc/ SMC Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/abt/maz/ MAZ Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/abt/mck/ MCK Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2016/results/aod/ Audio Onset Detection Results] &amp;amp;nbsp;&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2016/results/ate/ Audio Tempo Estimation Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2016:Audio_Chord_Estimation_Results | Audio Chord Estimation]]&lt;br /&gt;
** [[2016:Audio_Chord_Estimation_Results#Isophonics_2009 | Isophonics 2009 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2016:Audio_Chord_Estimation_Results#Billboard_2012 | Billboard 2012 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2016:Audio_Chord_Estimation_Results#Billboard_2013 | Billboard 2013 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2016:Audio_Chord_Estimation_Results#JayChou_2015 | JayChou 2015 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2016:Audio_Chord_Estimation_Results#RobbieWilliams_2016 | RobbieWilliams 2016 Dataset]] &amp;amp;nbsp;&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2016:MIREX2016_Results&amp;diff=11790</id>
		<title>2016:MIREX2016 Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2016:MIREX2016_Results&amp;diff=11790"/>
		<updated>2016-07-30T16:14:47Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: Add ACE results&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* Audio Key Detection Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/akd/mrx_05 MIREX 2005 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/akd/gsteps GiantSteps Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Melody Extraction Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/ame/adc04/  ADC04 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/ame/mrx05/ MIREX05 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/ame/ind08/ INDIAN08 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/ame/mrx09_0db/ MIREX09 0dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/ame/mrx09_m5db/ MIREX09 -5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/ame/mrx09_p5db/ MIREX09 +5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/ame/orchset/ ORCHSET15 Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Query-by-Singing/Humming Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/qbsh/qbsh_task1_hidden/  Hidden Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/qbsh/qbsh_task1_jang/  Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/qbsh/qbsh_task1_thinkit/ ThinkIt Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/qbsh/qbsh_task1_ioacas/ IOACAS Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/qbsh/qbsh_task2_jang/ Subtask2 Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2016:Audio Downbeat Estimation Results|Audio Downbeat Estimation Results]]&lt;br /&gt;
* [[2016:Audio Fingerprinting Results|Audio Fingerprinting Results]]&lt;br /&gt;
&lt;br /&gt;
* Audio Beat Tracking Results &lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/abt/smc/ SMC Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/abt/maz/ MAZ Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2016/results/abt/mck/ MCK Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2016/results/aod/ Audio Onset Detection Results] &amp;amp;nbsp;&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2016/results/ate/ Audio Tempo Estimation Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Chord Estimation&lt;br /&gt;
** [[2016:Audio_Chord_Estimation_Results#Isophonics_2009 | Isophonics 2009 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2016:Audio_Chord_Estimation_Results#Billboard_2012 | Billboard 2012 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2016:Audio_Chord_Estimation_Results#Billboard_2013 | Billboard 2013 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2016:Audio_Chord_Estimation_Results#JayChou_2015 | JayChou 2015 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2016:Audio_Chord_Estimation_Results#RobbieWilliams_2016 | RobbieWilliams 2016 Dataset]] &amp;amp;nbsp;&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2010:MIREX_2010_Poster_List&amp;diff=7742</id>
		<title>2010:MIREX 2010 Poster List</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2010:MIREX_2010_Poster_List&amp;diff=7742"/>
		<updated>2010-08-10T09:25:53Z</updated>

		<summary type="html">&lt;p&gt;JohanPauwels: /* Add your author names here, once for each poster along with &amp;quot;title of some sort&amp;quot; and (Task(s) covered) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==MIREX 2010 Poster Session Planning List==&lt;br /&gt;
The MIREX 2010 Poster Session will be held Wednesday, 11 August: 16:00 - 17:45. We will be holding the MIREX plenary meeting 13:00-14:00 as a working lunch on the same day.&lt;br /&gt;
&lt;br /&gt;
Our hosts in Utrecht need to know the number of posters so they can set up the room. Please add you name and the task(s) dealt with in your poster. &lt;br /&gt;
&lt;br /&gt;
We had many groups/individuals submit across tasks. You can choose to create one ISMIR poster bringing all your data together or can split up your data across, say, two or three posters. If you have questions, please contact me at jdownie@illinois.edu or the MIREX mailing list about task poster options.&lt;br /&gt;
&lt;br /&gt;
As a reminder, the MIREX posters need to follow the [http://ismir2010.ismir.net/information-for-authors/information-for-presenters/ ISMIR 2010 poster guidelines] (i.e., A0, portrait orientation).&lt;br /&gt;
&lt;br /&gt;
==Add your author names here, once for each poster along with &amp;quot;title of some sort&amp;quot; and (Task(s) covered)==&lt;br /&gt;
# IMIRSEL: ''MIREX 2010 Overview, Part I'' (Train Test Tasks)&lt;br /&gt;
# IMIRSEL: ''MIREX 2010 Overview, Part II'' (All Other Tasks)&lt;br /&gt;
# Andreas Arzt and Gerhard Widmer: &amp;quot;Real-time Music Tracking using Tempo-aware On-line Dynamic Time Warping&amp;quot; (Real-time Audio to Score Alignment (a.k.a Score Following))&lt;br /&gt;
# Pasi Saari and Olivier Lartillot: &amp;quot;SubEnsemble - Classification framework based on the Ensemble Approach and Feature Selection&amp;quot; (Train Test Tasks)&lt;br /&gt;
# Gabriel Sargent, Frédéric Bimbot and Emmanuel Vincent: &amp;quot;Structural segmentation of songs using multi-criteria generalized likelihood ratio and regularity constraints&amp;quot; (Structural Segmentation Task)&lt;br /&gt;
# Emmanouil Benetos and Simon Dixon: &amp;quot;Multiple fundamental frequency estimation using spectral structure and temporal evolution rules&amp;quot; (Multiple Fundamental Frequency Estimation &amp;amp; Tracking Task)&lt;br /&gt;
# J. Urbano, J. Lloréns, J. Morato and S. Sánchez-Cuadrado: ''Local Alignment with Geometric Representations'' (Symbolic Melodic Similarity)&lt;br /&gt;
# F.J.Rodriguez-Serrano, P.Vera-Candeas, P.Cabanas-Molero, J.J.Carabias-Orti, N.Ruiz-Reyes: ''AM Sinusoidal Modeling for Onset Detection FOR ONSET DETECTION'' (Audio Onset Detection)&lt;br /&gt;
# R.Mata-Campos, F.J.Rodriguez-Serrano, P.Vera-Candeas, J.J.Carabias-Orti, F.J.Canadas-Quesada: ''Beat Tracking improved by AM Sinusoidal Modeled Onsets'' (Audio Beat Tracking)&lt;br /&gt;
# F.J.Rodriguez-Serrano, P.Vera-Candeas,  J.J.Carabias-Orti,P.Cabanas-Molero, N.Ruiz-Reyes: ''Real time audio to score alignment based on NLS multipitch estimation'' (Real-time Audio to Score Alignment (a.k.a Score Following))&lt;br /&gt;
# F.J. Cañadas-Quesada, F. Rodríguez-Serrano, P. Vera-Candeas, N. Ruiz-Reyes and J. Carabias-Orti: ''Multiple Fundamental Frequency Estimation &amp;amp; Tracking in Polyphonic Music for MIREX 2010'' (Multiple Fundamental Frequency Estimation &amp;amp; Tracking)&lt;br /&gt;
# Zhiyao Duan and Bryan Pardo: &amp;quot;A Real-time Score Follower for MIREX 2010&amp;quot; (Real-time Audio to Score Alignment (a.k.a Score Following))&lt;br /&gt;
# Zhiyao Duan, Jinyu Han and Bryan Pardo: &amp;quot;A Multi-pitch Estimation and Tracking System&amp;quot; (Multiple Fundamental Frequency Estimation &amp;amp; Tracking Task)&lt;br /&gt;
# I.S.H.Suyoto and A.L.Uitdenbogerd: &amp;quot;Orthogonal Pitch with IOI Symbolic Music Matching&amp;quot; (Symbolic Melodic Similarity)&lt;br /&gt;
# J.-C. Wang, H.-Y. Lo, S.-K. Jeng and H.-M. Wang: &amp;quot;IISSLG Team: Audio Train/Test and Tag Classification for MIREX 2010&amp;quot; (Audio Train/Test Classification, Audio Tag Classification)&lt;br /&gt;
# E. Di Buccio, N. Montecchio and N. Orio: &amp;quot;Applying Text-Based IR Techniques to Cover Song Identification&amp;quot; (Audio Cover Song Identification)&lt;br /&gt;
# F. Eyben, B. Schuller:  &amp;quot;MIREX 2010: Music Classification with the Munich openSMILE toolkit.&amp;quot; (Audio Train/Test Tasks; Audio Tempo Estimation)&lt;br /&gt;
# A. Gkiokas, V. Katsouros, G. Carayannis : &amp;quot;MIREX 2010 : Tempo Induction Using Filterbank Analysis and Tonal Features&amp;quot; (Audio Tempo Estimation)&lt;br /&gt;
# Y. Zhu, H. Tan, L. Chaisorn: &amp;quot;Poster #1&amp;quot; on Audio Beat Tracking&lt;br /&gt;
# H. Tan, Y. Zhu, L. Chaisorn: &amp;quot;Poster #2&amp;quot; on Audio Onset Detection&lt;br /&gt;
# J. Salamon, E. Gómez: ''MIREX 2010: Melody Extraction from Polyphonic Audio Music'' (Audio Melody Extraction)&lt;br /&gt;
# Aylon E., Bogdanov D., Herrera P., Laurier C., Serrà J., Wack N.: ''MIREX 2010: Train/Test, Audio Music Similarity and Tempo/Beat estimation'' (Audio Train/Test, Audio Music Similarity, Tempo/Beat estimation)&lt;br /&gt;
# P. Hamel, D. Eck : &amp;quot;Learning features from music audio with Deep Belief Networks&amp;quot; (Audio genre and tag classification)&lt;br /&gt;
# R. Weiss, J. Bello : &amp;quot;Music Structure Segmentation Using Shift-Invariant Probabilistic Latent Component Analysis&amp;quot; (Audio Structural Segmentation)&lt;br /&gt;
# K. Seyerlehner, M. Schedl, T. Pohle, P. Knees, : &amp;quot;Using Block-Level Features for Genre Classification, Tag Classification and Music Similarity Estimation&amp;quot; (Train-Test Task Set, Audio Music Similarity and Retrieval, Audio Tag Classification)&lt;br /&gt;
# K. Suzuki, Y. Ueda, S. Raczynski, N. Ono, S. Sagayama : ''Real-time Audio to Score Alignment Using Locally-constrained Dynamic Time Warping of Chromagrams'' (Real-time Audio to Score Alignment (a.k.a Score Following))&lt;br /&gt;
# H. Rump, S. Miyabe, E. Tsunoo, N. Ono, S. Sagayama : ''Autoregressive MFCC Models for Genre Classification Improved by Harmonic-Percussion Separation'' (Audio Train/Test)&lt;br /&gt;
# Johan Pauwels and Jean-Pierre Martens: &amp;quot;Integrating musicological knowledge into a probabilistic system for chord and key extraction&amp;quot; (Audio Chord Estimation/Audio Key Extraction))&lt;br /&gt;
&lt;br /&gt;
==Below are some examples from MIREX 2009==&lt;br /&gt;
&lt;br /&gt;
# Matt Hoffman: ''Using CBA to Automatically Tag Songs'' (Audio tag classification/retrieval)&lt;br /&gt;
# Suman Ravuri, Dan Ellis: ''The Hydra System of Cover Song Classification'' (Cover Song Identification)&lt;br /&gt;
# Joan Serra, Massimiliano Zanin, Ralph G Andrzejak: ''Cover song retrieval by recurrence quantification and unsupervised set detection'' (Cover Song Identification)&lt;br /&gt;
# MTG Team: &amp;quot;Music Type Groupers (MTG): Generic Music Classification Algorithms&amp;quot; (Audio Genre Classification, Mood Classification, Artist Identification, Classical Composer Identification)&lt;br /&gt;
# R. Jang: &amp;quot;Poster #2&amp;quot; (placeholder to get the auto-counter to increment)&lt;br /&gt;
# R. Jang: &amp;quot;Poster #3&amp;quot; (placeholder to get the auto-counter to increment)&lt;/div&gt;</summary>
		<author><name>JohanPauwels</name></author>
		
	</entry>
</feed>