<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Johan+Pauwels</id>
	<title>MIREX Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Johan+Pauwels"/>
	<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/wiki/Special:Contributions/Johan_Pauwels"/>
	<updated>2026-04-13T18:48:19Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.1</generator>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2016:Task_Captains&amp;diff=11670</id>
		<title>2016:Task Captains</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2016:Task_Captains&amp;diff=11670"/>
		<updated>2016-02-22T18:17:27Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: Added myself as team captain&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;As in 2015, we are again looking to distribute the organization of tasks for the upcoming MIREX 2016. To do so, we need leaders to help us organize and run each task.&lt;br /&gt;
&lt;br /&gt;
To volunteer to lead one or more tasks, please add your name in the &amp;quot;Captains&amp;quot; column.&lt;br /&gt;
&lt;br /&gt;
What does it mean to lead a task?&lt;br /&gt;
* Updating wiki pages as needed&lt;br /&gt;
* Communicating with submitters and troubleshooting submissions&lt;br /&gt;
* Running and evaluating submissions&lt;br /&gt;
* Publishing final results&lt;br /&gt;
&lt;br /&gt;
Due to the proprietary nature of much of the data, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to give task captains access to manage and run submissions on the IMIRSEL systems.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: 20px&amp;quot;&lt;br /&gt;
!ID !! Task !! Captain(s)&lt;br /&gt;
|-&lt;br /&gt;
|abt&lt;br /&gt;
|[[2016:Audio Beat Tracking]]&lt;br /&gt;
|Sebastian Böck, Florian Krebs&lt;br /&gt;
|-&lt;br /&gt;
|ace&lt;br /&gt;
|[[2016:Audio Chord Estimation]]&lt;br /&gt;
|Johan Pauwels&lt;br /&gt;
|-&lt;br /&gt;
|act&lt;br /&gt;
|[[2016:Audio Classification (Train/Test) Tasks]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|acs&lt;br /&gt;
|[[2016:Audio Cover Song Identification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ade&lt;br /&gt;
|[[2016:Audio Downbeat Estimation]]&lt;br /&gt;
|Florian Krebs, Sebastian Böck&lt;br /&gt;
|-&lt;br /&gt;
|akd&lt;br /&gt;
|[[2016:Audio Key Detection]]&lt;br /&gt;
|Johan Pauwels&lt;br /&gt;
|-&lt;br /&gt;
|ame&lt;br /&gt;
|[[2016:Audio Melody Extraction]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ams&lt;br /&gt;
|[[2016:Audio Music Similarity and Retrieval]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|aod&lt;br /&gt;
|[[2016:Audio Onset Detection]]&lt;br /&gt;
|Sebastian Böck&lt;br /&gt;
|-&lt;br /&gt;
|ate&lt;br /&gt;
|[[2016:Audio Tempo Estimation]]&lt;br /&gt;
|Aggelos Gkiokas&lt;br /&gt;
|-&lt;br /&gt;
|atg&lt;br /&gt;
|[[2016:Audio Tag Classification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mf0&lt;br /&gt;
|[[2016:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qbsh&lt;br /&gt;
|[[2016:Query by Singing/Humming]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|scofo&lt;br /&gt;
|[[2016:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
|Julio Carabias&lt;br /&gt;
|-&lt;br /&gt;
|sms&lt;br /&gt;
|[[2016:Symbolic Melodic Similarity]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|struct&lt;br /&gt;
|[[2016:Structural Segmentation]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|drts&lt;br /&gt;
|[[2016:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sli&lt;br /&gt;
|[[2016:Set List Identification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mscd&lt;br /&gt;
|[[2016:Music/Speech Classification and Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|aod&lt;br /&gt;
|[[2016:Audio Offset Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|afp&lt;br /&gt;
|[[2016:Audio_Fingerprinting]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|svs&lt;br /&gt;
|[[2016:Singing Voice Separation]]&lt;br /&gt;
|&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Audio_Chord_Estimation_Results&amp;diff=11574</id>
		<title>2015:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Audio_Chord_Estimation_Results&amp;diff=11574"/>
		<updated>2015-10-22T22:32:49Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: Added note about CM3&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the 2015 edition of the MIREX automatic chord estimation tasks, the third edition since the evaluation procedure was reorganized in 2013. The results can therefore be compared directly to those of the previous two years. Chord labels are evaluated against five different chord vocabularies, and the segmentation is assessed as well. Additional information about the measures used can be found on the page of the [[2013:Audio_Chord_Estimation_Results_MIREX_2009 | 2013 edition]].&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
* A new data set, called &amp;quot;JayChou 2015&amp;quot;, has been donated by [http://tangkk.net Junqi Deng] of the [http://www.hku.hk University of Hong Kong]. It consists of 29 Mandopop songs taken from various albums by [https://en.wikipedia.org/wiki/Jay_Chou Jay Chou]. Most of the songs are ballads, and special attention has been paid to the annotation of extended chords and inversions. Because Junqi was kind enough to provide this set before its official publication, the algorithmic output on these files and their ground truth have been withheld for the time being. The file names in the per-track results have also been anonymized.&lt;br /&gt;
* The algorithmic output and per-track results on the Isophonics set now display the unmasked song names, so that an evaluation per artist or album can be performed.&lt;br /&gt;
&lt;br /&gt;
==Software==&lt;br /&gt;
All software used for the evaluation has been made open source. The evaluation framework is described in [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6637748 Pauwels and Peeters (2013)]. The corresponding code repository can be found on [https://github.com/jpauwels/MusOOEvaluator GitHub], and the measures used are available as presets. The raw algorithmic output provided below makes it possible to calculate the additional measures from the paper (separate results for tetrads, etc.) beyond those presented here. More help can be found in the [https://github.com/jpauwels/MusOOEvaluator/blob/master/README.md readme].&lt;br /&gt;
&lt;br /&gt;
The statistical comparison between the different submissions is explained in [http://www.terasoft.com.tw/conf/ismir2014/proceedings/T095_250_Paper.pdf Burgoyne et al. (2014)]. The software is available on [https://bitbucket.org/jaburgoyne/mirexace Bitbucket] and uses the detailed results provided below as input.&lt;br /&gt;
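To illustrate the kind of paired, per-song comparison such an analysis rests on, here is a generic rank-based sketch in Python. It is a plain Friedman statistic over per-song scores, not the Burgoyne et al. (2014) method or the mirexace code; every name in it is illustrative.&lt;br /&gt;

```python
# Sketch of a Friedman test over per-song scores: a standard way to
# ask whether several systems, evaluated on the same songs, differ.
# NOT the Burgoyne et al. (2014) implementation; illustration only.

def friedman_statistic(scores):
    """scores[i][j] = score of system j on song i.
    Returns the Friedman chi-square statistic; larger values give
    stronger evidence that the systems differ."""
    n = len(scores)      # number of songs
    k = len(scores[0])   # number of systems
    rank_sums = [0.0] * k
    for row in scores:
        # Rank the systems within this song (1 = worst), averaging ties.
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average rank for the tie group
            for t in range(i, j + 1):
                ranks[order[t]] = avg
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    return (12.0 / (n * k * (k + 1))) * sum(r * r for r in rank_sums) \
        - 3.0 * n * (k + 1)
```

With identical scores everywhere the statistic is 0; when one system beats another on every song it reaches its maximum of n(k-1).&lt;br /&gt;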
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CM3 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CM3.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| DK4-DK9&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/DK4.pdf PDF]&lt;br /&gt;
| Junqi Deng, Yu-Kwong Kwok&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/KO1.pdf PDF]&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages, ranging from 0 (worst) to 100 (best). The tables are sorted by WCSR for the major-minor vocabulary.&lt;br /&gt;
&lt;br /&gt;
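As a rough illustration of how a weighted chord symbol recall (WCSR) aggregates over a dataset, the sketch below computes a duration-weighted recall from segment lists. It is a simplified, hypothetical reimplementation for clarity, not the MusOOEvaluator code, and it assumes the chord labels have already been mapped into the evaluation vocabulary.&lt;br /&gt;

```python
# Hypothetical sketch of weighted chord symbol recall (WCSR): the
# fraction of annotated time on which the estimated chord matches the
# ground truth, with labels pre-mapped into one vocabulary.

def overlap_score(est_segments, ref_segments):
    """Summed duration on which estimate and reference agree.
    Segments are (start, end, label) triples in seconds."""
    correct = 0.0
    for rs, re, rlab in ref_segments:
        for es, ee, elab in est_segments:
            ov = min(re, ee) - max(rs, es)
            if ov > 0 and elab == rlab:
                correct += ov
    return correct

def wcsr(tracks):
    """Dataset-level WCSR in percent. tracks is a list of
    (est_segments, ref_segments) pairs; longer songs weigh more."""
    correct = sum(overlap_score(e, r) for e, r in tracks)
    total = sum(re - rs for _, r in tracks for rs, re, _ in r)
    return 100.0 * correct / total
```

Because songs are pooled by duration rather than averaged per track, a long song influences the figure more than a short one, which is the "weighted" part of the measure.&lt;br /&gt;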
=====Isophonics 2009=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/ace/Isophonics2009.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2012=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/ace/Billboard2012.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2013=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/ace/Billboard2013.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====JayChou 2015=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/ace/JayChou2015.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====CM3 update=====&lt;br /&gt;
Chris Cannam and Matthias Mauch have informed us that they intended to resubmit last year's system. Unfortunately, a small change that was not supposed to affect the output introduced a serious bug, which they only realised after seeing these results. They want the community to know that [[2014:Audio_Chord_Estimation_Results | last year's results]] are more representative of their system's capabilities.&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
An analysis of the statistical difference between all submissions can be found on the following pages:&lt;br /&gt;
&lt;br /&gt;
* [[2015:Audio_Chord_Estimation_Statistical_Analysis_Isophonics2009 | Isophonics 2009 Dataset]]&lt;br /&gt;
* [[2015:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012 | Billboard 2012 Dataset]]&lt;br /&gt;
* [[2015:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013 | Billboard 2013 Dataset]]&lt;br /&gt;
* [[2015:Audio_Chord_Estimation_Statistical_Analysis_JayChou2015 | JayChou 2015 Dataset]]&lt;br /&gt;
&lt;br /&gt;
===Detailed Results===&lt;br /&gt;
&lt;br /&gt;
More details about the performance of the algorithms, including per-song performance and supplementary statistics, are available from these archives:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Isophonics2009Results.zip Isophonics2009Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2012Results.zip Billboard2012Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2013Results.zip Billboard2013Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/JayChou2015Results.zip JayChou2015Results.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The raw output of the algorithms is available in the archives below. It can be used to experiment with alternative evaluation measures and statistics.&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Isophonics2009Output.zip Isophonics2009Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2012Output.zip Billboard2012Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2013Output.zip Billboard2013Output.zip]&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Audio_Chord_Estimation_Results&amp;diff=11566</id>
		<title>2015:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Audio_Chord_Estimation_Results&amp;diff=11566"/>
		<updated>2015-10-21T18:45:55Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: Fix latest edit&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the 2015 edition of the MIREX automatic chord estimation tasks, the third edition since the evaluation procedure was reorganized in 2013. The results can therefore be compared directly to those of the previous two years. Chord labels are evaluated against five different chord vocabularies, and the segmentation is assessed as well. Additional information about the measures used can be found on the page of the [[2013:Audio_Chord_Estimation_Results_MIREX_2009 | 2013 edition]].&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
* A new data set, called &amp;quot;JayChou 2015&amp;quot;, has been donated by [http://tangkk.net Junqi Deng] of the [http://www.hku.hk University of Hong Kong]. It consists of 29 Mandopop songs taken from various albums by [https://en.wikipedia.org/wiki/Jay_Chou Jay Chou]. Most of the songs are ballads, and special attention has been paid to the annotation of extended chords and inversions. Because Junqi was kind enough to provide this set before its official publication, the algorithmic output on these files and their ground truth have been withheld for the time being. The file names in the per-track results have also been anonymized.&lt;br /&gt;
* The algorithmic output and per-track results on the Isophonics set now display the unmasked song names, so that an evaluation per artist or album can be performed.&lt;br /&gt;
&lt;br /&gt;
==Software==&lt;br /&gt;
All software used for the evaluation has been made open source. The evaluation framework is described in [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6637748 Pauwels and Peeters (2013)]. The corresponding code repository can be found on [https://github.com/jpauwels/MusOOEvaluator GitHub], and the measures used are available as presets. The raw algorithmic output provided below makes it possible to calculate the additional measures from the paper (separate results for tetrads, etc.) beyond those presented here. More help can be found in the [https://github.com/jpauwels/MusOOEvaluator/blob/master/README.md readme].&lt;br /&gt;
&lt;br /&gt;
The statistical comparison between the different submissions is explained in [http://www.terasoft.com.tw/conf/ismir2014/proceedings/T095_250_Paper.pdf Burgoyne et al. (2014)]. The software is available on [https://bitbucket.org/jaburgoyne/mirexace Bitbucket] and uses the detailed results provided below as input.&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CM3 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CM3.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| DK4-DK9&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/DK4.pdf PDF]&lt;br /&gt;
| Junqi Deng, Yu-Kwong Kwok&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/KO1.pdf PDF]&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages, ranging from 0 (worst) to 100 (best). The tables are sorted by WCSR for the major-minor vocabulary.&lt;br /&gt;
&lt;br /&gt;
=====Isophonics 2009=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/ace/Isophonics2009.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2012=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/ace/Billboard2012.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2013=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/ace/Billboard2013.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====JayChou 2015=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/ace/JayChou2015.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
An analysis of the statistical difference between all submissions can be found on the following pages:&lt;br /&gt;
&lt;br /&gt;
* [[2015:Audio_Chord_Estimation_Statistical_Analysis_Isophonics2009 | Isophonics 2009 Dataset]]&lt;br /&gt;
* [[2015:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012 | Billboard 2012 Dataset]]&lt;br /&gt;
* [[2015:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013 | Billboard 2013 Dataset]]&lt;br /&gt;
* [[2015:Audio_Chord_Estimation_Statistical_Analysis_JayChou2015 | JayChou 2015 Dataset]]&lt;br /&gt;
&lt;br /&gt;
===Detailed Results===&lt;br /&gt;
&lt;br /&gt;
More details about the performance of the algorithms, including per-song performance and supplementary statistics, are available from these archives:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Isophonics2009Results.zip Isophonics2009Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2012Results.zip Billboard2012Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2013Results.zip Billboard2013Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/JayChou2015Results.zip JayChou2015Results.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The raw output of the algorithms is available in the archives below. It can be used to experiment with alternative evaluation measures and statistics.&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Isophonics2009Output.zip Isophonics2009Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2012Output.zip Billboard2012Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2013Output.zip Billboard2013Output.zip]&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Audio_Chord_Estimation_Results&amp;diff=11565</id>
		<title>2015:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Audio_Chord_Estimation_Results&amp;diff=11565"/>
		<updated>2015-10-21T18:44:08Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: External link syntax cleanup&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the 2015 edition of the MIREX automatic chord estimation tasks, the third edition since the evaluation procedure was reorganized in 2013. The results can therefore be compared directly to those of the previous two years. Chord labels are evaluated against five different chord vocabularies, and the segmentation is assessed as well. Additional information about the measures used can be found on the page of the [[2013:Audio_Chord_Estimation_Results_MIREX_2009 2013 edition]].&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
* A new data set, called &amp;quot;JayChou 2015&amp;quot;, has been donated by [http://tangkk.net Junqi Deng] of the [http://www.hku.hk University of Hong Kong]. It consists of 29 Mandopop songs taken from various albums by [https://en.wikipedia.org/wiki/Jay_Chou | Jay Chou]. Most of the songs are ballads, and special attention has been paid to the annotation of extended chords and inversions. Because Junqi was kind enough to provide this set before its official publication, the algorithmic output on these files and their ground truth have been withheld for the time being. The file names in the per-track results have also been anonymized.&lt;br /&gt;
* The algorithmic output and per-track results on the Isophonics set now display the unmasked song names, so that an evaluation per artist or album can be performed.&lt;br /&gt;
&lt;br /&gt;
==Software==&lt;br /&gt;
All software used for the evaluation has been made open source. The evaluation framework is described in [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6637748 Pauwels and Peeters (2013)]. The corresponding code repository can be found on [https://github.com/jpauwels/MusOOEvaluator GitHub], and the measures used are available as presets. The raw algorithmic output provided below makes it possible to calculate the additional measures from the paper (separate results for tetrads, etc.) beyond those presented here. More help can be found in the [https://github.com/jpauwels/MusOOEvaluator/blob/master/README.md readme].&lt;br /&gt;
&lt;br /&gt;
The statistical comparison between the different submissions is explained in [http://www.terasoft.com.tw/conf/ismir2014/proceedings/T095_250_Paper.pdf Burgoyne et al. (2014)]. The software is available on [https://bitbucket.org/jaburgoyne/mirexace Bitbucket] and uses the detailed results provided below as input.&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CM3 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CM3.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| DK4-DK9&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/DK4.pdf PDF]&lt;br /&gt;
| Junqi Deng, Yu-Kwong Kwok&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/KO1.pdf PDF]&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages, ranging from 0 (worst) to 100 (best). The tables are sorted by WCSR for the major-minor vocabulary.&lt;br /&gt;
&lt;br /&gt;
=====Isophonics 2009=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/ace/Isophonics2009.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2012=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/ace/Billboard2012.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2013=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/ace/Billboard2013.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====JayChou 2015=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/ace/JayChou2015.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
An analysis of the statistical difference between all submissions can be found on the following pages:&lt;br /&gt;
&lt;br /&gt;
* [[2015:Audio_Chord_Estimation_Statistical_Analysis_Isophonics2009 | Isophonics 2009 Dataset]]&lt;br /&gt;
* [[2015:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012 | Billboard 2012 Dataset]]&lt;br /&gt;
* [[2015:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013 | Billboard 2013 Dataset]]&lt;br /&gt;
* [[2015:Audio_Chord_Estimation_Statistical_Analysis_JayChou2015 | JayChou 2015 Dataset]]&lt;br /&gt;
&lt;br /&gt;
===Detailed Results===&lt;br /&gt;
&lt;br /&gt;
More details about the performance of the algorithms, including per-song performance and supplementary statistics, are available from these archives:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Isophonics2009Results.zip Isophonics2009Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2012Results.zip Billboard2012Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2013Results.zip Billboard2013Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/JayChou2015Results.zip JayChou2015Results.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The raw output of the algorithms is available in the archives below. It can be used to experiment with alternative evaluation measures and statistics.&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Isophonics2009Output.zip Isophonics2009Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2012Output.zip Billboard2012Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2013Output.zip Billboard2013Output.zip]&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:MIREX2015_Results&amp;diff=11564</id>
		<title>2015:MIREX2015 Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:MIREX2015_Results&amp;diff=11564"/>
		<updated>2015-10-21T18:42:47Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: Fixed internal link&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Results by Task ==&lt;br /&gt;
&lt;br /&gt;
==OVERALL RESULTS POSTERS &amp;lt;!--(First Version: Will need updating as last runs are completed)--&amp;gt;==&lt;br /&gt;
&lt;br /&gt;
[https://www.music-ir.org/mirex/results/2015/mirex_2015_poster.pdf MIREX 2015 Overall Results Posters (PDF)]&lt;br /&gt;
&lt;br /&gt;
===Train-Test Task Set===&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2015/results/act/composer_report/ Audio Classical Composer Identification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2015/results/act/latin_report/ Audio Latin Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2015/results/act/mood_report/index.html Audio Music Mood Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2015/results/act/mixed_report/ Audio Mixed Popular Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2015/results/act/kgenrek_report/ Audio KPOP Genre (Annotated by Korean Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2015/results/act/kgenrea_report/ Audio KPOP Genre (Annotated by American Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2015/results/act/kmoodk_report/ Audio KPOP Mood (Annotated by Korean Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2015/results/act/kmooda_report/ Audio KPOP Mood (Annotated by American Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
===Other Tasks===&lt;br /&gt;
&lt;br /&gt;
* Music Structure Segmentation Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/struct/mrx09/ MIREX09 dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/struct/mrx10_1/ RWC dataset - Quaero (MIREX10) Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/struct/mrx10_2/ RWC dataset - Original RWC Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/struct/sal/ SALAMI dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2015:Discovery of Repeated Themes &amp;amp; Sections Results | Discovery of Repeated Themes &amp;amp; Sections Results]]&lt;br /&gt;
&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2015/results/aod/ Audio Onset Detection Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2015:Audio Cover Song Identification Results|Audio Cover Song Identification Results]]&lt;br /&gt;
&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2015/results/ate/ Audio Tempo Estimation Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Key Detection Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/akd/mrx_05 MIREX 2015 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/akd/gsteps GiantSteps Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Beat Tracking Results &lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/abt/smc/ SMC Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/abt/maz/ MAZ Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/abt/mck/ MCK Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Chord Estimation&lt;br /&gt;
** [[2015:Audio_Chord_Estimation_Results#Isophonics_2009 | Isophonics 2009 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2015:Audio_Chord_Estimation_Results#Billboard_2012 | Billboard 2012 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2015:Audio_Chord_Estimation_Results#Billboard_2013 | Billboard 2013 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2015:Audio_Chord_Estimation_Results#JayChou_2015 | JayChou 2015 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Melody Extraction Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/ame/adc04/  ADC04 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/ame/mrx05/ MIREX05 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/ame/ind08/ INDIAN08 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/ame/mrx09_0db/ MIREX09 0dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/ame/mrx09_m5db/ MIREX09 -5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/ame/mrx09_p5db/ MIREX09 +5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/ame/orchset/ Orchset Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Query-by-Singing/Humming Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/qbsh/qbsh_task1_hidden/  Hidden Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/qbsh/qbsh_task1_jang/  Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/qbsh/qbsh_task1_thinkit/ ThinkIt Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/qbsh/qbsh_task1_ioacas/ IOACAS Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/qbsh/qbsh_task2_jang/ Subtask2 Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2015:Audio Downbeat Estimation Results|Audio Downbeat Estimation Results]]&lt;br /&gt;
* [[2015:Audio Fingerprinting Results|Audio Fingerprinting Results]]&lt;br /&gt;
&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2015:Real-time_Audio_to_Score_Alignment_(a.k.a._Score_Following)_Results#Summary_Results Real-time Audio to Score Alignment (a.k.a. Score Following) Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2015:Audio_Music_Similarity_and_Retrieval_Results Audio Music Similarity and Retrieval Results]&amp;amp;nbsp;&lt;br /&gt;
* [[2015:Symbolic_Melodic_Similarity_Results | Symbolic Melodic Similarity Results]]&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2015:Singing_Voice_Separation_Results Singing Voice Separation]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Multiple Fundamental Frequency Estimation &amp;amp; Tracking Results&lt;br /&gt;
** [[2015:Multiple_Fundamental_Frequency_Estimation_%26_Tracking_Results_%2D_MIREX_Dataset | MIREX Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2015:Multiple_Fundamental_Frequency_Estimation_%26_Tracking_Results_%2D_Su_Dataset |Su Dataset]] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2015:Music/Speech_Classification_and_Detection_Results Music/Speech Classification and Detection]&lt;br /&gt;
&lt;br /&gt;
* [[2015:Set List Identification Results | Set List Identification Results]]&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Audio_Chord_Estimation_Results&amp;diff=11563</id>
		<title>2015:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Audio_Chord_Estimation_Results&amp;diff=11563"/>
		<updated>2015-10-21T18:42:06Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: Improved spelling consistency&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the 2015 edition of the MIREX automatic chord estimation tasks. This edition was the third since the reorganization of the evaluation procedure in 2013, so the results can be directly compared to those of the last two years. Chord labels are evaluated according to five different chord vocabularies, and the segmentation is also assessed. Additional information about the measures used can be found on the page of the [[2013:Audio_Chord_Estimation_Results_MIREX_2009 | 2013 edition]].&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
* A new dataset, called &amp;quot;JayChou 2015&amp;quot;, has been donated by [http://tangkk.net Junqi Deng] of the [http://www.hku.hk University of Hong Kong]. It consists of 29 Mandopop songs taken from various albums by [https://en.wikipedia.org/wiki/Jay_Chou Jay Chou]. Most of the songs are ballads, and special attention has been paid to the annotation of extended chords and inversions. Because Junqi was kind enough to provide this set before its official publication, the algorithmic output on these files and their ground truth have been withheld for the time being. The file names in the per-track results have also been anonymized.&lt;br /&gt;
* The algorithmic output and per-track results on the Isophonics set now display the unmasked song names, so that an evaluation per artist/album can be performed.&lt;br /&gt;
&lt;br /&gt;
==Software==&lt;br /&gt;
All software used for the evaluation has been made open-source. The evaluation framework is described by [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6637748 Pauwels and Peeters (2013)]. The corresponding code repository can be found on [https://github.com/jpauwels/MusOOEvaluator GitHub], and the measures used are available as presets. The raw algorithmic output provided below makes it possible to calculate the additional measures from the paper (separate results for tetrads, etc.) beyond those presented here. More help can be found in the [https://github.com/jpauwels/MusOOEvaluator/blob/master/README.md readme].&lt;br /&gt;
&lt;br /&gt;
The statistical comparison between the different submissions is explained in [http://www.terasoft.com.tw/conf/ismir2014/proceedings/T095_250_Paper.pdf Burgoyne et al. (2014)]. The software is available at [https://bitbucket.org/jaburgoyne/mirexace BitBucket]. It uses the detailed results provided below as input.&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CM3 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CM3.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| DK4-DK9&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/DK4.pdf PDF]&lt;br /&gt;
| Junqi Deng, Yu-Kwong Kwok&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/KO1.pdf PDF]&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The tables are sorted by WCSR for the major-minor vocabulary.&lt;br /&gt;
&lt;br /&gt;
=====Isophonics 2009=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/ace/Isophonics2009.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2012=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/ace/Billboard2012.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2013=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/ace/Billboard2013.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====JayChou 2015=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/ace/JayChou2015.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
An analysis of the statistical differences between all submissions can be found on the following pages:&lt;br /&gt;
&lt;br /&gt;
* [[2015:Audio_Chord_Estimation_Statistical_Analysis_Isophonics2009 | Isophonics 2009 Dataset]]&lt;br /&gt;
* [[2015:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012 | Billboard 2012 Dataset]]&lt;br /&gt;
* [[2015:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013 | Billboard 2013 Dataset]]&lt;br /&gt;
* [[2015:Audio_Chord_Estimation_Statistical_Analysis_JayChou2015 | JayChou 2015 Dataset]]&lt;br /&gt;
&lt;br /&gt;
===Detailed Results===&lt;br /&gt;
&lt;br /&gt;
More details about the performance of the algorithms, including per-song performance and supplementary statistics, are available from these archives:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Isophonics2009Results.zip Isophonics2009Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2012Results.zip Billboard2012Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2013Results.zip Billboard2013Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/JayChou2015Results.zip JayChou2015Results.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The raw output of the algorithms is available in the archives below. It can be used to experiment with alternative evaluation measures and statistics.&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Isophonics2009Output.zip Isophonics2009Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2012Output.zip Billboard2012Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2013Output.zip Billboard2013Output.zip]&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:MIREX2015_Results&amp;diff=11562</id>
		<title>2015:MIREX2015 Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:MIREX2015_Results&amp;diff=11562"/>
		<updated>2015-10-21T18:39:22Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: Added new ACE dataset&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Results by Task ==&lt;br /&gt;
&lt;br /&gt;
==OVERALL RESULTS POSTERS &amp;lt;!--(First Version: Will need updating as last runs are completed)--&amp;gt;==&lt;br /&gt;
&lt;br /&gt;
[https://www.music-ir.org/mirex/results/2015/mirex_2015_poster.pdf MIREX 2015 Overall Results Posters (PDF)]&lt;br /&gt;
&lt;br /&gt;
===Train-Test Task Set===&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2015/results/act/composer_report/ Audio Classical Composer Identification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2015/results/act/latin_report/ Audio Latin Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2015/results/act/mood_report/index.html Audio Music Mood Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2015/results/act/mixed_report/ Audio Mixed Popular Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2015/results/act/kgenrek_report/ Audio KPOP Genre (Annotated by Korean Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2015/results/act/kgenrea_report/ Audio KPOP Genre (Annotated by American Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2015/results/act/kmoodk_report/ Audio KPOP Mood (Annotated by Korean Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2015/results/act/kmooda_report/ Audio KPOP Mood (Annotated by American Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
===Other Tasks===&lt;br /&gt;
&lt;br /&gt;
* Music Structure Segmentation Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/struct/mrx09/ MIREX09 dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/struct/mrx10_1/ RWC dataset - Quaero (MIREX10) Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/struct/mrx10_2/ RWC dataset - Original RWC Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/struct/sal/ SALAMI dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2015:Discovery of Repeated Themes &amp;amp; Sections Results | Discovery of Repeated Themes &amp;amp; Sections Results]]&lt;br /&gt;
&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2015/results/aod/ Audio Onset Detection Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2015:Audio Cover Song Identification Results|Audio Cover Song Identification Results]]&lt;br /&gt;
&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2015/results/ate/ Audio Tempo Estimation Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Key Detection Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/akd/mrx_05 MIREX 2015 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/akd/gsteps GiantSteps Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Beat Tracking Results &lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/abt/smc/ SMC Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/abt/maz/ MAZ Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/abt/mck/ MCK Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Chord Estimation&lt;br /&gt;
** [[2015:Audio_Chord_Estimation_Results#Isophonics_2009 | Isophonics 2009 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2015:Audio_Chord_Estimation_Results#Billboard_2012 | Billboard 2012 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2015:Audio_Chord_Estimation_Results#Billboard_2013 | Billboard 2013 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2015:Audio_Chord_Estimation_Results#Jay_Chou_2015 | Jay Chou 2015 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Melody Extraction Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/ame/adc04/  ADC04 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/ame/mrx05/ MIREX05 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/ame/ind08/ INDIAN08 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/ame/mrx09_0db/ MIREX09 0dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/ame/mrx09_m5db/ MIREX09 -5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/ame/mrx09_p5db/ MIREX09 +5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/ame/orchset/ Orchset Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Query-by-Singing/Humming Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/qbsh/qbsh_task1_hidden/  Hidden Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/qbsh/qbsh_task1_jang/  Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/qbsh/qbsh_task1_thinkit/ ThinkIt Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/qbsh/qbsh_task1_ioacas/ IOACAS Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2015/results/qbsh/qbsh_task2_jang/ Subtask2 Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2015:Audio Downbeat Estimation Results|Audio Downbeat Estimation Results]]&lt;br /&gt;
* [[2015:Audio Fingerprinting Results|Audio Fingerprinting Results]]&lt;br /&gt;
&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2015:Real-time_Audio_to_Score_Alignment_(a.k.a._Score_Following)_Results#Summary_Results Real-time Audio to Score Alignment (a.k.a. Score Following) Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2015:Audio_Music_Similarity_and_Retrieval_Results Audio Music Similarity and Retrieval Results]&amp;amp;nbsp;&lt;br /&gt;
* [[2015:Symbolic_Melodic_Similarity_Results | Symbolic Melodic Similarity Results]]&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2015:Singing_Voice_Separation_Results Singing Voice Separation]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Multiple Fundamental Frequency Estimation &amp;amp; Tracking Results&lt;br /&gt;
** [[2015:Multiple_Fundamental_Frequency_Estimation_%26_Tracking_Results_%2D_MIREX_Dataset | MIREX Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2015:Multiple_Fundamental_Frequency_Estimation_%26_Tracking_Results_%2D_Su_Dataset |Su Dataset]] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2015:Music/Speech_Classification_and_Detection_Results Music/Speech Classification and Detection]&lt;br /&gt;
&lt;br /&gt;
* [[2015:Set List Identification Results | Set List Identification Results]]&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Audio_Chord_Estimation_Results&amp;diff=11561</id>
		<title>2015:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Audio_Chord_Estimation_Results&amp;diff=11561"/>
		<updated>2015-10-21T18:37:27Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: Fixed internal link&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the 2015 edition of the MIREX automatic chord estimation tasks. This edition was the third since the reorganization of the evaluation procedure in 2013, so the results can be directly compared to those of the last two years. Chord labels are evaluated according to five different chord vocabularies, and the segmentation is also assessed. Additional information about the measures used can be found on the page of the [[2013:Audio_Chord_Estimation_Results_MIREX_2009 | 2013 edition]].&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
* A new dataset, called &amp;quot;Jay Chou 2015&amp;quot;, has been donated by [http://tangkk.net Junqi Deng] of the [http://www.hku.hk University of Hong Kong]. It consists of 29 Mandopop songs taken from various albums by [https://en.wikipedia.org/wiki/Jay_Chou Jay Chou]. Most of the songs are ballads, and special attention has been paid to the annotation of extended chords and inversions. Because Junqi was kind enough to provide this set before its official publication, the algorithmic output on these files and their ground truth have been withheld for the time being. The file names in the per-track results have also been anonymized.&lt;br /&gt;
* The algorithmic output and per-track results on the Isophonics set now display the unmasked song names, so that an evaluation per artist/album can be performed.&lt;br /&gt;
&lt;br /&gt;
==Software==&lt;br /&gt;
All software used for the evaluation has been made open-source. The evaluation framework is described by [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6637748 Pauwels and Peeters (2013)]. The corresponding code repository can be found on [https://github.com/jpauwels/MusOOEvaluator GitHub], and the measures used are available as presets. The raw algorithmic output provided below makes it possible to calculate the additional measures from the paper (separate results for tetrads, etc.) beyond those presented here. More help can be found in the [https://github.com/jpauwels/MusOOEvaluator/blob/master/README.md readme].&lt;br /&gt;
&lt;br /&gt;
The statistical comparison between the different submissions is explained in [http://www.terasoft.com.tw/conf/ismir2014/proceedings/T095_250_Paper.pdf Burgoyne et al. (2014)]. The software is available at [https://bitbucket.org/jaburgoyne/mirexace BitBucket]. It uses the detailed results provided below as input.&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CM3 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CM3.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| DK4-DK9&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/DK4.pdf PDF]&lt;br /&gt;
| Junqi Deng, Yu-Kwong Kwok&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/KO1.pdf PDF]&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The tables are sorted by WCSR for the major-minor vocabulary.&lt;br /&gt;
&lt;br /&gt;
=====Isophonics 2009=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/ace/Isophonics2009.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2012=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/ace/Billboard2012.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2013=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/ace/Billboard2013.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====JayChou 2015=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/ace/JayChou2015.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
An analysis of the statistical differences between all submissions can be found on the following pages:&lt;br /&gt;
&lt;br /&gt;
* [[2015:Audio_Chord_Estimation_Statistical_Analysis_Isophonics2009 | Isophonics 2009 Dataset]]&lt;br /&gt;
* [[2015:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012 | Billboard 2012 Dataset]]&lt;br /&gt;
* [[2015:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013 | Billboard 2013 Dataset]]&lt;br /&gt;
* [[2015:Audio_Chord_Estimation_Statistical_Analysis_JayChou2015 | Jay Chou 2015 Dataset]]&lt;br /&gt;
&lt;br /&gt;
===Detailed Results===&lt;br /&gt;
&lt;br /&gt;
More details about the performance of the algorithms, including per-song performance and supplementary statistics, are available from these archives:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Isophonics2009Results.zip Isophonics2009Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2012Results.zip Billboard2012Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2013Results.zip Billboard2013Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/JayChou2015Results.zip JayChou2015Results.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The raw output of the algorithms is available in the archives below. It can be used to experiment with alternative evaluation measures and statistics.&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Isophonics2009Output.zip Isophonics2009Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2012Output.zip Billboard2012Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2013Output.zip Billboard2013Output.zip]&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Audio_Chord_Estimation_Results&amp;diff=11560</id>
		<title>2015:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Audio_Chord_Estimation_Results&amp;diff=11560"/>
		<updated>2015-10-21T18:35:53Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: Creation of the 2015 ace results page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the 2015 edition of the MIREX automatic chord estimation tasks. This edition was the third since the reorganization of the evaluation procedure in 2013, so the results can be directly compared to those of the last two years. Chord labels are evaluated according to five different chord vocabularies, and the segmentation is also assessed. Additional information about the measures used can be found on the page of the [[2013:Audio_Chord_Estimation_Results | 2013 edition]].&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
* A new dataset, called &amp;quot;Jay Chou 2015&amp;quot;, has been donated by [http://tangkk.net Junqi Deng] of the [http://www.hku.hk University of Hong Kong]. It consists of 29 Mandopop songs taken from various albums by [https://en.wikipedia.org/wiki/Jay_Chou Jay Chou]. Most of the songs are ballads, and special attention has been paid to the annotation of extended chords and inversions. Because Junqi was kind enough to provide this set before its official publication, the algorithmic output on these files and their ground truth have been withheld for the time being. The file names in the per-track results have also been anonymized.&lt;br /&gt;
* The algorithmic output and per-track results on the Isophonics set now display the unmasked song names, so that an evaluation per artist/album can be performed.&lt;br /&gt;
&lt;br /&gt;
==Software==&lt;br /&gt;
All software used for the evaluation has been made open-source. The evaluation framework is described by [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6637748 Pauwels and Peeters (2013)]. The corresponding code repository can be found on [https://github.com/jpauwels/MusOOEvaluator GitHub], and the measures used are available as presets. The raw algorithmic output provided below makes it possible to calculate the additional measures from the paper (separate results for tetrads, etc.) beyond those presented here. More help can be found in the [https://github.com/jpauwels/MusOOEvaluator/blob/master/README.md readme].&lt;br /&gt;
&lt;br /&gt;
The statistical comparison between the different submissions is explained in [http://www.terasoft.com.tw/conf/ismir2014/proceedings/T095_250_Paper.pdf Burgoyne et al. (2014)]. The software is available at [https://bitbucket.org/jaburgoyne/mirexace BitBucket]. It uses the detailed results provided below as input.&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| CM3 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CM3.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| DK4-DK9&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/DK4.pdf PDF]&lt;br /&gt;
| Junqi Deng, Yu-Kwong Kwok&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/KO1.pdf PDF]&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The tables are sorted by WCSR for the major-minor vocabulary.&lt;br /&gt;
&lt;br /&gt;
=====Isophonics 2009=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/ace/Isophonics2009.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2012=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/ace/Billboard2012.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2013=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/ace/Billboard2013.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====JayChou 2015=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/ace/JayChou2015.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
An analysis of the statistical differences between all submissions can be found on the following pages:&lt;br /&gt;
&lt;br /&gt;
* [[2015:Audio_Chord_Estimation_Statistical_Analysis_Isophonics2009 | Isophonics 2009 Dataset]]&lt;br /&gt;
* [[2015:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012 | Billboard 2012 Dataset]]&lt;br /&gt;
* [[2015:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013 | Billboard 2013 Dataset]]&lt;br /&gt;
* [[2015:Audio_Chord_Estimation_Statistical_Analysis_JayChou2015 | Jay Chou 2015 Dataset]]&lt;br /&gt;
&lt;br /&gt;
===Detailed Results===&lt;br /&gt;
&lt;br /&gt;
More details about the performance of the algorithms, including per-song performance and supplementary statistics, are available from these archives:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Isophonics2009Results.zip Isophonics2009Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2012Results.zip Billboard2012Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2013Results.zip Billboard2013Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/JayChou2015Results.zip JayChou2015Results.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The raw output of the algorithms is available in the archives below. It can be used to experiment with alternative evaluation measures and statistics.&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Isophonics2009Output.zip Isophonics2009Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2012Output.zip Billboard2012Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2015/ace/Billboard2013Output.zip Billboard2013Output.zip]&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Task_Captains&amp;diff=10867</id>
		<title>2015:Task Captains</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Task_Captains&amp;diff=10867"/>
		<updated>2015-03-30T14:30:36Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Like ISMIR 2014, we are prepared to improve the distribution of tasks for the upcoming MIREX 2015.  To do so, we really need leaders to help us organize and run each task.&lt;br /&gt;
&lt;br /&gt;
To volunteer to lead one or more tasks, please add your name in the &amp;quot;Captains&amp;quot; column.&lt;br /&gt;
&lt;br /&gt;
What does it mean to lead a task?&lt;br /&gt;
* Update wiki pages as needed&lt;br /&gt;
* Communicate with submitters and troubleshoot submissions&lt;br /&gt;
* Execute and evaluate submissions&lt;br /&gt;
* Publish final results&lt;br /&gt;
&lt;br /&gt;
Due to the proprietary nature of much of the data, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to give task organizers access to manage and run submissions on the IMIRSEL systems.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: 20px&amp;quot;&lt;br /&gt;
!ID !! Task !! Captain(s)&lt;br /&gt;
|-&lt;br /&gt;
|abt&lt;br /&gt;
|[[2015:Audio Beat Tracking]]&lt;br /&gt;
|Sebastian Böck&lt;br /&gt;
|-&lt;br /&gt;
|ace&lt;br /&gt;
|[[2015:Audio Chord Estimation]]&lt;br /&gt;
|Johan Pauwels&lt;br /&gt;
|-&lt;br /&gt;
|act&lt;br /&gt;
|[[2015:Audio Classification (Train/Test) Tasks]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|acs&lt;br /&gt;
|[[2015:Audio Cover Song Identification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ade&lt;br /&gt;
|[[2015:Audio Downbeat Estimation]]&lt;br /&gt;
|Florian Krebs, Sebastian Böck&lt;br /&gt;
|-&lt;br /&gt;
|akd&lt;br /&gt;
|[[2015:Audio Key Detection]]&lt;br /&gt;
|Johan Pauwels&lt;br /&gt;
|-&lt;br /&gt;
|ame&lt;br /&gt;
|[[2015:Audio Melody Extraction]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ams&lt;br /&gt;
|[[2015:Audio Music Similarity and Retrieval]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|aod&lt;br /&gt;
|[[2015:Audio Onset Detection]]&lt;br /&gt;
|Sebastian Böck&lt;br /&gt;
|-&lt;br /&gt;
|ate&lt;br /&gt;
|[[2015:Audio Tempo Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|atg&lt;br /&gt;
|[[2015:Audio Tag Classification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mf0&lt;br /&gt;
|[[2015:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qbsh&lt;br /&gt;
|[[2015:Query by Singing/Humming]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qbt&lt;br /&gt;
|[[2015:Query by Tapping]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|scofo&lt;br /&gt;
|[[2015:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sms&lt;br /&gt;
|[[2015:Symbolic Melodic Similarity]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|struct&lt;br /&gt;
|[[2015:Structural Segmentation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|drts&lt;br /&gt;
|[[2015:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|afp&lt;br /&gt;
|[[2015:Audio_Fingerprinting]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|svs&lt;br /&gt;
|[[2015:Singing_Voice_Separation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|kgc&lt;br /&gt;
|[[2015:Audio K-POP Genre Classification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|kmc&lt;br /&gt;
|[[2015:Audio K-POP Mood Classification]]&lt;br /&gt;
|&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Statistical_Analysis_MirexChord2009&amp;diff=10798</id>
		<title>2014:Audio Chord Estimation Statistical Analysis MirexChord2009</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Statistical_Analysis_MirexChord2009&amp;diff=10798"/>
		<updated>2014-11-24T21:59:52Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: Added segmentation plot&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:MirexChord2009-Root.png]]&lt;br /&gt;
[[File:MirexChord2009-MajMin.png]]&lt;br /&gt;
[[File:MirexChord2009-MajMinBass.png]]&lt;br /&gt;
[[File:MirexChord2009-Sevenths.png]]&lt;br /&gt;
[[File:MirexChord2009-SeventhsBass.png]]&lt;br /&gt;
[[File:MirexChord2009-Segmentation.png]]&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013&amp;diff=10797</id>
		<title>2014:Audio Chord Estimation Statistical Analysis Billboard2013</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013&amp;diff=10797"/>
		<updated>2014-11-24T21:58:57Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: Added segmentation plot&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:Billboard2013-Root.png]]&lt;br /&gt;
[[File:Billboard2013-MajMin.png]]&lt;br /&gt;
[[File:Billboard2013-MajMinBass.png]]&lt;br /&gt;
[[File:Billboard2013-Sevenths.png]]&lt;br /&gt;
[[File:Billboard2013-SeventhsBass.png]]&lt;br /&gt;
[[File:Billboard2013-Segmentation.png]]&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012&amp;diff=10796</id>
		<title>2014:Audio Chord Estimation Statistical Analysis Billboard2012</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012&amp;diff=10796"/>
		<updated>2014-11-24T21:58:17Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: Added segmentation plot&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:Billboard2012-Root.png]]&lt;br /&gt;
[[File:Billboard2012-MajMin.png]]&lt;br /&gt;
[[File:Billboard2012-MajMinBass.png]]&lt;br /&gt;
[[File:Billboard2012-Sevenths.png]]&lt;br /&gt;
[[File:Billboard2012-SeventhsBass.png]]&lt;br /&gt;
[[File:Billboard2012-Segmentation.png]]&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:MirexChord2009-SeventhsBass.png&amp;diff=10795</id>
		<title>File:MirexChord2009-SeventhsBass.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:MirexChord2009-SeventhsBass.png&amp;diff=10795"/>
		<updated>2014-11-24T21:57:05Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:MirexChord2009-Sevenths.png&amp;diff=10794</id>
		<title>File:MirexChord2009-Sevenths.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:MirexChord2009-Sevenths.png&amp;diff=10794"/>
		<updated>2014-11-24T21:56:51Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:MirexChord2009-Segmentation.png&amp;diff=10793</id>
		<title>File:MirexChord2009-Segmentation.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:MirexChord2009-Segmentation.png&amp;diff=10793"/>
		<updated>2014-11-24T21:56:36Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:MirexChord2009-Root.png&amp;diff=10792</id>
		<title>File:MirexChord2009-Root.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:MirexChord2009-Root.png&amp;diff=10792"/>
		<updated>2014-11-24T21:56:23Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:MirexChord2009-MajMinBass.png&amp;diff=10791</id>
		<title>File:MirexChord2009-MajMinBass.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:MirexChord2009-MajMinBass.png&amp;diff=10791"/>
		<updated>2014-11-24T21:55:56Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:MirexChord2009-MajMin.png&amp;diff=10790</id>
		<title>File:MirexChord2009-MajMin.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:MirexChord2009-MajMin.png&amp;diff=10790"/>
		<updated>2014-11-24T21:55:37Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:Billboard2013-SeventhsBass.png&amp;diff=10789</id>
		<title>File:Billboard2013-SeventhsBass.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:Billboard2013-SeventhsBass.png&amp;diff=10789"/>
		<updated>2014-11-24T21:55:25Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:Billboard2013-Sevenths.png&amp;diff=10788</id>
		<title>File:Billboard2013-Sevenths.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:Billboard2013-Sevenths.png&amp;diff=10788"/>
		<updated>2014-11-24T21:55:12Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:Billboard2013-Segmentation.png&amp;diff=10787</id>
		<title>File:Billboard2013-Segmentation.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:Billboard2013-Segmentation.png&amp;diff=10787"/>
		<updated>2014-11-24T21:54:59Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:Billboard2013-Root.png&amp;diff=10786</id>
		<title>File:Billboard2013-Root.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:Billboard2013-Root.png&amp;diff=10786"/>
		<updated>2014-11-24T21:54:49Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:Billboard2013-MajMinBass.png&amp;diff=10785</id>
		<title>File:Billboard2013-MajMinBass.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:Billboard2013-MajMinBass.png&amp;diff=10785"/>
		<updated>2014-11-24T21:54:37Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:Billboard2013-MajMin.png&amp;diff=10784</id>
		<title>File:Billboard2013-MajMin.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:Billboard2013-MajMin.png&amp;diff=10784"/>
		<updated>2014-11-24T21:54:25Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:Billboard2012-SeventhsBass.png&amp;diff=10783</id>
		<title>File:Billboard2012-SeventhsBass.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:Billboard2012-SeventhsBass.png&amp;diff=10783"/>
		<updated>2014-11-24T21:54:06Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:Billboard2012-Sevenths.png&amp;diff=10782</id>
		<title>File:Billboard2012-Sevenths.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:Billboard2012-Sevenths.png&amp;diff=10782"/>
		<updated>2014-11-24T21:53:53Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:Billboard2012-Segmentation.png&amp;diff=10781</id>
		<title>File:Billboard2012-Segmentation.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:Billboard2012-Segmentation.png&amp;diff=10781"/>
		<updated>2014-11-24T21:53:42Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:Billboard2012-Root.png&amp;diff=10780</id>
		<title>File:Billboard2012-Root.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:Billboard2012-Root.png&amp;diff=10780"/>
		<updated>2014-11-24T21:53:32Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:Billboard2012-MajMinBass.png&amp;diff=10779</id>
		<title>File:Billboard2012-MajMinBass.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:Billboard2012-MajMinBass.png&amp;diff=10779"/>
		<updated>2014-11-24T21:53:19Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=File:Billboard2012-MajMin.png&amp;diff=10778</id>
		<title>File:Billboard2012-MajMin.png</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=File:Billboard2012-MajMin.png&amp;diff=10778"/>
		<updated>2014-11-24T21:52:35Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results&amp;diff=10777</id>
		<title>2014:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results&amp;diff=10777"/>
		<updated>2014-11-12T22:13:47Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of these new evaluations for the Isophonics dataset, a.k.a. the MIREX 2009 dataset. It comprises the collected Beatles, Queen, and Zweieck datasets from Queen Mary, University of London, and has been used for audio chord estimation in MIREX for many years.&lt;br /&gt;
&lt;br /&gt;
==Why evaluate differently?==&lt;br /&gt;
&lt;br /&gt;
* Researchers interested in automatic chord estimation have been dissatisfied with the traditional evaluation techniques used for this task at MIREX.&lt;br /&gt;
&lt;br /&gt;
* Numerous alternatives have been proposed in the literature (Harte, 2010; Mauch, 2010; Pauwels &amp;amp; Peeters, 2013). &lt;br /&gt;
&lt;br /&gt;
* At ISMIR 2010 in Utrecht, a group discussed alternatives and developed the [[The_Utrecht_Agreement_on_Chord_Evaluation | Utrecht Agreement]] for updating the task, but until this year, nobody had implemented any of the suggestions.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
&lt;br /&gt;
===More precise recall estimation===&lt;br /&gt;
&lt;br /&gt;
* MIREX typically uses ''chord symbol recall'' (CSR) to estimate how well the predicted chords match the ground truth: the total duration of segments where the predictions match the ground truth divided by the total duration of the song. &lt;br /&gt;
&lt;br /&gt;
* In previous years, MIREX has used an approximate CSR by sampling both the ground-truth and the automatic annotations every 10 ms.&lt;br /&gt;
&lt;br /&gt;
* Following Harte (2010), we instead view the ground-truth and estimated annotations as continuous segmentations of the audio, because this is (1) more precise and (2) more computationally efficient. &lt;br /&gt;
&lt;br /&gt;
* Moreover, because pieces of music come in a wide variety of lengths, we believe it is better to weight the CSR by the length of the song. This final number is referred to as the ''weighted chord symbol recall'' (WCSR).&lt;br /&gt;
&lt;br /&gt;
===Advanced chord vocabularies===&lt;br /&gt;
&lt;br /&gt;
* We computed WCSR with five different chord vocabulary mappings: &lt;br /&gt;
# Chord root note only;&lt;br /&gt;
# Major and minor;&lt;br /&gt;
# Seventh chords;&lt;br /&gt;
# Major and minor with inversions; and&lt;br /&gt;
# Seventh chords with inversions. &lt;br /&gt;
&lt;br /&gt;
* With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels. &lt;br /&gt;
&lt;br /&gt;
* A mapping exists if both the root notes and bass notes match, and the structure of the output label is the largest possible subset of the input label given the vocabulary. &lt;br /&gt;
&lt;br /&gt;
* For instance, in the major and minor case, G:7(#9) is mapped to G:maj because the interval set of G:maj, {1,3,5}, is a subset of the interval set of G:7(#9), {1,3,5,b7,#9}. In the seventh-chord case, G:7(#9) is mapped to G:7 instead, because the interval set of G:7, {1,3,5,b7}, is also a subset of G:7(#9) but is larger than that of G:maj.&lt;br /&gt;
&lt;br /&gt;
* Our recommendations are motivated by the frequencies of chord qualities in the ''Billboard'' corpus of American popular music (Burgoyne et al., 2011).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Most Frequent Chord Qualities in the ''Billboard'' Corpus&lt;br /&gt;
|- &lt;br /&gt;
! Quality&lt;br /&gt;
! Freq.&lt;br /&gt;
! Cum. Freq.&lt;br /&gt;
|-&lt;br /&gt;
| maj&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
|-&lt;br /&gt;
| min&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 13&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 65&lt;br /&gt;
|-&lt;br /&gt;
| 7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 10&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 75&lt;br /&gt;
|-&lt;br /&gt;
| min7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 8&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 83&lt;br /&gt;
|-&lt;br /&gt;
| maj7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 3&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 86&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Evaluation of segmentation===&lt;br /&gt;
&lt;br /&gt;
* The chord transcription literature includes several other evaluation metrics, which mainly focus on the segmentation of the transcription.&lt;br /&gt;
&lt;br /&gt;
* We propose to include the directional Hamming distance in the evaluation. The directional Hamming distance is calculated by finding for each annotated segment the maximally overlapping segment in the other annotation, and then summing the differences (Abdallah et al., 2005; Mauch, 2010). &lt;br /&gt;
&lt;br /&gt;
* Depending on the order of application, the directional Hamming distance yields a measure of over- or under-segmentation. To keep the scaling consistent with WCSR values (1.0 is best and 0.0 is worst), we report 1 – over-segmentation and 1 – under-segmentation, as well as the harmonic mean of these values (cf. Harte, 2010).&lt;br /&gt;
&lt;br /&gt;
==Software==&lt;br /&gt;
All software used for the evaluation has been made open-source. The evaluation framework is described by [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6637748 Pauwels and Peeters (2013)]. The corresponding code repository can be found on [https://github.com/jpauwels/MusOOEvaluator GitHub], and the measures used here are available as presets. The raw algorithmic output provided below makes it possible to calculate the additional measures from the paper (separate results for tetrads, etc.), beyond those presented below. More help can be found in the [https://github.com/jpauwels/MusOOEvaluator/blob/master/README.md readme].&lt;br /&gt;
&lt;br /&gt;
The statistical comparison between the different submissions is explained in [http://www.terasoft.com.tw/conf/ismir2014/proceedings/T095_250_Paper.pdf Burgoyne et al. (2014)]. The software is available at [https://bitbucket.org/jaburgoyne/mirexace BitBucket]. It uses the detailed results provided below as input.&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| CB3 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/CB3.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| JR2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/JR2.pdf PDF]&lt;br /&gt;
| Jean-Baptiste Rolland&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The table is sorted on WCSR for the major-minor vocabulary. Algorithms that conducted training are marked with an asterisk; all others were submitted pre-trained.&lt;br /&gt;
&lt;br /&gt;
=====MIREX Chord 2009=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/mirex09.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2012=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/billboard12.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2013=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/billboard13.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
An analysis of the statistical differences between all submissions can be found on the following pages:&lt;br /&gt;
&lt;br /&gt;
* [[2014:Audio_Chord_Estimation_Statistical_Analysis_MirexChord2009 | MIREX Chord 2009 Dataset]]&lt;br /&gt;
* [[2014:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012 | Billboard2012 Dataset]]&lt;br /&gt;
* [[2014:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013 | Billboard2013 Dataset]]&lt;br /&gt;
&lt;br /&gt;
===Detailed Results===&lt;br /&gt;
&lt;br /&gt;
More details about the performance of the algorithms, including per-song performance and supplementary statistics, are available from these archives:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/MirexChord2009Results.zip MirexChord2009Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/Billboard2012Results.zip Billboard2012Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/Billboard2013Results.zip Billboard2013Results.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The raw output of the algorithms is available in the archives below. It can be used to experiment with alternative evaluation measures and statistics.&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/MirexChord2009Output.zip MirexChord2009Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/Billboard2012Output.zip Billboard2012Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/Billboard2013Output.zip Billboard2013Output.zip]&lt;br /&gt;
&lt;br /&gt;
===Notes===&lt;br /&gt;
This year's evaluation procedure was exactly the same as last year's, so the results are directly comparable. Moreover, two of the three submissions were resubmissions from last year, KO1 (2014) = KO1 (2013) and CB3 (2014) = CF2 (2013), and consequently they have the same scores.&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results&amp;diff=10776</id>
		<title>2014:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results&amp;diff=10776"/>
		<updated>2014-11-12T22:02:36Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: Added description and links to software&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of these new evaluations for the Isophonics dataset, a.k.a. the MIREX 2009 dataset. It comprises the collected Beatles, Queen, and Zweieck datasets from Queen Mary, University of London, and has been used for audio chord estimation in MIREX for many years.&lt;br /&gt;
&lt;br /&gt;
==Why evaluate differently?==&lt;br /&gt;
&lt;br /&gt;
* Researchers interested in automatic chord estimation have been dissatisfied with the traditional evaluation techniques used for this task at MIREX.&lt;br /&gt;
&lt;br /&gt;
* Numerous alternatives have been proposed in the literature (Harte, 2010; Mauch, 2010; Pauwels &amp;amp; Peeters, 2013). &lt;br /&gt;
&lt;br /&gt;
* At ISMIR 2010 in Utrecht, a group discussed alternatives and developed the [[The_Utrecht_Agreement_on_Chord_Evaluation | Utrecht Agreement]] for updating the task, but until this year, nobody had implemented any of the suggestions.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
&lt;br /&gt;
===More precise recall estimation===&lt;br /&gt;
&lt;br /&gt;
* MIREX typically uses ''chord symbol recall'' (CSR) to estimate how well the predicted chords match the ground truth: the total duration of segments where the predictions match the ground truth divided by the total duration of the song. &lt;br /&gt;
&lt;br /&gt;
* In previous years, MIREX has used an approximate CSR by sampling both the ground-truth and the automatic annotations every 10 ms.&lt;br /&gt;
&lt;br /&gt;
* Following Harte (2010), we instead view the ground-truth and estimated annotations as continuous segmentations of the audio, because this is (1) more precise and (2) more computationally efficient. &lt;br /&gt;
&lt;br /&gt;
* Moreover, because pieces of music come in a wide variety of lengths, we believe it is better to weight the CSR by the length of the song. This final number is referred to as the ''weighted chord symbol recall'' (WCSR).&lt;br /&gt;
&lt;br /&gt;
===Advanced chord vocabularies===&lt;br /&gt;
&lt;br /&gt;
* We computed WCSR with five different chord vocabulary mappings: &lt;br /&gt;
# Chord root note only;&lt;br /&gt;
# Major and minor;&lt;br /&gt;
# Seventh chords;&lt;br /&gt;
# Major and minor with inversions; and&lt;br /&gt;
# Seventh chords with inversions. &lt;br /&gt;
&lt;br /&gt;
* With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels. &lt;br /&gt;
&lt;br /&gt;
* A mapping exists if both the root notes and bass notes match, and the structure of the output label is the largest possible subset of the input label given the vocabulary. &lt;br /&gt;
&lt;br /&gt;
* For instance, in the major and minor case, G:7(#9) is mapped to G:maj because the interval set of G:maj, {1,3,5}, is a subset of the interval set of G:7(#9), {1,3,5,b7,#9}. In the seventh-chord case, G:7(#9) is mapped to G:7 instead, because the interval set of G:7, {1,3,5,b7}, is also a subset of G:7(#9) but is larger than that of G:maj.&lt;br /&gt;
&lt;br /&gt;
* Our recommendations are motivated by the frequencies of chord qualities in the ''Billboard'' corpus of American popular music (Burgoyne et al., 2011).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Most Frequent Chord Qualities in the ''Billboard'' Corpus&lt;br /&gt;
|- &lt;br /&gt;
! Quality&lt;br /&gt;
! Freq.&lt;br /&gt;
! Cum. Freq.&lt;br /&gt;
|-&lt;br /&gt;
| maj&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
|-&lt;br /&gt;
| min&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 13&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 65&lt;br /&gt;
|-&lt;br /&gt;
| 7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 10&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 75&lt;br /&gt;
|-&lt;br /&gt;
| min7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 8&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 83&lt;br /&gt;
|-&lt;br /&gt;
| maj7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 3&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 86&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Evaluation of segmentation===&lt;br /&gt;
&lt;br /&gt;
* The chord transcription literature includes several other evaluation metrics, which mainly focus on the segmentation of the transcription.&lt;br /&gt;
&lt;br /&gt;
* We propose to include the directional Hamming distance in the evaluation. The directional Hamming distance is calculated by finding for each annotated segment the maximally overlapping segment in the other annotation, and then summing the differences (Abdallah et al., 2005; Mauch, 2010). &lt;br /&gt;
&lt;br /&gt;
* Depending on the order of application, the directional Hamming distance yields a measure of over- or under-segmentation. To keep the scaling consistent with WCSR values (1.0 is best and 0.0 is worst), we report 1 – over-segmentation and 1 – under-segmentation, as well as the harmonic mean of these values (cf. Harte, 2010).&lt;br /&gt;
&lt;br /&gt;
==Software==&lt;br /&gt;
All software used for the evaluation has been made open-source. The evaluation procedure itself is described by [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6637748 Pauwels and Peeters (2013)]. The corresponding code repository can be found on [https://github.com/jpauwels/MusOOEvaluator GitHub]. The raw algorithmic output provided below makes it possible to calculate the additional measures from the paper (separate results for tetrads, etc.), beyond those presented below. More help can be found in the [https://github.com/jpauwels/MusOOEvaluator/blob/master/README.md readme].&lt;br /&gt;
&lt;br /&gt;
The statistical comparison between the different submissions is explained in [http://www.terasoft.com.tw/conf/ismir2014/proceedings/T095_250_Paper.pdf Burgoyne et al. (2014)]. The software is available at [https://bitbucket.org/jaburgoyne/mirexace BitBucket]. It uses the detailed results provided below as input.&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| CB3 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/CB3.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| JR2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/JR2.pdf PDF]&lt;br /&gt;
| Jean-Baptiste Rolland&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The table is sorted on WCSR for the major-minor vocabulary. Algorithms that conducted training are marked with an asterisk; all others were submitted pre-trained.&lt;br /&gt;
&lt;br /&gt;
=====MIREX Chord 2009=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/mirex09.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2012=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/billboard12.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2013=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/billboard13.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
An analysis of the statistical differences between all submissions can be found on the following pages:&lt;br /&gt;
&lt;br /&gt;
* [[2014:Audio_Chord_Estimation_Statistical_Analysis_MirexChord2009 | MIREX Chord 2009 Dataset]]&lt;br /&gt;
* [[2014:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012 | Billboard2012 Dataset]]&lt;br /&gt;
* [[2014:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013 | Billboard2013 Dataset]]&lt;br /&gt;
&lt;br /&gt;
===Detailed Results===&lt;br /&gt;
&lt;br /&gt;
More details about the performance of the algorithms, including per-song performance and supplementary statistics, are available from these archives:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/MirexChord2009Results.zip MirexChord2009Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/Billboard2012Results.zip Billboard2012Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/Billboard2013Results.zip Billboard2013Results.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The raw output of the algorithms is available in the archives below. It can be used to experiment with alternative evaluation measures and statistics.&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/MirexChord2009Output.zip MirexChord2009Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/Billboard2012Output.zip Billboard2012Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/Billboard2013Output.zip Billboard2013Output.zip]&lt;br /&gt;
&lt;br /&gt;
===Notes===&lt;br /&gt;
This year's evaluation procedure was exactly the same as last year's, so the results are directly comparable. Moreover, two of the three submissions were resubmissions from last year, KO1 (2014) = KO1 (2013) and CB3 (2014) = CF2 (2013), and consequently they have the same scores.&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013&amp;diff=10775</id>
		<title>2014:Audio Chord Estimation Statistical Analysis Billboard2013</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013&amp;diff=10775"/>
		<updated>2014-11-12T21:42:58Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: Added links to comparative statistics pages&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:Billboard2013-Root.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Billboard2013-MajMin.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Billboard2013-MajMinBass.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Billboard2013-Sevenths.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Billboard2013-SeventhsBass.png]]&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012&amp;diff=10774</id>
		<title>2014:Audio Chord Estimation Statistical Analysis Billboard2012</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012&amp;diff=10774"/>
		<updated>2014-11-12T21:42:18Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: Added statistical significance plots&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:Billboard2012-Root.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Billboard2012-MajMin.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Billboard2012-MajMinBass.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Billboard2012-Sevenths.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Billboard2012-SeventhsBass.png]]&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Statistical_Analysis_MirexChord2009&amp;diff=10773</id>
		<title>2014:Audio Chord Estimation Statistical Analysis MirexChord2009</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Statistical_Analysis_MirexChord2009&amp;diff=10773"/>
		<updated>2014-11-12T21:39:17Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: Added statistical significance plots&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:MirexChord2009-Root.png]]&lt;br /&gt;
[[File:MirexChord2009-MajMin.png]]&lt;br /&gt;
[[File:MirexChord2009-MajMinBass.png]]&lt;br /&gt;
[[File:MirexChord2009-Sevenths.png]]&lt;br /&gt;
[[File:MirexChord2009-SeventhsBass.png]]&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results&amp;diff=10772</id>
		<title>2014:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results&amp;diff=10772"/>
		<updated>2014-11-12T21:31:46Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: Added links to comparative statistics pages&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the new evaluations described below for the Isophonics dataset, a.k.a. the MIREX 2009 dataset. It comprises the collected Beatles, Queen, and Zweieck datasets from Queen Mary, University of London, and has been used for audio chord estimation in MIREX for many years.&lt;br /&gt;
&lt;br /&gt;
==Why evaluate differently?==&lt;br /&gt;
&lt;br /&gt;
* Researchers interested in automatic chord estimation have been dissatisfied with the traditional evaluation techniques used for this task at MIREX.&lt;br /&gt;
&lt;br /&gt;
* Numerous alternatives have been proposed in the literature (Harte, 2010; Mauch, 2010; Pauwels &amp;amp; Peeters, 2013). &lt;br /&gt;
&lt;br /&gt;
* At ISMIR 2010 in Utrecht, a group discussed alternatives and developed the [[The_Utrecht_Agreement_on_Chord_Evaluation | Utrecht Agreement]] for updating the task, but until this year, nobody had implemented any of the suggestions.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
&lt;br /&gt;
===More precise recall estimation===&lt;br /&gt;
&lt;br /&gt;
* MIREX typically uses ''chord symbol recall'' (CSR) to estimate how well the predicted chords match the ground truth: the total duration of segments where the predictions match the ground truth divided by the total duration of the song. &lt;br /&gt;
&lt;br /&gt;
* In previous years, MIREX has used an approximate CSR by sampling both the ground-truth and the automatic annotations every 10 ms.&lt;br /&gt;
&lt;br /&gt;
* Following Harte (2010), we view the ground-truth and estimated annotations instead as continuous segmentations of the audio because (1) this is more precise and also (2) more computationally efficient. &lt;br /&gt;
&lt;br /&gt;
* Moreover, because pieces of music come in a wide variety of lengths, we believe it is better to weight the CSR by the length of the song. This final number is referred to as the ''weighted chord symbol recall'' (WCSR).&lt;br /&gt;
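The two measures above can be sketched in a few lines of Python. This is a minimal illustration, not the official MIREX implementation; the (start, end, label) tuple format and the function names are assumptions made for the example.

```python
def chord_symbol_recall(ref_segments, est_segments):
    """Segment-based CSR: the fraction of the song duration where the
    estimated label matches the ground truth, computed directly from
    segment boundaries rather than by 10 ms sampling."""
    total = sum(end - start for start, end, _ in ref_segments)
    matched = 0.0
    for r_start, r_end, r_label in ref_segments:
        for e_start, e_end, e_label in est_segments:
            # Overlap between the two segments (negative if disjoint).
            overlap = min(r_end, e_end) - max(r_start, e_start)
            if overlap > 0.0 and e_label == r_label:
                matched += overlap
    return matched / total

def weighted_csr(per_song):
    """WCSR: per-song CSRs averaged with each song weighted by its
    length. per_song is a list of (song_length, csr) pairs."""
    total_length = sum(length for length, _ in per_song)
    return sum(length * csr for length, csr in per_song) / total_length
```

For instance, a four-second toy song whose estimate is wrong for one second yields a CSR of 0.75 under this sketch.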
&lt;br /&gt;
===Advanced chord vocabularies===&lt;br /&gt;
&lt;br /&gt;
* We computed WCSR with five different chord vocabulary mappings: &lt;br /&gt;
# Chord root note only;&lt;br /&gt;
# Major and minor;&lt;br /&gt;
# Seventh chords;&lt;br /&gt;
# Major and minor with inversions; and&lt;br /&gt;
# Seventh chords with inversions. &lt;br /&gt;
&lt;br /&gt;
* With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels. &lt;br /&gt;
&lt;br /&gt;
* A mapping exists if both the root notes and bass notes match, and the structure of the output label is the largest possible subset of the input label given the vocabulary. &lt;br /&gt;
&lt;br /&gt;
* For instance, in the major and minor case, G:7(#9) is mapped to G:maj because the interval set of G:maj, {1,3,5}, is a subset of the interval set of G:7(#9), {1,3,5,b7,#9}. In the seventh-chord case, G:7(#9) is mapped to G:7 instead because the interval set of G:7, {1,3,5,b7}, is also a subset of that of G:7(#9) but is larger than that of G:maj.&lt;br /&gt;
&lt;br /&gt;
* Our recommendations are motivated by the frequencies of chord qualities in the ''Billboard'' corpus of American popular music (Burgoyne et al., 2011).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Most Frequent Chord Qualities in the ''Billboard'' Corpus&lt;br /&gt;
|- &lt;br /&gt;
! Quality&lt;br /&gt;
! Freq.&lt;br /&gt;
! Cum. Freq.&lt;br /&gt;
|-&lt;br /&gt;
| maj&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
|-&lt;br /&gt;
| min&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 13&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 65&lt;br /&gt;
|-&lt;br /&gt;
| 7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 10&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 75&lt;br /&gt;
|-&lt;br /&gt;
| min7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 8&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 83&lt;br /&gt;
|-&lt;br /&gt;
| maj7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 3&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 86&lt;br /&gt;
|}&lt;br /&gt;
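The subset-matching rule described above can be sketched as follows. This is a toy illustration with assumed names, and root/bass matching is omitted for brevity; it is not the actual evaluation code.

```python
# Interval sets for a hypothetical seventh-chord vocabulary; the
# major-minor vocabulary would contain only "maj" and "min".
SEVENTHS_VOCAB = {
    "maj": {"1", "3", "5"},
    "min": {"1", "b3", "5"},
    "7": {"1", "3", "5", "b7"},
    "min7": {"1", "b3", "5", "b7"},
    "maj7": {"1", "3", "5", "7"},
}

def map_quality(intervals, vocab):
    """Return the vocabulary quality whose interval set is the largest
    subset of the input chord's interval set, or None if none fits."""
    best = None
    for quality, ivs in vocab.items():
        if ivs.issubset(intervals):
            if best is None or len(ivs) > len(vocab[best]):
                best = quality
    return best

# G:7(#9), with intervals {1, 3, 5, b7, #9}, maps to "maj" under a
# major-minor vocabulary and to "7" under the seventh-chord vocabulary.
```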
&lt;br /&gt;
===Evaluation of segmentation===&lt;br /&gt;
&lt;br /&gt;
* The chord transcription literature includes several other evaluation metrics, which mainly focus on the segmentation of the transcription.&lt;br /&gt;
&lt;br /&gt;
* We propose to include the directional Hamming distance in the evaluation. The directional Hamming distance is calculated by finding for each annotated segment the maximally overlapping segment in the other annotation, and then summing the differences (Abdallah et al., 2005; Mauch, 2010). &lt;br /&gt;
&lt;br /&gt;
* Depending on the order of application, the directional Hamming distance yields a measure of over- or under-segmentation. To keep the scaling consistent with WCSR values (1.0 is best and 0.0 is worst), we report 1 – over-segmentation and 1 – under-segmentation, as well as the harmonic mean of these values (cf. Harte, 2010).&lt;br /&gt;
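The directional Hamming computation described in the two bullets above could be sketched like this. Plain (start, end) tuples and the function name are assumptions; this is an illustration, not the official implementation.

```python
def directional_hamming(seg_a, seg_b, song_length):
    """For each segment in seg_a, find the maximally overlapping
    segment in seg_b and accumulate the portion of seg_a it leaves
    uncovered; the total is normalised by the song length."""
    distance = 0.0
    for a_start, a_end in seg_a:
        best_overlap = 0.0
        for b_start, b_end in seg_b:
            overlap = max(0.0, min(a_end, b_end) - max(a_start, b_start))
            best_overlap = max(best_overlap, overlap)
        distance += (a_end - a_start) - best_overlap
    return distance / song_length

# One order of the two annotations yields the over-segmentation value,
# the other the under-segmentation value; the reported scores are
# 1.0 minus these, so that 1.0 is best.
```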
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| CB3 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/CB3.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| JR2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/JR2.pdf PDF]&lt;br /&gt;
| Jean-Baptiste Rolland&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The table is sorted on WCSR for the major-minor vocabulary. Algorithms that conducted training are marked with an asterisk; all others were submitted pre-trained.&lt;br /&gt;
&lt;br /&gt;
=====MIREX Chord 2009=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/mirex09.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2012=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/billboard12.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2013=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/billboard13.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
An analysis of the statistical difference between all submissions can be found on the following pages:&lt;br /&gt;
&lt;br /&gt;
* [[2014:Audio_Chord_Estimation_Statistical_Analysis_MirexChord2009 | MIREX Chord 2009 Dataset]]&lt;br /&gt;
* [[2014:Audio_Chord_Estimation_Statistical_Analysis_Billboard2012 | Billboard2012 Dataset]]&lt;br /&gt;
* [[2014:Audio_Chord_Estimation_Statistical_Analysis_Billboard2013 | Billboard2013 Dataset]]&lt;br /&gt;
&lt;br /&gt;
===Detailed Results===&lt;br /&gt;
&lt;br /&gt;
More details about the performance of the algorithms, including per-song performance and supplementary statistics, are available from these archives:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/MirexChord2009Results.zip MirexChord2009Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/Billboard2012Results.zip Billboard2012Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/Billboard2013Results.zip Billboard2013Results.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The raw output of the algorithms is available in the archives below. It can be used to experiment with alternative evaluation measures and statistics.&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/MirexChord2009Output.zip MirexChord2009Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/Billboard2012Output.zip Billboard2012Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/Billboard2013Output.zip Billboard2013Output.zip]&lt;br /&gt;
&lt;br /&gt;
===Notes===&lt;br /&gt;
This year's evaluation procedure was identical to last year's, so the results are directly comparable. Moreover, two of the three submissions were resubmissions from last year, KO1 (2014) = KO1 (2013) and CM3 (2014) = CF2 (2013), and consequently have the same scores.&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results&amp;diff=10771</id>
		<title>2014:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results&amp;diff=10771"/>
		<updated>2014-11-12T21:15:46Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: Publish detailled results and output&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the new evaluations described below for the Isophonics dataset, a.k.a. the MIREX 2009 dataset. It comprises the collected Beatles, Queen, and Zweieck datasets from Queen Mary, University of London, and has been used for audio chord estimation in MIREX for many years.&lt;br /&gt;
&lt;br /&gt;
==Why evaluate differently?==&lt;br /&gt;
&lt;br /&gt;
* Researchers interested in automatic chord estimation have been dissatisfied with the traditional evaluation techniques used for this task at MIREX.&lt;br /&gt;
&lt;br /&gt;
* Numerous alternatives have been proposed in the literature (Harte, 2010; Mauch, 2010; Pauwels &amp;amp; Peeters, 2013). &lt;br /&gt;
&lt;br /&gt;
* At ISMIR 2010 in Utrecht, a group discussed alternatives and developed the [[The_Utrecht_Agreement_on_Chord_Evaluation | Utrecht Agreement]] for updating the task, but until this year, nobody had implemented any of the suggestions.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
&lt;br /&gt;
===More precise recall estimation===&lt;br /&gt;
&lt;br /&gt;
* MIREX typically uses ''chord symbol recall'' (CSR) to estimate how well the predicted chords match the ground truth: the total duration of segments where the predictions match the ground truth divided by the total duration of the song. &lt;br /&gt;
&lt;br /&gt;
* In previous years, MIREX has used an approximate CSR by sampling both the ground-truth and the automatic annotations every 10 ms.&lt;br /&gt;
&lt;br /&gt;
* Following Harte (2010), we view the ground-truth and estimated annotations instead as continuous segmentations of the audio because (1) this is more precise and also (2) more computationally efficient. &lt;br /&gt;
&lt;br /&gt;
* Moreover, because pieces of music come in a wide variety of lengths, we believe it is better to weight the CSR by the length of the song. This final number is referred to as the ''weighted chord symbol recall'' (WCSR).&lt;br /&gt;
&lt;br /&gt;
===Advanced chord vocabularies===&lt;br /&gt;
&lt;br /&gt;
* We computed WCSR with five different chord vocabulary mappings: &lt;br /&gt;
# Chord root note only;&lt;br /&gt;
# Major and minor;&lt;br /&gt;
# Seventh chords;&lt;br /&gt;
# Major and minor with inversions; and&lt;br /&gt;
# Seventh chords with inversions. &lt;br /&gt;
&lt;br /&gt;
* With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels. &lt;br /&gt;
&lt;br /&gt;
* A mapping exists if both the root notes and bass notes match, and the structure of the output label is the largest possible subset of the input label given the vocabulary. &lt;br /&gt;
&lt;br /&gt;
* For instance, in the major and minor case, G:7(#9) is mapped to G:maj because the interval set of G:maj, {1,3,5}, is a subset of the interval set of G:7(#9), {1,3,5,b7,#9}. In the seventh-chord case, G:7(#9) is mapped to G:7 instead because the interval set of G:7, {1,3,5,b7}, is also a subset of that of G:7(#9) but is larger than that of G:maj.&lt;br /&gt;
&lt;br /&gt;
* Our recommendations are motivated by the frequencies of chord qualities in the ''Billboard'' corpus of American popular music (Burgoyne et al., 2011).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Most Frequent Chord Qualities in the ''Billboard'' Corpus&lt;br /&gt;
|- &lt;br /&gt;
! Quality&lt;br /&gt;
! Freq.&lt;br /&gt;
! Cum. Freq.&lt;br /&gt;
|-&lt;br /&gt;
| maj&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
|-&lt;br /&gt;
| min&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 13&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 65&lt;br /&gt;
|-&lt;br /&gt;
| 7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 10&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 75&lt;br /&gt;
|-&lt;br /&gt;
| min7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 8&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 83&lt;br /&gt;
|-&lt;br /&gt;
| maj7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 3&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 86&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Evaluation of segmentation===&lt;br /&gt;
&lt;br /&gt;
* The chord transcription literature includes several other evaluation metrics, which mainly focus on the segmentation of the transcription.&lt;br /&gt;
&lt;br /&gt;
* We propose to include the directional Hamming distance in the evaluation. The directional Hamming distance is calculated by finding for each annotated segment the maximally overlapping segment in the other annotation, and then summing the differences (Abdallah et al., 2005; Mauch, 2010). &lt;br /&gt;
&lt;br /&gt;
* Depending on the order of application, the directional Hamming distance yields a measure of over- or under-segmentation. To keep the scaling consistent with WCSR values (1.0 is best and 0.0 is worst), we report 1 – over-segmentation and 1 – under-segmentation, as well as the harmonic mean of these values (cf. Harte, 2010).&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| CB3 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/CB3.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| JR2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/JR2.pdf PDF]&lt;br /&gt;
| Jean-Baptiste Rolland&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The table is sorted on WCSR for the major-minor vocabulary. Algorithms that conducted training are marked with an asterisk; all others were submitted pre-trained.&lt;br /&gt;
&lt;br /&gt;
=====MIREX Chord 2009=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/mirex09.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2012=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/billboard12.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2013=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/billboard13.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Detailed Results===&lt;br /&gt;
&lt;br /&gt;
More details about the performance of the algorithms, including per-song performance and supplementary statistics, are available from these archives:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/MirexChord2009Results.zip MirexChord2009Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/Billboard2012Results.zip Billboard2012Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/Billboard2013Results.zip Billboard2013Results.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The raw output of the algorithms is available in the archives below. It can be used to experiment with alternative evaluation measures and statistics.&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/MirexChord2009Output.zip MirexChord2009Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/Billboard2012Output.zip Billboard2012Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/Billboard2013Output.zip Billboard2013Output.zip]&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results&amp;diff=10770</id>
		<title>2014:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results&amp;diff=10770"/>
		<updated>2014-10-31T20:03:20Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of the new evaluations described below for the Isophonics dataset, a.k.a. the MIREX 2009 dataset. It comprises the collected Beatles, Queen, and Zweieck datasets from Queen Mary, University of London, and has been used for audio chord estimation in MIREX for many years.&lt;br /&gt;
&lt;br /&gt;
==Why evaluate differently?==&lt;br /&gt;
&lt;br /&gt;
* Researchers interested in automatic chord estimation have been dissatisfied with the traditional evaluation techniques used for this task at MIREX.&lt;br /&gt;
&lt;br /&gt;
* Numerous alternatives have been proposed in the literature (Harte, 2010; Mauch, 2010; Pauwels &amp;amp; Peeters, 2013). &lt;br /&gt;
&lt;br /&gt;
* At ISMIR 2010 in Utrecht, a group discussed alternatives and developed the [[The_Utrecht_Agreement_on_Chord_Evaluation | Utrecht Agreement]] for updating the task, but until this year, nobody had implemented any of the suggestions.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
&lt;br /&gt;
===More precise recall estimation===&lt;br /&gt;
&lt;br /&gt;
* MIREX typically uses ''chord symbol recall'' (CSR) to estimate how well the predicted chords match the ground truth: the total duration of segments where the predictions match the ground truth divided by the total duration of the song. &lt;br /&gt;
&lt;br /&gt;
* In previous years, MIREX has used an approximate CSR by sampling both the ground-truth and the automatic annotations every 10 ms.&lt;br /&gt;
&lt;br /&gt;
* Following Harte (2010), we view the ground-truth and estimated annotations instead as continuous segmentations of the audio because (1) this is more precise and also (2) more computationally efficient. &lt;br /&gt;
&lt;br /&gt;
* Moreover, because pieces of music come in a wide variety of lengths, we believe it is better to weight the CSR by the length of the song. This final number is referred to as the ''weighted chord symbol recall'' (WCSR).&lt;br /&gt;
&lt;br /&gt;
===Advanced chord vocabularies===&lt;br /&gt;
&lt;br /&gt;
* We computed WCSR with five different chord vocabulary mappings: &lt;br /&gt;
# Chord root note only;&lt;br /&gt;
# Major and minor;&lt;br /&gt;
# Seventh chords;&lt;br /&gt;
# Major and minor with inversions; and&lt;br /&gt;
# Seventh chords with inversions. &lt;br /&gt;
&lt;br /&gt;
* With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels. &lt;br /&gt;
&lt;br /&gt;
* A mapping exists if both the root notes and bass notes match, and the structure of the output label is the largest possible subset of the input label given the vocabulary. &lt;br /&gt;
&lt;br /&gt;
* For instance, in the major and minor case, G:7(#9) is mapped to G:maj because the interval set of G:maj, {1,3,5}, is a subset of the interval set of G:7(#9), {1,3,5,b7,#9}. In the seventh-chord case, G:7(#9) is mapped to G:7 instead because the interval set of G:7, {1,3,5,b7}, is also a subset of that of G:7(#9) but is larger than that of G:maj.&lt;br /&gt;
&lt;br /&gt;
* Our recommendations are motivated by the frequencies of chord qualities in the ''Billboard'' corpus of American popular music (Burgoyne et al., 2011).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Most Frequent Chord Qualities in the ''Billboard'' Corpus&lt;br /&gt;
|- &lt;br /&gt;
! Quality&lt;br /&gt;
! Freq.&lt;br /&gt;
! Cum. Freq.&lt;br /&gt;
|-&lt;br /&gt;
| maj&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
|-&lt;br /&gt;
| min&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 13&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 65&lt;br /&gt;
|-&lt;br /&gt;
| 7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 10&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 75&lt;br /&gt;
|-&lt;br /&gt;
| min7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 8&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 83&lt;br /&gt;
|-&lt;br /&gt;
| maj7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 3&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 86&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Evaluation of segmentation===&lt;br /&gt;
&lt;br /&gt;
* The chord transcription literature includes several other evaluation metrics, which mainly focus on the segmentation of the transcription.&lt;br /&gt;
&lt;br /&gt;
* We propose to include the directional Hamming distance in the evaluation. The directional Hamming distance is calculated by finding for each annotated segment the maximally overlapping segment in the other annotation, and then summing the differences (Abdallah et al., 2005; Mauch, 2010). &lt;br /&gt;
&lt;br /&gt;
* Depending on the order of application, the directional Hamming distance yields a measure of over- or under-segmentation. To keep the scaling consistent with WCSR values (1.0 is best and 0.0 is worst), we report 1 – over-segmentation and 1 – under-segmentation, as well as the harmonic mean of these values (cf. Harte, 2010).&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| CB3 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/CB3.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| JR2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/JR2.pdf PDF]&lt;br /&gt;
| Jean-Baptiste Rolland&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The table is sorted on WCSR for the major-minor vocabulary. Algorithms that conducted training are marked with an asterisk; all others were submitted pre-trained.&lt;br /&gt;
&lt;br /&gt;
=====MIREX Chord 2009=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/mirex09.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2012=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/billboard12.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
=====Billboard 2013=====&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/billboard13.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
===Complete Results===&lt;br /&gt;
&lt;br /&gt;
More details about the performance of the algorithms, including per-song performance and supplementary statistics, are available from these archives:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/MirexChord2009Results.zip MirexChord2009Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/Billboard2012Results.zip Billboard2012Results.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/Billboard2013Results.zip Billboard2013Results.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The raw output of the algorithms is available in the archives below. It can be used to experiment with alternative evaluation measures and statistics.&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/MirexChord2009Output.zip MirexChord2009Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/Billboard2012Output.zip Billboard2012Output.zip]&lt;br /&gt;
* [https://music-ir.org/mirex/results/2014/ace/Billboard2013Output.zip Billboard2013Output.zip]&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:MIREX2014_Results&amp;diff=10769</id>
		<title>2014:MIREX2014 Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:MIREX2014_Results&amp;diff=10769"/>
		<updated>2014-10-31T17:56:02Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: Redirected three data set pages of ace to new unified page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Results by Task ==&lt;br /&gt;
&lt;br /&gt;
==OVERALL RESULTS POSTERS &amp;lt;!--(First Version: Will need updating as last runs are completed)--&amp;gt;==&lt;br /&gt;
&lt;br /&gt;
[https://www.music-ir.org/mirex/results/2014/mirex_2014_poster.pdf MIREX 2014 Overall Results Posters (PDF)]&lt;br /&gt;
&lt;br /&gt;
===Train-Test Task Set===&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/composer_report/ Audio Classical Composer Identification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/latin_report/ Audio Latin Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/mood_report/index.html Audio Music Mood Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/mixed_report/ Audio Mixed Popular Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/kgenrek_report/ Audio KPOP Genre (Annotated by Korean Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/kgenrea_report/ Audio KPOP Genre (Annotated by American Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/kmoodk_report/ Audio KPOP Mood (Annotated by Korean Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/kmooda_report/ Audio KPOP Mood (Annotated by American Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
===Other Tasks===&lt;br /&gt;
&lt;br /&gt;
* Music Structure Segmentation Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/mrx09/ MIREX09 dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/mrx10_1/ RWC dataset - Quaero (MIREX10) Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/mrx10_2/ RWC dataset - Original RWC Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/sal/ SALAMI dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2014:Discovery of Repeated Themes &amp;amp; Sections Results | Discovery of Repeated Themes &amp;amp; Sections Results]]&lt;br /&gt;
&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2014/results/aod/ Audio Onset Detection Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2014:Audio Cover Song Identification Results|Audio Cover Song Identification Results]]&lt;br /&gt;
&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ate/ Audio Tempo Estimation Results] &amp;amp;nbsp;&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2014/results/akd/ Audio Key Detection Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Beat Tracking Results &lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/abt/smc/ SMC Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/abt/maz/ MAZ Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/abt/mck/ MCK Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Chord Estimation&lt;br /&gt;
** [[2014:Audio_Chord_Estimation_Results#MIREX_Chord_2009 | MIREX Chord &amp;amp;rsquo;09 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2014:Audio_Chord_Estimation_Results#Billboard_2012 | Billboard &amp;amp;rsquo;12 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2014:Audio_Chord_Estimation_Results#Billboard_2013 | Billboard &amp;amp;rsquo;13 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Melody Extraction Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/adc04/  ADC04 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/mrx05/ MIREX05 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/ind08/ INDIAN08 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/mrx09_0db/ MIREX09 0dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/mrx09_m5db/ MIREX09 -5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/mrx09_p5db/ MIREX09 +5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Tag Classification Results&lt;br /&gt;
** Major Miner Tag dataset&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/atg/subtask1_report/bin/ Binary relevance (classification evaluation)] &amp;amp;nbsp;&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/atg/subtask1_report/aff/ Affinity estimation evaluation] &amp;amp;nbsp;&lt;br /&gt;
** Mood Tag dataset&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/atg/subtask2_report/bin/ Binary relevance (classification evaluation)] &amp;amp;nbsp;&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/atg/subtask2_report/aff/ Affinity estimation evaluation] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Query-by-Singing/Humming Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/qbsh/qbsh_task1_hidden/  Hidden Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/qbsh/qbsh_task1_jang/  Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/qbsh/qbsh_task1_thinkit/ ThinkIt Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/qbsh/qbsh_task1_ioacas/ IOACAS Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/qbsh/qbsh_task2_jang/ Subtask2 Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Query-by-Tapping Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/qbt/qbt_task1_jang/ Subtask 1, Jang dataset]&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/qbt/qbt_task1_hsiao/ Subtask 1, Hsiao dataset]&lt;br /&gt;
** Subtask 1, QBT-Extended dataset&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/qbt/qbt_task2_jang/ Subtask 2, Jang dataset]&lt;br /&gt;
** Subtask 3, QBT-Extended dataset&lt;br /&gt;
&lt;br /&gt;
* [[2014:Audio Downbeat Estimation Results|Audio Downbeat Estimation Results]]&lt;br /&gt;
* [[2014:Audio Fingerprinting Results|Audio Fingerprinting Results]]&lt;br /&gt;
&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2014:Real-time_Audio_to_Score_Alignment_(a.k.a._Score_Following)_Results#Summary_Results Real-time Audio to Score Alignment (a.k.a. Score Following) Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2014:Audio_Music_Similarity_and_Retrieval_Results Audio Music Similarity and Retrieval Results]&amp;amp;nbsp;&lt;br /&gt;
* [[2014:Symbolic_Melodic_Similarity_Results | Symbolic Melodic Similarity Results]]&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2014:Singing_Voice_Separation_Results Singing Voice Separation]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2014:Multiple_Fundamental_Frequency_Estimation_%26_Tracking_Results Multiple Fundamental Frequency Estimation &amp;amp; Tracking Results]&amp;amp;nbsp;&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results&amp;diff=10767</id>
		<title>2014:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results&amp;diff=10767"/>
		<updated>2014-10-31T17:51:55Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: moved 2014:Audio Chord Estimation Results MIREX 2009 to 2014:Audio Chord Estimation Results: Merging three pages into one for easier maintenance&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of these new evaluations for the Isophonics dataset, a.k.a. the MIREX 2009 dataset. It comprises the collected Beatles, Queen, and Zweieck datasets from Queen Mary, University of London, and has been used for audio chord estimation in MIREX for many years.&lt;br /&gt;
&lt;br /&gt;
==Why evaluate differently?==&lt;br /&gt;
&lt;br /&gt;
* Researchers interested in automatic chord estimation have been dissatisfied with the traditional evaluation techniques used for this task at MIREX.&lt;br /&gt;
&lt;br /&gt;
* Numerous alternatives have been proposed in the literature (Harte, 2010; Mauch, 2010; Pauwels &amp;amp; Peeters, 2013). &lt;br /&gt;
&lt;br /&gt;
* At ISMIR 2010 in Utrecht, a group discussed alternatives and developed the [[The_Utrecht_Agreement_on_Chord_Evaluation | Utrecht Agreement]] for updating the task, but until this year, nobody had implemented any of the suggestions.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
&lt;br /&gt;
===More precise recall estimation===&lt;br /&gt;
&lt;br /&gt;
* MIREX typically uses ''chord symbol recall'' (CSR) to estimate how well the predicted chords match the ground truth: the total duration of segments where the predictions match the ground truth divided by the total duration of the song. &lt;br /&gt;
&lt;br /&gt;
* In previous years, MIREX has used an approximate CSR by sampling both the ground-truth and the automatic annotations every 10 ms.&lt;br /&gt;
&lt;br /&gt;
* Following Harte (2010), we view the ground-truth and estimated annotations instead as continuous segmentations of the audio, because this approach is (1) more precise and (2) more computationally efficient.&lt;br /&gt;
&lt;br /&gt;
* Moreover, because pieces of music come in a wide variety of lengths, we believe it is better to weight the CSR by the length of the song. This final number is referred to as the ''weighted chord symbol recall'' (WCSR).&lt;br /&gt;
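The CSR and WCSR computations described above can be sketched in a few lines of Python. This is an illustrative sketch only, assuming annotations are given as lists of (start, end, label) triples in seconds; it is not the official MIREX evaluation code.

```python
# Hedged sketch: chord symbol recall (CSR) over continuous segmentations,
# and its duration-weighted average over songs (WCSR).
# Annotations are assumed to be lists of (start, end, label) triples.

def csr(ground_truth, estimate):
    """Fraction of the annotated duration where the labels agree."""
    overlap = 0.0
    total = sum(end - start for start, end, _ in ground_truth)
    for g_start, g_end, g_label in ground_truth:
        for e_start, e_end, e_label in estimate:
            if e_label == g_label:
                # Duration where the two segments with equal labels intersect.
                overlap += max(0.0, min(g_end, e_end) - max(g_start, e_start))
    return overlap / total

def wcsr(songs):
    """Weight each song's CSR by its length; songs is a list of
    (ground_truth, estimate) pairs."""
    numerator = 0.0
    denominator = 0.0
    for ground_truth, estimate in songs:
        length = sum(end - start for start, end, _ in ground_truth)
        numerator += csr(ground_truth, estimate) * length
        denominator += length
    return numerator / denominator
```

Viewing annotations as segmentations means each pair of segments contributes its exact intersection, rather than being approximated by 10 ms samples.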
&lt;br /&gt;
===Advanced chord vocabularies===&lt;br /&gt;
&lt;br /&gt;
* We computed WCSR with five different chord vocabulary mappings: &lt;br /&gt;
# Chord root note only;&lt;br /&gt;
# Major and minor;&lt;br /&gt;
# Seventh chords;&lt;br /&gt;
# Major and minor with inversions; and&lt;br /&gt;
# Seventh chords with inversions. &lt;br /&gt;
&lt;br /&gt;
* With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels. &lt;br /&gt;
&lt;br /&gt;
* A mapping exists if both the root notes and bass notes match, and the structure of the output label is the largest possible subset of the input label given the vocabulary. &lt;br /&gt;
&lt;br /&gt;
* For instance, in the major and minor case, G:7(#9) is mapped to G:maj because the interval set of G:maj, {1,3,5}, is a subset of the interval set of G:7(#9), {1,3,5,b7,#9}. In the seventh-chord case, G:7(#9) is mapped to G:7 instead, because the interval set of G:7, {1,3,5,b7}, is also a subset of that of G:7(#9) but is larger than that of G:maj.&lt;br /&gt;
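The subset-based mapping in the bullets above can be sketched as follows. The interval spellings and the two small vocabularies here are illustrative assumptions, not the exact tables used by the evaluator.

```python
# Hedged sketch of the vocabulary mapping: a chord is mapped to the
# vocabulary quality with the same root whose interval set is the largest
# subset of the input chord's intervals. Vocabularies are assumptions.

MAJOR_MINOR = {'maj': {'1', '3', '5'},
               'min': {'1', 'b3', '5'}}

SEVENTHS = dict(MAJOR_MINOR, **{'7':    {'1', '3', '5', 'b7'},
                                'min7': {'1', 'b3', '5', 'b7'},
                                'maj7': {'1', '3', '5', '7'}})

def map_chord(root, intervals, vocabulary):
    """Return 'root:quality' for the largest matching quality, or None."""
    best = None
    for quality, quality_intervals in vocabulary.items():
        if quality_intervals.issubset(intervals):
            if best is None or len(quality_intervals) > len(vocabulary[best]):
                best = quality
    return f'{root}:{best}' if best else None

# G:7(#9) has the interval set {1, 3, 5, b7, #9}.
g7_sharp9 = {'1', '3', '5', 'b7', '#9'}
```

With these assumed tables, `map_chord('G', g7_sharp9, MAJOR_MINOR)` yields G:maj and `map_chord('G', g7_sharp9, SEVENTHS)` yields G:7, matching the worked example above.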
&lt;br /&gt;
* Our recommendations are motivated by the frequencies of chord qualities in the ''Billboard'' corpus of American popular music (Burgoyne et al., 2011).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Most Frequent Chord Qualities in the ''Billboard'' Corpus&lt;br /&gt;
|- &lt;br /&gt;
! Quality&lt;br /&gt;
! Freq.&lt;br /&gt;
! Cum. Freq.&lt;br /&gt;
|-&lt;br /&gt;
| maj&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
|-&lt;br /&gt;
| min&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 13&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 65&lt;br /&gt;
|-&lt;br /&gt;
| 7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 10&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 75&lt;br /&gt;
|-&lt;br /&gt;
| min7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 8&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 83&lt;br /&gt;
|-&lt;br /&gt;
| maj7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 3&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 86&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Evaluation of segmentation===&lt;br /&gt;
&lt;br /&gt;
* The chord transcription literature includes several other evaluation metrics, which mainly focus on the segmentation of the transcription.&lt;br /&gt;
&lt;br /&gt;
* We propose to include the directional Hamming distance in the evaluation. The directional Hamming distance is calculated by finding for each annotated segment the maximally overlapping segment in the other annotation, and then summing the differences (Abdallah et al., 2005; Mauch, 2010). &lt;br /&gt;
&lt;br /&gt;
* Depending on the order of application, the directional Hamming distance yields a measure of over- or under-segmentation. To keep the scaling consistent with WCSR values (1.0 is best and 0.0 is worst), we report 1 – over-segmentation and 1 – under-segmentation, as well as the harmonic mean of these values (cf. Harte, 2010).&lt;br /&gt;
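The directional Hamming distance described above can be sketched as follows, assuming segmentations are given as lists of (start, end) pairs. This is a hedged sketch, not the official implementation; the reported scores would then be 1 minus this distance divided by the total duration, computed in each direction.

```python
# Hedged sketch of the directional Hamming distance: for each segment in
# one annotation, find the maximally overlapping segment in the other
# annotation and sum the duration that falls outside that best match.

def directional_hamming(reference, other):
    """Summed duration of each reference segment not covered by its
    single best-overlapping segment in the other annotation."""
    distance = 0.0
    for r_start, r_end in reference:
        best_overlap = max(
            (min(r_end, o_end) - max(r_start, o_start)
             for o_start, o_end in other),
            default=0.0)
        distance += (r_end - r_start) - max(0.0, best_overlap)
    return distance
```

Calling it with the arguments in one order measures over-segmentation and in the other order under-segmentation, which is why both directions are reported.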
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| CB3 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/CB3.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| JR2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/JR2.pdf PDF]&lt;br /&gt;
| Jean-Baptiste Rolland&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The table is sorted on WCSR for the major-minor vocabulary. Algorithms that conducted training are marked with an asterisk; all others were submitted pre-trained.&lt;br /&gt;
&lt;br /&gt;
====MIREX Chord 2009====&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/mirex09.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
====Billboard 2012====&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/billboard12.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
====Billboard 2013====&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/billboard13.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
===Complete Results===&lt;br /&gt;
&lt;br /&gt;
More detailed information about the performance of the algorithms, including per-song results and the breakdown of the WCSR calculations, is available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/MirexChord2009.zip MirexChord2009.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The recognition output and the ground-truth files are available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/MirexChord2009Output.zip MirexChord2009Output.zip]&lt;br /&gt;
&lt;br /&gt;
We hope to generate a graphical comparison of all algorithms against the ground truth early in 2014.&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results_MIREX_2009&amp;diff=10768</id>
		<title>2014:Audio Chord Estimation Results MIREX 2009</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results_MIREX_2009&amp;diff=10768"/>
		<updated>2014-10-31T17:51:55Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: moved 2014:Audio Chord Estimation Results MIREX 2009 to 2014:Audio Chord Estimation Results: Merging three pages into one for easier maintenance&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[2014:Audio Chord Estimation Results]]&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results&amp;diff=10766</id>
		<title>2014:Audio Chord Estimation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results&amp;diff=10766"/>
		<updated>2014-10-31T17:51:24Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: Merging three pages into one for easier maintenance&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of these new evaluations for the Isophonics dataset, a.k.a. the MIREX 2009 dataset. It comprises the collected Beatles, Queen, and Zweieck datasets from Queen Mary, University of London, and has been used for audio chord estimation in MIREX for many years.&lt;br /&gt;
&lt;br /&gt;
==Why evaluate differently?==&lt;br /&gt;
&lt;br /&gt;
* Researchers interested in automatic chord estimation have been dissatisfied with the traditional evaluation techniques used for this task at MIREX.&lt;br /&gt;
&lt;br /&gt;
* Numerous alternatives have been proposed in the literature (Harte, 2010; Mauch, 2010; Pauwels &amp;amp; Peeters, 2013). &lt;br /&gt;
&lt;br /&gt;
* At ISMIR 2010 in Utrecht, a group discussed alternatives and developed the [[The_Utrecht_Agreement_on_Chord_Evaluation | Utrecht Agreement]] for updating the task, but until this year, nobody had implemented any of the suggestions.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
&lt;br /&gt;
===More precise recall estimation===&lt;br /&gt;
&lt;br /&gt;
* MIREX typically uses ''chord symbol recall'' (CSR) to estimate how well the predicted chords match the ground truth: the total duration of segments where the predictions match the ground truth divided by the total duration of the song. &lt;br /&gt;
&lt;br /&gt;
* In previous years, MIREX has used an approximate CSR by sampling both the ground-truth and the automatic annotations every 10 ms.&lt;br /&gt;
&lt;br /&gt;
* Following Harte (2010), we view the ground-truth and estimated annotations instead as continuous segmentations of the audio, because this approach is (1) more precise and (2) more computationally efficient.&lt;br /&gt;
&lt;br /&gt;
* Moreover, because pieces of music come in a wide variety of lengths, we believe it is better to weight the CSR by the length of the song. This final number is referred to as the ''weighted chord symbol recall'' (WCSR).&lt;br /&gt;
&lt;br /&gt;
===Advanced chord vocabularies===&lt;br /&gt;
&lt;br /&gt;
* We computed WCSR with five different chord vocabulary mappings: &lt;br /&gt;
# Chord root note only;&lt;br /&gt;
# Major and minor;&lt;br /&gt;
# Seventh chords;&lt;br /&gt;
# Major and minor with inversions; and&lt;br /&gt;
# Seventh chords with inversions. &lt;br /&gt;
&lt;br /&gt;
* With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels. &lt;br /&gt;
&lt;br /&gt;
* A mapping exists if both the root notes and bass notes match, and the structure of the output label is the largest possible subset of the input label given the vocabulary. &lt;br /&gt;
&lt;br /&gt;
* For instance, in the major and minor case, G:7(#9) is mapped to G:maj because the interval set of G:maj, {1,3,5}, is a subset of the interval set of G:7(#9), {1,3,5,b7,#9}. In the seventh-chord case, G:7(#9) is mapped to G:7 instead, because the interval set of G:7, {1,3,5,b7}, is also a subset of that of G:7(#9) but is larger than that of G:maj.&lt;br /&gt;
&lt;br /&gt;
* Our recommendations are motivated by the frequencies of chord qualities in the ''Billboard'' corpus of American popular music (Burgoyne et al., 2011).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Most Frequent Chord Qualities in the ''Billboard'' Corpus&lt;br /&gt;
|- &lt;br /&gt;
! Quality&lt;br /&gt;
! Freq.&lt;br /&gt;
! Cum. Freq.&lt;br /&gt;
|-&lt;br /&gt;
| maj&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
|-&lt;br /&gt;
| min&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 13&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 65&lt;br /&gt;
|-&lt;br /&gt;
| 7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 10&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 75&lt;br /&gt;
|-&lt;br /&gt;
| min7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 8&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 83&lt;br /&gt;
|-&lt;br /&gt;
| maj7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 3&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 86&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Evaluation of segmentation===&lt;br /&gt;
&lt;br /&gt;
* The chord transcription literature includes several other evaluation metrics, which mainly focus on the segmentation of the transcription.&lt;br /&gt;
&lt;br /&gt;
* We propose to include the directional Hamming distance in the evaluation. The directional Hamming distance is calculated by finding for each annotated segment the maximally overlapping segment in the other annotation, and then summing the differences (Abdallah et al., 2005; Mauch, 2010). &lt;br /&gt;
&lt;br /&gt;
* Depending on the order of application, the directional Hamming distance yields a measure of over- or under-segmentation. To keep the scaling consistent with WCSR values (1.0 is best and 0.0 is worst), we report 1 – over-segmentation and 1 – under-segmentation, as well as the harmonic mean of these values (cf. Harte, 2010).&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| CB3 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/CB3.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| JR2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/JR2.pdf PDF]&lt;br /&gt;
| Jean-Baptiste Rolland&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The table is sorted on WCSR for the major-minor vocabulary. Algorithms that conducted training are marked with an asterisk; all others were submitted pre-trained.&lt;br /&gt;
&lt;br /&gt;
====MIREX Chord 2009====&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/mirex09.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
====Billboard 2012====&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/billboard12.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
====Billboard 2013====&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/billboard13.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
===Complete Results===&lt;br /&gt;
&lt;br /&gt;
More detailed information about the performance of the algorithms, including per-song results and the breakdown of the WCSR calculations, is available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/MirexChord2009.zip MirexChord2009.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The recognition output and the ground-truth files are available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/MirexChord2009Output.zip MirexChord2009Output.zip]&lt;br /&gt;
&lt;br /&gt;
We hope to generate a graphical comparison of all algorithms against the ground truth early in 2014.&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:MIREX2013_Results&amp;diff=10728</id>
		<title>2013:MIREX2013 Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:MIREX2013_Results&amp;diff=10728"/>
		<updated>2014-10-21T18:07:05Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: Removed links to partial and buggy old style evaluations for ace&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==OVERALL RESULTS POSTERS &amp;lt;!--(First Version: Will need updating as last runs are completed)--&amp;gt;==&lt;br /&gt;
&lt;br /&gt;
This page is under construction. &lt;br /&gt;
&lt;br /&gt;
[https://www.music-ir.org/mirex/results/2013/mirex_2013_poster.pdf MIREX 2013 Overall Results Posters (PDF)]&lt;br /&gt;
&lt;br /&gt;
==Results by Task ==&lt;br /&gt;
&lt;br /&gt;
===Train-Test Task Set===&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2013/results/act/composer_report/ Audio Classical Composer Identification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2013/results/act/latin_report/ Audio Latin Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2013/results/act/mood_report/index.html Audio Music Mood Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2013/results/act/mixed_report/ Audio Mixed Popular Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
===Other Tasks===&lt;br /&gt;
&lt;br /&gt;
* Audio Beat Tracking Results &lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/abt/dav/ DAV Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/abt/maz/ MAZ Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/abt/mck/ MCK Dataset] &amp;amp;nbsp;&lt;br /&gt;
* Audio Chord Detection Results&lt;br /&gt;
** [[2013:Audio_Chord_Estimation_Results_MIREX_2009 | MIREX &amp;amp;rsquo;09 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2013:Audio_Chord_Estimation_Results_Billboard_2012 | Billboard &amp;amp;rsquo;12 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2013:Audio_Chord_Estimation_Results_Billboard_2013 | Billboard &amp;amp;rsquo;13 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2013/results/akd/ Audio Key Detection Results] &amp;amp;nbsp;&lt;br /&gt;
* Audio Melody Extraction Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ame/adc04/  ADC04 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ame/mrx05/ MIREX05 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ame/ind08/ INDIAN08 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ame/mrx09_0db/ MIREX09 0dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ame/mrx09_m5db/ MIREX09 -5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ame/mrx09_p5db/ MIREX09 +5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
* [[2013:Audio_Music_Similarity_and_Retrieval_Results | Audio Music Similarity and Retrieval Results]] &lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2013/results/aod/ Audio Onset Detection Results] &amp;amp;nbsp;&lt;br /&gt;
* Audio Tag Classification Results&lt;br /&gt;
** Major Miner Tag dataset&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/atg/subtask1_report/bin/ Binary relevance (classification evaluation)] &amp;amp;nbsp;&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/atg/subtask1_report/aff/ Affinity estimation evaluation] &amp;amp;nbsp;&lt;br /&gt;
** Mood Tag dataset&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/atg/subtask2_report/bin/ Binary relevance (classification evaluation)] &amp;amp;nbsp;&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/atg/subtask2_report/aff/ Affinity estimation evaluation] &amp;amp;nbsp;&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ate/ Audio Tempo Estimation Results] &amp;amp;nbsp;&lt;br /&gt;
* [[2013:Multiple_Fundamental_Frequency_Estimation_&amp;amp;_Tracking_Results | Multiple Fundamental Frequency Estimation &amp;amp; Tracking Results]]&lt;br /&gt;
* Music Structure Segmentation Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/struct/mrx09/ MIREX09 dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/struct/mrx10_1/ RWC dataset - Quaero (MIREX10) Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/struct/mrx10_2/ RWC dataset - Original RWC Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/struct/sal/ SALAMI dataset] &amp;amp;nbsp;&lt;br /&gt;
* Query-by-Singing/Humming Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbsh/qbsh_task1_hidden/  Hidden Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbsh/qbsh_task1a_jang/  Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbsh/qbsh_task1b_thinkit/ ThinkIt Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbsh/qbsh_task1c_ioacas/ IOACAS Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbsh/qbsh_task2_jang/ Subtask2 Dataset] &amp;amp;nbsp;&lt;br /&gt;
* Query-by-Tapping Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbt/qbt_task1_jang/  Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbt/qbt_task1_hsiao/ HSIAO Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbt/qbt_task2_jang/ Subtask2 Dataset] &amp;amp;nbsp;&lt;br /&gt;
*[[2013:Real-time_Audio_to_Score_Alignment_(a.k.a._Score_Following)_Results | Real-time Audio to Score Alignment (a.k.a. Score Following) Results ]]&lt;br /&gt;
* [[2013:Symbolic_Melodic_Similarity_Results | Symbolic Melodic Similarity Results]]&lt;br /&gt;
* [[2013:Discovery of Repeated Themes &amp;amp; Sections Results | Discovery of Repeated Themes &amp;amp; Sections Results]]&lt;br /&gt;
* [[2013:Audio Cover Song Identification Results]]&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results_Billboard_2013&amp;diff=10727</id>
		<title>2014:Audio Chord Estimation Results Billboard 2013</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results_Billboard_2013&amp;diff=10727"/>
		<updated>2014-10-21T17:52:31Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: Read reformatted results from table&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of these new evaluations for a special subset of the ''Billboard'' dataset from McGill University that has never been made available to the public. Further subsets have been withheld to support the ACE task through MIREX 2015.&lt;br /&gt;
&lt;br /&gt;
==Why evaluate differently?==&lt;br /&gt;
&lt;br /&gt;
* Researchers interested in automatic chord estimation have been dissatisfied with the traditional evaluation techniques used for this task at MIREX.&lt;br /&gt;
&lt;br /&gt;
* Numerous alternatives have been proposed in the literature (Harte, 2010; Mauch, 2010; Pauwels &amp;amp; Peeters, 2013). &lt;br /&gt;
&lt;br /&gt;
* At ISMIR 2010 in Utrecht, a group discussed alternatives and developed the [[The_Utrecht_Agreement_on_Chord_Evaluation | Utrecht Agreement]] for updating the task, but until this year, nobody had implemented any of the suggestions.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
&lt;br /&gt;
===More precise recall estimation===&lt;br /&gt;
&lt;br /&gt;
* MIREX typically uses ''chord symbol recall'' (CSR) to estimate how well the predicted chords match the ground truth: the total duration of segments where the predictions match the ground truth divided by the total duration of the song. &lt;br /&gt;
&lt;br /&gt;
* In previous years, MIREX has used an approximate CSR by sampling both the ground-truth and the automatic annotations every 10 ms.&lt;br /&gt;
&lt;br /&gt;
* Following Harte (2010), we view the ground-truth and estimated annotations instead as continuous segmentations of the audio, because this approach is (1) more precise and (2) more computationally efficient.&lt;br /&gt;
&lt;br /&gt;
* Moreover, because pieces of music come in a wide variety of lengths, we believe it is better to weight the CSR by the length of the song. This final number is referred to as the ''weighted chord symbol recall'' (WCSR).&lt;br /&gt;
&lt;br /&gt;
===Advanced chord vocabularies===&lt;br /&gt;
&lt;br /&gt;
* We computed WCSR with five different chord vocabulary mappings: &lt;br /&gt;
# Chord root note only;&lt;br /&gt;
# Major and minor;&lt;br /&gt;
# Seventh chords;&lt;br /&gt;
# Major and minor with inversions; and&lt;br /&gt;
# Seventh chords with inversions. &lt;br /&gt;
&lt;br /&gt;
* With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels. &lt;br /&gt;
&lt;br /&gt;
* A mapping exists if both the root notes and bass notes match, and the structure of the output label is the largest possible subset of the input label given the vocabulary. &lt;br /&gt;
&lt;br /&gt;
* For instance, in the major and minor case, G:7(#9) is mapped to G:maj because the interval set of G:maj, {1,3,5}, is a subset of the interval set of G:7(#9), {1,3,5,b7,#9}. In the seventh-chord case, G:7(#9) is mapped to G:7 instead, because the interval set of G:7, {1,3,5,b7}, is also a subset of that of G:7(#9) but is larger than that of G:maj.&lt;br /&gt;
&lt;br /&gt;
* Our recommendations are motivated by the frequencies of chord qualities in the ''Billboard'' corpus of American popular music (Burgoyne et al., 2011).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Most Frequent Chord Qualities in the ''Billboard'' Corpus&lt;br /&gt;
|- &lt;br /&gt;
! Quality&lt;br /&gt;
! Freq.&lt;br /&gt;
! Cum. Freq.&lt;br /&gt;
|-&lt;br /&gt;
| maj&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
|-&lt;br /&gt;
| min&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 13&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 65&lt;br /&gt;
|-&lt;br /&gt;
| 7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 10&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 75&lt;br /&gt;
|-&lt;br /&gt;
| min7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 8&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 83&lt;br /&gt;
|-&lt;br /&gt;
| maj7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 3&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 86&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Evaluation of segmentation===&lt;br /&gt;
&lt;br /&gt;
* The chord transcription literature includes several other evaluation metrics, which mainly focus on the segmentation of the transcription.&lt;br /&gt;
&lt;br /&gt;
* We propose to include the directional Hamming distance in the evaluation. The directional Hamming distance is calculated by finding for each annotated segment the maximally overlapping segment in the other annotation, and then summing the differences (Abdallah et al., 2005; Mauch, 2010). &lt;br /&gt;
&lt;br /&gt;
* Depending on the order of application, the directional Hamming distance yields a measure of over- or under-segmentation. To keep the scaling consistent with WCSR values (1.0 is best and 0.0 is worst), we report 1 – over-segmentation and 1 – under-segmentation, as well as the harmonic mean of these values (cf. Harte, 2010).&lt;br /&gt;
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| CB3 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/CB3.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| JR2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/JR2.pdf PDF]&lt;br /&gt;
| Jean-Baptiste Rolland&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The table is sorted on WCSR for the major-minor vocabulary. Algorithms that included a training phase are marked with an asterisk; all others were submitted pre-trained.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/billboard13.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
===Complete Results===&lt;br /&gt;
&lt;br /&gt;
More detailed information about the performance of the algorithms, including per-song results and the breakdown of the WCSR calculations, is available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/BillboardTest2013.zip BillboardTest2013.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The recognition output and the ground-truth files are available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/BillboardTest2013Output.zip BillboardTest2013Output.zip]&lt;br /&gt;
&lt;br /&gt;
We hope to generate a graphical comparison of all algorithms against the ground truth early in 2014.&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results_Billboard_2013&amp;diff=10726</id>
		<title>2014:Audio Chord Estimation Results Billboard 2013</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Chord_Estimation_Results_Billboard_2013&amp;diff=10726"/>
		<updated>2014-10-21T17:18:02Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
This page contains the results of a new set of evaluations for a special subset of the ''Billboard'' dataset from McGill University that has never been made available to the public. Further subsets have been withheld to support the ACE task through MIREX 2015.&lt;br /&gt;
&lt;br /&gt;
==Why evaluate differently?==&lt;br /&gt;
&lt;br /&gt;
* Researchers interested in automatic chord estimation have been dissatisfied with the traditional evaluation techniques used for this task at MIREX.&lt;br /&gt;
&lt;br /&gt;
* Numerous alternatives have been proposed in the literature (Harte, 2010; Mauch, 2010; Pauwels &amp;amp; Peeters, 2013). &lt;br /&gt;
&lt;br /&gt;
* At ISMIR 2010 in Utrecht, a group discussed alternatives and developed the [[The_Utrecht_Agreement_on_Chord_Evaluation | Utrecht Agreement]] for updating the task, but until this year, nobody had implemented any of the suggestions.&lt;br /&gt;
&lt;br /&gt;
==What’s new?==&lt;br /&gt;
&lt;br /&gt;
===More precise recall estimation===&lt;br /&gt;
&lt;br /&gt;
* MIREX typically uses ''chord symbol recall'' (CSR) to estimate how well the predicted chords match the ground truth: the total duration of segments where the predictions match the ground truth divided by the total duration of the song. &lt;br /&gt;
&lt;br /&gt;
* In previous years, MIREX has used an approximate CSR by sampling both the ground-truth and the automatic annotations every 10 ms.&lt;br /&gt;
&lt;br /&gt;
* Following Harte (2010), we instead view the ground-truth and estimated annotations as continuous segmentations of the audio, because this is both (1) more precise and (2) more computationally efficient.&lt;br /&gt;
&lt;br /&gt;
* Moreover, because pieces of music come in a wide variety of lengths, we believe it is better to weight the CSR by the length of the song. This final number is referred to as the ''weighted chord symbol recall'' (WCSR).&lt;br /&gt;
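As a rough illustration only (not the actual MIREX evaluation code; the (start, end, label) segment format in seconds is an assumption), per-song CSR and corpus-level WCSR could be computed as follows:&lt;br /&gt;

```python
def song_length(segments):
    """Total duration covered by a list of (start, end, label) segments."""
    return sum(end - start for start, end, _ in segments)

def csr(ref_segments, est_segments):
    """Chord symbol recall for one song: the fraction of the song's
    duration over which the estimated label matches the reference,
    computed from continuous segments rather than 10 ms samples."""
    matched = 0.0
    for r_start, r_end, r_label in ref_segments:
        for e_start, e_end, e_label in est_segments:
            overlap = min(r_end, e_end) - max(r_start, e_start)
            if overlap > 0 and e_label == r_label:
                matched += overlap
    return matched / song_length(ref_segments)

def wcsr(songs):
    """Weighted CSR over a corpus of (reference, estimate) pairs:
    each song's CSR is weighted by the song's length, which equals
    total matched duration divided by total corpus duration."""
    total_matched = sum(csr(ref, est) * song_length(ref) for ref, est in songs)
    total_length = sum(song_length(ref) for ref, _ in songs)
    return total_matched / total_length
```

Weighting by length means a 6-minute song influences the corpus score twice as much as a 3-minute song, rather than each song counting equally.&lt;br /&gt;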
&lt;br /&gt;
===Advanced chord vocabularies===&lt;br /&gt;
&lt;br /&gt;
* We computed WCSR with five different chord vocabulary mappings: &lt;br /&gt;
# Chord root note only;&lt;br /&gt;
# Major and minor;&lt;br /&gt;
# Seventh chords;&lt;br /&gt;
# Major and minor with inversions; and&lt;br /&gt;
# Seventh chords with inversions. &lt;br /&gt;
&lt;br /&gt;
* With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels. &lt;br /&gt;
&lt;br /&gt;
* A mapping exists if both the root notes and bass notes match, and the structure of the output label is the largest possible subset of the input label given the vocabulary. &lt;br /&gt;
&lt;br /&gt;
* For instance, in the major and minor case, G:7(#9) is mapped to G:maj because the interval set of G:maj, {1,3,5}, is a subset of the interval set of G:7(#9), {1,3,5,b7,#9}. In the seventh-chord case, G:7(#9) is mapped to G:7 instead, because the interval set of G:7, {1,3,5,b7}, is also a subset of that of G:7(#9) but is larger than that of G:maj.&lt;br /&gt;
&lt;br /&gt;
* Our recommendations are motivated by the frequencies of chord qualities in the ''Billboard'' corpus of American popular music (Burgoyne et al., 2011).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Most Frequent Chord Qualities in the ''Billboard'' Corpus&lt;br /&gt;
|- &lt;br /&gt;
! Quality&lt;br /&gt;
! Freq.&lt;br /&gt;
! Cum. Freq.&lt;br /&gt;
|-&lt;br /&gt;
| maj&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 52&lt;br /&gt;
|-&lt;br /&gt;
| min&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 13&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 65&lt;br /&gt;
|-&lt;br /&gt;
| 7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 10&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 75&lt;br /&gt;
|-&lt;br /&gt;
| min7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 8&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 83&lt;br /&gt;
|-&lt;br /&gt;
| maj7&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 3&lt;br /&gt;
| align=&amp;quot;right&amp;quot;| 86&lt;br /&gt;
|}&lt;br /&gt;
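The largest-subset mapping rule described above can be sketched as follows (a minimal illustration with hypothetical interval-set encodings, not the evaluation code used at MIREX; root and bass notes are assumed to have matched already):&lt;br /&gt;

```python
# Interval sets for a small seventh-chord vocabulary (assumed shapes).
VOCAB = {
    "maj": {"1", "3", "5"},
    "min": {"1", "b3", "5"},
    "7": {"1", "3", "5", "b7"},
    "min7": {"1", "b3", "5", "b7"},
    "maj7": {"1", "3", "5", "7"},
}

def map_quality(input_intervals, vocab):
    """Return the vocabulary quality whose interval set is the largest
    subset of the input chord's interval set, or None if no quality
    in the vocabulary is a subset of the input."""
    best = None
    best_size = 0
    for quality, intervals in vocab.items():
        if intervals.issubset(input_intervals) and len(intervals) > best_size:
            best, best_size = quality, len(intervals)
    return best
```

With the G:7(#9) example, mapping {1, 3, 5, b7, #9} against a major/minor-only vocabulary picks "maj", while mapping against the seventh-chord vocabulary above picks "7", since its four-interval set is the largest matching subset.&lt;br /&gt;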
&lt;br /&gt;
===Evaluation of segmentation===&lt;br /&gt;
&lt;br /&gt;
* The chord transcription literature includes several other evaluation metrics, which mainly focus on the segmentation of the transcription.&lt;br /&gt;
&lt;br /&gt;
* We propose to include the directional Hamming distance in the evaluation. The directional Hamming distance is calculated by finding for each annotated segment the maximally overlapping segment in the other annotation, and then summing the differences (Abdallah et al., 2005; Mauch, 2010). &lt;br /&gt;
&lt;br /&gt;
* Depending on the order of application, the directional Hamming distance yields a measure of over- or under-segmentation. To keep the scaling consistent with WCSR values (1.0 is best and 0.0 is worst), we report 1 – over-segmentation and 1 – under-segmentation, as well as the harmonic mean of these values (cf. Harte, 2010).&lt;br /&gt;
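A minimal sketch of these segmentation scores, assuming plain (start, end) segments spanning the same total duration (the pairing of directions with over- and under-segmentation below follows one common convention and is an assumption, not taken from this page):&lt;br /&gt;

```python
def directional_hamming(seg_a, seg_b):
    """For each segment of seg_a, find the maximally overlapping
    segment of seg_b and sum the uncovered remainders
    (after Abdallah et al., 2005)."""
    dist = 0.0
    for a_start, a_end in seg_a:
        max_overlap = 0.0
        for b_start, b_end in seg_b:
            overlap = min(a_end, b_end) - max(a_start, b_start)
            max_overlap = max(max_overlap, overlap)
        dist += (a_end - a_start) - max_overlap
    return dist

def segmentation_scores(ref, est):
    """Return (1 - over-segmentation, 1 - under-segmentation, harmonic mean),
    all scaled so that 1.0 is best, consistent with WCSR."""
    total = sum(end - start for start, end in ref)
    # A fragmented estimate leaves each reference segment poorly covered:
    over = 1.0 - directional_hamming(ref, est) / total
    # A coarse estimate leaves each estimated segment poorly covered:
    under = 1.0 - directional_hamming(est, ref) / total
    harmonic = 2 * over * under / (over + under)
    return over, under, harmonic
```

For example, an estimate consisting of a single segment where the reference has two scores 1.0 on over-segmentation but is penalized on under-segmentation.&lt;br /&gt;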
&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
==Submissions==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Abstract&lt;br /&gt;
! Contributors&lt;br /&gt;
|-&lt;br /&gt;
| KO1 (&amp;lt;em&amp;gt;shineChords&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
| Maksim Khadkevich, Maurizio Omologo&lt;br /&gt;
|-&lt;br /&gt;
| CB3 (&amp;lt;em&amp;gt;Chordino&amp;lt;/em&amp;gt;)&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/CB3.pdf PDF]&lt;br /&gt;
| Chris Cannam, Matthias Mauch&lt;br /&gt;
|-&lt;br /&gt;
| JR2&lt;br /&gt;
| style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/JR2.pdf PDF]&lt;br /&gt;
| Jean-Baptiste Rolland&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The table is sorted on WCSR for the major-minor vocabulary. Algorithms that included a training phase are marked with an asterisk; all others were submitted pre-trained.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/ace/billboard13.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- writetest&lt;br /&gt;
===Comparative Statistics===&lt;br /&gt;
&lt;br /&gt;
* ''coming soon...''&lt;br /&gt;
&lt;br /&gt;
===Complete Results===&lt;br /&gt;
&lt;br /&gt;
More detailed information about the performance of the algorithms, including per-song results and the breakdown of the WCSR calculations, is available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/BillboardTest2013.zip BillboardTest2013.zip]&lt;br /&gt;
&lt;br /&gt;
===Algorithmic Output===&lt;br /&gt;
&lt;br /&gt;
The recognition output and the ground-truth files are available from this archive:&lt;br /&gt;
&lt;br /&gt;
* [https://music-ir.org/mirex/results/2013/ace/BillboardTest2013Output.zip BillboardTest2013Output.zip]&lt;br /&gt;
&lt;br /&gt;
We hope to generate a graphical comparison of all algorithms against the ground truth early in 2014.&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9465</id>
		<title>2013:Task Captains</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9465"/>
		<updated>2013-07-09T12:41:23Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In response to discussions at ISMIR 2012, we are prepared to improve the distribution of tasks for the upcoming MIREX 2013.  To do so, we really need leaders to help us organize and run each task.&lt;br /&gt;
&lt;br /&gt;
To volunteer to lead one or more tasks, please add your name in the &amp;quot;Captains&amp;quot; column.&lt;br /&gt;
&lt;br /&gt;
What does it mean to lead a task?&lt;br /&gt;
* Update wiki pages as needed&lt;br /&gt;
* Communicate with submitters and troubleshoot submissions&lt;br /&gt;
* Execute and evaluate submissions&lt;br /&gt;
* Publish final results&lt;br /&gt;
&lt;br /&gt;
Due to the proprietary nature of much of the data, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to provide access to task organizers to manage and run submissions on the IMIRSEL systems.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: 20px&amp;quot;&lt;br /&gt;
!ID !! Task !! Captain(s)&lt;br /&gt;
|-&lt;br /&gt;
|abt&lt;br /&gt;
|[[2013:Audio Beat Tracking]]&lt;br /&gt;
|Fu-Hai Frank Wu&lt;br /&gt;
|-&lt;br /&gt;
|ace&lt;br /&gt;
|[[2013:Audio Chord Estimation]]&lt;br /&gt;
|John Ashley Burgoyne, W. Bas de Haas, Johan Pauwels&lt;br /&gt;
|-&lt;br /&gt;
|act&lt;br /&gt;
|[[2013:Audio Classification (Train/Test) Tasks]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|acs&lt;br /&gt;
|[[2013:Audio Cover Song Identification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|akd&lt;br /&gt;
|[[2013:Audio Key Detection]]&lt;br /&gt;
|Johan Pauwels&lt;br /&gt;
|-&lt;br /&gt;
|ame&lt;br /&gt;
|[[2013:Audio Melody Extraction]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|ams&lt;br /&gt;
|[[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|aod&lt;br /&gt;
|[[2013:Audio Onset Detection]]&lt;br /&gt;
|Sebastian Böck&lt;br /&gt;
|-&lt;br /&gt;
|ate&lt;br /&gt;
|[[2013:Audio Tempo Estimation]]&lt;br /&gt;
|Aggelos Gkiokas, Anders Elowsson&lt;br /&gt;
|-&lt;br /&gt;
|atg&lt;br /&gt;
|[[2013:Audio Tag Classification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mf0&lt;br /&gt;
|[[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
|Mert Bay&lt;br /&gt;
|-&lt;br /&gt;
|qbsh&lt;br /&gt;
|[[2013:Query by Singing/Humming]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|qbt&lt;br /&gt;
|[[2013:Query by Tapping]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|scofo&lt;br /&gt;
|[[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sms&lt;br /&gt;
|[[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|struct&lt;br /&gt;
|[[2013:Structural Segmentation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|drts&lt;br /&gt;
|[[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
|Tom Collins&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9412</id>
		<title>2013:Task Captains</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9412"/>
		<updated>2013-06-14T09:17:02Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In response to discussions at ISMIR 2012, we are prepared to improve the distribution of tasks for the upcoming MIREX 2013.  To do so, we really need leaders to help us organize and run each task.&lt;br /&gt;
&lt;br /&gt;
To volunteer to lead one or more tasks, please add your name in the &amp;quot;Captains&amp;quot; column.&lt;br /&gt;
&lt;br /&gt;
What does it mean to lead a task?&lt;br /&gt;
* Update wiki pages as needed&lt;br /&gt;
* Communicate with submitters and troubleshoot submissions&lt;br /&gt;
* Execute and evaluate submissions&lt;br /&gt;
* Publish final results&lt;br /&gt;
&lt;br /&gt;
Due to the proprietary nature of much of the data, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to provide access to task organizers to manage and run submissions on the IMIRSEL systems.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: 20px&amp;quot;&lt;br /&gt;
!ID !! Task !! Captain(s)&lt;br /&gt;
|-&lt;br /&gt;
|abt&lt;br /&gt;
|[[2013:Audio Beat Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ace&lt;br /&gt;
|[[2013:Audio Chord Estimation]]&lt;br /&gt;
|Johan Pauwels&lt;br /&gt;
|-&lt;br /&gt;
|act&lt;br /&gt;
|[[2013:Audio Classification (Train/Test) Tasks]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|acs&lt;br /&gt;
|[[2013:Audio Cover Song Identification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|akd&lt;br /&gt;
|[[2013:Audio Key Detection]]&lt;br /&gt;
|Johan Pauwels&lt;br /&gt;
|-&lt;br /&gt;
|ame&lt;br /&gt;
|[[2013:Audio Melody Extraction]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|ams&lt;br /&gt;
|[[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|aod&lt;br /&gt;
|[[2013:Audio Onset Detection]]&lt;br /&gt;
|Sebastian Böck&lt;br /&gt;
|-&lt;br /&gt;
|ate&lt;br /&gt;
|[[2013:Audio Tempo Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|atg&lt;br /&gt;
|[[2013:Audio Tag Classification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mf0&lt;br /&gt;
|[[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
|Mert Bay&lt;br /&gt;
|-&lt;br /&gt;
|qbsh&lt;br /&gt;
|[[2013:Query by Singing/Humming]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|qbt&lt;br /&gt;
|[[2013:Query by Tapping]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|scofo&lt;br /&gt;
|[[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sms&lt;br /&gt;
|[[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|struct&lt;br /&gt;
|[[2013:Structural Segmentation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|drts&lt;br /&gt;
|[[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
|Tom Collins&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9411</id>
		<title>2013:Task Captains</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9411"/>
		<updated>2013-06-14T09:16:26Z</updated>

		<summary type="html">&lt;p&gt;Johan Pauwels: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In response to discussions at ISMIR 2012, we are prepared to improve the distribution of tasks for the upcoming MIREX 2013.  To do so, we really need leaders to help us organize and run each task.&lt;br /&gt;
&lt;br /&gt;
To volunteer to lead one or more tasks, please add your name in the &amp;quot;Captains&amp;quot; column.&lt;br /&gt;
&lt;br /&gt;
What does it mean to lead a task?&lt;br /&gt;
* Update wiki pages as needed&lt;br /&gt;
* Communicate with submitters and troubleshoot submissions&lt;br /&gt;
* Execute and evaluate submissions&lt;br /&gt;
* Publish final results&lt;br /&gt;
&lt;br /&gt;
Due to the proprietary nature of much of the data, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to provide access to task organizers to manage and run submissions on the IMIRSEL systems.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: 20px&amp;quot;&lt;br /&gt;
!ID !! Task !! Captain(s)&lt;br /&gt;
|-&lt;br /&gt;
|abt&lt;br /&gt;
|[[2013:Audio Beat Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ace&lt;br /&gt;
|[[2013:Audio Chord Estimation]]&lt;br /&gt;
|Johan Pauwels&lt;br /&gt;
|-&lt;br /&gt;
|act&lt;br /&gt;
|[[2013:Audio Classification (Train/Test) Tasks]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|acs&lt;br /&gt;
|[[2013:Audio Cover Song Identification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|akd&lt;br /&gt;
|[[2013:Audio Key Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ame&lt;br /&gt;
|[[2013:Audio Melody Extraction]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|ams&lt;br /&gt;
|[[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|aod&lt;br /&gt;
|[[2013:Audio Onset Detection]]&lt;br /&gt;
|Sebastian Böck&lt;br /&gt;
|-&lt;br /&gt;
|ate&lt;br /&gt;
|[[2013:Audio Tempo Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|atg&lt;br /&gt;
|[[2013:Audio Tag Classification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mf0&lt;br /&gt;
|[[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
|Mert Bay&lt;br /&gt;
|-&lt;br /&gt;
|qbsh&lt;br /&gt;
|[[2013:Query by Singing/Humming]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|qbt&lt;br /&gt;
|[[2013:Query by Tapping]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|scofo&lt;br /&gt;
|[[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sms&lt;br /&gt;
|[[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|struct&lt;br /&gt;
|[[2013:Structural Segmentation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|drts&lt;br /&gt;
|[[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
|Tom Collins&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Johan Pauwels</name></author>
		
	</entry>
</feed>