<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Chris+Maden</id>
	<title>MIREX Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Chris+Maden"/>
	<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/wiki/Special:Contributions/Chris_Maden"/>
	<updated>2026-04-29T19:23:11Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.1</generator>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:GC15UX:JDISC&amp;diff=11218</id>
		<title>2015:GC15UX:JDISC</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:GC15UX:JDISC&amp;diff=11218"/>
		<updated>2015-08-19T22:03:44Z</updated>

		<summary type="html">&lt;p&gt;Chris Maden: /* Dataset */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Grand Challenge 2015: User Experience with J-DISC}}&lt;br /&gt;
=Purpose=&lt;br /&gt;
''Holistic, user-centered evaluation of the user experience in interacting with complete, user-facing music information retrieval (MIR) systems.''&lt;br /&gt;
&lt;br /&gt;
=Goals=&lt;br /&gt;
# ''To inspire the development of complete MIR systems.''&lt;br /&gt;
# ''To promote the notion of user experience as a first-class research objective in the MIR community.''&lt;br /&gt;
&lt;br /&gt;
=About J-DISC=&lt;br /&gt;
J-DISC ([http://jdisc.columbia.edu http://jdisc.columbia.edu]), created by the Center for Jazz Studies at Columbia University, is a resource for searching and exploring jazz recordings. It presents complete information on jazz recording sessions, merging a large corpus of session data into a single repository that can be easily searched, cross-searched, navigated and cited. Beyond the recording artists/leaders that are the focus of traditional discography, J-DISC also incorporates extensive cultural, geographic, biographical, composer and studio information, all equally searchable and accessible.&lt;br /&gt;
&lt;br /&gt;
=Dataset=&lt;br /&gt;
J-DISC contains fully structured and searchable metadata. Key entities in the dataset include '''person, skill, session, track, composition, and issue'''. The dataset comprises 19 tables representing these entities and the relationships between them. Brief descriptions of the 19 tables follow (in alphabetical order, with the number of rows in each table given in parentheses):&lt;br /&gt;
&lt;br /&gt;
*'''composition:''' Compositions, which may be recorded as tracks at sessions. (7,104)&lt;br /&gt;
*'''composition_composer:''' Associations between compositions and their composers. (8,622)&lt;br /&gt;
*'''composition_lyricist:''' Associations between compositions and their lyricists. (880)&lt;br /&gt;
*'''composition_title:''' Alternative titles for compositions. (811)&lt;br /&gt;
*'''issue:''' Releases or issues of tracks, e.g. as albums. (545)&lt;br /&gt;
*'''issue_leader:''' Associations between releases and their leaders. (610)&lt;br /&gt;
*'''issue_overdub_people:''' Associations between overdubbed releases and the people working on them. (17)&lt;br /&gt;
*'''issue_track:''' Associations between releases and the tracks they include. (3,769)&lt;br /&gt;
*'''person:''' People associated with musical recordings, including musicians and composers. (5,734)&lt;br /&gt;
*'''person_ethnicity:''' Associations of people with ethnic descriptions. (70)&lt;br /&gt;
*'''person_session_skill:''' Associations between people, sessions, and skills or instruments. (21,424)&lt;br /&gt;
*'''person_skill:''' Associations between people and their primary skills or instruments. (6,450)&lt;br /&gt;
*'''person_track_skill:''' Track-level variations from the session-level associations between people and skills or instruments. (7,044)&lt;br /&gt;
*'''session:''' Recording sessions, at which one or more musicians produced one or more tracks. (2,711)&lt;br /&gt;
*'''session_leader:''' Associations between sessions and their leader(s). (3,123)&lt;br /&gt;
*'''skill:''' Skills associated with musical recordings, including instruments played, conducting, and composing. (209)&lt;br /&gt;
*'''track:''' Tracks laid down at recording sessions by musicians. (15,361)&lt;br /&gt;
*'''track_composition:''' Associations between tracks and compositions. (15,672)&lt;br /&gt;
*'''track_soloist:''' Associations between tracks and soloists. (630)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[2015:GC15UX:JDISC_Schema]] presents more details about each table.&lt;br /&gt;
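The tables above form a conventional relational schema, with the underscore-joined names acting as link tables for many-to-many relationships. As a minimal sketch (assuming illustrative column names such as ''id'', ''name'' and ''person_id''; consult the schema page for the actual definitions), the question of who played which instrument at which session can be answered by joining four tables:&lt;br /&gt;

```python
# Minimal sketch of querying the J-DISC tables with Python's sqlite3.
# NOTE: the column names below (id, name, date, person_id, ...) are
# illustrative assumptions; see the 2015:GC15UX:JDISC_Schema page for
# the actual table definitions.
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript('''
    CREATE TABLE person  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE skill   (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE session (id INTEGER PRIMARY KEY, date TEXT);
    CREATE TABLE person_session_skill
        (person_id INTEGER, session_id INTEGER, skill_id INTEGER);
''')
# Toy rows standing in for real J-DISC data.
con.execute('INSERT INTO person VALUES (1, ?)', ('Sonny Rollins',))
con.execute('INSERT INTO skill VALUES (1, ?)', ('tenor saxophone',))
con.execute('INSERT INTO session VALUES (1, ?)', ('1956-06-22',))
con.execute('INSERT INTO person_session_skill VALUES (1, 1, 1)')

# Who played which instrument at which session?
rows = con.execute('''
    SELECT p.name, sk.name, s.date
      FROM person_session_skill pss
      JOIN person  p  ON p.id  = pss.person_id
      JOIN skill   sk ON sk.id = pss.skill_id
      JOIN session s  ON s.id  = pss.session_id
''').fetchall()
print(rows)  # [('Sonny Rollins', 'tenor saxophone', '1956-06-22')]
```

The same join pattern extends to tracks, compositions and issues through the corresponding link tables.&lt;br /&gt;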
&lt;br /&gt;
==ER Diagram of the Schema==&lt;br /&gt;
[[File:jdisc_schema.png|400px]]&lt;br /&gt;
&lt;br /&gt;
[https://www.music-ir.org/mirex/gc15ux_jdisc/jdisc_schema.pdf Click to see the full version of the diagram (PDF)]&lt;br /&gt;
&lt;br /&gt;
=Download the Dataset=&lt;br /&gt;
# &amp;lt;span style=&amp;quot;color:#808080&amp;quot;&amp;gt;user agreement signed during download?&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Participating Systems=&lt;br /&gt;
''Unlike conventional MIREX tasks, participants are not asked to submit their systems. Instead, the systems will be hosted by their developers. All participating systems need to be constructed as websites accessible to users through normal web browsers. Participating teams will submit the URLs to their systems to the GC15UX team.''&lt;br /&gt;
&lt;br /&gt;
To ensure a consistent experience, evaluators will see the participating systems in a fixed-size window: '''1024x768'''. Please test your system at this screen size.&lt;br /&gt;
&lt;br /&gt;
See the [[#Evaluation Webforms]] section below for a better understanding of our E6K-inspired evaluation system design.&lt;br /&gt;
&lt;br /&gt;
==Potential Participants==&lt;br /&gt;
&lt;br /&gt;
Please put your names and email contacts in the following table. You are encouraged to give your team a cool name!&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! (Cool) Team Name&lt;br /&gt;
! Name(s)&lt;br /&gt;
! Email(s)&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=Evaluation=&lt;br /&gt;
As the name of the Grand Challenge indicates, the evaluation will be user-centered. All systems will be used by a number of human evaluators, who will rate them on several of the most important criteria for evaluating user experience.&lt;br /&gt;
&lt;br /&gt;
==Criteria==&lt;br /&gt;
&lt;br /&gt;
''Note that the evaluation criteria or their descriptions may change slightly in the months leading up to the submission deadline, as we test and refine them.''&lt;br /&gt;
&lt;br /&gt;
Given that GC15UX is all about how users perceive their experience of the systems, we intend to capture user perceptions in a minimally intrusive manner, without burdening the users/evaluators with too many questions or required data inputs. The following criteria are grounded in the Human-Computer Interaction (HCI) and User Experience (UX) literature, with careful consideration given to striking a balance between comprehensiveness and minimizing evaluators' cognitive load.&lt;br /&gt;
&lt;br /&gt;
Evaluators will rate systems on the following criteria: &lt;br /&gt;
&lt;br /&gt;
* '''Overall satisfaction''': How would you rate your overall satisfaction with the system?&lt;br /&gt;
Very unsatisfactory / Unsatisfactory / Slightly unsatisfactory / Neutral / Slightly satisfactory / Satisfactory / Very satisfactory&lt;br /&gt;
&lt;br /&gt;
* '''Aesthetics''': How would you rate the visual attractiveness of the system?&lt;br /&gt;
Very Poor / Poor / Slightly Poor / Neutral / Slightly Good / Good / Excellent&lt;br /&gt;
&lt;br /&gt;
* '''Ease of use''': How easy was it to figure out how to use the system? &lt;br /&gt;
Very difficult / Difficult / Slightly difficult / Neutral / Slightly easy / Easy / Very easy&lt;br /&gt;
&lt;br /&gt;
* '''Clarity''': How well does the system communicate what is going on?&lt;br /&gt;
Very Poor / Poor / Slightly Poor / Neutral / Slightly Good / Good / Excellent&lt;br /&gt;
&lt;br /&gt;
* '''Affordances''': How well does the system allow you to perform what you want to do?&lt;br /&gt;
Very Poor / Poor / Slightly Poor / Neutral / Slightly Good / Good / Excellent &lt;br /&gt;
&lt;br /&gt;
* '''Performance''': Does the system work efficiently and without bugs/glitches?&lt;br /&gt;
Very Poor / Poor / Slightly Poor / Neutral / Slightly Good / Good / Excellent &lt;br /&gt;
 &lt;br /&gt;
* '''Open Text Feedback''': An open-ended question is provided for evaluators to give feedback if they wish to do so.&lt;br /&gt;
&lt;br /&gt;
==Evaluators==&lt;br /&gt;
Evaluators will be users aged 18 and above. For this round, evaluators will be drawn primarily from the MIR community through solicitations via the ISMIR-community mailing list. The [[#Evaluation Webforms]] developed by the GC15UX team will ensure that all participating systems get an equal number of evaluators.&lt;br /&gt;
&lt;br /&gt;
==Tasks for Evaluators==&lt;br /&gt;
&lt;br /&gt;
''To motivate the evaluators, a defined yet open task is given to them:''&lt;br /&gt;
&lt;br /&gt;
''You need to put together a playlist for a particular event (e.g., dinner party at your house, workout session). Try to use the assigned system to make playlists for at least a couple of different events.''&lt;br /&gt;
&lt;br /&gt;
''The task is designed to ensure that evaluators have a (more or less) consistent goal when they interact with the systems. The goal is flexible and authentic to the evaluators' own purposes (&amp;quot;music for their own situation&amp;quot;). As the task is not too specific, evaluators can look for a wide range of music in terms of genre, mood and other aspects, allowing great flexibility and virtually unlimited possibilities in system or service design.''&lt;br /&gt;
&lt;br /&gt;
''Another important consideration in designing the task is the music collection available for this GC15UX: the Jamendo collection. Jamendo music is not well known to most users/evaluators, whereas many common music information tasks are influenced to some degree by users' familiarity with the songs and by song popularity. By keeping the task centered on music for the evaluators' own events, we strive to minimize the need to look for familiar or popular music.''&lt;br /&gt;
&lt;br /&gt;
==Evaluation Results==&lt;br /&gt;
Statistics of the scores given by all evaluators will be reported: the mean and the average deviation. Meaningful text comments from the evaluators will also be reported.&lt;br /&gt;
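As a concrete sketch of these statistics (assuming the 7-point scales are coded 1 for the lowest label through 7 for the highest, and taking ''average deviation'' to mean the mean absolute deviation from the mean):&lt;br /&gt;

```python
# Sketch of the reported statistics for one criterion of one system.
# Assumes ratings are coded 1 (lowest label) to 7 (highest), and that
# 'average deviation' means the mean absolute deviation from the mean.
def mean(scores):
    return sum(scores) / len(scores)

def average_deviation(scores):
    m = mean(scores)
    return sum(abs(s - m) for s in scores) / len(scores)

ratings = [5, 6, 4, 7, 5]  # hypothetical Overall satisfaction ratings
print(mean(ratings))                         # 5.4
print(round(average_deviation(ratings), 2))  # 0.88
```

Reporting the average deviation alongside the mean indicates how much evaluators disagreed about a system.&lt;br /&gt;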
&lt;br /&gt;
==Evaluation Webforms==&lt;br /&gt;
Graders can take as many assignments as they wish on the My Assignments page. They can return to an evaluation page at any time by clicking the thumbnail of the corresponding submission.&lt;br /&gt;
&lt;br /&gt;
[[File:GCUX_wireframe_my_assignments.png|800px]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
To assist the evaluators and minimize their burden, the GC15UX team will provide a set of evaluation webforms that wrap around the participating systems. As shown in the following image, the webforms are used to score the participating systems, whose client interfaces are embedded in an iframe on the left side of the form.&lt;br /&gt;
&lt;br /&gt;
[[File:GCUX wireframe evaluation.png|800px]]&lt;br /&gt;
&lt;br /&gt;
=Organization=&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
This year, GC15UX:JDISC adopts a two-phase model with two rounds of evaluation. The first phase will end by the ISMIR conference, where we will disclose preliminary results; phase II will then begin. Participating developers can continue improving their systems based on feedback from the first phase, and another round of evaluation will be conducted in February. We believe this model serves developers well, as it matches the iterative nature of user-centered design and gives them enough time to develop complete MIR systems.&lt;br /&gt;
&lt;br /&gt;
*July ?: announcement of GC15UX:JDISC&lt;br /&gt;
*Sep. 28th: first deadline for system submission&lt;br /&gt;
*Feb. 28th: second deadline for system submission&lt;br /&gt;
&lt;br /&gt;
==What to Submit==&lt;br /&gt;
&lt;br /&gt;
A URL to the participating system.&lt;br /&gt;
&lt;br /&gt;
==Contacts==&lt;br /&gt;
''The GC15UX team consists of:''&lt;br /&gt;
:J. Stephen Downie, University of Illinois (MIREX director)&lt;br /&gt;
:Xiao Hu, University of Hong Kong (ISMIR2014 co-chair)&lt;br /&gt;
:Jin Ha Lee, University of Washington (ISMIR2014 program co-chair)&lt;br /&gt;
:David Bainbridge, Waikato University, New Zealand&lt;br /&gt;
:Christopher R. Maden, University of Illinois&lt;br /&gt;
:Kahyun Choi, University of Illinois&lt;br /&gt;
:Peter Organisciak, University of Illinois&lt;br /&gt;
:Yun Hao, University of Illinois&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Inquiries, suggestions, questions, and comments are all highly welcome! Please contact Prof. Downie [mailto:jdownie@illinois.edu] or anyone on the team.&lt;/div&gt;</summary>
		<author><name>Chris Maden</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:GC15UX:JDISC&amp;diff=11217</id>
		<title>2015:GC15UX:JDISC</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:GC15UX:JDISC&amp;diff=11217"/>
		<updated>2015-08-19T22:03:15Z</updated>

		<summary type="html">&lt;p&gt;Chris Maden: /* About J-DISC */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Grand Challenge 2015: User Experience with J-DISC}}&lt;br /&gt;
=Purpose=&lt;br /&gt;
''Holistic, user-centered evaluation of the user experience in interacting with complete, user-facing music information retrieval (MIR) systems.''&lt;br /&gt;
&lt;br /&gt;
=Goals=&lt;br /&gt;
# ''To inspire the development of complete MIR systems.''&lt;br /&gt;
# ''To promote the notion of user experience as a first-class research objective in the MIR community.''&lt;br /&gt;
&lt;br /&gt;
=About J-DISC=&lt;br /&gt;
J-DISC ([http://jdisc.columbia.edu http://jdisc.columbia.edu]), created by the Center for Jazz Studies at Columbia University, is a resource for searching and exploring jazz recordings. It presents complete information on jazz recording sessions, merging a large corpus of session data into a single repository that can be easily searched, cross-searched, navigated and cited. Beyond the recording artists/leaders that are the focus of traditional discography, J-DISC also incorporates extensive cultural, geographic, biographical, composer and studio information, all equally searchable and accessible.&lt;br /&gt;
&lt;br /&gt;
=Dataset=&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#808080&amp;quot;&amp;gt;J-DISC contains fully structured and searchable metadata.&amp;lt;/span&amp;gt; Key entities in the dataset include '''person, skill, session, track, composition, and issue'''. The dataset comprises 19 tables representing these entities and the relationships between them. Brief descriptions of the 19 tables follow (in alphabetical order, with the number of rows in each table given in parentheses):&lt;br /&gt;
&lt;br /&gt;
*'''composition:''' Compositions, which may be recorded as tracks at sessions. (7,104)&lt;br /&gt;
*'''composition_composer:''' Associations between compositions and their composers. (8,622)&lt;br /&gt;
*'''composition_lyricist:''' Associations between compositions and their lyricists. (880)&lt;br /&gt;
*'''composition_title:''' Alternative titles for compositions. (811)&lt;br /&gt;
*'''issue:''' Releases or issues of tracks, e.g. as albums. (545)&lt;br /&gt;
*'''issue_leader:''' Associations between releases and their leaders. (610)&lt;br /&gt;
*'''issue_overdub_people:''' Associations between overdubbed releases and the people working on them. (17)&lt;br /&gt;
*'''issue_track:''' Associations between releases and the tracks they include. (3,769)&lt;br /&gt;
*'''person:''' People associated with musical recordings, including musicians and composers. (5,734)&lt;br /&gt;
*'''person_ethnicity:''' Associations of people with ethnic descriptions. (70)&lt;br /&gt;
*'''person_session_skill:''' Associations between people, sessions, and skills or instruments. (21,424)&lt;br /&gt;
*'''person_skill:''' Associations between people and their primary skills or instruments. (6,450)&lt;br /&gt;
*'''person_track_skill:''' Track-level variations from the session-level associations between people and skills or instruments. (7,044)&lt;br /&gt;
*'''session:''' Recording sessions, at which one or more musicians produced one or more tracks. (2,711)&lt;br /&gt;
*'''session_leader:''' Associations between sessions and their leader(s). (3,123)&lt;br /&gt;
*'''skill:''' Skills associated with musical recordings, including instruments played, conducting, and composing. (209)&lt;br /&gt;
*'''track:''' Tracks laid down at recording sessions by musicians. (15,361)&lt;br /&gt;
*'''track_composition:''' Associations between tracks and compositions. (15,672)&lt;br /&gt;
*'''track_soloist:''' Associations between tracks and soloists. (630)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[2015:GC15UX:JDISC_Schema]] presents more details about each table.&lt;br /&gt;
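The tables above form a conventional relational schema, with the underscore-joined names acting as link tables for many-to-many relationships. As a minimal sketch (assuming illustrative column names such as ''id'', ''name'' and ''person_id''; consult the schema page for the actual definitions), the question of who played which instrument at which session can be answered by joining four tables:&lt;br /&gt;

```python
# Minimal sketch of querying the J-DISC tables with Python's sqlite3.
# NOTE: the column names below (id, name, date, person_id, ...) are
# illustrative assumptions; see the 2015:GC15UX:JDISC_Schema page for
# the actual table definitions.
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript('''
    CREATE TABLE person  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE skill   (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE session (id INTEGER PRIMARY KEY, date TEXT);
    CREATE TABLE person_session_skill
        (person_id INTEGER, session_id INTEGER, skill_id INTEGER);
''')
# Toy rows standing in for real J-DISC data.
con.execute('INSERT INTO person VALUES (1, ?)', ('Sonny Rollins',))
con.execute('INSERT INTO skill VALUES (1, ?)', ('tenor saxophone',))
con.execute('INSERT INTO session VALUES (1, ?)', ('1956-06-22',))
con.execute('INSERT INTO person_session_skill VALUES (1, 1, 1)')

# Who played which instrument at which session?
rows = con.execute('''
    SELECT p.name, sk.name, s.date
      FROM person_session_skill pss
      JOIN person  p  ON p.id  = pss.person_id
      JOIN skill   sk ON sk.id = pss.skill_id
      JOIN session s  ON s.id  = pss.session_id
''').fetchall()
print(rows)  # [('Sonny Rollins', 'tenor saxophone', '1956-06-22')]
```

The same join pattern extends to tracks, compositions and issues through the corresponding link tables.&lt;br /&gt;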
&lt;br /&gt;
==ER Diagram of the Schema==&lt;br /&gt;
[[File:jdisc_schema.png|400px]]&lt;br /&gt;
&lt;br /&gt;
[https://www.music-ir.org/mirex/gc15ux_jdisc/jdisc_schema.pdf Click to see the full version of the diagram (PDF)]&lt;br /&gt;
&lt;br /&gt;
=Download the Dataset=&lt;br /&gt;
# &amp;lt;span style=&amp;quot;color:#808080&amp;quot;&amp;gt;user agreement signed during download?&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Participating Systems=&lt;br /&gt;
''Unlike conventional MIREX tasks, participants are not asked to submit their systems. Instead, the systems will be hosted by their developers. All participating systems need to be constructed as websites accessible to users through normal web browsers. Participating teams will submit the URLs to their systems to the GC15UX team.''&lt;br /&gt;
&lt;br /&gt;
To ensure a consistent experience, evaluators will see the participating systems in a fixed-size window: '''1024x768'''. Please test your system at this screen size.&lt;br /&gt;
&lt;br /&gt;
See the [[#Evaluation Webforms]] section below for a better understanding of our E6K-inspired evaluation system design.&lt;br /&gt;
&lt;br /&gt;
==Potential Participants==&lt;br /&gt;
&lt;br /&gt;
Please put your names and email contacts in the following table. You are encouraged to give your team a cool name!&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! (Cool) Team Name&lt;br /&gt;
! Name(s)&lt;br /&gt;
! Email(s)&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=Evaluation=&lt;br /&gt;
As the name of the Grand Challenge indicates, the evaluation will be user-centered. All systems will be used by a number of human evaluators, who will rate them on several of the most important criteria for evaluating user experience.&lt;br /&gt;
&lt;br /&gt;
==Criteria==&lt;br /&gt;
&lt;br /&gt;
''Note that the evaluation criteria or their descriptions may change slightly in the months leading up to the submission deadline, as we test and refine them.''&lt;br /&gt;
&lt;br /&gt;
Given that GC15UX is all about how users perceive their experience of the systems, we intend to capture user perceptions in a minimally intrusive manner, without burdening the users/evaluators with too many questions or required data inputs. The following criteria are grounded in the Human-Computer Interaction (HCI) and User Experience (UX) literature, with careful consideration given to striking a balance between comprehensiveness and minimizing evaluators' cognitive load.&lt;br /&gt;
&lt;br /&gt;
Evaluators will rate systems on the following criteria: &lt;br /&gt;
&lt;br /&gt;
* '''Overall satisfaction''': How would you rate your overall satisfaction with the system?&lt;br /&gt;
Very unsatisfactory / Unsatisfactory / Slightly unsatisfactory / Neutral / Slightly satisfactory / Satisfactory / Very satisfactory&lt;br /&gt;
&lt;br /&gt;
* '''Aesthetics''': How would you rate the visual attractiveness of the system?&lt;br /&gt;
Very Poor / Poor / Slightly Poor / Neutral / Slightly Good / Good / Excellent&lt;br /&gt;
&lt;br /&gt;
* '''Ease of use''': How easy was it to figure out how to use the system? &lt;br /&gt;
Very difficult / Difficult / Slightly difficult / Neutral / Slightly easy / Easy / Very easy&lt;br /&gt;
&lt;br /&gt;
* '''Clarity''': How well does the system communicate what is going on?&lt;br /&gt;
Very Poor / Poor / Slightly Poor / Neutral / Slightly Good / Good / Excellent&lt;br /&gt;
&lt;br /&gt;
* '''Affordances''': How well does the system allow you to perform what you want to do?&lt;br /&gt;
Very Poor / Poor / Slightly Poor / Neutral / Slightly Good / Good / Excellent &lt;br /&gt;
&lt;br /&gt;
* '''Performance''': Does the system work efficiently and without bugs/glitches?&lt;br /&gt;
Very Poor / Poor / Slightly Poor / Neutral / Slightly Good / Good / Excellent &lt;br /&gt;
 &lt;br /&gt;
* '''Open Text Feedback''': An open-ended question is provided for evaluators to give feedback if they wish to do so.&lt;br /&gt;
&lt;br /&gt;
==Evaluators==&lt;br /&gt;
Evaluators will be users aged 18 and above. For this round, evaluators will be drawn primarily from the MIR community through solicitations via the ISMIR-community mailing list. The [[#Evaluation Webforms]] developed by the GC15UX team will ensure that all participating systems get an equal number of evaluators.&lt;br /&gt;
&lt;br /&gt;
==Tasks for Evaluators==&lt;br /&gt;
&lt;br /&gt;
''To motivate the evaluators, a defined yet open task is given to them:''&lt;br /&gt;
&lt;br /&gt;
''You need to put together a playlist for a particular event (e.g., dinner party at your house, workout session). Try to use the assigned system to make playlists for at least a couple of different events.''&lt;br /&gt;
&lt;br /&gt;
''The task is designed to ensure that evaluators have a (more or less) consistent goal when they interact with the systems. The goal is flexible and authentic to the evaluators' own purposes (&amp;quot;music for their own situation&amp;quot;). As the task is not too specific, evaluators can look for a wide range of music in terms of genre, mood and other aspects, allowing great flexibility and virtually unlimited possibilities in system or service design.''&lt;br /&gt;
&lt;br /&gt;
''Another important consideration in designing the task is the music collection available for this GC15UX: the Jamendo collection. Jamendo music is not well known to most users/evaluators, whereas many common music information tasks are influenced to some degree by users' familiarity with the songs and by song popularity. By keeping the task centered on music for the evaluators' own events, we strive to minimize the need to look for familiar or popular music.''&lt;br /&gt;
&lt;br /&gt;
==Evaluation Results==&lt;br /&gt;
Statistics of the scores given by all evaluators will be reported: the mean and the average deviation. Meaningful text comments from the evaluators will also be reported.&lt;br /&gt;
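As a concrete sketch of these statistics (assuming the 7-point scales are coded 1 for the lowest label through 7 for the highest, and taking ''average deviation'' to mean the mean absolute deviation from the mean):&lt;br /&gt;

```python
# Sketch of the reported statistics for one criterion of one system.
# Assumes ratings are coded 1 (lowest label) to 7 (highest), and that
# 'average deviation' means the mean absolute deviation from the mean.
def mean(scores):
    return sum(scores) / len(scores)

def average_deviation(scores):
    m = mean(scores)
    return sum(abs(s - m) for s in scores) / len(scores)

ratings = [5, 6, 4, 7, 5]  # hypothetical Overall satisfaction ratings
print(mean(ratings))                         # 5.4
print(round(average_deviation(ratings), 2))  # 0.88
```

Reporting the average deviation alongside the mean indicates how much evaluators disagreed about a system.&lt;br /&gt;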
&lt;br /&gt;
==Evaluation Webforms==&lt;br /&gt;
Graders can take as many assignments as they wish on the My Assignments page. They can return to an evaluation page at any time by clicking the thumbnail of the corresponding submission.&lt;br /&gt;
&lt;br /&gt;
[[File:GCUX_wireframe_my_assignments.png|800px]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
To assist the evaluators and minimize their burden, the GC15UX team will provide a set of evaluation webforms that wrap around the participating systems. As shown in the following image, the webforms are used to score the participating systems, whose client interfaces are embedded in an iframe on the left side of the form.&lt;br /&gt;
&lt;br /&gt;
[[File:GCUX wireframe evaluation.png|800px]]&lt;br /&gt;
&lt;br /&gt;
=Organization=&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
This year, GC15UX:JDISC adopts a two-phase model with two rounds of evaluation. The first phase will end by the ISMIR conference, where we will disclose preliminary results; phase II will then begin. Participating developers can continue improving their systems based on feedback from the first phase, and another round of evaluation will be conducted in February. We believe this model serves developers well, as it matches the iterative nature of user-centered design and gives them enough time to develop complete MIR systems.&lt;br /&gt;
&lt;br /&gt;
*July ?: announcement of GC15UX:JDISC&lt;br /&gt;
*Sep. 28th: first deadline for system submission&lt;br /&gt;
*Feb. 28th: second deadline for system submission&lt;br /&gt;
&lt;br /&gt;
==What to Submit==&lt;br /&gt;
&lt;br /&gt;
A URL to the participating system.&lt;br /&gt;
&lt;br /&gt;
==Contacts==&lt;br /&gt;
''The GC15UX team consists of:''&lt;br /&gt;
:J. Stephen Downie, University of Illinois (MIREX director)&lt;br /&gt;
:Xiao Hu, University of Hong Kong (ISMIR2014 co-chair)&lt;br /&gt;
:Jin Ha Lee, University of Washington (ISMIR2014 program co-chair)&lt;br /&gt;
:David Bainbridge, Waikato University, New Zealand&lt;br /&gt;
:Christopher R. Maden, University of Illinois&lt;br /&gt;
:Kahyun Choi, University of Illinois&lt;br /&gt;
:Peter Organisciak, University of Illinois&lt;br /&gt;
:Yun Hao, University of Illinois&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Inquiries, suggestions, questions, and comments are all highly welcome! Please contact Prof. Downie [mailto:jdownie@illinois.edu] or anyone on the team.&lt;/div&gt;</summary>
		<author><name>Chris Maden</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=MIREX_HOME&amp;diff=10884</id>
		<title>MIREX HOME</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=MIREX_HOME&amp;diff=10884"/>
		<updated>2015-04-09T20:56:19Z</updated>

		<summary type="html">&lt;p&gt;Chris Maden: /* MIREX 2005 - 2014 Wikis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Welcome to MIREX 2015==&lt;br /&gt;
&lt;br /&gt;
This is the main page for the eleventh running of the Music Information Retrieval Evaluation eXchange (MIREX 2015). The International Music Information Retrieval Systems Evaluation Laboratory ([https://music-ir.org/evaluation IMIRSEL]) at the Graduate School of Library and Information Science ([http://www.lis.illinois.edu GSLIS]), University of Illinois at Urbana-Champaign ([http://www.illinois.edu UIUC]) is the principal organizer of MIREX 2015. &lt;br /&gt;
&lt;br /&gt;
The MIREX 2015 community will hold its annual meeting as part of [http://ismir2015.uma.es The 16th International Conference on Music Information Retrieval], ISMIR 2015, which will be held in Malaga, Spain, October 26th-30th, 2015. The MIREX plenary and poster sessions will be held during the conference.&lt;br /&gt;
&lt;br /&gt;
J. Stephen Downie&amp;lt;br&amp;gt;&lt;br /&gt;
Director, IMIRSEL&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Task Leadership Model==&lt;br /&gt;
&lt;br /&gt;
As in MIREX 2014, we aim to improve the distribution of task responsibilities for the upcoming MIREX 2015. To do so, we really need leaders to help us organize and run each task.&lt;br /&gt;
&lt;br /&gt;
To volunteer to lead a task, please add your name to the &amp;quot;Captains&amp;quot; column on the new [[2015:Task Captains]] page. Please direct any communication to the [https://mail.lis.illinois.edu/mailman/listinfo/evalfest EvalFest] mailing list.&lt;br /&gt;
&lt;br /&gt;
What does it mean to lead a task?&lt;br /&gt;
* Updating wiki pages as needed&lt;br /&gt;
* Communicating with submitters and troubleshooting submissions&lt;br /&gt;
* Executing and evaluating submissions&lt;br /&gt;
* Publishing final results&lt;br /&gt;
&lt;br /&gt;
Due to the proprietary nature of much of the data, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to provide access to task organizers to manage and run submissions on the IMIRSEL systems.&lt;br /&gt;
&lt;br /&gt;
We really need leaders to help us this year!&lt;br /&gt;
&lt;br /&gt;
==MIREX 2015 Deadline Dates==&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
==MIREX 2015 Submission Instructions==&lt;br /&gt;
* Be sure to read through the rest of this page&lt;br /&gt;
* Be sure to read through the task pages for which you are submitting&lt;br /&gt;
* Be sure to follow the [[2009:Best Coding Practices for MIREX | Best Coding Practices for MIREX]]&lt;br /&gt;
* Be sure to follow the  [[MIREX 2014 Submission Instructions]] including both the tutorial video and the text&lt;br /&gt;
&lt;br /&gt;
==MIREX 2015 Possible Evaluation Tasks==&lt;br /&gt;
&lt;br /&gt;
* [[2015:GC15UX|2015:Grand Challenge on User Experience]]&lt;br /&gt;
* [[2015:Audio Classification (Train/Test) Tasks]], incorporating:&lt;br /&gt;
** Audio US Pop Genre Classification&lt;br /&gt;
** Audio Latin Genre Classification&lt;br /&gt;
** Audio Music Mood Classification&lt;br /&gt;
** Audio Classical Composer Identification&lt;br /&gt;
** [[2015:Audio K-POP Mood Classification]]&lt;br /&gt;
** [[2015:Audio K-POP Genre Classification]]&lt;br /&gt;
* [[2015:Audio Cover Song Identification]]&lt;br /&gt;
* [[2015:Audio Tag Classification]] &lt;br /&gt;
* [[2015:Audio Music Similarity and Retrieval]]&lt;br /&gt;
* [[2015:Symbolic Melodic Similarity]]&lt;br /&gt;
* [[2015:Audio Onset Detection]]&lt;br /&gt;
* [[2015:Audio Key Detection]]&lt;br /&gt;
* [[2015:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
* [[2015:Query by Singing/Humming]]&lt;br /&gt;
* [[2015:Audio Melody Extraction]]&lt;br /&gt;
* [[2015:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
* [[2015:Audio Chord Estimation]]&lt;br /&gt;
* [[2015:Query by Tapping]]&lt;br /&gt;
* [[2015:Audio Beat Tracking]]&lt;br /&gt;
* [[2015:Structural Segmentation]]&lt;br /&gt;
* [[2015:Audio Tempo Estimation]]&lt;br /&gt;
* [[2015:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
* [[2015:Audio Downbeat Estimation]]&lt;br /&gt;
* [[2015:Audio Fingerprinting]]&lt;br /&gt;
* [[2015:Singing Voice Separation]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Note to New Participants===&lt;br /&gt;
Please take the time to read the following review articles that explain the history and structure of MIREX.&lt;br /&gt;
&lt;br /&gt;
Downie, J. Stephen (2008). The Music Information Retrieval Evaluation Exchange (2005-2007):&amp;lt;br&amp;gt;&lt;br /&gt;
A window into music information retrieval research. ''Acoustical Science and Technology 29'' (4): 247-255. &amp;lt;br&amp;gt;&lt;br /&gt;
Available at: [http://dx.doi.org/10.1250/ast.29.247 http://dx.doi.org/10.1250/ast.29.247]&lt;br /&gt;
&lt;br /&gt;
Downie, J. Stephen, Andreas F. Ehmann, Mert Bay and M. Cameron Jones. (2010).&amp;lt;br&amp;gt;&lt;br /&gt;
The Music Information Retrieval Evaluation eXchange: Some Observations and Insights.&amp;lt;br&amp;gt;&lt;br /&gt;
''Advances in Music Information Retrieval'' Vol. 274, pp. 93-115&amp;lt;br&amp;gt;&lt;br /&gt;
Available at: [http://bit.ly/KpM5u5 http://bit.ly/KpM5u5]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Runtime Limits===&lt;br /&gt;
&lt;br /&gt;
We reserve the right to stop any process that exceeds the runtime limit for its task. We will do our best to notify you in time to allow revisions, but this may not be possible in some cases. Please respect the published runtime limits.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Note to All Participants===&lt;br /&gt;
&lt;br /&gt;
Because MIREX is premised upon the sharing of ideas and results, '''ALL''' MIREX participants are expected to:&lt;br /&gt;
&lt;br /&gt;
# submit, along with their programme(s), a DRAFT 2-3 page extended abstract PDF in the ISMIR format that helps us and the community understand how the algorithm works.&lt;br /&gt;
# submit a FINALIZED 2-3 page extended abstract PDF in the ISMIR format prior to ISMIR 2015 for posting on the respective results pages (sometimes the same abstract can be used for multiple submissions; in many cases the DRAFT and FINALIZED abstracts are the same)&lt;br /&gt;
# present a poster at the MIREX 2015 poster session at ISMIR 2015&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Software Dependency Requests===&lt;br /&gt;
If you have not submitted to MIREX before, or are unsure whether IMIRSEL currently supports some of the software/architecture dependencies for your submission, a [https://docs.google.com/forms/d/1ZgC72BhhN0PFFiaj9NDejRdl1eW7wAFLIyMhAtoV0CA/viewform?usp=send_form dependency request form is available]. Please submit details of your dependencies on this form and the IMIRSEL team will attempt to satisfy them for you. &lt;br /&gt;
&lt;br /&gt;
Due to the high volume of submissions expected at MIREX 2015, submissions with difficult-to-satisfy dependencies may be rejected if the team has not been given sufficient notice of those dependencies.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Finally, you will also be expected to detail your software/architecture dependencies in a README file to be provided to the submission system.&lt;br /&gt;
&lt;br /&gt;
==Getting Involved in MIREX 2015==&lt;br /&gt;
MIREX is a community-based endeavour. Be a part of the community and help make MIREX 2015 the best yet.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Mailing List Participation===&lt;br /&gt;
If you are interested in formal MIR evaluation, you should also subscribe to the &amp;quot;MIREX&amp;quot; (a.k.a. &amp;quot;EvalFest&amp;quot;) mailing list and participate in the community discussions about defining and running MIREX 2015 tasks. Subscription information is available at: &lt;br /&gt;
[https://mail.lis.illinois.edu/mailman/listinfo/evalfest EvalFest Central]. &lt;br /&gt;
&lt;br /&gt;
If you are participating in MIREX 2015, it is VERY IMPORTANT that you are subscribed to EvalFest. Deadlines, task updates and other important information will be announced via this mailing list. Please use EvalFest for discussion of MIREX task proposals and other MIREX-related issues. This wiki (the MIREX 2015 wiki) will be used to record and disseminate task proposals; however, task-related discussions should be conducted on the MIREX organization mailing list (EvalFest) rather than on this wiki, and then summarized here. &lt;br /&gt;
&lt;br /&gt;
Where possible, definitions or example code for new evaluation metrics or tasks should be provided to the IMIRSEL team, who will implement them in software as part of the NEMA analytics framework. NEMA will be released to the community at or before ISMIR 2015, providing a standardised set of interfaces and outputs for disciplined evaluation procedures across a great many MIR tasks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Wiki Participation===&lt;br /&gt;
If you find that you cannot edit a MIREX wiki page, you will need to create a new account via: [[Special:Userlogin]].&lt;br /&gt;
&lt;br /&gt;
Please note that because of &amp;quot;spam-bots&amp;quot;, MIREX wiki registration requests may be moderated by IMIRSEL members. It might take up to 24 hours for approval (Thank you for your patience!).&lt;br /&gt;
&lt;br /&gt;
==MIREX 2005 - 2014 Wikis==&lt;br /&gt;
Content from MIREX 2005 - 2014 is available at:&lt;br /&gt;
'''[[2014:Main_Page|MIREX 2014]]''' &lt;br /&gt;
'''[[2013:Main_Page|MIREX 2013]]''' &lt;br /&gt;
'''[[2012:Main_Page|MIREX 2012]]''' &lt;br /&gt;
'''[[2011:Main_Page|MIREX 2011]]''' &lt;br /&gt;
'''[[2010:Main_Page|MIREX 2010]]''' &lt;br /&gt;
'''[[2009:Main_Page|MIREX 2009]]''' &lt;br /&gt;
'''[[2008:Main_Page|MIREX 2008]]''' &lt;br /&gt;
'''[[2007:Main_Page|MIREX 2007]]''' &lt;br /&gt;
'''[[2006:Main_Page|MIREX 2006]]''' &lt;br /&gt;
'''[[2005:Main_Page|MIREX 2005]]'''&lt;/div&gt;</summary>
		<author><name>Chris Maden</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:MIREX2014_Results&amp;diff=10761</id>
		<title>2014:MIREX2014 Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:MIREX2014_Results&amp;diff=10761"/>
		<updated>2014-10-27T20:53:48Z</updated>

		<summary type="html">&lt;p&gt;Chris Maden: added most of QBT results&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Results by Task ==&lt;br /&gt;
&lt;br /&gt;
==OVERALL RESULTS POSTERS &amp;lt;!--(First Version: Will need updating as last runs are completed)--&amp;gt;==&lt;br /&gt;
&lt;br /&gt;
[https://www.music-ir.org/mirex/results/2014/mirex_2014_poster.pdf MIREX 2014 Overall Results Posters (PDF)]&lt;br /&gt;
&lt;br /&gt;
===Train-Test Task Set===&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/composer_report/ Audio Classical Composer Identification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/latin_report/ Audio Latin Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/mood_report/index.html Audio Music Mood Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/mixed_report/ Audio Mixed Popular Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/kgenrek_report/ Audio KPOP Genre (Annotated by Korean Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/kgenrea_report/ Audio KPOP Genre (Annotated by American Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/kmoodk_report/ Audio KPOP Mood (Annotated by Korean Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/kmooda_report/ Audio KPOP Mood (Annotated by American Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
===Other Tasks===&lt;br /&gt;
&lt;br /&gt;
* Music Structure Segmentation Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/mrx09/ MIREX09 dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/mrx10_1/ RWC dataset - Quaero (MIREX10) Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/mrx10_2/ RWC dataset - Original RWC Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/sal/ SALAMI dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2014:Discovery of Repeated Themes &amp;amp; Sections Results | Discovery of Repeated Themes &amp;amp; Sections Results]]&lt;br /&gt;
&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2014/results/aod/ Audio Onset Detection Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2014:Audio Cover Song Identification Results|Audio Cover Song Identification Results]]&lt;br /&gt;
&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ate/ Audio Tempo Estimation Results] &amp;amp;nbsp;&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2014/results/akd/ Audio Key Detection Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Beat Tracking Results &lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/abt/smc/ SMC Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/abt/maz/ MAZ Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/abt/mck/ MCK Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Chord Estimation&lt;br /&gt;
** [[2014:Audio_Chord_Estimation_Results_MIREX_2009 | MIREX &amp;amp;rsquo;09 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2014:Audio_Chord_Estimation_Results_Billboard_2012 | Billboard &amp;amp;rsquo;12 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2014:Audio_Chord_Estimation_Results_Billboard_2013 | Billboard &amp;amp;rsquo;13 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Melody Extraction Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/adc04/  ADC04 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/mrx05/ MIREX05 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/ind08/ INDIAN08 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/mrx09_0db/ MIREX09 0dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/mrx09_m5db/ MIREX09 -5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/mrx09_p5db/ MIREX09 +5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Tag Classification Results&lt;br /&gt;
** Major Miner Tag dataset&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/atg/subtask1_report/bin/ Binary relevance (classification evaluation)] &amp;amp;nbsp;&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/atg/subtask1_report/aff/ Affinity estimation evaluation] &amp;amp;nbsp;&lt;br /&gt;
** Mood Tag dataset&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/atg/subtask2_report/bin/ Binary relevance (classification evaluation)] &amp;amp;nbsp;&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/atg/subtask2_report/aff/ Affinity estimation evaluation] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Query-by-Singing/Humming Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/qbsh/qbsh_task1_hidden/  Hidden Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/qbsh/qbsh_task1_jang/  Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/qbsh/qbsh_task1_thinkit/ ThinkIt Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/qbsh/qbsh_task1_ioacas/ IOACAS Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/qbsh/qbsh_task2_jang/ Subtask2 Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Query-by-Tapping Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/qbt/qbt_task1_jang/ Subtask 1, Jang dataset]&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/qbt/qbt_task1_hsiao/ Subtask 1, Hsiao dataset]&lt;br /&gt;
** Subtask 1, QBT-Extended dataset&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/qbt/qbt_task2_jang/ Subtask 2, Jang dataset]&lt;br /&gt;
** Subtask 3, QBT-Extended dataset&lt;br /&gt;
&lt;br /&gt;
* [[2014:Audio Downbeat Estimation Results|Audio Downbeat Estimation Results]]&lt;br /&gt;
* [[2014:Audio Fingerprinting Results|Audio Fingerprinting Results]]&lt;br /&gt;
&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2014:Real-time_Audio_to_Score_Alignment_(a.k.a._Score_Following)_Results#Summary_Results Real-time Audio to Score Alignment (a.k.a. Score Following) Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2014:Audio_Music_Similarity_and_Retrieval_Results Audio Music Similarity and Retrieval Results]&amp;amp;nbsp;&lt;br /&gt;
* [[2014:Symbolic_Melodic_Similarity_Results | Symbolic Melodic Similarity Results]]&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2014:Singing_Voice_Separation_Results Singing Voice Separation]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2014:Multiple_Fundamental_Frequency_Estimation_%26_Tracking_Results Multiple Fundamental Frequency Estimation &amp;amp; Tracking Results]&amp;amp;nbsp;&lt;/div&gt;</summary>
		<author><name>Chris Maden</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:MIREX2014_Results&amp;diff=10608</id>
		<title>2014:MIREX2014 Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:MIREX2014_Results&amp;diff=10608"/>
		<updated>2014-10-20T20:00:14Z</updated>

		<summary type="html">&lt;p&gt;Chris Maden: /* Other Tasks */ added QBT result stubs&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Results by Task ==&lt;br /&gt;
&lt;br /&gt;
===Train-Test Task Set===&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/composer_report/ Audio Classical Composer Identification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/latin_report/ Audio Latin Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/mood_report/index.html Audio Music Mood Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/mixed_report/ Audio Mixed Popular Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/kgenrek_report/ Audio KPOP Genre (Annotated by Korean Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/kgenrea_report/ Audio KPOP Genre (Annotated by American Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/kmoodk_report/ Audio KPOP Mood (Annotated by Korean Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2014/results/act/kmooda_report/ Audio KPOP Mood (Annotated by American Annotators) Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
===Other Tasks===&lt;br /&gt;
&lt;br /&gt;
* Music Structure Segmentation Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/mrx09/ MIREX09 dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/mrx10_1/ RWC dataset - Quaero (MIREX10) Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/mrx10_2/ RWC dataset - Original RWC Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/struct/sal/ SALAMI dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2014/results/aod/ Audio Onset Detection Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2014:Audio Cover Song Identification Results|Audio Cover Song Identification Results]]&lt;br /&gt;
&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ate/ Audio Tempo Estimation Results] &amp;amp;nbsp;&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2014/results/akd/ Audio Key Detection Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Beat Tracking Results &lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/abt/smc/ SMC Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/abt/maz/ MAZ Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/abt/mck/ MCK Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Chord Estimation&lt;br /&gt;
** [[2014:Audio_Chord_Estimation_Results_MIREX_2009 | MIREX &amp;amp;rsquo;09 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2014:Audio_Chord_Estimation_Results_Billboard_2012 | Billboard &amp;amp;rsquo;12 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
** [[2014:Audio_Chord_Estimation_Results_Billboard_2013 | Billboard &amp;amp;rsquo;13 Dataset]] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Melody Extraction Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/adc04/  ADC04 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/mrx05/ MIREX05 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/ind08/ INDIAN08 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/mrx09_0db/ MIREX09 0dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/mrx09_m5db/ MIREX09 -5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/ame/mrx09_p5db/ MIREX09 +5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Audio Tag Classification Results&lt;br /&gt;
** Major Miner Tag dataset&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/atg/subtask1_report/bin/ Binary relevance (classification evaluation)] &amp;amp;nbsp;&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/atg/subtask1_report/aff/ Affinity estimation evaluation] &amp;amp;nbsp;&lt;br /&gt;
** Mood Tag dataset&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/atg/subtask2_report/bin/ Binary relevance (classification evaluation)] &amp;amp;nbsp;&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/atg/subtask2_report/aff/ Affinity estimation evaluation] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Query-by-Singing/Humming Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/qbsh/qbsh_task1_hidden/  Hidden Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/qbsh/qbsh_task1_jang/  Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/qbsh/qbsh_task1_thinkit/ ThinkIt Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/qbsh/qbsh_task1_ioacas/ IOACAS Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2014/results/qbsh/qbsh_task2_jang/ Subtask2 Dataset] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* Query-by-Tapping Results&lt;br /&gt;
** [[2014:Query by Tapping Results Subtask 1 (Jang)|Subtask 1, Jang dataset]]&lt;br /&gt;
** [[2014:Query by Tapping Results Subtask 1 (Hsiao)|Subtask 1, Hsiao dataset]]&lt;br /&gt;
** [[2014:Query by Tapping Results Subtask 2|Subtask 2, Jang dataset]]&lt;br /&gt;
** [[2014:Query by Tapping Results Subtask 3|Subtask 3, QBT-Extended dataset]]&lt;br /&gt;
&lt;br /&gt;
* [[2014:Audio Downbeat Estimation Results|Audio Downbeat Estimation Results]]&lt;br /&gt;
&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2014:Real-time_Audio_to_Score_Alignment_(a.k.a._Score_Following)_Results#Summary_Results Real-time Audio to Score Alignment (a.k.a. Score Following) Results] &amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2014:Audio_Music_Similarity_and_Retrieval_Results Audio Music Similarity and Retrieval Results]&amp;amp;nbsp;&lt;br /&gt;
* [[2014:Symbolic_Melodic_Similarity_Results | Symbolic Melodic Similarity Results]]&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2014:Singing_Voice_Separation_Results#For_the_Music_Accompaniment_.28dB.29 Singing Voice Separation]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2014:Multiple_Fundamental_Frequency_Estimation_%26_Tracking_Results Multiple Fundamental Frequency Estimation &amp;amp; Tracking Results]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
* [[2014:Discovery of Repeated Themes &amp;amp; Sections Results | Discovery of Repeated Themes &amp;amp; Sections Results]]&lt;/div&gt;</summary>
		<author><name>Chris Maden</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=9975</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=9975"/>
		<updated>2014-02-13T09:44:28Z</updated>

		<summary type="html">&lt;p&gt;Chris Maden: /* Test the query files */ clarifying output format&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries in which the user taps the onsets of a melody's notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have two corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT/HSIAO.&lt;br /&gt;
* '''Query files''': About 800 text files of onset times used to retrieve target MIDIs in MIR-QBT/HSIAO. These onset files let participants concentrate on similarity matching instead of onset detection. Note, however, that the onset files are not guaranteed to be perfect transcriptions of the original wave query files.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate).&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Indexing the MIDIs collection ===&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (Note that this step is not required unless you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of the onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For instance, for subtask 1 (onset query files), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
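As a rough, unofficial illustration (this is not MIREX evaluation code), the Top-10 hit rate described above could be computed from a %query_file_list% and a result file along these lines; the function name and exact file layouts are only assumptions based on the examples above:&lt;br /&gt;

```python
def top10_hit_rate(query_list_lines, result_lines):
    # Map each query file to its ground-truth MIDI id,
    # e.g. "qbtQuery/query_00001.onset   00001.mid" gives "00001".
    truth = {}
    for line in query_list_lines:
        parts = line.split()
        if len(parts) == 2:
            query, midi = parts
            truth[query] = midi.rsplit(".", 1)[0]
    # Score 1 point when the ground-truth id appears among the
    # first 10 returned candidates, 0 otherwise.
    hits = total = 0
    for line in result_lines:
        query, _, candidates = line.partition(":")
        query = query.strip()
        if query in truth:
            total += 1
            if truth[query] in candidates.split()[:10]:
                hits += 1
    return hits / total if total else 0.0
```

The hit rate is then simply the fraction of queries whose target MIDI appears in the returned top 10.&lt;br /&gt;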
== Potential Participants ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;/div&gt;</summary>
		<author><name>Chris Maden</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:MIREX2013_Results&amp;diff=9814</id>
		<title>2013:MIREX2013 Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:MIREX2013_Results&amp;diff=9814"/>
		<updated>2013-10-29T19:15:07Z</updated>

		<summary type="html">&lt;p&gt;Chris Maden: /* Other Tasks */ reordered ABT datasets alphabetically&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==OVERALL RESULTS POSTERS &amp;lt;!--(First Version: Will need updating as last runs are completed)--&amp;gt;==&lt;br /&gt;
&lt;br /&gt;
This page is under construction. &lt;br /&gt;
&lt;br /&gt;
[https://www.music-ir.org/mirex/results/2013/mirex_2013_poster.pdf MIREX 2013 Overall Results Posters (PDF)]&lt;br /&gt;
&lt;br /&gt;
==Results by Task ==&lt;br /&gt;
&lt;br /&gt;
===Train-Test Task Set===&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2013/results/act/composer_report/ Audio Classical Composer Identification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2013/results/act/latin_report/ Audio Latin Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2013/results/act/mood_report/index.html Audio Music Mood Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2013/results/act/mixed_report/ Audio Mixed Popular Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
===Other Tasks===&lt;br /&gt;
&lt;br /&gt;
* Audio Beat Tracking Results &lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/abt/dav/ DAV Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/abt/maz/ MAZ Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/abt/mck/ MCK Dataset] &amp;amp;nbsp;&lt;br /&gt;
* Audio Chord Detection Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ace/mrx/ MIREX Dataset]  &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ace/mcg/ McGill Dataset]  &amp;amp;nbsp;&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2013/results/akd/ Audio Key Detection Results] &amp;amp;nbsp;&lt;br /&gt;
* Audio Melody Extraction Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ame/adc04/  ADC04 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ame/mrx05/ MIREX05 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ame/ind08/ INDIAN08 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ame/mrx09_0db/ MIREX09 0dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ame/mrx09_m5db/ MIREX09 -5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ame/mrx09_p5db/ MIREX09 +5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
* [[2013:Audio_Music_Similarity_and_Retrieval_Results | Audio Music Similarity and Retrieval Results]] &lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2013/results/aod/ Audio Onset Detection Results] &amp;amp;nbsp;&lt;br /&gt;
* Audio Tag Classification Results&lt;br /&gt;
** Major Miner Tag dataset&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/atg/subtask1_report/bin/ Binary relevance (classification evaluation)] &amp;amp;nbsp;&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/atg/subtask1_report/aff/ Affinity estimation evaluation] &amp;amp;nbsp;&lt;br /&gt;
** Mood Tag dataset&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/atg/subtask2_report/bin/ Binary relevance (classification evaluation)] &amp;amp;nbsp;&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/atg/subtask2_report/aff/ Affinity estimation evaluation] &amp;amp;nbsp;&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ate/ Audio Tempo Estimation Results] &amp;amp;nbsp;&lt;br /&gt;
* [[2013:Multiple_Fundamental_Frequency_Estimation_&amp;amp;_Tracking_Results | Multiple Fundamental Frequency Estimation &amp;amp; Tracking Results]]&lt;br /&gt;
* Music Structure Segmentation Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/struct/mrx09/ MIREX09 dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/struct/mrx10_1/ RWC dataset - Quaero (MIREX10) Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/struct/mrx10_2/ RWC dataset - Original RWC Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/struct/sal/ SALAMI dataset] &amp;amp;nbsp;&lt;br /&gt;
* Query-by-Singing/Humming Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbsh/qbsh_task1_hidden/  Hidden Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbsh/qbsh_task1a_jang/  Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbsh/qbsh_task1b_thinkit/ ThinkIt Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbsh/qbsh_task1c_ioacas/ IOACAS Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbsh/qbsh_task2_jang/ Subtask2 Dataset] &amp;amp;nbsp;&lt;br /&gt;
* Query-by-Tapping Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbt/qbt_task1_jang/  Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbt/qbt_task1_hsiao/ HSIAO Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbt/qbt_task2_jang/ Subtask2 Dataset] &amp;amp;nbsp;&lt;br /&gt;
*[[2013:Real-time_Audio_to_Score_Alignment_(a.k.a._Score_Following)_Results | Real-time Audio to Score Alignment (a.k.a. Score Following) Results ]]&lt;br /&gt;
* [[2013:Symbolic_Melodic_Similarity_Results | Symbolic Melodic Similarity Results]]&lt;br /&gt;
* [[2013:Discovery of Repeated Themes &amp;amp; Sections Results | Discovery of Repeated Themes &amp;amp; Sections Results]]&lt;/div&gt;</summary>
		<author><name>Chris Maden</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:MIREX2013_Results&amp;diff=9813</id>
		<title>2013:MIREX2013 Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:MIREX2013_Results&amp;diff=9813"/>
		<updated>2013-10-29T19:14:31Z</updated>

		<summary type="html">&lt;p&gt;Chris Maden: changed SMC to DAV under ABT&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==OVERALL RESULTS POSTERS &amp;lt;!--(First Version: Will need updating as last runs are completed)--&amp;gt;==&lt;br /&gt;
&lt;br /&gt;
This page is under construction. &lt;br /&gt;
&lt;br /&gt;
[https://www.music-ir.org/mirex/results/2013/mirex_2013_poster.pdf MIREX 2013 Overall Results Posters (PDF)]&lt;br /&gt;
&lt;br /&gt;
==Results by Task ==&lt;br /&gt;
&lt;br /&gt;
===Train-Test Task Set===&lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2013/results/act/composer_report/ Audio Classical Composer Identification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2013/results/act/latin_report/ Audio Latin Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2013/results/act/mood_report/index.html Audio Music Mood Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
* [https://www.music-ir.org/nema_out/mirex2013/results/act/mixed_report/ Audio Mixed Popular Genre Classification Results ]&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
===Other Tasks===&lt;br /&gt;
&lt;br /&gt;
* Audio Beat Tracking Results &lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/abt/dav/ DAV Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/abt/mck/ MCK Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/abt/maz/ MAZ Dataset] &amp;amp;nbsp;&lt;br /&gt;
* Audio Chord Detection Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ace/mrx/ MIREX Dataset]  &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ace/mcg/ McGill Dataset]  &amp;amp;nbsp;&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2013/results/akd/ Audio Key Detection Results] &amp;amp;nbsp;&lt;br /&gt;
* Audio Melody Extraction Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ame/adc04/  ADC04 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ame/mrx05/ MIREX05 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ame/ind08/ INDIAN08 Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ame/mrx09_0db/ MIREX09 0dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ame/mrx09_m5db/ MIREX09 -5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ame/mrx09_p5db/ MIREX09 +5dB Dataset] &amp;amp;nbsp;&lt;br /&gt;
* [[2013:Audio_Music_Similarity_and_Retrieval_Results | Audio Music Similarity and Retrieval Results]] &lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2013/results/aod/ Audio Onset Detection Results] &amp;amp;nbsp;&lt;br /&gt;
* Audio Tag Classification Results&lt;br /&gt;
** Major Miner Tag dataset&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/atg/subtask1_report/bin/ Binary relevance (classification evaluation)] &amp;amp;nbsp;&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/atg/subtask1_report/aff/ Affinity estimation evaluation] &amp;amp;nbsp;&lt;br /&gt;
** Mood Tag dataset&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/atg/subtask2_report/bin/ Binary relevance (classification evaluation)] &amp;amp;nbsp;&lt;br /&gt;
*** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/atg/subtask2_report/aff/ Affinity estimation evaluation] &amp;amp;nbsp;&lt;br /&gt;
* [https://nema.lis.illinois.edu/nema_out/mirex2013/results/ate/ Audio Tempo Estimation Results] &amp;amp;nbsp;&lt;br /&gt;
* [[2013:Multiple_Fundamental_Frequency_Estimation_&amp;amp;_Tracking_Results | Multiple Fundamental Frequency Estimation &amp;amp; Tracking Results]]&lt;br /&gt;
* Music Structure Segmentation Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/struct/mrx09/ MIREX09 dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/struct/mrx10_1/ RWC dataset - Quaero (MIREX10) Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/struct/mrx10_2/ RWC dataset - Original RWC Ground-truth] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/struct/sal/ SALAMI dataset] &amp;amp;nbsp;&lt;br /&gt;
* Query-by-Singing/Humming Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbsh/qbsh_task1_hidden/  Hidden Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbsh/qbsh_task1a_jang/  Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbsh/qbsh_task1b_thinkit/ ThinkIt Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbsh/qbsh_task1c_ioacas/ IOACAS Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbsh/qbsh_task2_jang/ Subtask2 Dataset] &amp;amp;nbsp;&lt;br /&gt;
* Query-by-Tapping Results&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbt/qbt_task1_jang/  Jang Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbt/qbt_task1_hsiao/ HSIAO Dataset] &amp;amp;nbsp;&lt;br /&gt;
** [https://nema.lis.illinois.edu/nema_out/mirex2013/results/qbt/qbt_task2_jang/ Subtask2 Dataset] &amp;amp;nbsp;&lt;br /&gt;
* [[2013:Real-time_Audio_to_Score_Alignment_(a.k.a._Score_Following)_Results | Real-time Audio to Score Alignment (a.k.a. Score Following) Results ]]&lt;br /&gt;
* [[2013:Symbolic_Melodic_Similarity_Results | Symbolic Melodic Similarity Results]]&lt;br /&gt;
* [[2013:Discovery of Repeated Themes &amp;amp; Sections Results | Discovery of Repeated Themes &amp;amp; Sections Results]]&lt;/div&gt;</summary>
		<author><name>Chris Maden</name></author>
		
	</entry>
</feed>