2007:Query by Singing/Humming

== Status ==

This is only a very basic draft version of a task proposal. Once more people show interest, we can fill in the details.

The goal of the Query-by-Singing/Humming (QBSH) task is the evaluation of MIR systems that take sung or hummed queries from real-world users as input. More information can be found in:

Please feel free to edit this page.

== Query Data ==

1. Roger Jang's corpus (the MIREX 2006 QBSH corpus), comprising 2797 queries along with 48 ground-truth MIDI files. All queries are sung or hummed from the beginning of their reference songs.

2. ThinkIT corpus, comprising 355 queries and 106 monophonic ground-truth MIDI files (in MIDI format 0 or 1). There is no guarantee that queries are sung from the beginning. This corpus will be published after the task has been run.

3. Noise MIDI files will be drawn from the 5000+ Essen collection (available at http://www.esac-data.org/).

To build a large test set that reflects real-world queries, every participant is encouraged to contribute to the evaluation corpus.

== Task description ==

Classic QBSH evaluation:

* '''Input''': human singing/humming snippets (.wav). Queries are drawn from Roger Jang's corpus and the ThinkIT corpus.
* '''Database''': monophonic ground-truth and noise MIDI files: the 48 ground-truth files from Roger Jang's corpus and the 106 from the ThinkIT corpus, along with the 5000+ Essen noise MIDI files.
* '''Output''': a top-20 candidate list.
* '''Evaluation''': Mean Reciprocal Rank (MRR) and Top-X hit rate (a sketch of these metrics follows this list).
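
As an illustration of these metrics, here is a minimal sketch of how MRR and the Top-X hit rate could be computed from the returned candidate lists. The function and variable names are hypothetical; the official evaluation scripts may differ.

<pre>
# Hypothetical evaluation sketch: each query returns an ordered top-20
# candidate list; ground_truth maps a query id to its correct MIDI id.

def evaluate(candidates, ground_truth, x=10):
    """candidates: dict query_id -> ordered list of candidate MIDI ids."""
    reciprocal_ranks = []
    hits = 0
    for qid, ranked in candidates.items():
        truth = ground_truth[qid]
        if truth in ranked:
            rank = ranked.index(truth) + 1      # 1-based rank of the hit
            reciprocal_ranks.append(1.0 / rank)
            if rank <= x:                       # counts toward Top-X hit rate
                hits += 1
        else:
            reciprocal_ranks.append(0.0)        # a miss contributes 0 to MRR
    mrr = sum(reciprocal_ranks) / len(reciprocal_ranks)
    hit_rate = hits / len(candidates)
    return mrr, hit_rate
</pre>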

Following Rainer Typke's suggestion, participants are encouraged to submit separate transcriber and matcher modules rather than integrated systems, so that algorithms can share intermediate steps. Transcribers and matchers from different submissions could then work together through the same pre-defined interface, making it possible to find the best combination. This also allows note-based (symbolic) approaches to be compared with pitch-contour-based (non-symbolic) approaches. A sketch of what such an interface might look like follows.
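
To make the module idea concrete, here is a minimal sketch of a possible transcriber/matcher interface in Python. The class and method names are assumptions for illustration only; the actual calling convention and file formats would be defined by the task organizers.

<pre>
from typing import List, Tuple

# Hypothetical interface: a transcriber turns a .wav query into a
# time-stamped pitch sequence; a matcher ranks database MIDI files.

class Transcriber:
    def transcribe(self, wav_path: str) -> List[Tuple[float, float]]:
        """Return (time in seconds, pitch as MIDI note number) pairs."""
        raise NotImplementedError

class Matcher:
    def match(self, pitches: List[Tuple[float, float]],
              database_ids: List[str]) -> List[str]:
        """Return a ranked top-20 list of database MIDI ids."""
        raise NotImplementedError

# Because the interface is shared, any transcriber can be paired with
# any matcher when searching for the best combination:
def run_query(transcriber: Transcriber, matcher: Matcher,
              wav_path: str, database_ids: List[str]) -> List[str]:
    return matcher.match(transcriber.transcribe(wav_path), database_ids)
</pre>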

[[Image:framework.jpg]]

== Participants ==

If you think there is a slight chance that you might want to participate, please add your name and e-mail address to this list:

* Xiao Wu (xwu at hccl dot ioa dot ac dot cn)
* Maarten Grachten (maarten dot grachten at jku dot at)
* Jiang Danning (jiangdn at cn dot ibm dot com)
* Niko Mikkila (mikkila at cs dot helsinki dot fi)