2009:Query by Singing/Humming
Description
The text of this section is copied from the 2008 page. Please add your comments and discussions for 2009.
The goal of the Query-by-Singing/Humming (QBSH) task is to evaluate MIR systems that take as input queries sung or hummed by real-world users. More information can be found in:
Discussions for 2009
Your comments here.
Please feel free to edit this page.
Roger Jang's Comments 04/09/2009
I would like to suggest extending the submission deadline by one week. There are two reasons for this:
- I have prepared another QBSH dataset which is bigger and more balanced, but I need some time over the weekend to tidy things up. I would like to make it available for the QBSH task this year.
- I propose two subtasks which are the same as those we had at MIREX 2006. Please see below for details.
Query Data
1. Roger Jang's corpus (the MIREX 2006 QBSH corpus), which comprises 2797 queries along with 48 ground-truth MIDI files. All queries are sung/hummed from the beginning of the reference songs.
2. ThinkIT corpus, comprising 355 queries and 106 monophonic ground-truth MIDI files (in MIDI format 0 or 1). There is no "singing from the beginning" guarantee. This corpus will be published after the task has run.
3. Noise MIDI files will be drawn from the 5000+ songs of the Essen collection (available at http://www.esac-data.org/).
To build a large test set that reflects real-world queries, it is suggested that every participant contribute to the evaluation corpus.
Evaluation Corpus Contribution
Every participant will be asked to contribute 100~200 wave queries (8 kHz, 16-bit) as well as the ground-truth MIDI files as test data. Please make sure your contributed data conforms to the format used in the ThinkIT corpus (TITcorpus). These test data will be released after the competition as a public-domain QBSH dataset.
Here is a simple tool for recording query data. You may need .NET 2.0 or above installed on your system in order to run this program. The generated files conform to the format used in the ThinkIT corpus. Of course, you are also welcome to use your own program to record the query data.
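Since contributed recordings must be 8 kHz, 16-bit wave files, a quick sanity check before submission may help. Below is a minimal sketch using Python's standard wave module; the mono assumption and the file name are ours, not part of the task specification.

import wave

def check_query_format(path):
    """Return True if the file is an 8 kHz, 16-bit PCM WAV (mono assumed)."""
    with wave.open(path, "rb") as w:
        return (w.getframerate() == 8000     # 8 kHz sampling rate
                and w.getsampwidth() == 2    # 16-bit samples (2 bytes)
                and w.getnchannels() == 1)   # mono (an assumption, not stated by the task)

print(check_query_format("query_00001.wav"))  # hypothetical file name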
Task description
Classic QBSH evaluation:
- Input: human singing/humming snippets (.wav). Queries are from Roger Jang's corpus and ThinkIT corpus.
- Database: ground-truth and noise MIDI files (all monophonic), comprising the 48 + 106 ground-truth MIDIs from Roger Jang's and the ThinkIT corpora along with a cleaned version of the Essen database (the 2000+ MIDIs used last year).
- Output: top-20 candidate list.
- Evaluation: Top-10 hit rate (1 point is scored for a hit in the top 10 and 0 is scored otherwise).
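To make the scoring concrete, here is a minimal sketch of computing the Top-10 hit rate from a result file in the format defined under Interface I below; the ground-truth mapping from query id to correct database key is a hypothetical input, not a file format defined by the task.

def top10_hit_rate(result_path, ground_truth):
    """ground_truth: dict mapping query id to the correct database key (hypothetical)."""
    hits = total = 0
    with open(result_path) as f:
        for line in f:
            if ":" not in line:
                continue
            query, candidates = line.split(":", 1)
            total += 1
            # Score 1 point if the correct key is among the first 10 candidates.
            if ground_truth.get(query.strip()) in candidates.split()[:10]:
                hits += 1
    return hits / total if total else 0.0

# Example with made-up ids: top10_hit_rate("result.txt", {"query_00001": "00025"})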
Following Rainer Typke's suggestion, participants are encouraged to submit separate pitch-tracker and matcher modules instead of integrated ones, so that algorithms can share intermediate steps. Trackers and matchers from different submissions can then work together through the same pre-defined interface, which makes it possible to find the best combination.
Interface I: Breakdown Version
The following is based on a suggestion by Xiao Wu last year, with some modifications.
1. Database indexing/building. Calling format should look like
indexing %dbMidi.list% %dir_workspace_root%
where %dbMidi.list% is the input list of database midi files named as uniq_key.mid. For example:
./QBSH/Database/00001.mid
./QBSH/Database/00002.mid
./QBSH/Database/00003.mid
./QBSH/Database/00004.mid
...
Output indexed files are placed into %dir_workspace_root%.
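As an illustration of this calling convention, the sketch below reads %dbMidi.list%, extracts each file's monophonic note sequence, and stores the result in %dir_workspace_root%. The use of the third-party mido package and the index.pkl file name are our assumptions; the task does not prescribe any particular indexing scheme.

import os, pickle, sys
import mido  # assumed third-party MIDI parser, not mandated by the task

def index_database(db_list, workspace):
    """Store the note sequence of every MIDI file listed in db_list under workspace."""
    index = {}
    for line in open(db_list):
        path = line.strip()
        if not path:
            continue
        key = os.path.splitext(os.path.basename(path))[0]   # uniq_key from uniq_key.mid
        index[key] = [m.note for m in mido.MidiFile(path)
                      if m.type == "note_on" and m.velocity > 0]
    os.makedirs(workspace, exist_ok=True)
    with open(os.path.join(workspace, "index.pkl"), "wb") as f:
        pickle.dump(index, f)

if __name__ == "__main__":
    index_database(sys.argv[1], sys.argv[2])   # indexing %dbMidi.list% %dir_workspace_root%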
2. Pitch tracker. Calling format:
pitch_tracker %queryWave.list% %dir_query_pitch%
For each input file dir_query/query_xxxxx.wav listed in %queryWave.list%, the tracker outputs a corresponding transcription %dir_query_pitch%/query_xxxxx.pitch, which gives the pitch sequence on the MIDI note scale at a resolution of 10 ms:
0
0
62.23
62.25
62.21
...
Thus a query of x seconds should produce a pitch file with 100*x lines. Frames of silence/rest are set to 0.
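A minimal sketch of a pitch tracker that writes this one-value-per-line, 10 ms format is given below. It assumes the librosa package for F0 estimation (pyin) and converts Hz to MIDI numbers with 69 + 12*log2(f/440); neither choice is prescribed by the task.

import numpy as np
import librosa  # assumed third-party package for F0 estimation, not mandated by the task

def track_pitch(wav_path, pitch_path):
    """Write one MIDI-scale pitch value per 10 ms frame; 0 marks silence/rest."""
    y, sr = librosa.load(wav_path, sr=None)            # queries are 8 kHz wave files
    hop = int(0.010 * sr)                              # 10 ms hop -> 100 frames per second
    f0, voiced, _ = librosa.pyin(y, fmin=80.0, fmax=1000.0, sr=sr, hop_length=hop)
    f0 = np.nan_to_num(f0, nan=0.0)                    # unvoiced frames become 0 Hz
    with open(pitch_path, "w") as out:
        for hz in f0:
            out.write("%.2f\n" % (69.0 + 12.0 * np.log2(hz / 440.0)) if hz > 0 else "0\n")

# track_pitch("dir_query/query_00001.wav", "dir_query_pitch/query_00001.pitch")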
3. Pitch matcher. Calling format:
pitch_matcher %dbMidi.list% %queryPitch.list% %resultFile%
where %queryPitch.list% looks like
dir_query_pitch/query_00001.pitch
dir_query_pitch/query_00002.pitch
dir_query_pitch/query_00003.pitch
...
and the result file gives the top-20 candidates (if available) for each query:
query_00001: 00025 01003 02200 ...
query_00002: 01547 02313 07653 ...
query_00003: 03142 00320 00973 ...
...
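To illustrate how a matcher might produce this result file, here is a minimal sketch that compares each query pitch contour against the database note sequences using a plain dynamic-time-warping distance with median-based transposition invariance. Everything here (the DTW cost, the helper names, the use of mido) is an illustrative assumption, not a prescribed algorithm.

import os
import mido  # assumed third-party MIDI parser, as in the indexing sketch above

def dtw_distance(a, b):
    """Plain O(len(a)*len(b)) DTW with absolute pitch difference as the local cost."""
    INF = float("inf")
    prev = [0.0] + [INF] * len(b)
    for x in a:
        curr = [INF] * (len(b) + 1)
        for j, y in enumerate(b, start=1):
            curr[j] = abs(x - y) + min(prev[j], prev[j - 1], curr[j - 1])
        prev = curr
    return prev[len(b)]

def load_notes(midi_path):
    """Extract the note-number sequence of a monophonic MIDI file."""
    return [m.note for m in mido.MidiFile(midi_path)
            if m.type == "note_on" and m.velocity > 0]

def match(db_list, pitch_list, result_path):
    # Parse the database once; keys are the uniq_key part of uniq_key.mid.
    db = {}
    for line in open(db_list):
        path = line.strip()
        if path:
            db[os.path.splitext(os.path.basename(path))[0]] = load_notes(path)
    with open(result_path, "w") as out:
        for line in open(pitch_list):
            pitch_path = line.strip()
            if not pitch_path:
                continue
            query = [float(v) for v in open(pitch_path) if float(v) > 0]  # drop rests
            q_med = sorted(query)[len(query) // 2]
            query = [v - q_med for v in query]             # transposition invariance
            scored = []
            for key, notes in db.items():
                r_med = sorted(notes)[len(notes) // 2]
                # NOTE: a real matcher would align frame-based query pitch with note
                # durations; this sketch ignores durations for brevity.
                scored.append((dtw_distance(query, [n - r_med for n in notes]), key))
            out.write(os.path.splitext(os.path.basename(pitch_path))[0] + ": "
                      + " ".join(key for _, key in sorted(scored)[:20]) + "\n")

# match("dbMidi.list", "queryPitch.list", "result.txt")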
Interface II: Integrated Version
If you want to pack everything together, the calling format should be much simpler:
qbshMainProgram %dbMidi.list% %queryWave.list% %resultFile% %dir_workspace_root%
You can use %dir_workspace_root% to store any temporary indexing/database structures. The result file should have the same format as mentioned previously.
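For illustration, an integrated submission could simply chain the three breakdown modules internally. The sketch below does this with Python's subprocess module; the executable names follow the breakdown interface above, and the construction of %queryPitch.list% from the wave list is our assumption.

import os, subprocess, sys

def qbsh_main(db_list, wave_list, result_file, workspace):
    """Integrated entry point: index, track pitch, then match, via the breakdown modules."""
    pitch_dir = os.path.join(workspace, "pitch")
    os.makedirs(pitch_dir, exist_ok=True)
    subprocess.run(["./indexing", db_list, workspace], check=True)
    subprocess.run(["./pitch_tracker", wave_list, pitch_dir], check=True)
    # Derive %queryPitch.list% from the wave list, one .pitch path per line.
    pitch_list = os.path.join(workspace, "queryPitch.list")
    with open(wave_list) as waves, open(pitch_list, "w") as out:
        for line in waves:
            name = os.path.splitext(os.path.basename(line.strip()))[0]
            if name:
                out.write(os.path.join(pitch_dir, name + ".pitch") + "\n")
    subprocess.run(["./pitch_matcher", db_list, pitch_list, result_file], check=True)

if __name__ == "__main__":
    qbsh_main(*sys.argv[1:5])  # %dbMidi.list% %queryWave.list% %resultFile% %dir_workspace_root%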
Participants
If you think there is a slight chance that you might want to participate, please add your name and e-mail address to this list.
1. Aurora Marsye (aurora dot marsye at gmail dot com)
2. Michael Kolta (mike at kolta dot net)
3. Pierre Hanna, Julien Allali, Pascal Ferraro, Matthias Robine, SIMBALS University of Bordeaux (hanna at labri dot fr)
4. Alexandra Uitdenbogerd (sandrau at rmit dot edu dot au)