2008:Query by Singing/Humming
Status
The goal of the Query-by-Singing/Humming (QBSH) task is to evaluate MIR systems that take sung or hummed queries from real-world users as input. More information can be found in:
Please feel free to edit this page.
Query Data
1. Roger Jang's corpus (the MIREX 2006 QBSH corpus), comprising 2797 queries along with 48 ground-truth MIDI files. All queries are sung from the beginning of the reference songs.
2. The ThinkIT corpus, comprising 355 queries and 106 monophonic ground-truth MIDI files (MIDI format 0 or 1). There is no guarantee that queries start from the beginning of the song. This corpus will be published after the task has been run.
3. Noise MIDI files will be drawn from the 5000+ item Essen collection (available at http://www.esac-data.org/).
To build a large test set that reflects real-world queries, every participant is encouraged to contribute to the evaluation corpus.
Evaluation Corpus Contribution
Every participant will be asked to contribute 100 to 200 wave queries as test data. These data will be released after the competition as a public-domain QBSH dataset. Programs for recording wave queries will be provided shortly. We hope to have an evaluation dataset of around 1000 to 2000 wave queries in total.
Task description
Classic QBSH evaluation:
- Input: human singing/humming snippets (.wav). Queries are drawn from Roger Jang's corpus and the ThinkIT corpus.
- Database: monophonic ground-truth and noise MIDI files, namely the 48 + 106 ground-truth MIDIs from Roger Jang's and the ThinkIT corpora, together with a cleaned version of the Essen database (2000+ MIDIs, as used last year).
- Output: a top-20 candidate list for each query.
- Evaluation: Mean Reciprocal Rank (MRR) and Top-X hit rate (a computation sketch follows this list).
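As a rough illustration (not the official MIREX evaluation script), the two measures could be computed from the returned candidate lists along the following lines; the variable names and example IDs below are made up.

def mean_reciprocal_rank(results, ground_truth):
    # MRR: average of 1/rank of the correct item; 0 if it is not in the list.
    total = 0.0
    for query_id, candidates in results.items():
        target = ground_truth[query_id]
        if target in candidates:
            total += 1.0 / (candidates.index(target) + 1)  # ranks start at 1
    return total / len(results)

def top_x_hit_rate(results, ground_truth, x=10):
    # Fraction of queries whose correct item appears within the top X candidates.
    hits = sum(1 for q, cands in results.items() if ground_truth[q] in cands[:x])
    return hits / len(results)

# Example with made-up IDs: the correct MIDI for q001 is ranked 2nd, q002 misses.
results = {"q001": ["midi_17", "midi_03", "midi_42"], "q002": ["midi_99", "midi_08"]}
ground_truth = {"q001": "midi_03", "q002": "midi_55"}
print(mean_reciprocal_rank(results, ground_truth))  # (1/2 + 0) / 2 = 0.25
print(top_x_hit_rate(results, ground_truth, x=1))   # 0.0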
Following Rainer Typke's suggestion, participants are encouraged to submit separate transcriber and matcher modules rather than integrated systems, so that algorithms can share intermediate steps. Transcribers and matchers from different submissions could then be combined through the same pre-defined interface, making it possible to find the best combination. This also allows note-based (symbolic) approaches and pitch-contour-based (non-symbolic) approaches to be compared. One possible shape for such an interface is sketched below.
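As one possible illustration of such a split (the actual pre-defined interface is not specified on this page, so all names below are assumptions), a transcriber could turn a .wav query into a pitch contour or note list, and a matcher could rank the MIDI database against that representation:

from abc import ABC, abstractmethod
from typing import List, Tuple

class Transcriber(ABC):
    # Hypothetical transcriber interface: .wav query in, intermediate representation out.
    @abstractmethod
    def transcribe(self, wav_path: str) -> List[Tuple[float, float]]:
        """Return e.g. a pitch contour as (time_in_seconds, midi_pitch) pairs,
        or a note list as (onset, pitch) pairs for symbolic approaches."""

class Matcher(ABC):
    # Hypothetical matcher interface: transcription plus MIDI database in, ranked list out.
    @abstractmethod
    def match(self, query, midi_database: List[str]) -> List[str]:
        """Return the top-20 candidate MIDI filenames, best match first."""

def run_query(transcriber: Transcriber, matcher: Matcher,
              wav_path: str, midi_database: List[str]) -> List[str]:
    # Any transcriber can be paired with any matcher through this shared interface.
    return matcher.match(transcriber.transcribe(wav_path), midi_database)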
Participants
If you think there is a slight chance that you might want to participate, please add your name and e-mail address to this list:
- Liang-Yu Davidson Chen (davidson833 at mirlab dot org)
- Lei Wang (leiwang.mir at gmail dot com)
- Xiao Wu (xwu2006 at gmail dot com)
- Matti Ryynänen and Anssi Klapuri (Tampere University of Technology), matti.ryynanen <at> tut.fi, anssi.klapuri <at> tut.fi
Xiao Wu's Comments
In my opinion, QBSH (and even QBH over a monophonic database) is still far from a solved problem. Many challenges remain for current systems, such as robustness in noisy environments and efficiency on databases of 10,000 or more items. So this year we may set up a tougher test for the participants.
Morten Wendelboe's Comment
Where can we find the ThinkIT corpus? The link at the bottom of 2007:Query_by_Singing/Humming doesn't work - at least not for me.