2008:Query by Singing/Humming
Status
The goal of the Query-by-Singing/Humming (QBSH) task is the evaluation of MIR systems that take sung or hummed queries from real-world users as input. More information can be found in:
- 2006:QBSH:_Query-by-Singing/Humming MIREX2006 QBSH Task Proposal
- 2006:QBSH_Discussion_Page MIREX2006 QBSH Task Discussion
Please feel free to edit this page.
Query Data
1. Roger Jang's corpus (the MIREX2006 QBSH corpus), comprising 2797 queries along with 48 ground-truth MIDI files. All queries are sung or hummed from the beginning of the reference songs.
2. ThinkIT corpus, comprising 355 queries and 106 monophonic ground-truth MIDI files (format 0 or 1). There is no "singing from the beginning" guarantee. This corpus will be published after the task has been run.
3. Noise MIDI files will be drawn from the 5000+ song Essen collection (available at http://www.esac-data.org/).
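Since the ground-truth and noise files are monophonic MIDIs, systems typically need them as note sequences before matching. Below is a minimal sketch using the mido library; the library choice and the helper name are assumptions for illustration, not something prescribed by the task.

    # Sketch: read a monophonic MIDI file into (onset_sec, MIDI pitch, duration_sec) notes.
    import mido

    def midi_to_notes(path):
        notes, now, active = [], 0.0, {}
        for msg in mido.MidiFile(path):  # iteration yields messages with delta times in seconds
            now += msg.time
            if msg.type == 'note_on' and msg.velocity > 0:
                active[msg.note] = now
            elif msg.type in ('note_off', 'note_on') and msg.note in active:
                # note_on with velocity 0 is treated as note_off
                onset = active.pop(msg.note)
                notes.append((onset, msg.note, now - onset))
        return notes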
To build a large test set that reflects real-world queries, every participant is encouraged to contribute to the evaluation corpus.
Evaluation Corpus Contribution
Every participant will be asked to contribute 100 to 200 wave queries as test data. These test data will be released after the competition as a public-domain QBSH dataset. Programs for recording wave queries will be provided shortly. We aim for an evaluation dataset of around 1000 to 2000 wave queries in total.
Task description
Classic QBSH evaluation:
- Input: human singing/humming snippets (.wav). Queries are taken from Roger Jang's corpus and the ThinkIT corpus.
- Database: ground-truth and noise MIDI files (all monophonic), comprising the 48 + 106 ground-truth files from Roger Jang's and the ThinkIT corpora, plus a cleaned version of the Essen database (the 2000+ MIDIs used last year).
- Output: top-20 candidate list.
- Evaluation: Mean Reciprocal Rank (MRR) and Top-X hit rate.
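For concreteness, here is a minimal sketch of how MRR and Top-X hit rate can be computed from the returned top-20 candidate lists. It is an illustration only; the function names and data layout are assumptions, not part of any official evaluation script.

    # Illustrative computation of MRR and Top-X hit rate from top-20 candidate lists.
    # `results` maps each query id to its ranked candidate list (best first);
    # `ground_truth` maps each query id to the id of its correct reference MIDI.

    def mean_reciprocal_rank(results, ground_truth):
        """Average of 1/rank of the correct reference; a query contributes 0 if it is not returned."""
        total = 0.0
        for qid, candidates in results.items():
            target = ground_truth[qid]
            if target in candidates:
                total += 1.0 / (candidates.index(target) + 1)  # ranks are 1-based
        return total / len(results)

    def top_x_hit_rate(results, ground_truth, x=10):
        """Fraction of queries whose correct reference appears among the top X candidates."""
        hits = sum(1 for qid, candidates in results.items() if ground_truth[qid] in candidates[:x])
        return hits / len(results)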
Following Rainer Typke's suggestion, participants are encouraged to submit separate transcriber and matcher modules rather than integrated systems, so that algorithms can share intermediate results. With a common pre-defined interface, transcribers and matchers from different submissions can be paired, making it possible to find the best combination. This setup also allows note-based (symbolic) approaches to be compared with pitch-contour-based (non-symbolic) approaches.
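As a rough illustration of what such a split might look like, the sketch below defines a transcriber that turns a .wav query into a pitch contour and a matcher that ranks database entries against it. The class and method names are assumptions; the actual pre-defined interface will be specified for the task.

    # Hypothetical transcriber/matcher split; the real pre-defined interface may differ.
    from typing import List, Tuple

    class Transcriber:
        def transcribe(self, wav_path: str) -> List[float]:
            """Convert a sung/hummed .wav query into a frame-level pitch contour
            (e.g. semitone values), or a note sequence for symbolic approaches."""
            raise NotImplementedError

    class Matcher:
        def match(self, query: List[float],
                  database: List[Tuple[str, List[float]]],
                  top_n: int = 20) -> List[str]:
            """Rank database entries (id, pitch sequence) against the query and
            return the ids of the top_n candidates, best first."""
            raise NotImplementedError

Any transcriber could then be paired with any matcher that accepts the same pitch representation, which is what makes the cross-combination evaluation possible.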
Participants
If you think there is even a slight chance that you might want to participate, please add your name and e-mail address to this list:
- Liang-Yu Davidson Chen (davidson833 at mirlab dot org)