Difference between revisions of "2007:Query-by-Singing/Humming Results"

From MIREX Wiki
Revision as of 21:30, 16 September 2007

==Introduction==

These are the results for the 2007 running of the Query-by-Singing/Humming task. For background information about this task set, please refer to the [[Query by Singing/Humming]] page.

===Task Descriptions===

'''Task 1 [[#Task 1 Results|Goto Task 1 Results]]''': The first subtask is the same as last year's. In this subtask, submitted systems take a sung query as input and return a list of songs from the test database. The mean reciprocal rank (MRR) of the ground truth is calculated over the top 20 returns. The test database consists of 48 ground-truth MIDI files plus 2000 Essen Collection MIDI noise files. The query database consists of 2797 sung queries.
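The MRR metric described above can be sketched as follows. This is an illustrative implementation with toy data, not the actual MIREX evaluation code; the function name and data layout are assumptions.

```python
# Sketch of MRR over the top-20 returns: each query scores 1/rank of its
# ground-truth song if that song appears in the top 20, and 0 otherwise;
# scores are averaged over all queries. Data below are hypothetical.

def mean_reciprocal_rank(results, ground_truth, cutoff=20):
    """results: {query_id: ranked list of song ids};
    ground_truth: {query_id: correct song id}."""
    total = 0.0
    for qid, ranked in results.items():
        truth = ground_truth[qid]
        try:
            rank = ranked[:cutoff].index(truth) + 1  # 1-based rank
            total += 1.0 / rank
        except ValueError:
            pass  # ground truth not in the top 20 -> contributes 0
    return total / len(results)

results = {"q1": ["s3", "s7", "s1"], "q2": ["s9", "s2"]}
truth = {"q1": "s7", "q2": "s5"}
print(mean_reciprocal_rank(results, truth))  # (1/2 + 0) / 2 = 0.25
```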

'''Task 2 [[#Task 2 Results|Goto Task 2 Results]]''': In the second subtask, the same setup as the first subtask is used, with combinations of different transcribers and matchers.

===General Legend===

====Team ID====

'''FH''' = Pascal Ferraro, Pierre Hanna, Julien Allali, Matthias Robine<br />
'''CG''' = Carlos Gómez, Soraya Abad-Mota, Edna Ruckhaus<br />
'''RJ1''' = J.-S. Roger Jang, Nien-Jung Lee, Chao-Ling Hsu 1<br />
'''RJ2''' = J.-S. Roger Jang, Nien-Jung Lee, Chao-Ling Hsu 2<br />
'''NM''' = Kjell Lemström, Niko Mikkilä<br />
'''XW1''' = Xiao Wu, Ming Li 1<br />
'''XW2''' = Xiao Wu, Ming Li 2<br />
'''AU1''' = Alexandra L. Uitdenbogerd 1<br />
'''AU2''' = Alexandra L. Uitdenbogerd 2<br />
'''AU3''' = [https://www.music-ir.org/mirex2007/abs/QBSH_SMS_uitdenbogerd.pdf Alexandra L. Uitdenbogerd 3]<br />

===Task 1 Results===

The first subtask is the same as last year's. In this subtask, submitted systems take a sung query as input and return a list of songs from the test database. The mean reciprocal rank (MRR) of the ground truth is calculated over the top 20 returns. The test database consists of 48 ground-truth MIDI files plus 2000 Essen Collection MIDI noise files. The query database consists of 2797 sung queries.

====Task 1 Friedman's Test for Significant Differences====

The Friedman test was run in MATLAB against the QBSH Task 1 MRR data over the 48 ground-truth song groups.

Command: <code>[c,m,h,gnames] = multcompare(stats, 'ctype', 'tukey-kramer', 'estimate', 'friedman', 'alpha', 0.05);</code>

[[File:Qbsh07 task1 friedmans.png]]
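The Friedman statistic that underlies the MATLAB command above ranks the competing systems within each song group and tests whether the mean ranks differ. A minimal sketch, using toy numbers rather than the actual 2007 scores (the MATLAB <code>multcompare</code> call additionally performs the pairwise follow-up comparison, which is not reproduced here):

```python
# Friedman statistic: rank the k systems within each of the n song groups
# (blocks), then compare rank sums. Scores below are hypothetical.

def friedman_statistic(scores):
    """scores: list of rows, one per song group; each row holds one
    MRR-style score per system (higher is better). Ties ignored for brevity."""
    n = len(scores)      # number of song groups (blocks)
    k = len(scores[0])   # number of systems
    rank_sums = [0.0] * k
    for row in scores:
        # rank within the block, 1 = lowest score
        order = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    return (12.0 / (n * k * (k + 1))) * sum(r * r for r in rank_sums) \
        - 3 * n * (k + 1)

groups = [
    [0.81, 0.70, 0.55],
    [0.75, 0.72, 0.60],
    [0.90, 0.85, 0.70],
    [0.66, 0.60, 0.50],
]
print(friedman_statistic(groups))  # 8.0: the system ordering is identical in every group
```

The statistic is compared against a chi-square distribution with k−1 degrees of freedom; large values mean at least one system's ranks differ significantly.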

====Task 1 Summary Results by Query Group====


===Task 2 Results===

In this subtask, the same setup as the first subtask is used, with combinations of different transcribers and matchers.

====Task 2 Legend====

=====Team ID=====

'''FH_XW''' = Pascal Ferraro, Pierre Hanna, Julien Allali, Matthias Robine based on XW note transcriber<br />
'''CG_XW''' = Carlos Gómez, Soraya Abad-Mota, Edna Ruckhaus based on XW note transcriber<br />
'''RJ1_RJ''' = J.-S. Roger Jang, Nien-Jung Lee, Chao-Ling Hsu 1 based on RJ pitch transcriber<br />
'''RJ1_XW''' = J.-S. Roger Jang, Nien-Jung Lee, Chao-Ling Hsu 1 based on XW pitch transcriber<br />
'''RJ2_RJ''' = J.-S. Roger Jang, Nien-Jung Lee, Chao-Ling Hsu 2 based on RJ pitch transcriber<br />
'''RJ2_XW''' = J.-S. Roger Jang, Nien-Jung Lee, Chao-Ling Hsu 2 based on XW pitch transcriber<br />
'''NM_XW''' = Kjell Lemström, Niko Mikkilä based on XW note transcriber<br />
'''XW1_XW''' = Xiao Wu, Ming Li 1 based on XW note transcriber<br />
'''XW2_XW''' = Xiao Wu, Ming Li 2 based on XW pitch transcriber<br />
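The naming scheme in the legend above pairs a matcher back-end with a transcriber front-end (e.g. RJ1_XW is the RJ1 matcher running on XW's transcription). The full grid of pairings can be enumerated as a product, though as the legend shows, not every pairing was actually submitted:

```python
# Illustrative only: enumerate matcher x transcriber pairings in the
# ID style used by the legend. The set of actually evaluated systems
# is the legend's list, not this full grid.
from itertools import product

matchers = ["RJ1", "RJ2", "NM"]   # matching back-ends (sample from the legend)
transcribers = ["RJ", "XW"]       # transcriber front-ends

combos = [f"{matcher}_{transcriber}"
          for matcher, transcriber in product(matchers, transcribers)]
print(combos)  # ['RJ1_RJ', 'RJ1_XW', 'RJ2_RJ', 'RJ2_XW', 'NM_RJ', 'NM_XW']
```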

====Task 2 Friedman's Test for Significant Differences====

The Friedman test was run in MATLAB against the QBSH Task 2 MRR data over the 48 ground-truth song groups.

Command: <code>[c,m,h,gnames] = multcompare(stats, 'ctype', 'tukey-kramer', 'estimate', 'friedman', 'alpha', 0.05);</code>

[[File:Qbsh07 task2 friedmans.png]]

====Task 2 Summary Results by Query Group====
