2010:Symbolic Music Similarity and Retrieval

From MIREX Wiki
Revision as of 15:01, 26 May 2010

Task suggestion: Symbolic Melodic Similarity

Description

Given a query, each system should return the 10 most melodically similar songs from a given collection.
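The task description does not prescribe a similarity algorithm. As one illustrative sketch, a common baseline for symbolic melodic similarity compares sequences of pitch intervals (making the match transposition-invariant) with an edit distance; the function names and toy melodies below are hypothetical, not part of the task definition:

```python
# Illustrative baseline only: interval-sequence edit distance.
# Nothing here is mandated by the task; it is one common approach.

def intervals(pitches):
    """Turn a list of MIDI pitch numbers into successive pitch intervals."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def edit_distance(a, b):
    """Classic Levenshtein distance between two interval sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def rank_collection(query_pitches, collection, k=10):
    """Return the k collection entries most similar to the query.

    collection: dict mapping file name -> list of MIDI pitches.
    """
    q = intervals(query_pitches)
    scored = sorted(collection.items(),
                    key=lambda kv: edit_distance(q, intervals(kv[1])))
    return [name for name, _ in scored[:k]]
```

Because intervals rather than absolute pitches are compared, a transposed copy of the query scores a distance of zero.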

Data

3 different datasets are used for 3 subtasks.

  • RISM (monophonic; 10,000)
  • Karaoke (polyphonic; 1,000)
  • Mixed (polyphonic; 15,741).

All in MIDI format.

Evaluation

Human Evaluation

The primary evaluation will involve subjective judgments by human evaluators of the retrieved sets using IMIRSEL's Evalutron 6000 system.

  • Evaluator question: Given a search based on track A, the following set of results was returned by all systems. Please place each returned track into one of three classes (not similar, somewhat similar, very similar) and provide an indication on a continuous scale of 0 - 10 of how similar the track is to the query.
  • ~6 queries, 10 results per query, 1 set of eyes, ~10 participating labs
  • A higher number of queries is preferred, as IR research indicates that the variance is in the queries
  • It will be possible for researchers to use this data for other types of system comparisons after MIREX 2007 results have been finalized.
  • Human evaluation to be designed and led by IMIRSEL following a similar format to that used at MIREX 2006
  • Human evaluators will be drawn from the participating labs (and any volunteers from IMIRSEL or on the MIREX lists)
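Each Evalutron judgment thus pairs a coarse class with a 0–10 fine score. How the scores are aggregated is decided by IMIRSEL; as a purely hypothetical sketch, the simplest summary per system is the mean fine score over all its judged results:

```python
# Hypothetical aggregation sketch (not IMIRSEL's actual procedure):
# average the 0-10 fine-grained similarity scores per system.
from collections import defaultdict
from statistics import mean

def summarize(judgments):
    """judgments: iterable of (system, query, fine_score) tuples.

    Returns a dict mapping each system to its mean fine score.
    """
    by_system = defaultdict(list)
    for system, _query, fine in judgments:
        by_system[system].append(fine)
    return {s: mean(scores) for s, scores in by_system.items()}
```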


Submission Format

Inputs/Outputs

Input:

Parameters:

- the name of a directory containing about  MIDI files

- the name of one MIDI file containing a monophonic query.

The program will be called 6 times. Three of the queries will be quantized (produced from symbolic notation) and three will be produced by humming or whistling, thus with slight rhythmic and pitch deviations.

Output:

- a list of the names of the 10 most similar matching MIDI files, ordered by melodic similarity. Write the file names on separate lines, without empty lines in between.

Packaging submissions

All submissions should be statically linked to all libraries (the presence of dynamically linked libraries cannot be guaranteed).

All submissions should include a README file with the following information:

  • Command line calling format for all executables and an example formatted set of commands
  • Number of threads/cores used or whether this should be specified on the command line
  • Expected memory footprint
  • Expected runtime
  • Any required environments (and versions), e.g. python, java, bash, matlab.
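A submission's entry point could follow the calling convention above along these lines; the function name, argument order, and use of stdout are assumptions, since the task only fixes the two inputs (collection directory, monophonic query file) and the ten-line result list:

```python
# Hypothetical entry-point sketch; only the two inputs and the
# one-file-name-per-line output are fixed by the task description.
import os
import sys

def run(collection_dir, query_path, rank_fn, out=sys.stdout):
    """Rank the collection against the query and print the 10 best names.

    rank_fn is the submission's own similarity ranker: it takes the query
    path and a list of collection file paths and returns file names
    ordered from most to least similar.
    """
    names = sorted(f for f in os.listdir(collection_dir)
                   if f.lower().endswith((".mid", ".midi")))
    paths = [os.path.join(collection_dir, n) for n in names]
    top10 = rank_fn(query_path, paths)[:10]
    # One file name per line, without empty lines in between.
    out.write("\n".join(top10) + "\n")
```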



Measures

Use the same measures as [last year] to compare the search results of the various algorithms.
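The measures themselves are not restated on this page. As one hedged illustration, earlier MIREX symbolic melodic similarity evaluations included Average Dynamic Recall (Typke et al.), which can be computed as below, assuming the ground truth is an ordered list of groups of equally relevant documents:

```python
# Sketch of Average Dynamic Recall (ADR); the group-list ground-truth
# format is an assumption about how earlier MIREX data was organized.

def average_dynamic_recall(groups, ranking):
    """groups: ordered list of sets of equally relevant document ids.

    ranking: retrieved document ids, most similar first.
    At each rank i, recall is measured against the documents in however
    many leading groups are needed to cover the first i positions.
    """
    n = sum(len(g) for g in groups)
    total = 0.0
    for i in range(1, n + 1):
        allowed, covered = set(), 0
        for g in groups:
            allowed |= g
            covered += len(g)
            if covered >= i:
                break
        total += len(set(ranking[:i]) & allowed) / i
    return total / n
```

A perfect ranking scores 1.0; swapping documents across groups lowers the score, while swaps within a group do not.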