2010:Symbolic Music Similarity and Retrieval

== Description ==

Retrieve the most similar items from a collection of symbolic documents, given a query, and rank them by melodic similarity. There will be only one task this year: monophonic to monophonic. Both the query and the documents in the collection will be monophonic.
Each system will be given a query and will return the 10 most melodically similar songs drawn from the Essen Collection (5,274 pieces in MIDI format; see the [http://www.esac-data.org/ ESAC Data Homepage] for more information). For each of the 6 queries, we made four classes of error-mutations, so the set comprises the following query classes:

* 0. No errors
* 1. One note deleted
* 2. One note inserted
* 3. One interval enlarged
* 4. One interval compressed
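
The organizers' actual mutation procedure is not published on this page; purely as an illustrative sketch, the Python fragment below applies each of the five error classes to a melody given as a list of MIDI pitch numbers. The function name <code>mutate</code> and the seeding scheme are hypothetical.

<pre>
import random

def mutate(pitches, error_class, seed=0):
    """Illustrative sketch: apply one error class (0-4) to MIDI pitch numbers."""
    rng = random.Random(seed)
    notes = list(pitches)
    if error_class == 0:                      # 0. no errors
        return notes
    if error_class == 1:                      # 1. one note deleted
        del notes[rng.randrange(len(notes))]
    elif error_class == 2:                    # 2. one note inserted
        i = rng.randrange(len(notes))
        notes.insert(i, notes[i] + rng.choice([-2, -1, 1, 2]))
    else:                                     # 3./4. one interval enlarged/compressed
        # Transpose the tail so only the interval entering note i changes;
        # a real mutation would widen/narrow relative to the interval's direction.
        i = rng.randrange(1, len(notes))
        delta = 1 if error_class == 3 else -1
        notes[i:] = [p + delta for p in notes[i:]]
    return notes
</pre>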
  
=== Task Specific Mailing List ===

You can subscribe to this list to participate in the discussion.
== Data ==

* 5,274 tunes belonging to the Essen folksong collection. The tunes are in standard MIDI file format. [http://www.ldc.usb.ve/~cgomez/essen.tar.gz Download] (< 1 MB)
== Evaluation ==
The same method for building the ground truth as in the previous iterations in 2006 and 2007 will be used. This method has the advantage that no ground truth needs to be built in advance. After the algorithms have been submitted, their results are pooled for every query, and human evaluators are asked to judge the relevance of the matches for some queries.
For each query (and its 4 mutations), the returned results (candidates) from all systems will be grouped together into a query set for evaluation by the human graders. The graders will be provided with only the perfect version of the query against which to evaluate the candidates, and will not know whether a candidate came from a perfect or a mutated query. Each query/candidate set will be evaluated by one individual grader. Using the Evalutron 6000 system, the graders will give each query/candidate pair two types of scores: one categorical score with three categories (NS = not similar, SS = somewhat similar, VS = very similar), and one fine score on a continuous scale from 0 to 10.
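
Once grades are collected, a per-system summary can be computed by averaging the fine scores over all query/candidate pairs each system returned. The sketch below assumes a simple numeric mapping for the categorical labels; this mapping and the record layout are illustrative assumptions, not the official MIREX summary statistics.

<pre>
CATEGORY_VALUE = {'NS': 0.0, 'SS': 0.5, 'VS': 1.0}  # assumed mapping, not official

def summarize(grades):
    """grades: iterable of (system, query, candidate, category, fine_score).
    Returns the mean fine score (0-10) and mean categorical value per system."""
    totals = {}
    for system, _query, _candidate, category, fine in grades:
        f, c, n = totals.get(system, (0.0, 0.0, 0))
        totals[system] = (f + fine, c + CATEGORY_VALUE[category], n + 1)
    return {s: {'avg_fine': f / n, 'avg_category': c / n}
            for s, (f, c, n) in totals.items()}
</pre>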
  
 
== Submission Format ==
'''Input'''

Parameters:<br/>
- the name of a directory containing about 5,000 MIDI files containing monophonic folk songs, and<br/>
- the name of one MIDI file containing a monophonic query.
E.g.

 myAlgo.sh /path/to/folder/withMIDIfile/ /path/to/query.mid
The program will be called once for each query.

'''Expected output'''

A list of the names of the 10 most similar matching MIDI files, ordered by melodic similarity. Write each file name on a separate line, without empty lines in between.

E.g. (one line per query):

 query1.mid song242.mid song213.mid song1242.mid ...
 query2.mid song5454.mid song423.mid song454.mid ...
 ...

or, with comma-separated fields:

 query1.mid,song242.mid,song213.mid,song1242.mid ...
 query2.mid,song5454.mid,song423.mid,song454.mid ...
 ...
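
By way of illustration only, here is a minimal hypothetical baseline that respects the interface above for a single call: it ranks the collection by Levenshtein distance between pitch-interval sequences and prints the 10 best file names, one per line. The third-party <code>mido</code> package and the <code>.mid</code> suffix filter are assumptions; this is a sketch, not the evaluation harness or a competitive similarity measure.

<pre>
#!/usr/bin/env python
# Usage: python baseline.py /path/to/collection/ /path/to/query.mid
import os, sys
import mido  # assumed available; pip install mido

def pitch_intervals(path):
    """Melody as successive pitch differences, ignoring rhythm."""
    pitches = [msg.note for msg in mido.MidiFile(path)
               if msg.type == 'note_on' and msg.velocity > 0]
    return [b - a for a, b in zip(pitches, pitches[1:])]

def edit_distance(a, b):
    """Plain Levenshtein distance between two interval sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

if __name__ == '__main__':
    folder, query = sys.argv[1], sys.argv[2]
    q = pitch_intervals(query)
    scored = [(edit_distance(q, pitch_intervals(os.path.join(folder, f))), f)
              for f in sorted(os.listdir(folder)) if f.endswith('.mid')]
    for _dist, name in sorted(scored)[:10]:
        print(name)
</pre>
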
=== Packaging submissions ===

* All submissions should be statically linked to all libraries (the presence of dynamically linked libraries cannot be guaranteed). [mailto:mirproject@lists.lis.uiuc.edu IMIRSEL] should be notified at the earliest opportunity of any dependencies that you cannot include with your submission (in order to give them time to satisfy the dependency).
* Be sure to follow the [[2006:Best Coding Practices for MIREX | Best Coding Practices for MIREX]]
* Be sure to follow the [[MIREX 2010 Submission Instructions]]

All submissions should include a README file containing the following information:
 
  
* Command line calling format for all executables, including examples
* Number of threads/cores used, or whether this should be specified on the command line
* Expected memory footprint
* Expected runtime
* Approximately how much scratch disk space the submission will need to store any feature/cache files
* Any required environments/architectures (and versions), such as Matlab, Java, Python, Bash, Ruby, etc.
* Any special notices regarding running your algorithm
Note that the information that you place in the README file is '''extremely''' important in ensuring that your submission is evaluated properly.
=== Time and hardware limits ===

Due to the potentially high number of participants in this and other tasks, hard limits on the runtime of submissions will be imposed.

A hard limit of 24 hours will be imposed on feature extraction times.

A hard limit of 48 hours will be imposed on the 3 training/classification cycles, leading to a total runtime limit of 72 hours for each submission.

=== Submission opening date ===

TBA

=== Submission closing date ===

TBA
