2006:Score Following Proposal

Proposers

  • Arshia Cont (University of California, San Diego (UCSD) and Ircam - Realtime Applications Team, France) - cont@ircam.fr
  • Diemo Schwarz (Ircam - Realtime Applications Team, France) - schwarz@ircam.fr

Title

Score Following

Description

Score Following is the real-time alignment of an incoming music signal to the music score. The signal can be symbolic (MIDI score following) or audio.

This page describes a proposal for evaluation of score following systems. Discussion of the evaluation procedures on the MIREX 06 "ScoreFollowing06" contest planning list will be documented on the Score Following page. A full digest of the discussions is available to subscribers from the MIREX 06 "ScoreFollowing06" contest planning list archives.

Submissions will be evaluated for alignment precision according to the indexed times, the type of alignment (monophonic or polyphonic), the type of training, and real-time performance, and may be separated into two domains (symbolic and audio) if there are enough submissions.

Status

Evaluation procedures

The evaluation procedure consists of running the score followers on a database of audio aligned to scores. For each piece, the database contains the score and the performance audio (for the system call) and a reference alignment (for the evaluation) -- see below for details.

Suggested calling formats for submitted algorithms

During evaluation, each system will be called from the command line, probably with something like the following format:

<system-execution-file> <performance-audio-filename> <MIDI-score-filename> <result-filename> <log-filename>

N.B.: the calling format previously given here in July, <system-execution-file> <input-folder> <output-filename>, is discouraged by the IMIRSEL team.

Your submitted binary should read the given score and audio files, perform the score following task, and write the results to the output file given on the command line.

It is important that the output ASCII file can be created in a path different from the default.

To accommodate training and audio-score following, an alternative calling format would be:

<system-execution-file> <performance-audio-filename> <MIDI-score-filename> <audio-score-filename> <reference-alignment-filename> <result-filename> <log-filename>

The reference alignment file links the MIDI to the audio score. The audio score can be used for training for non-audio score followers.
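For illustration, the wrapper around a submission might invoke it roughly as in the following Python sketch. The run_submission helper and the file names are hypothetical examples, not part of the proposal; only the argument order comes from the first calling format above.

import subprocess

def run_submission(executable, performance_audio, midi_score, result_file, log_file):
    # Mirrors the first calling format above:
    # <system-execution-file> <performance-audio-filename> <MIDI-score-filename>
    #     <result-filename> <log-filename>
    cmd = [executable, performance_audio, midi_score, result_file, log_file]
    subprocess.run(cmd, check=True)

# Hypothetical example call:
run_submission("./my_score_follower", "performance.wav", "score.mid",
               "result.txt", "follower.log")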

Input data

Each system will need an audio input as well as a score to follow (or align).

File formats

Scores used for this year's MIREX will be MIDI files. The audio will be standard WAV or AIFF recordings of performances of the given MIDI score.

Output data

File formats

Each score following system writes ASCII output as described below. Columns should be separated by white space and rows by the Unix newline '\n'.

Content

The result files represent the alignment found by a score following system between a MIDI score and a recording of a performance of it. They have one line per detected note with the columns:

  1. estimated note onset time in performance audio file (ms)
  2. detection time relative to performance audio file (ms)
  3. note start time in score (ms)
  4. MIDI note number in score (int)

Remarks: The third column, the detected note's start time in the score, serves as the unique identifier of a note (or chord, for polyphonic scores) and links it to the ground truth onset of that note in the reference alignment files. The fourth column, the MIDI note number, is there only for your convenience, so that you can find your way around the result files if you know the melody in MIDI.
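For concreteness, a result file in this format could be written from a list of detections as in the Python sketch below. The detection values are made-up examples; only the four whitespace-separated columns and the Unix newline are specified by this proposal.

# One line per detected note; columns separated by white space,
# rows by the Unix newline, as specified above.
detections = [
    # (estimated onset in performance [ms], detection time [ms],
    #  note start time in score [ms], MIDI note number)
    (1523, 1601, 1500, 60),
    (2050, 2123, 2000, 64),
]

with open("result.txt", "w", newline="\n") as f:
    for onset_ms, detect_ms, score_ms, midi_nn in detections:
        f.write(f"{onset_ms} {detect_ms} {score_ms} {midi_nn}\n")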

Evaluation metrics

Evaluation is done by comparing the reference file with the result file for each audio file in the database. The criteria used for this year's evaluation are the following:

Missed Notes

Missed notes are:

  1. Notes that are reported in the reference file and are not reported in the result file.
  2. Notes that are reported in both the reference and result files but with an offset greater than 2000 milliseconds.

False Positive

False positives are notes reported in both the reference and result files but with a delay (absolute value of the offset) greater than 2000 milliseconds. These are also counted as part of the missed notes.

Offset

  • Average offset between the detected note onsets and the reference alignment.

Latency

  • Difference between detection time and the time the system sees the audio.

Metrics

All measures above are calculated both locally (i.e. for each sound file) and globally (over the whole database). In this way we obtain two precision rates:

  1. Piecewise precision rate: the average over pieces of the percentage of detected notes (total number of events - missed notes) in each piece.
  2. Overall precision rate: the percentage of detected notes over the whole database, i.e. total number of events to detect - total number of missed notes, relative to the total number of events to detect. (Both rates are written out below.)
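Written out for clarity (introducing, purely as shorthand, P for the number of pieces, N_p for the number of reference events in piece p, and M_p for the missed notes of piece p):

\[
\text{piecewise precision rate} = \frac{1}{P}\sum_{p=1}^{P}\frac{N_p - M_p}{N_p},
\qquad
\text{overall precision rate} = \frac{\sum_{p=1}^{P}\left(N_p - M_p\right)}{\sum_{p=1}^{P} N_p}.
\]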

Besides these measures, the following measures are provided:

  • Average of the absolute value of the offset, both piecewise and global.
  • Mean of the offset, keeping negative and positive signs.
  • Standard deviation of the offset.
  • Average latency, both piecewise and global.

Evaluator pseudo-code

For each audio/system output/reference tuple from the database,

  • Compute the number of "missed notes" by comparing the output and the reference.
  • Compute the "offset" (the difference between the reference onset time and the score follower's output onset time) for each note.

Errors computed in this version (a sketch of the evaluation loop follows the list) are:

  • Missed note percentage
  • Average latency
  • Average offset
  • Offset statistics: mean and standard deviation
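Below is a minimal Python sketch of this evaluation loop under the file formats described above. It keys notes on their start time in the score (the unique identifier linking result and reference files), applies the 2000 ms threshold from the "Missed Notes" definition, and reads latency as detection time minus estimated onset time. Function names and these readings are illustrative assumptions, not a definitive implementation.

MISS_THRESHOLD_MS = 2000.0  # offsets beyond this count as missed notes

def load_reference(path):
    # Reference columns: onset in reference audio [ms], start in score [ms], MIDI nn.
    # Any extra columns are ignored.
    notes = {}
    with open(path) as f:
        for line in f:
            cols = line.split()
            if len(cols) >= 3:
                notes[float(cols[1])] = float(cols[0])  # score time -> audio onset
    return notes

def load_result(path):
    # Result columns: estimated onset [ms], detection time [ms],
    # start in score [ms], MIDI nn.
    notes = {}
    with open(path) as f:
        for line in f:
            cols = line.split()
            if len(cols) >= 4:
                est_onset, detect_time, score_time = map(float, cols[:3])
                notes[score_time] = (est_onset, detect_time)
    return notes

def evaluate(reference_path, result_path):
    reference = load_reference(reference_path)
    result = load_result(result_path)
    offsets, latencies, missed = [], [], 0
    for score_time, ref_onset in reference.items():
        if score_time not in result:
            missed += 1                      # not reported at all
            continue
        est_onset, detect_time = result[score_time]
        offset = est_onset - ref_onset
        if abs(offset) > MISS_THRESHOLD_MS:
            missed += 1                      # false positive, counted as missed
            continue
        offsets.append(offset)
        latencies.append(detect_time - est_onset)  # one reading of "latency"
    total = len(reference)
    return {
        "missed notes": missed,
        "precision rate": (total - missed) / total if total else 0.0,
        "offsets": offsets,       # mean, absolute mean and std can be derived from these
        "latencies": latencies,
    }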

Reference Database

The reference database contains, for each piece, the score, the performance audio (for the system call), and a reference alignment (for the evaluation).

Contributions

  • Christopher Raphael:
    • Mozart Dorabella, voice
    • Mozart Clarinet Concerto K370, clarinet
    • Rodrigo Aranjuez Concerto, guitar
    • Sarasate Zigeunerweisen, violin
      • Aligned by Christopher Raphael's score follower and corrected by hand.
  • Ircam (Arshia Cont and Diemo Schwarz)
    • Boulez ... Explosante-Fixe ..., flute (47 files, duration approx. 1 hour)
    • Bach Violin Sonatas, performed by Menuhin and Kremer (two sets, three sonatas)
      • Aligned by an external offline score alignment algorithm and corrected by hand.

Content Format

Score Files

Scores are in MIDI format.

Audio Files

Audio files will be either WAVE or AIFF and contain real performances of the given MIDI score.

Reference alignment

The reference files constitute a ground truth alignment between a MIDI score and a recording of it. They have one line per score note, with the columns:

  1. note onset time in reference audio file [ms]
  2. note start time in score [ms]
  3. MIDI note number in score [nn]

Note that more columns might be added to this definition in the future, e.g. to mark trills, so please program your reference file parser in a way that additional columns do not confuse it and are gracefully ignored.
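A hedged Python sketch of such a forward-compatible parser (names are illustrative):

def parse_reference_file(path):
    # Only the first three columns are interpreted; any additional
    # columns (e.g. future trill markers) are silently ignored.
    notes = []
    with open(path) as f:
        for line in f:
            cols = line.split()
            if len(cols) < 3:
                continue                  # skip blank or malformed lines
            onset_ms = float(cols[0])     # note onset in reference audio [ms]
            score_ms = float(cols[1])     # note start time in score [ms]
            midi_nn = int(cols[2])        # MIDI note number
            notes.append((onset_ms, score_ms, midi_nn))
    return notes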

Example

To see a sample and example of the database, refer to: http://crca.ucsd.edu/arshia/mirex06-scofo/

Potential Participants

  • Arshia Cont (UCSD / Ircam)
  • Roger Dannenberg (Carnegie Mellon University)
  • Christopher Raphael (Indiana University)
  • Diemo Schwarz (Ircam)
  • Miller Puckette (UCSD)
  • Ozgur Izmirli (Connecticut College)
  • Cort Lippe (University of Buffalo)
  • Frank Weinstock (TimeWarp Technologies)