2006:Score Following Proposal

From MIREX Wiki

Revision as of 15:22, 26 July 2006

Proposers

  • Arshia Cont (University of California in San Diego (UCSD) and Ircam - Realtime Applications Team, France) - cont@ircam.fr
  • Diemo Schwarz (Ircam - Realtime Applications Team, France) - schwarz@ircam.fr

Title

Score Following

Description

Score Following is the real-time alignment of an incoming music signal to its music score. The signal can be symbolic (MIDI score following) or audio.

This page describes a proposal for evaluation of score following systems. Discussion of the evaluation procedures on the MIREX 06 "ScoreFollowing06" contest planning list will be documented on the Score Following page. A full digest of the discussions is available to subscribers from the MIREX 06 "ScoreFollowing06" contest planning list archives.

Submissions will be evaluated on alignment precision at the indexed times, type of alignment (monophonic or polyphonic), type of training, and real-time performance. Given enough submissions, results will also be separated into two domains: symbolic and audio systems.

Status

Evaluation procedures

The evaluation procedure consists of running score followers on a database of performances aligned to their scores. Each entry in the database contains a score, performance audio (for the system call), and a reference alignment (for evaluation); see below for details.

Suggested calling formats for submitted algorithms

During evaluation, each system will be called in command line with the following format:

<system-execution-file> <input-folder> <output-filename>

The input folder contains the score and an audio performance of it. Submitted binaries must be able to browse this folder, select the appropriate score and audio files, perform the score-following task, and write the results to the given output file.

Systems must be able to write the output ASCII file to a path other than the default.

In order to consider the issue of training, an alternative call format would be:

<system-execution-file> <input-folder> <output-filename> <training-folder>

where the training folder contains appropriate files for training. If this third argument is not given, it is assumed that there is no learning/training phase.
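
For illustration, the harness side of this calling format can be sketched in Python (the helper names here are hypothetical; the proposal does not specify the actual test harness):

```python
import subprocess

def build_command(executable, input_folder, output_file, training_folder=None):
    """Assemble a command line in the proposed calling format."""
    cmd = [executable, input_folder, output_file]
    if training_folder is not None:
        # Optional fourth argument: folder with training material.
        cmd.append(training_folder)
    return cmd

def run_score_follower(executable, input_folder, output_file, training_folder=None):
    """Invoke a submitted score follower and wait for it to finish.

    The system is expected to browse input_folder, follow the score,
    and write its alignment to output_file.
    """
    subprocess.run(build_command(executable, input_folder,
                                 output_file, training_folder),
                   check=True)
```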

Input data

Each system will need an Audio input as well as a Score to follow (or align).

File formats

Scores for this year's MIREX will be MIDI files. Audio will be standard WAV or AIFF files containing performances of the given MIDI score.

Output data

File formats

ASCII output from each score-following system, as described below. Columns should be separated by tabs and rows by newlines.

Content

The result files represent the alignment found by a score following system between a MIDI score and a recording of a performance of it. They have one line per detected note with the columns:

  1. estimated note onset time in performance audio file (ms)
  2. detection time relative to performance audio file (ms)
  3. note start time in score (ms)
  4. MIDI note number in score (int)
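
A minimal reader for this format might look as follows (Python sketch; the function and variable names are assumptions, only the four-column tab-separated layout comes from the description above):

```python
import csv

def read_alignment(path):
    """Read a tab-separated score-follower output file.

    Each row: (estimated onset ms, detection time ms,
               score note start ms, MIDI note number).
    """
    rows = []
    with open(path, newline="") as f:
        for est_onset, detect_time, score_time, midi_note in csv.reader(f, delimiter="\t"):
            rows.append((float(est_onset), float(detect_time),
                         float(score_time), int(midi_note)))
    return rows
```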

Evaluation metrics

  • Percentage of missed events.
  • Average offset between the detected note onsets and the reference alignment, together with its statistics.
  • Average latency of the system.

Evaluator pseudo-code

For each audio/system output/reference tuple from the database,

  • Compute the number of missed notes by comparing the output and the reference.
  • Compute the "offset" (the difference between the reference onset time and the score follower's reported onset time) for each note.

Errors computed in this version are:

  • Missed note percentage
  • Average latency
  • Average offset
  • Offset statistics: mean and standard deviation
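
The steps above can be sketched in Python. Note that matching detected notes to reference notes by (score time, MIDI number) is an assumption here; the proposal does not fix a matching rule:

```python
from statistics import mean, stdev

def evaluate(output_rows, reference_rows):
    """Compare a score follower's output against a reference alignment.

    output_rows:    (est_onset_ms, detect_time_ms, score_time_ms, midi_note)
    reference_rows: (ref_onset_ms, score_time_ms, midi_note)
    """
    # Index detected notes by (score time, MIDI number); reference
    # notes with no matching key are counted as missed.
    detected = {(score_t, nn): (onset, detect)
                for onset, detect, score_t, nn in output_rows}
    offsets, latencies, missed = [], [], 0
    for ref_onset, score_t, nn in reference_rows:
        match = detected.get((score_t, nn))
        if match is None:
            missed += 1
            continue
        est_onset, detect_time = match
        offsets.append(est_onset - ref_onset)      # alignment offset
        latencies.append(detect_time - est_onset)  # reporting latency
    return {
        "missed_pct": 100.0 * missed / len(reference_rows),
        "avg_latency": mean(latencies) if latencies else None,
        "avg_offset": mean(offsets) if offsets else None,
        "offset_std": stdev(offsets) if len(offsets) > 1 else None,
    }
```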

Reference Database

The reference database contains, for each piece, a score, performance audio (for the system call), and a reference alignment (for evaluation).

Contributions

  • Christopher Raphael:
    • Mozart Dorabella, voice
    • Mozart Clarinet Concerto K370, clarinet
    • Rodrigo Aranjuez Concerto, guitar
    • Sarasate Zigeunerweisen, violin
      • Aligned by Christopher Raphael's score follower and corrected by hand.
  • Ircam (Arshia Cont and Diemo Schwarz)
    • Boulez ... Explosante-Fixe ..., flute (47 files, duration approx. 1 hour)
    • Bach Violin Sonatas, performed by Menuhin and Kremer (two sets, three sonatas)
      • Aligned by an external offline score alignment algorithm and corrected by hand.

Content Format

Score Files

Scores are in MIDI format.

Audio Files

Audio files are WAVE or AIFF and contain real performances of a given MIDI score.

Reference alignment

The reference files constitute a ground truth alignment between a MIDI score and a recording of it. They have one line per score note, with the columns:

  1. note onset time in reference audio file [ms]
  2. note start time in score [ms]
  3. MIDI note number in score [nn]
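
Assuming the same tab-separated layout as the system output files (an assumption; the proposal does not state the column separator for reference files), such a file could be read with a sketch like:

```python
import csv

def read_reference(path):
    """Read a tab-separated reference alignment file.

    Each row: (note onset ms in reference audio,
               note start ms in score, MIDI note number).
    """
    with open(path, newline="") as f:
        return [(float(onset), float(score_t), int(nn))
                for onset, score_t, nn in csv.reader(f, delimiter="\t")]
```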

Example

To see a sample and example of the database, refer to: http://crca.ucsd.edu/arshia/mirex06-scofo/

Potential Participants

  • Arshia Cont (UCSD / Ircam)
  • Roger Dannenberg (Carnegie Mellon University)
  • Christopher Raphael (Indiana University)
  • Diemo Schwarz (Ircam)
  • Miller Puckette (UCSD)
  • Ozgur Izmirli (Connecticut College)
  • Cort Lippe (University at Buffalo)
  • Frank Weinstock (TimeWarp Technologies)