2008:Real-time Audio to Score Alignment (a.k.a. Score Following) Results
From MIREX Wiki
Revision as of 19:50, 13 May 2010
Introduction
These are the results for the 2008 running of the Real-time Audio to Score Alignment (a.k.a. Score Following) task. For background information about this task, please refer to the 2008:Real-time Audio to Score Alignment (a.k.a Score Following) page.
General Legend
Team ID
MO1 = N. Montecchio & Orio 1
MO2 = N. Montecchio & Orio 2
RM1 = R. Macrae
RM2 = R. Macrae
Summary Results
| | MO1 | MO2 | RM1 | RM2 |
| --- | --- | --- | --- | --- |
| Piecewise Precision (MO GT) | 84.45% | 68.84% | 17.10% | 19.50% |
| Piecewise Precision (RM GT) | 48.55% | 41.67% | 25.34% | 26.19% |
| Ave. Piecewise Precision | 66.50% | 55.26% | 21.22% | 22.85% |
Individual Results
MO = N. Montecchio & Orio
RM = R. Macrae
Summary Results w.r.t. R. Macrae's Evaluation Script
| | MO1 | MO2 | RM1 | RM2 |
| --- | --- | --- | --- | --- |
| Piecewise Precision (MO GT) | 81.59% | 65.63% | 24.27% | 17.03% |
| Piecewise Precision (RM GT) | 25.93% | 25.36% | 44.77% | 28.80% |
| Ave. Piecewise Precision | 53.76% | 45.50% | 34.52% | 22.92% |
Individual Results w.r.t. R. Macrae's Evaluation Script
MO = N. Montecchio & Orio
RM = R. Macrae
The systems are evaluated against ground truth prepared by parsing the score files with each system's own MIDI parser (MO GT, RM GT).
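As a rough illustration of the metric reported above, piecewise precision can be thought of as the fraction of score events whose reported alignment time falls within some tolerance of the ground-truth onset time. The sketch below is an assumption-laden approximation, not the MIREX 2008 evaluation code or R. Macrae's script: the data layout (event id to onset time) and the 0.5 s tolerance are illustrative choices only.

```python
# Hedged sketch of a piecewise-precision-style measure: the percentage of
# ground-truth events whose estimated alignment time lies within a tolerance.
# The tolerance and input format are assumptions, not the MIREX 2008 setup.

def piecewise_precision(estimated, ground_truth, tolerance=0.5):
    """estimated, ground_truth: dicts mapping event id -> onset time in seconds.

    Returns the percentage of ground-truth events matched within `tolerance`.
    """
    matched = sum(
        1
        for event, gt_time in ground_truth.items()
        if event in estimated and abs(estimated[event] - gt_time) <= tolerance
    )
    return 100.0 * matched / len(ground_truth)

# Toy example: three notes, one aligned too late to count.
gt = {"n1": 0.00, "n2": 0.50, "n3": 1.10}
est = {"n1": 0.02, "n2": 1.10, "n3": 1.15}
print(round(piecewise_precision(est, gt), 2))  # 2 of 3 within 0.5 s -> 66.67
```

Because each team's parser produces its own ground truth, the same alignment output can score differently under MO GT and RM GT; the "Ave." row above is simply the mean of the two.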