<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Proton</id>
	<title>MIREX Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Proton"/>
	<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/wiki/Special:Contributions/Proton"/>
	<updated>2026-05-13T10:31:41Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.1</generator>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2010:MIREX_2010_Poster_List&amp;diff=7572</id>
		<title>2010:MIREX 2010 Poster List</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2010:MIREX_2010_Poster_List&amp;diff=7572"/>
		<updated>2010-08-01T21:31:54Z</updated>

		<summary type="html">&lt;p&gt;Proton: /* Add your author names here, once for each poster along with &amp;quot;title of some sort&amp;quot; and (Task(s) covered) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==MIREX 2010 Poster Session Planning List==&lt;br /&gt;
The MIREX 2010 Poster Session will be held on Wednesday, 11 August, 16:00-17:45. We will hold the MIREX plenary meeting, 13:00-14:00, as a working lunch on the same day.&lt;br /&gt;
&lt;br /&gt;
Our hosts in Utrecht need to know the number of posters so they can set up the room. Please add your name and the task(s) dealt with in your poster. &lt;br /&gt;
&lt;br /&gt;
We had many groups/individuals submit across tasks. You can choose to create one ISMIR poster bringing all your data together, or split your data across, say, two or three posters. If you have questions about task poster options, please contact me at jdownie@illinois.edu or the MIREX mailing list.&lt;br /&gt;
&lt;br /&gt;
As a reminder, the MIREX posters need to follow the [http://ismir2010.ismir.net/information-for-authors/information-for-presenters/ ISMIR 2010 poster guidelines] (i.e., A0, portrait orientation).&lt;br /&gt;
&lt;br /&gt;
==Add your author names here, once for each poster along with &amp;quot;title of some sort&amp;quot; and (Task(s) covered)==&lt;br /&gt;
# IMIRSEL: ''MIREX 2010 Overview, Part I'' (Train Test Tasks)&lt;br /&gt;
# IMIRSEL: ''MIREX 2010 Overview, Part II'' (All Other Tasks)&lt;br /&gt;
# Andreas Arzt and Gerhard Widmer: &amp;quot;Real-time Music Tracking using Tempo-aware On-line Dynamic Time Warping&amp;quot; (Real-time Audio to Score Alignment (a.k.a Score Following))&lt;br /&gt;
# Pasi Saari and Olivier Lartillot: &amp;quot;SubEnsemble - Classification framework based on the Ensemble Approach and Feature Selection&amp;quot; (Train Test Tasks)&lt;br /&gt;
# Gabriel Sargent, Frédéric Bimbot and Emmanuel Vincent: &amp;quot;Structural segmentation of songs using multi-criteria generalized likelihood ratio and regularity constraints&amp;quot; (Structural Segmentation Task)&lt;br /&gt;
# Emmanouil Benetos and Simon Dixon: &amp;quot;Multiple fundamental frequency estimation using spectral structure and temporal evolution rules&amp;quot; (Multiple Fundamental Frequency Estimation &amp;amp; Tracking Task)&lt;br /&gt;
# J. Urbano, J. Lloréns, J. Morato and S. Sánchez-Cuadrado: ''Local Alignment with Geometric Representations'' (Symbolic Melodic Similarity)&lt;br /&gt;
# F.J.Rodriguez-Serrano, P.Vera-Candeas, P.Cabanas-Molero, J.J.Carabias-Orti, N.Ruiz-Reyes: ''AM Sinusoidal Modeling for Onset Detection'' (Audio Onset Detection)&lt;br /&gt;
# R.Mata-Campos, F.J.Rodriguez-Serrano, P.Vera-Candeas, J.J.Carabias-Orti, F.J.Canadas-Quesada: ''Beat Tracking improved by AM Sinusoidal Modeled Onsets'' (Audio Beat Tracking)&lt;br /&gt;
# F.J.Rodriguez-Serrano, P.Vera-Candeas,  J.J.Carabias-Orti,P.Cabanas-Molero, N.Ruiz-Reyes: ''Real time audio to score alignment based on NLS multipitch estimation'' (Real-time Audio to Score Alignment (a.k.a Score Following))&lt;br /&gt;
# F.J. Cañadas-Quesada, F. Rodríguez-Serrano, P. Vera-Candeas, N. Ruiz-Reyes and J. Carabias-Orti: ''Multiple Fundamental Frequency Estimation &amp;amp; Tracking in Polyphonic Music for MIREX 2010'' (Multiple Fundamental Frequency Estimation &amp;amp; Tracking)&lt;br /&gt;
# Zhiyao Duan and Bryan Pardo: &amp;quot;A Real-time Score Follower for MIREX 2010&amp;quot; (Real-time Audio to Score Alignment (a.k.a Score Following))&lt;br /&gt;
# Zhiyao Duan, Jinyu Han and Bryan Pardo: &amp;quot;A Multi-pitch Estimation and Tracking System&amp;quot; (Multiple Fundamental Frequency Estimation &amp;amp; Tracking Task)&lt;br /&gt;
&lt;br /&gt;
==Below are some examples from MIREX 2009==&lt;br /&gt;
&lt;br /&gt;
# Matt Hoffman: ''Using CBA to Automatically Tag Songs'' (Audio tag classification/retrieval)&lt;br /&gt;
# Suman Ravuri, Dan Ellis: ''The Hydra System of Cover Song Classification'' (Cover Song Identification)&lt;br /&gt;
# Joan Serra, Massimiliano Zanin, Ralph G Andrzejak: ''Cover song retrieval by recurrence quantification and unsupervised set detection'' (Cover Song Identification)&lt;br /&gt;
# MTG Team: &amp;quot;Music Type Groupers (MTG): Generic Music Classification Algorithms&amp;quot; (Audio Genre Classification, Mood Classification, Artist Identification, Classical Composer Identification)&lt;br /&gt;
# R. Jang: &amp;quot;Poster #2&amp;quot; (placeholder to get the auto-counter to increment)&lt;br /&gt;
# R. Jang: &amp;quot;Poster #3&amp;quot; (placeholder to get the auto-counter to increment)&lt;/div&gt;</summary>
		<author><name>Proton</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2010:Real-time_Audio_to_Score_Alignment_(a.k.a_Score_Following)&amp;diff=7218</id>
		<title>2010:Real-time Audio to Score Alignment (a.k.a Score Following)</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2010:Real-time_Audio_to_Score_Alignment_(a.k.a_Score_Following)&amp;diff=7218"/>
		<updated>2010-07-01T02:40:10Z</updated>

		<summary type="html">&lt;p&gt;Proton: /* Data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''Real-time Audio to Score Alignment'', also known as ''Score Following''&lt;br /&gt;
&lt;br /&gt;
== Description ==&lt;br /&gt;
Score Following is the real-time alignment of an incoming music signal to the music score. The music signal can be symbolic (MIDI) or audio, but we will concentrate here on audio following, unless some candidates want their symbolic followers evaluated and can propose reference data.&lt;br /&gt;
&lt;br /&gt;
This page describes a proposal for evaluation of score following systems. Discussion of the evaluation procedures on the [https://mail.lis.uiuc.edu/mailman/listinfo/mrx-com01 Score Following contest planning list] will be documented on the [[Score Following]] page. A full digest of the discussions is available to subscribers from the [https://mail.lis.uiuc.edu/mailman/private/mrx-com01/ Score Following contest planning list archives].&lt;br /&gt;
&lt;br /&gt;
Submissions will be required to estimate alignment precision according to the indexed times. In order for your system to participate, please specify the type of alignment (monophonic, polyphonic), the type of training, and the real-time performance; given enough submissions, results will also be separated into symbolic and audio domains. Note that we also accept systems that do not run in real-time in practice, as long as their algorithm is on-line, i.e., it makes no use of global knowledge of the input.&lt;br /&gt;
&lt;br /&gt;
== Data == &lt;br /&gt;
46 recordings and their corresponding MIDI representations of the score will be used in the evaluation. These 46 excerpts were extracted from 4 distinct musical pieces.&lt;br /&gt;
Recordings are in 44.1 kHz, 16-bit WAV format. The reference scores are in MIDI format.&lt;br /&gt;
&lt;br /&gt;
Zhiyao Duan and Prof. Bryan Pardo contributed another polyphonic dataset. This dataset consists of 10 pieces of four-part J.S. Bach chorales. Each audio file was performed by a quartet of instruments: violin, clarinet, saxophone, and bassoon. The ground-truth alignment between audio and MIDI was generated by human annotation.&lt;br /&gt;
&lt;br /&gt;
== Evaluation procedures ==&lt;br /&gt;
&lt;br /&gt;
The evaluation procedure consists of running score followers on a database of audio-to-score alignments, where each item contains a score and a performance audio file (for the system call) and a reference alignment (for evaluation). &lt;br /&gt;
See http://ismir2007.ismir.net/proceedings/ISMIR2007_p315_cont.pdf for details.&lt;br /&gt;
&lt;br /&gt;
See the details of 2006 proposal on the [[2006:Score_Following_Proposal|MIREX 2006 Wiki]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== I/O Format ===&lt;br /&gt;
Each system should conform to the following format:&lt;br /&gt;
&lt;br /&gt;
 ''doScofo.sh &amp;quot;/path/to/audiofile.wav&amp;quot; &amp;quot;/path/to/midi_score_file.mid&amp;quot; &amp;quot;/path/to/result/filename.txt&amp;quot;''&lt;br /&gt;
&lt;br /&gt;
The stdout and stderr will be logged.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;/path/to/result/filename.txt&amp;quot; should have one line per detected note, with the following 4 columns:&lt;br /&gt;
&lt;br /&gt;
   1. estimated note onset time in performance audio file (ms)&lt;br /&gt;
   2. detection time relative to performance audio file (ms)&lt;br /&gt;
   3. note start time in score (ms)&lt;br /&gt;
   4. MIDI note number in score (int) &lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 ''1800	1800	0	75''&lt;br /&gt;
 ''2021	2022	187.5	73''&lt;br /&gt;
 ''...	...	...	...''&lt;br /&gt;
&lt;br /&gt;
Remarks: The third column, the detected note's start time in the score, serves as the unique identifier of a note (or chord, for polyphonic scores), linking it to the ground-truth onset of that note within the reference alignment files. The fourth column, the MIDI note number, is there only for your convenience, to help you find your way around the result files if you know the melody in MIDI.&lt;br /&gt;
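&lt;br /&gt;
To make the format concrete, here is a minimal, hypothetical Python sketch of an evaluation-side check. It assumes the reference alignment uses the same four columns as the result files and matches notes by the score-time identifier from column 3; it is an illustration only, not the official evaluator.&lt;br /&gt;
&lt;br /&gt;
 def load_rows(path):&lt;br /&gt;
     # One row per note: est_onset_ms, detect_time_ms, score_time_ms, midi_note.&lt;br /&gt;
     rows = []&lt;br /&gt;
     with open(path) as f:&lt;br /&gt;
         for line in f:&lt;br /&gt;
             if line.strip():&lt;br /&gt;
                 rows.append([float(c) for c in line.split()])&lt;br /&gt;
     return rows&lt;br /&gt;
 &lt;br /&gt;
 def onset_errors(result_path, reference_path):&lt;br /&gt;
     # Column 3 (score time) is the unique note identifier, per the remarks above.&lt;br /&gt;
     # Float keys assume both files print identical score-time values.&lt;br /&gt;
     reference = {row[2]: row for row in load_rows(reference_path)}&lt;br /&gt;
     errors, missed = [], 0&lt;br /&gt;
     for est_onset, detect_time, score_time, midi_note in load_rows(result_path):&lt;br /&gt;
         ref = reference.get(score_time)&lt;br /&gt;
         if ref is None:&lt;br /&gt;
             missed += 1   # reported a score time absent from the reference&lt;br /&gt;
             continue&lt;br /&gt;
         errors.append(abs(est_onset - ref[0]))   # onset error in ms&lt;br /&gt;
     return errors, missed&lt;br /&gt;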
&lt;br /&gt;
&lt;br /&gt;
=== Packaging submissions ===&lt;br /&gt;
All submissions should be statically linked to all libraries (the presence of &lt;br /&gt;
dynamically linked libraries cannot be guaranteed).&lt;br /&gt;
&lt;br /&gt;
All submissions should include a README file with the following &lt;br /&gt;
information:&lt;br /&gt;
&lt;br /&gt;
* Command line calling format for all executables and an example formatted set of commands&lt;br /&gt;
* Number of threads/cores used or whether this should be specified on the command line&lt;br /&gt;
* Expected memory footprint&lt;br /&gt;
* Expected runtime&lt;br /&gt;
* Any required environments (and versions), e.g. python, java, bash, matlab.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks,&lt;br /&gt;
hard limits on the runtime of submissions are specified. &lt;br /&gt;
 &lt;br /&gt;
A hard limit of 12 hours will be imposed on the total runtime of algorithms. Submissions that exceed this runtime may not receive a result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submission closing date ==&lt;br /&gt;
&lt;br /&gt;
Friday 4th June 2010&lt;/div&gt;</summary>
		<author><name>Proton</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2010:Real-time_Audio_to_Score_Alignment_(a.k.a_Score_Following)&amp;diff=6758</id>
		<title>2010:Real-time Audio to Score Alignment (a.k.a Score Following)</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2010:Real-time_Audio_to_Score_Alignment_(a.k.a_Score_Following)&amp;diff=6758"/>
		<updated>2010-05-17T15:51:23Z</updated>

		<summary type="html">&lt;p&gt;Proton: /* Potential Participants */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Title ==&lt;br /&gt;
''Real-time Audio to Score Alignment'', also known as ''Score Following''&lt;br /&gt;
&lt;br /&gt;
== Description ==&lt;br /&gt;
&lt;br /&gt;
The text of this section is copied from the 2009 page. Please add your comments and discussions for 2010. &lt;br /&gt;
&lt;br /&gt;
Score Following is the real-time alignment of an incoming music signal to the music score. The music signal can be symbolic (MIDI) or audio, but we will concentrate here on audio following, unless some candidates want their symbolic followers evaluated and can propose reference data.&lt;br /&gt;
&lt;br /&gt;
This page describes a proposal for evaluation of score following systems. Discussion of the evaluation procedures on the [https://mail.lis.uiuc.edu/mailman/listinfo/mrx-com01 Score Following contest planning list] will be documented on the [[Score Following]] page. A full digest of the discussions is available to subscribers from the [https://mail.lis.uiuc.edu/mailman/private/mrx-com01/ Score Following contest planning list archives].&lt;br /&gt;
&lt;br /&gt;
Submissions will be required to estimate alignment precision according to the indexed times. In order for your system to participate, please specify the type of alignment (monophonic, polyphonic), the type of training, and the real-time performance; given enough submissions, results will also be separated into symbolic and audio domains. Note that we also accept systems that do not run in real-time in practice, as long as their algorithm is on-line, i.e., it makes no use of global knowledge of the input.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2010 ==&lt;br /&gt;
&lt;br /&gt;
Your comments here.&lt;br /&gt;
&lt;br /&gt;
== Evolution ==&lt;br /&gt;
This year's changes are proposed here and on the list, and are currently under discussion.  Proposed changes are mainly about the score and reference file formats and the evaluation metrics:&lt;br /&gt;
&lt;br /&gt;
* the proposed new score and reference file format is described here: [[2010:Score File Format]]&lt;br /&gt;
* evaluation metrics will more closely reflect the different approaches and applications of score following  &lt;br /&gt;
&lt;br /&gt;
See the details of last year's proposal on the [https://www.music-ir.org/mirex2006/index.php/Score_Following_Proposal MIREX 2006 Wiki]&lt;br /&gt;
&lt;br /&gt;
== Evaluation procedures ==&lt;br /&gt;
&lt;br /&gt;
The evaluation procedure consists of running score followers on a database of audio-to-score alignments, where each item contains a score and a performance audio file (for the system call) and a reference alignment (for evaluation). See below for details. &lt;br /&gt;
&lt;br /&gt;
=== I/O Format ===&lt;br /&gt;
Each system should conform to the following format:&lt;br /&gt;
&lt;br /&gt;
 ''doScofo.sh &amp;quot;/path/to/audiofile.wav&amp;quot; &amp;quot;/path/to/midi_score_file.mid&amp;quot; &amp;quot;/path/to/result/filename.txt&amp;quot;''&lt;br /&gt;
&lt;br /&gt;
The stdout and stderr will be logged.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;/path/to/result/filename.txt&amp;quot; should have one line per detected note, with the following 4 columns:&lt;br /&gt;
&lt;br /&gt;
   1. estimated note onset time in performance audio file (ms)&lt;br /&gt;
   2. detection time relative to performance audio file (ms)&lt;br /&gt;
   3. note start time in score (ms)&lt;br /&gt;
   4. MIDI note number in score (int) &lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 ''1800	1800	0	75''&lt;br /&gt;
 ''2021	2022	187.5	73''&lt;br /&gt;
 ''...	...	...	...''&lt;br /&gt;
&lt;br /&gt;
Remarks: The third column, the detected note's start time in the score, serves as the unique identifier of a note (or chord, for polyphonic scores), linking it to the ground-truth onset of that note within the reference alignment files. The fourth column, the MIDI note number, is there only for your convenience, to help you find your way around the result files if you know the melody in MIDI.&lt;br /&gt;
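&lt;br /&gt;
For illustration, a submission wrapper might write this file as in the following hypothetical Python sketch; the detected-event tuples and their field order are assumptions matching the four columns above, not a prescribed API.&lt;br /&gt;
&lt;br /&gt;
 def write_result(events, out_path):&lt;br /&gt;
     # events: iterable of (est_onset_ms, detect_time_ms, score_time_ms, midi_note),&lt;br /&gt;
     # i.e. the four tab-separated columns required above.&lt;br /&gt;
     with open(out_path, 'w') as f:&lt;br /&gt;
         for est_onset, detect_time, score_time, midi_note in events:&lt;br /&gt;
             f.write('%g\t%g\t%g\t%d\n' % (est_onset, detect_time, score_time, midi_note))&lt;br /&gt;
 &lt;br /&gt;
 # Reproduces the example rows above:&lt;br /&gt;
 write_result([(1800, 1800, 0, 75), (2021, 2022, 187.5, 73)], 'filename.txt')&lt;br /&gt;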
&lt;br /&gt;
&lt;br /&gt;
=== Potential Participants === &lt;br /&gt;
&lt;br /&gt;
Wei-Ta Chu, National Chung Cheng University, Taiwan. Email: wtchu AT cs DOT ccu DOT edu DOT tw&lt;br /&gt;
&lt;br /&gt;
Zhiyao Duan, Bryan Pardo, Northwestern University, USA. Email: zhiyaoduan00 AT gmail &amp;lt;dot&amp;gt; com&lt;/div&gt;</summary>
		<author><name>Proton</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2010:Multiple_Fundamental_Frequency_Estimation_%26_Tracking&amp;diff=6757</id>
		<title>2010:Multiple Fundamental Frequency Estimation &amp; Tracking</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2010:Multiple_Fundamental_Frequency_Estimation_%26_Tracking&amp;diff=6757"/>
		<updated>2010-05-17T15:50:56Z</updated>

		<summary type="html">&lt;p&gt;Proton: /* Potential Participants */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Description==&lt;br /&gt;
&lt;br /&gt;
The text of this section is copied from the 2009 page. Please add your comments and discussions for 2010. &lt;br /&gt;
&lt;br /&gt;
That a complex music signal can be represented by the F0 contours of its constituent sources is a very useful concept for most music information retrieval systems. There have been many attempts at multiple (aka polyphonic) F0 estimation and melody extraction, a related area. The goal of multiple F0 estimation and tracking is to identify the active F0s in each time frame and to track notes and timbres continuously in a complex music signal. In this task, we would like to evaluate state-of-the-art multiple-F0 estimation and tracking algorithms. Since F0 tracking of all sources in a complex audio mixture can be very hard, we are restricting the problem to 3 cases:&lt;br /&gt;
&lt;br /&gt;
1. Estimate active fundamental frequencies on a frame-by-frame basis.&lt;br /&gt;
&lt;br /&gt;
2. Track note contours on a continuous time basis (as in audio-to-MIDI). This task will also include a piano transcription subtask.&lt;br /&gt;
&lt;br /&gt;
3. Track timbre on a continuous time basis.&lt;br /&gt;
&lt;br /&gt;
The deadline for this task is September 8th. Please feel free to request an extension if needed.&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2010 ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Discussions from 2009 ==&lt;br /&gt;
&lt;br /&gt;
https://www.music-ir.org/mirex/2009/index.php/Multiple_Fundamental_Frequency_Estimation_%26_Tracking#Discussions_for_2009&lt;br /&gt;
&lt;br /&gt;
==Data==&lt;br /&gt;
&lt;br /&gt;
* A woodwind quintet transcription of the fifth variation from L. van Beethoven's Variations for String Quartet Op. 18 No. 5. Each part (flute, oboe, clarinet, horn, or bassoon) was recorded separately while the performer listened to the other parts (recorded previously) through headphones. Later the parts were mixed to a monaural 44.1 kHz / 16-bit file.&lt;br /&gt;
&lt;br /&gt;
* Synthesized pieces using RWC MIDI and RWC samples. Includes pieces from the Classical and Jazz collections. Polyphony changes from 1 to 4 sources.&lt;br /&gt;
&lt;br /&gt;
* Polyphonic piano recordings generated using a Disklavier playback piano.&lt;br /&gt;
&lt;br /&gt;
So there are six 30-second clips for each polyphony level (2, 3, 4, 5), for a total of 30 examples, plus ten 30-second polyphonic piano clips. Please email me your estimated running time (in terms of n times real time); if we believe everybody's algorithm is fast enough, we can increase the number of test samples. (There were 90x-real-time algorithms for melody extraction tasks in the past.)&lt;br /&gt;
&lt;br /&gt;
All files are in 44.1 kHz / 16-bit WAV format. The development set can be found at&lt;br /&gt;
[https://www.music-ir.org/evaluation/MIREX/data/2007/multiF0/index.htm Development Set for MIREX 2007 MultiF0 Estimation &amp;amp; Tracking Task].&lt;br /&gt;
&lt;br /&gt;
Send an email to [mailto:mertbay@uiuc.edu mertbay@uiuc.edu] for the username and password.&lt;br /&gt;
&lt;br /&gt;
==Evaluation==&lt;br /&gt;
&lt;br /&gt;
This year, we would like to discuss different evaluation methods. From last year's results, it can be seen that on note tracking, algorithms performed poorly when evaluated using note offsets. Below are the evaluation methods we used last year: &lt;br /&gt;
&lt;br /&gt;
For Task 1 (frame-level evaluation), systems will report the active pitches every 10 ms. Precision (the proportion of correctly retrieved pitches among all pitches returned for each frame) and Recall (the ratio of correct pitches to all ground-truth pitches for each frame) will be reported. A returned pitch is assumed to be correct if it is within a half semitone (±3%) of a ground-truth pitch for that frame. Only one ground-truth pitch can be associated with each returned pitch.&lt;br /&gt;
Also, as suggested, an error score as described in [http://www.hindawi.com/GetArticle.aspx?doi=10.1155/2007/48317 Poliner and Ellis, p. 5] will be calculated. &lt;br /&gt;
The frame-level ground truth will be generated by [http://www.ircam.fr/pcm/cheveign/sw/yin.zip YIN] and hand-corrected.&lt;br /&gt;
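&lt;br /&gt;
As a concrete reading of this rule, the following hypothetical Python sketch scores a single frame with greedy one-to-one matching at the ±3% tolerance; it illustrates the stated criteria and is not the official scoring code.&lt;br /&gt;
&lt;br /&gt;
 def frame_scores(returned, truth):&lt;br /&gt;
     # returned, truth: lists of F0s (Hz) for one 10 ms frame.&lt;br /&gt;
     unmatched = list(truth)&lt;br /&gt;
     correct = 0&lt;br /&gt;
     for f0 in returned:&lt;br /&gt;
         for t in unmatched:&lt;br /&gt;
             if abs(f0 - t) &lt;= 0.03 * t:   # within a half semitone (+/- 3%)&lt;br /&gt;
                 unmatched.remove(t)       # each ground-truth pitch used once&lt;br /&gt;
                 correct += 1&lt;br /&gt;
                 break&lt;br /&gt;
     precision = correct / len(returned) if returned else 0.0&lt;br /&gt;
     recall = correct / len(truth) if truth else 0.0&lt;br /&gt;
     return precision, recall&lt;br /&gt;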
&lt;br /&gt;
For Task 2 (note tracking), again Precision (the ratio of correctly transcribed notes to the number of notes returned for that input clip) and Recall (the ratio of correctly transcribed ground-truth notes to the number of ground-truth notes) will be reported. A ground-truth note is assumed to be correctly transcribed if the system returns a note that is within a half semitone (±3%) of that note AND the returned note's onset is within a 100 ms range (±50 ms) of the onset of the ground-truth note, and its offset is within a 20% range of the ground-truth note's offset. Again, one ground-truth note can only be associated with one transcribed note.&lt;br /&gt;
&lt;br /&gt;
The ground truth for this task will be annotated by hand. An amplitude threshold relative to the file/instrument will be determined. The note onset is set to the time where the note's amplitude rises above the threshold, and the offset to the time where the note's amplitude decays below the threshold. The ground-truth F0 is set to the average F0 between the onset and the offset of the note.&lt;br /&gt;
In the case of legato, the onset/offset is set to the time where the F0 deviates by more than 3% from the average F0 of the note up to that point. There will not be any vibrato larger than a half semitone in the test data.&lt;br /&gt;
&lt;br /&gt;
Different statistics can also be reported if agreed by the participants.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
&lt;br /&gt;
Submissions have to conform to the format specified below:&lt;br /&gt;
&lt;br /&gt;
 ''doMultiF0 &amp;quot;path/to/file.wav&amp;quot;  &amp;quot;path/to/output/file.F0&amp;quot; ''&lt;br /&gt;
&lt;br /&gt;
path/to/file.wav: Path to the input audio file.&lt;br /&gt;
&lt;br /&gt;
path/to/output/file.F0: The output file. &lt;br /&gt;
&lt;br /&gt;
Programs can use their working directory if they need to keep temporary cache files or internal debugging info. Stdout and stderr will be logged.&lt;br /&gt;
&lt;br /&gt;
For each task, the format of the output file is different:&lt;br /&gt;
For the first task, frame-based F0 estimation, the output is a file where each row has a time stamp and the active F0s in that frame, separated by tabs, in 10 ms increments. &lt;br /&gt;
	&lt;br /&gt;
Example:&lt;br /&gt;
 ''time	F01	F02	F03	''&lt;br /&gt;
 ''time	F01	F02	F03	F04''&lt;br /&gt;
 ''time	...	...	...	...''&lt;br /&gt;
&lt;br /&gt;
which might look like:&lt;br /&gt;
&lt;br /&gt;
 ''0.78	146.83	220.00	349.23''&lt;br /&gt;
 ''0.79	349.23	146.83	369.99	220.00	''&lt;br /&gt;
 ''0.80	...	...	...	...''&lt;br /&gt;
&lt;br /&gt;
For the second task, each row of the file should contain the onset, offset, and F0 of one note event, separated by tabs, ordered by onset time:&lt;br /&gt;
&lt;br /&gt;
 onset	offset F01&lt;br /&gt;
 onset	offset F02&lt;br /&gt;
 ...	... ...&lt;br /&gt;
which might look like:&lt;br /&gt;
&lt;br /&gt;
 0.68	1.20	349.23&lt;br /&gt;
 0.72	1.02	220.00&lt;br /&gt;
 ...	...	...&lt;br /&gt;
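&lt;br /&gt;
To illustrate both output formats, here is a hypothetical Python sketch of the writing side; the in-memory structures (frames, notes) are assumptions, and only the file layouts follow the specification above.&lt;br /&gt;
&lt;br /&gt;
 def write_frame_file(frames, path):&lt;br /&gt;
     # frames: list of (time_sec, [f0, f0, ...]) at 10 ms steps (task 1).&lt;br /&gt;
     with open(path, 'w') as f:&lt;br /&gt;
         for t, f0s in frames:&lt;br /&gt;
             f.write('\t'.join(['%.2f' % t] + ['%.2f' % x for x in f0s]) + '\n')&lt;br /&gt;
 &lt;br /&gt;
 def write_note_file(notes, path):&lt;br /&gt;
     # notes: list of (onset_sec, offset_sec, f0), one note event per row (task 2).&lt;br /&gt;
     with open(path, 'w') as f:&lt;br /&gt;
         for onset, offset, f0 in sorted(notes):   # ordered by onset time&lt;br /&gt;
             f.write('%.2f\t%.2f\t%.2f\n' % (onset, offset, f0))&lt;br /&gt;
&lt;br /&gt;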
The DEADLINE is TBA.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Potential Participants==&lt;br /&gt;
Emmanouil Benetos, Simon Dixon, Centre for Digital Music, Queen Mary University of London, UK. emmanouil.benetos at elec.qmul.ac.uk&lt;br /&gt;
&lt;br /&gt;
Zhiyao Duan, Jinyu Han, Bryan Pardo, Northwestern University, USA. Email: zhiyaoduan00 AT gmail &amp;lt;dot&amp;gt; com&lt;br /&gt;
&lt;br /&gt;
If you are considering participating, please add your name and email address here, and also sign up for the Multi-F0 mail list:&lt;br /&gt;
[https://mail.lis.uiuc.edu/mailman/listinfo/mrx-com03 Multi-F0 Estimation Tracking email list]&lt;/div&gt;</summary>
		<author><name>Proton</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2010:Multiple_Fundamental_Frequency_Estimation_%26_Tracking&amp;diff=6756</id>
		<title>2010:Multiple Fundamental Frequency Estimation &amp; Tracking</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2010:Multiple_Fundamental_Frequency_Estimation_%26_Tracking&amp;diff=6756"/>
		<updated>2010-05-17T15:50:40Z</updated>

		<summary type="html">&lt;p&gt;Proton: /* Potential Participants */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Description==&lt;br /&gt;
&lt;br /&gt;
The text of this section is copied from the 2009 page. Please add your comments and discussions for 2010. &lt;br /&gt;
&lt;br /&gt;
That a complex music signal can be represented by the F0 contours of its constituent sources is a very useful concept for most music information retrieval systems. There have been many attempts at multiple (aka polyphonic) F0 estimation and melody extraction, a related area. The goal of multiple F0 estimation and tracking is to identify the active F0s in each time frame and to track notes and timbres continuously in a complex music signal. In this task, we would like to evaluate state-of-the-art multiple-F0 estimation and tracking algorithms. Since F0 tracking of all sources in a complex audio mixture can be very hard, we are restricting the problem to 3 cases:&lt;br /&gt;
&lt;br /&gt;
1. Estimate active fundamental frequencies on a frame-by-frame basis.&lt;br /&gt;
&lt;br /&gt;
2. Track note contours on a continuous time basis (as in audio-to-MIDI). This task will also include a piano transcription subtask.&lt;br /&gt;
&lt;br /&gt;
3. Track timbre on a continuous time basis.&lt;br /&gt;
&lt;br /&gt;
The deadline for this task is September 8th. Please feel free to request an extension if needed.&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2010 ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Discussions from 2009 ==&lt;br /&gt;
&lt;br /&gt;
https://www.music-ir.org/mirex/2009/index.php/Multiple_Fundamental_Frequency_Estimation_%26_Tracking#Discussions_for_2009&lt;br /&gt;
&lt;br /&gt;
==Data==&lt;br /&gt;
&lt;br /&gt;
* A woodwind quintet transcription of the fifth variation from L. van Beethoven's Variations for String Quartet Op. 18 No. 5. Each part (flute, oboe, clarinet, horn, or bassoon) was recorded separately while the performer listened to the other parts (recorded previously) through headphones. Later the parts were mixed to a monaural 44.1 kHz / 16-bit file.&lt;br /&gt;
&lt;br /&gt;
* Synthesized pieces using RWC MIDI and RWC samples. Includes pieces from the Classical and Jazz collections. Polyphony changes from 1 to 4 sources.&lt;br /&gt;
&lt;br /&gt;
* Polyphonic piano recordings generated using a Disklavier playback piano.&lt;br /&gt;
&lt;br /&gt;
So there are six 30-second clips for each polyphony level (2, 3, 4, 5), for a total of 30 examples, plus ten 30-second polyphonic piano clips. Please email me your estimated running time (in terms of n times real time); if we believe everybody's algorithm is fast enough, we can increase the number of test samples. (There were 90x-real-time algorithms for melody extraction tasks in the past.)&lt;br /&gt;
&lt;br /&gt;
All files are in 44.1 kHz / 16-bit WAV format. The development set can be found at&lt;br /&gt;
[https://www.music-ir.org/evaluation/MIREX/data/2007/multiF0/index.htm Development Set for MIREX 2007 MultiF0 Estimation &amp;amp; Tracking Task].&lt;br /&gt;
&lt;br /&gt;
Send an email to [mailto:mertbay@uiuc.edu mertbay@uiuc.edu] for the username and password.&lt;br /&gt;
&lt;br /&gt;
==Evaluation==&lt;br /&gt;
&lt;br /&gt;
This year, we would like to discuss different evaluation methods. From last year's results, it can be seen that on note tracking, algorithms performed poorly when evaluated using note offsets. Below are the evaluation methods we used last year: &lt;br /&gt;
&lt;br /&gt;
For Task 1 (frame-level evaluation), systems will report the active pitches every 10 ms. Precision (the proportion of correctly retrieved pitches among all pitches returned for each frame) and Recall (the ratio of correct pitches to all ground-truth pitches for each frame) will be reported. A returned pitch is assumed to be correct if it is within a half semitone (±3%) of a ground-truth pitch for that frame. Only one ground-truth pitch can be associated with each returned pitch.&lt;br /&gt;
Also, as suggested, an error score as described in [http://www.hindawi.com/GetArticle.aspx?doi=10.1155/2007/48317 Poliner and Ellis, p. 5] will be calculated. &lt;br /&gt;
The frame-level ground truth will be generated by [http://www.ircam.fr/pcm/cheveign/sw/yin.zip YIN] and hand-corrected.&lt;br /&gt;
&lt;br /&gt;
For Task 2 (note tracking), again Precision (the ratio of correctly transcribed notes to the number of notes returned for that input clip) and Recall (the ratio of correctly transcribed ground-truth notes to the number of ground-truth notes) will be reported. A ground-truth note is assumed to be correctly transcribed if the system returns a note that is within a half semitone (±3%) of that note AND the returned note's onset is within a 100 ms range (±50 ms) of the onset of the ground-truth note, and its offset is within a 20% range of the ground-truth note's offset. Again, one ground-truth note can only be associated with one transcribed note.&lt;br /&gt;
&lt;br /&gt;
The ground truth for this task will be annotated by hand. An amplitude threshold relative to the file/instrument will be determined. The note onset is set to the time where the note's amplitude rises above the threshold, and the offset to the time where the note's amplitude decays below the threshold. The ground-truth F0 is set to the average F0 between the onset and the offset of the note.&lt;br /&gt;
In the case of legato, the onset/offset is set to the time where the F0 deviates by more than 3% from the average F0 of the note up to that point. There will not be any vibrato larger than a half semitone in the test data.&lt;br /&gt;
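&lt;br /&gt;
Read literally, the note-matching criteria above might be encoded as in this hypothetical Python sketch; note that the 20% offset tolerance is interpreted here relative to the reference note's duration, which is an assumption, and this is not the official scorer.&lt;br /&gt;
&lt;br /&gt;
 def note_matches(ref, est):&lt;br /&gt;
     # ref, est: (onset_sec, offset_sec, f0_hz) for one note event.&lt;br /&gt;
     pitch_ok = abs(est[2] - ref[2]) &lt;= 0.03 * ref[2]    # within a half semitone&lt;br /&gt;
     onset_ok = abs(est[0] - ref[0]) &lt;= 0.05             # within +/- 50 ms&lt;br /&gt;
     duration = ref[1] - ref[0]&lt;br /&gt;
     offset_ok = abs(est[1] - ref[1]) &lt;= 0.2 * duration  # 20% offset tolerance (assumed)&lt;br /&gt;
     return pitch_ok and onset_ok and offset_ok&lt;br /&gt;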
&lt;br /&gt;
Different statistics can also be reported if agreed by the participants.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
&lt;br /&gt;
Submissions have to conform to the format specified below:&lt;br /&gt;
&lt;br /&gt;
 ''doMultiF0 &amp;quot;path/to/file.wav&amp;quot;  &amp;quot;path/to/output/file.F0&amp;quot; ''&lt;br /&gt;
&lt;br /&gt;
path/to/file.wav: Path to the input audio file.&lt;br /&gt;
&lt;br /&gt;
path/to/output/file.F0: The output file. &lt;br /&gt;
&lt;br /&gt;
Programs can use their working directory if they need to keep temporary cache files or internal debugging info. Stdout and stderr will be logged.&lt;br /&gt;
&lt;br /&gt;
For each task, the format of the output file is different:&lt;br /&gt;
For the first task, frame-based F0 estimation, the output is a file where each row has a time stamp and the active F0s in that frame, separated by tabs, in 10 ms increments. &lt;br /&gt;
	&lt;br /&gt;
Example:&lt;br /&gt;
 ''time	F01	F02	F03	''&lt;br /&gt;
 ''time	F01	F02	F03	F04''&lt;br /&gt;
 ''time	...	...	...	...''&lt;br /&gt;
&lt;br /&gt;
which might look like:&lt;br /&gt;
&lt;br /&gt;
 ''0.78	146.83	220.00	349.23''&lt;br /&gt;
 ''0.79	349.23	146.83	369.99	220.00	''&lt;br /&gt;
 ''0.80	...	...	...	...''&lt;br /&gt;
&lt;br /&gt;
For the second task, each row of the file should contain the onset, offset, and F0 of one note event, separated by tabs, ordered by onset time:&lt;br /&gt;
&lt;br /&gt;
 onset	offset F01&lt;br /&gt;
 onset	offset F02&lt;br /&gt;
 ...	... ...&lt;br /&gt;
which might look like:&lt;br /&gt;
&lt;br /&gt;
 0.68	1.20	349.23&lt;br /&gt;
 0.72	1.02	220.00&lt;br /&gt;
 ...	...	...&lt;br /&gt;
The DEADLINE is TBA.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Potential Participants==&lt;br /&gt;
Emmanouil Benetos, Simon Dixon, Centre for Digital Music, Queen Mary University of London, UK. emmanouil.benetos at elec.qmul.ac.uk&lt;br /&gt;
Zhiyao Duan, Jinyu Han, Bryan Pardo, Northwestern University, USA. Email: zhiyaoduan00 AT gmail &amp;lt;dot&amp;gt; com&lt;br /&gt;
&lt;br /&gt;
If you are considering participating, please add your name and email address here, and also sign up for the Multi-F0 mail list:&lt;br /&gt;
[https://mail.lis.uiuc.edu/mailman/listinfo/mrx-com03 Multi-F0 Estimation Tracking email list]&lt;/div&gt;</summary>
		<author><name>Proton</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2010:Real-time_Audio_to_Score_Alignment_(a.k.a_Score_Following)&amp;diff=6755</id>
		<title>2010:Real-time Audio to Score Alignment (a.k.a Score Following)</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2010:Real-time_Audio_to_Score_Alignment_(a.k.a_Score_Following)&amp;diff=6755"/>
		<updated>2010-05-17T15:47:18Z</updated>

		<summary type="html">&lt;p&gt;Proton: /* Potential Participants */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Title ==&lt;br /&gt;
''Real-time Audio to Score Alignment'', also known as ''Score Following''&lt;br /&gt;
&lt;br /&gt;
== Description ==&lt;br /&gt;
&lt;br /&gt;
The text of this section is copied from the 2009 page. Please add your comments and discussions for 2010. &lt;br /&gt;
&lt;br /&gt;
Score Following is the real-time alignment of an incoming music signal to the music score. The music signal can be symbolic (MIDI) or audio, but we will concentrate here on audio following, unless some candidates want their symbolic followers evaluated and can propose reference data.&lt;br /&gt;
&lt;br /&gt;
This page describes a proposal for evaluation of score following systems. Discussion of the evaluation procedures on the [https://mail.lis.uiuc.edu/mailman/listinfo/mrx-com01 Score Following contest planning list] will be documented on the [[Score Following]] page. A full digest of the discussions is available to subscribers from the [https://mail.lis.uiuc.edu/mailman/private/mrx-com01/ Score Following contest planning list archives].&lt;br /&gt;
&lt;br /&gt;
Submissions will be required to estimate alignment precision according to the indexed times. In order for your system to participate, please specify the type of alignment (monophonic, polyphonic), the type of training, and the real-time performance; given enough submissions, results will also be separated into symbolic and audio domains. Note that we also accept systems that do not run in real-time in practice, as long as their algorithm is on-line, i.e., it makes no use of global knowledge of the input.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2010 ==&lt;br /&gt;
&lt;br /&gt;
Your comments here.&lt;br /&gt;
&lt;br /&gt;
== Evolution ==&lt;br /&gt;
This year's changes are proposed here and on the list, and are currently under discussion.  Proposed changes are mainly about the score and reference file formats and the evaluation metrics:&lt;br /&gt;
&lt;br /&gt;
* the proposed new score and reference file format is described here: [[2010:Score File Format]]&lt;br /&gt;
* evaluation metrics will more closely reflect the different approaches and applications of score following  &lt;br /&gt;
&lt;br /&gt;
See the details of last year's proposal on the [https://www.music-ir.org/mirex2006/index.php/Score_Following_Proposal MIREX 2006 Wiki]&lt;br /&gt;
&lt;br /&gt;
== Evaluation procedures ==&lt;br /&gt;
&lt;br /&gt;
The evaluation procedure consists of running score followers on a database of audio-to-score alignments, where each item contains a score and a performance audio file (for the system call) and a reference alignment (for evaluation). See below for details. &lt;br /&gt;
&lt;br /&gt;
=== I/O Format ===&lt;br /&gt;
Each system should conform to the following format:&lt;br /&gt;
&lt;br /&gt;
 ''doScofo.sh &amp;quot;/path/to/audiofile.wav&amp;quot; &amp;quot;/path/to/midi_score_file.mid&amp;quot; &amp;quot;/path/to/result/filename.txt&amp;quot;''&lt;br /&gt;
&lt;br /&gt;
The stdout and stderr will be logged.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;/path/to/result/filename.txt&amp;quot; should have one line per detected note, with the following 4 columns:&lt;br /&gt;
&lt;br /&gt;
   1. estimated note onset time in performance audio file (ms)&lt;br /&gt;
   2. detection time relative to performance audio file (ms)&lt;br /&gt;
   3. note start time in score (ms)&lt;br /&gt;
   4. MIDI note number in score (int) &lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 ''1800	1800	0	75''&lt;br /&gt;
 ''2021	2022	187.5	73''&lt;br /&gt;
 ''...	...	...	...''&lt;br /&gt;
&lt;br /&gt;
Remarks: The third column, the detected note's start time in the score, serves as the unique identifier of a note (or chord, for polyphonic scores), linking it to the ground-truth onset of that note within the reference alignment files. The fourth column, the MIDI note number, is there only for your convenience, to help you find your way around the result files if you know the melody in MIDI.&lt;br /&gt;
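&lt;br /&gt;
For illustration, the calling convention and the stdout/stderr logging described above could be exercised with a hypothetical Python harness like the following; the log-file names are assumptions.&lt;br /&gt;
&lt;br /&gt;
 import subprocess&lt;br /&gt;
 &lt;br /&gt;
 def run_submission(audio, score, result, log_prefix):&lt;br /&gt;
     # Calls the submission with the three arguments specified above and&lt;br /&gt;
     # captures stdout/stderr into log files (names are illustrative).&lt;br /&gt;
     with open(log_prefix + '.out', 'w') as out, open(log_prefix + '.err', 'w') as err:&lt;br /&gt;
         subprocess.call(['./doScofo.sh', audio, score, result], stdout=out, stderr=err)&lt;br /&gt;
 &lt;br /&gt;
 run_submission('audiofile.wav', 'midi_score_file.mid', 'filename.txt', 'scofo')&lt;br /&gt;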
&lt;br /&gt;
&lt;br /&gt;
=== Potential Participants === &lt;br /&gt;
&lt;br /&gt;
Wei-Ta Chu, National Chung Cheng University, Taiwan. Email: wtchu AT cs DOT ccu DOT edu DOT tw&lt;br /&gt;
Zhiyao Duan, Bryan Pardo, Northwestern University, USA. Email: zhiyaoduan00 AT gmail &amp;lt;dot&amp;gt; com&lt;/div&gt;</summary>
		<author><name>Proton</name></author>
		
	</entry>
</feed>