<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jose+R.+Zapata</id>
	<title>MIREX Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jose+R.+Zapata"/>
	<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/wiki/Special:Contributions/Jose_R._Zapata"/>
	<updated>2026-04-13T20:13:27Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.1</generator>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Beat_Tracking&amp;diff=10107</id>
		<title>2014:Audio Beat Tracking</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Beat_Tracking&amp;diff=10107"/>
		<updated>2014-06-17T19:58:10Z</updated>

		<summary type="html">&lt;p&gt;Jose R. Zapata: /* Potential Participants */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
The text of this section was copied from the 2012 Wiki.  Please add your comments and discussion at the bottom of this page.&lt;br /&gt;
&lt;br /&gt;
The aim of the automatic beat tracking task is to track all beat locations in a collection of sound files. Unlike the Audio Tempo Extraction task, whose aim is to estimate the tempo of each file, the beat tracking task aims at detecting every beat location in each recording. The algorithms will be evaluated in terms of their accuracy in predicting beat locations annotated by a group of listeners. &lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Collections ===&lt;br /&gt;
The original 2006 dataset contains 160 thirty-second excerpts (WAV format) used for the Audio Tempo and Beat contests in 2006. Beat locations in each excerpt have been annotated by 40 different listeners (39 listeners for a few excerpts). These audio recordings were selected to provide a stable tempo value, a wide distribution of tempo values, and a large variety of instrumentation and musical styles. About 20% of the files contain non-binary meters, and a small number of examples contain changing meters. One disadvantage of using this set for beat tracking is that the tempi are rather stable, so this set will not test beat-tracking algorithms on their ability to track tempo changes.&lt;br /&gt;
&lt;br /&gt;
The second collection comprises 367 Chopin Mazurkas, represented as full audio tracks (WAV format). The Mazurka dataset contains tempo changes, so it evaluates the ability of algorithms to track them.&lt;br /&gt;
&lt;br /&gt;
The third collection was assembled and donated in 2012. This dataset contains 217 excerpts of around 40s each, of which 19 are &amp;quot;easy&amp;quot; and the remaining 198 are &amp;quot;hard&amp;quot;. The harder excerpts were drawn from the following musical styles: Romantic music, film soundtracks, blues, chanson and solo guitar. &lt;br /&gt;
&lt;br /&gt;
This dataset has been designed for radically new techniques which can contend with challenging beat tracking situations such as quiet accompaniment, expressive timing, changes in time signature, slow tempo, poor sound quality, etc. So, if your beat tracker likes a 4/4 time signature with a steady tempo and needs clear percussive onsets, don't expect it to do very well!&lt;br /&gt;
But don't be deterred, this is for the good of beat tracking. &lt;br /&gt;
&lt;br /&gt;
You can read in detail about how the dataset was made here:&lt;br /&gt;
[http://dx.doi.org/10.1109/TASL.2012.2205244 ''Selective Sampling for Beat Tracking Evaluation'']&lt;br /&gt;
&lt;br /&gt;
=== Audio Formats ===&lt;br /&gt;
&lt;br /&gt;
The data are monophonic sound files, with the associated beat times and data about the annotation robustness:&lt;br /&gt;
&lt;br /&gt;
* CD-quality (PCM, 16-bit, 44100 Hz)&lt;br /&gt;
* single channel (mono)&lt;br /&gt;
* file lengths ranging from 30-second excerpts to full-length tracks (see the collections above)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Submissions to this task will have to conform to a specified format detailed below. Submissions should be packaged and contain at least two files: The algorithm itself and a README containing contact information and detailing, in full, the use of the algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Input Data ===&lt;br /&gt;
Participating algorithms will have to read audio in the following format (a minimal read sketch follows the list):&lt;br /&gt;
&lt;br /&gt;
* Sample rate: 44.1 kHz&lt;br /&gt;
* Sample size: 16 bit&lt;br /&gt;
* Number of channels: 1 (mono)&lt;br /&gt;
* Encoding: WAV &lt;br /&gt;
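&lt;br /&gt;
Purely as an illustration (not a submission requirement), audio in this format can be read with the Python standard library alone; the function name below is hypothetical:&lt;br /&gt;
&lt;br /&gt;
 import array&lt;br /&gt;
 import wave&lt;br /&gt;
 &lt;br /&gt;
 def read_mono_wav(path):&lt;br /&gt;
     # Read a 16-bit mono PCM WAV file; return (sample_rate, samples in [-1, 1]).&lt;br /&gt;
     with wave.open(path, 'rb') as w:&lt;br /&gt;
         assert w.getnchannels() == 1 and w.getsampwidth() == 2&lt;br /&gt;
         sr = w.getframerate()  # expected to be 44100 for this task&lt;br /&gt;
         pcm = array.array('h')  # 16-bit signed; assumes a little-endian host&lt;br /&gt;
         pcm.frombytes(w.readframes(w.getnframes()))&lt;br /&gt;
     return sr, [s / 32768.0 for s in pcm]&lt;br /&gt;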
&lt;br /&gt;
=== Output Data ===&lt;br /&gt;
&lt;br /&gt;
The beat tracking algorithms will return beat times in an ASCII text file for each input .wav audio file. The specification of this output file is immediately below.&lt;br /&gt;
&lt;br /&gt;
=== Output File Format (Audio Beat tracking) ===&lt;br /&gt;
&lt;br /&gt;
The Beat Tracking output file format is an ASCII text format. Each beat time is specified, in seconds, on its own line. Specifically, &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;beat time(in seconds)&amp;gt;\n&lt;br /&gt;
&lt;br /&gt;
where \n denotes the end of line. The &amp;lt; and &amp;gt; characters are not included. An example output file would look something like:&lt;br /&gt;
&lt;br /&gt;
 0.243&lt;br /&gt;
 0.486&lt;br /&gt;
 0.729&lt;br /&gt;
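&lt;br /&gt;
As a minimal Python sketch of producing this format (the three-decimal rounding is an illustrative choice, not a requirement):&lt;br /&gt;
&lt;br /&gt;
 def write_beat_times(beat_times, output_path):&lt;br /&gt;
     # One beat time in seconds per line, terminated by \n, as specified above.&lt;br /&gt;
     with open(output_path, 'w') as f:&lt;br /&gt;
         for t in beat_times:&lt;br /&gt;
             f.write('%.3f\n' % t)&lt;br /&gt;
 &lt;br /&gt;
 write_beat_times([0.243, 0.486, 0.729], 'beats.txt')&lt;br /&gt;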
&lt;br /&gt;
=== Algorithm Calling Format ===&lt;br /&gt;
&lt;br /&gt;
The submitted algorithm must take as arguments a SINGLE .wav file to perform beat tracking on, as well as the full output path and filename of the output file. The ability to specify the output path and file name is essential. Denoting the input .wav file path and name as %input and the output file path and name as %output, a program called foobar could be called from the command-line as follows:&lt;br /&gt;
&lt;br /&gt;
 foobar %input %output&lt;br /&gt;
 foobar -i %input -o %output&lt;br /&gt;
&lt;br /&gt;
Moreover, if your submission takes additional parameters, such as a detection threshold, foobar could be called like:&lt;br /&gt;
&lt;br /&gt;
 foobar .1 %input %output&lt;br /&gt;
 foobar -param1 .1 -i %input -o %output  &lt;br /&gt;
&lt;br /&gt;
If your submission is in MATLAB, it should be submitted as a function. Once again, the function must accept string arguments for the full paths and names of the input and output files. Parameters can also be specified as input arguments of the function. For example: &lt;br /&gt;
&lt;br /&gt;
 foobar('%input','%output')&lt;br /&gt;
 foobar(.1,'%input','%output')&lt;br /&gt;
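&lt;br /&gt;
To make the calling convention concrete, here is a hypothetical Python skeleton for the first form above (foobar %input %output); track_beats stands in for your own algorithm and is not a real function:&lt;br /&gt;
&lt;br /&gt;
 import sys&lt;br /&gt;
 &lt;br /&gt;
 def main():&lt;br /&gt;
     # Called as: foobar %input %output&lt;br /&gt;
     input_path, output_path = sys.argv[1], sys.argv[2]&lt;br /&gt;
     beat_times = track_beats(input_path)  # hypothetical: your beat tracker&lt;br /&gt;
     with open(output_path, 'w') as f:&lt;br /&gt;
         f.writelines('%.3f\n' % t for t in beat_times)&lt;br /&gt;
 &lt;br /&gt;
 if __name__ == '__main__':&lt;br /&gt;
     main()&lt;br /&gt;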
&lt;br /&gt;
&lt;br /&gt;
=== README File ===&lt;br /&gt;
&lt;br /&gt;
A README file accompanying each submission should contain explicit instructions on how to run the program (as well as contact information, etc.). In particular, each command line to run should be specified, using %input for the input sound file and %output for the resulting text file.&lt;br /&gt;
&lt;br /&gt;
For instance, to test the program foobar with different values for parameters param1, the README file would look like:&lt;br /&gt;
&lt;br /&gt;
 foobar -param1 .1 -i %input -o %output&lt;br /&gt;
 foobar -param1 .15 -i %input -o %output&lt;br /&gt;
 foobar -param1 .2 -i %input -o %output&lt;br /&gt;
 foobar -param1 .25 -i %input -o %output&lt;br /&gt;
 foobar -param1 .3 -i %input -o %output&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
For a submission using MATLAB, the README file could look like:&lt;br /&gt;
&lt;br /&gt;
 matlab -r &amp;quot;foobar(.1,'%input','%output');quit;&amp;quot;&lt;br /&gt;
 matlab -r &amp;quot;foobar(.15,'%input','%output');quit;&amp;quot;&lt;br /&gt;
 matlab -r &amp;quot;foobar(.2,'%input','%output');quit;&amp;quot; &lt;br /&gt;
 matlab -r &amp;quot;foobar(.25,'%input','%output');quit;&amp;quot;&lt;br /&gt;
 matlab -r &amp;quot;foobar(.3,'%input','%output');quit;&amp;quot;&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The different command lines to evaluate the performance of each parameter set over the whole database will be generated automatically from each line in the README file containing both '%input' and '%output' strings.&lt;br /&gt;
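&lt;br /&gt;
The harness itself is not public, but the substitution rule just described can be sketched in Python as follows (the paths shown are placeholders):&lt;br /&gt;
&lt;br /&gt;
 def expand_readme_line(line, input_path, output_path):&lt;br /&gt;
     # Only lines mentioning both markers become runnable command lines.&lt;br /&gt;
     if '%input' in line and '%output' in line:&lt;br /&gt;
         return line.replace('%input', input_path).replace('%output', output_path)&lt;br /&gt;
     return None&lt;br /&gt;
 &lt;br /&gt;
 cmd = expand_readme_line('foobar -param1 .1 -i %input -o %output',&lt;br /&gt;
                          '/path/to/song.wav', '/path/to/song.txt')&lt;br /&gt;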
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
&lt;br /&gt;
The evaluation methods are taken from the beat evaluation toolbox and are described in the following technical report:&lt;br /&gt;
&lt;br /&gt;
 M. E. P. Davies, N. Degara and M. D. Plumbley, &amp;quot;Evaluation methods for musical audio beat tracking algorithms,&amp;quot; [http://www.elec.qmul.ac.uk/people/markp/2009/DaviesDegaraPlumbley09-evaluation-tr.pdf ''Technical Report C4DM-TR-09-06''].&lt;br /&gt;
&lt;br /&gt;
For further details on the specifics of the methods, please refer to the paper. A brief summary with appropriate references follows; a minimal F-measure sketch appears after the list:&lt;br /&gt;
&lt;br /&gt;
*'''F-measure''' - the standard calculation as used in onset evaluation but with a 70ms window. &lt;br /&gt;
&lt;br /&gt;
 S. Dixon, &amp;quot;Onset detection revisited,&amp;quot; in ''Proceedings of 9th&lt;br /&gt;
 International Conference on Digital Audio Effects (DAFx)'', Montreal,&lt;br /&gt;
 Canada, pp. 133-137, 2006.&lt;br /&gt;
&lt;br /&gt;
 S. Dixon, &amp;quot;Evaluation of the audio beat tracking system BeatRoot,&amp;quot; ''Journal&lt;br /&gt;
 of New Music Research'', vol. 36, no. 1, pp. 39-51, 2007.&lt;br /&gt;
&lt;br /&gt;
*'''Cemgil''' - beat accuracy is calculated using a Gaussian error function with 40ms standard deviation.&lt;br /&gt;
&lt;br /&gt;
 A. T. Cemgil, B. Kappen, P. Desain, and H. Honing, &amp;quot;On tempo tracking:&lt;br /&gt;
 Tempogram representation and Kalman filtering,&amp;quot; ''Journal Of New Music&lt;br /&gt;
 Research'', vol. 28, no. 4, pp. 259-273, 2001&lt;br /&gt;
 &lt;br /&gt;
*'''Goto''' - binary decision of correct or incorrect tracking based on statistical properties of a beat error sequence.&lt;br /&gt;
&lt;br /&gt;
 M. Goto and Y. Muraoka, &amp;quot;Issues in evaluating beat tracking systems,&amp;quot; in&lt;br /&gt;
 ''Working Notes of the IJCAI-97 Workshop on Issues in AI and Music -&lt;br /&gt;
 Evaluation and Assessment'', 1997, pp. 9-16.&lt;br /&gt;
&lt;br /&gt;
*'''PScore''' - McKinney's impulse train cross-correlation method as used in 2006.&lt;br /&gt;
&lt;br /&gt;
 M. F. McKinney, D. Moelants, M. E. P. Davies, and A. Klapuri,&lt;br /&gt;
 &amp;quot;Evaluation of audio beat tracking and music tempo extraction&lt;br /&gt;
 algorithms,&amp;quot; ''Journal of New Music Research'', vol. 36, no. 1, pp. 1-16,&lt;br /&gt;
 2007.&lt;br /&gt;
&lt;br /&gt;
*'''CMLc''', '''CMLt''', '''AMLc''', '''AMLt''' - continuity-based evaluation methods based on the longest continuously correctly tracked section. &lt;br /&gt;
&lt;br /&gt;
 S. Hainsworth, &amp;quot;Techniques for the automated analysis of musical audio,&amp;quot;&lt;br /&gt;
 Ph.D. dissertation, Department of Engineering, Cambridge University,&lt;br /&gt;
 2004.&lt;br /&gt;
&lt;br /&gt;
 A. P. Klapuri, A. Eronen, and J. Astola, &amp;quot;Analysis of the meter of&lt;br /&gt;
 acoustic musical signals,&amp;quot; ''IEEE Transactions on Audio, Speech and&lt;br /&gt;
 Language Processing'', vol. 14, no. 1, pp. 342-355, 2006.&lt;br /&gt;
&lt;br /&gt;
*'''D''', '''Dg''' - information-based criteria derived from analysis of a beat error histogram (note that the results are measured in 'bits' and not percentages); see the technical report for a description.&lt;br /&gt;
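&lt;br /&gt;
As a minimal illustration of the F-measure variant described above (the beat evaluation toolbox is the authoritative implementation; this greedy-matching Python sketch is only indicative):&lt;br /&gt;
&lt;br /&gt;
 def beat_fmeasure(estimates, annotations, window=0.07):&lt;br /&gt;
     # Each annotation may be matched by at most one estimate within +/-70 ms.&lt;br /&gt;
     matched = set()&lt;br /&gt;
     hits = 0&lt;br /&gt;
     for a in annotations:&lt;br /&gt;
         for i, e in enumerate(estimates):&lt;br /&gt;
             if i not in matched and abs(e - a) &amp;lt;= window:&lt;br /&gt;
                 matched.add(i)&lt;br /&gt;
                 hits += 1&lt;br /&gt;
                 break&lt;br /&gt;
     if not estimates or not annotations:&lt;br /&gt;
         return 0.0&lt;br /&gt;
     precision = hits / len(estimates)&lt;br /&gt;
     recall = hits / len(annotations)&lt;br /&gt;
     if precision + recall == 0:&lt;br /&gt;
         return 0.0&lt;br /&gt;
     return 2 * precision * recall / (precision + recall)&lt;br /&gt;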
&lt;br /&gt;
== Relevant Development Collections ==&lt;br /&gt;
The development collections can be found here:&lt;br /&gt;
&lt;br /&gt;
https://www.music-ir.org/evaluation/MIREX/data/2006/beat/&lt;br /&gt;
&lt;br /&gt;
User: beattrack Password: b34trx&lt;br /&gt;
&lt;br /&gt;
https://www.music-ir.org/evaluation/MIREX/data/2006/tempo/&lt;br /&gt;
&lt;br /&gt;
User: tempo Password: t3mp0&lt;br /&gt;
&lt;br /&gt;
The data have been uploaded in both .tgz and .zip formats.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions will be imposed.&lt;br /&gt;
&lt;br /&gt;
A hard limit of 12 hours will be imposed on analysis times. Submissions exceeding this limit may not receive a result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
Jose R. Zapata / joser.zapata (at) upb.edu.co&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;/div&gt;</summary>
		<author><name>Jose R. Zapata</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Downbeat_Estimation&amp;diff=10106</id>
		<title>2014:Audio Downbeat Estimation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Downbeat_Estimation&amp;diff=10106"/>
		<updated>2014-06-17T19:57:11Z</updated>

		<summary type="html">&lt;p&gt;Jose R. Zapata: /* Potential Participants */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
'''This task is new for 2014!'''&lt;br /&gt;
&lt;br /&gt;
This text has been adapted from the Audio Beat Tracking Wiki page.  Please add your comments and discussion at the bottom of this page.&lt;br /&gt;
&lt;br /&gt;
The aim of the automatic downbeat estimation task is to identify the locations of downbeats in a collection of sound files. While this is similar to the Audio Beat Tracking task, here the aim is to find the first beat of each bar (measure) rather than all beat times. Algorithms are '''not''' required to estimate beat times or time-signature in addition to downbeats.&lt;br /&gt;
&lt;br /&gt;
Submitted algorithms will be evaluated in terms of their accuracy in finding downbeat locations (only) as annotated by musical experts across several diverse datasets.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
&lt;br /&gt;
=== Collections ===&lt;br /&gt;
'''Ballroom'''&lt;br /&gt;
The ballroom dataset contains eight different dance styles (Cha Cha, Jive, Quickstep, Rumba, Samba, Tango, Viennese Waltz and Waltz). It consists of '''697''' excerpts of 30s in duration.&lt;br /&gt;
The dataset contains two different meters (3/4 and 4/4), but all pieces have a constant meter. For further information see Dixon et al. (2004) and Krebs et al. (2013).&lt;br /&gt;
Note that we are using the ground truth annotations from Krebs et al. (2013), available at https://github.com/CPJKU/BallroomAnnotations&lt;br /&gt;
&lt;br /&gt;
'''Isophonics (Beatles only)'''&lt;br /&gt;
The Beatles dataset from the Centre for Digital Music at Queen Mary, University of London (http://www.isophonics.net/), as also used for Audio Chord Estimation in MIREX for many years. &lt;br /&gt;
This dataset contains '''179''' complete songs (all except Revolution 9), the majority of which are in 4/4.&lt;br /&gt;
For further information see Mauch et al (2009).&lt;br /&gt;
&lt;br /&gt;
'''Turkish Data'''&lt;br /&gt;
The Turkish corpus is an extended version of the annotated data used in Srinivasamurthy et al. (2014). It includes '''82''' excerpts of one minute in length each, and each piece belongs to one of three rhythm classes that are referred to as usul in Turkish Art music: 32 pieces are in the 9/8-usul Aksak, 20 in the 10/8-usul Curcuna, and 30 in the 8/8-usul Düyek.&lt;br /&gt;
&lt;br /&gt;
'''Cretan Data'''&lt;br /&gt;
The corpus of Cretan music consists of '''42''' full-length pieces of Cretan leaping dances. While there are several dances that differ in terms of their steps, the differences in the sound are most noticeable in the melodic content, and all pieces can be considered to belong to one rhythmic style. All these dances are usually notated using a 2/4 time signature, and the accompanying rhythmic patterns are usually played on a Cretan lute. While a variety of rhythmic patterns exist, they do not relate to a specific dance and can be assumed to occur in all of the 42 songs in this corpus.&lt;br /&gt;
&lt;br /&gt;
'''Carnatic Data'''&lt;br /&gt;
The Carnatic music dataset is a subset of the CompMusic [http://compmusic.upf.edu/carnatic-rhythm-dataset Carnatic Music Rhythm Dataset]. It includes '''118''' two-minute excerpts spanning the four most commonly used tālas (the rhythmic framework of Carnatic music, consisting of time cycles). There are 30 examples in each of ādi tāla (8 beats/cycle), rūpaka tāla (3 beats/cycle) and miśra chāpu tāla (7 beats/cycle), and 28 examples in khaṇḍa chāpu tāla (5 beats/cycle). The beats of the tāla in miśra chāpu and khaṇḍa chāpu are non-uniform, but for consistency with the other datasets a uniform beat pulse was obtained by interpolating the non-uniformly spaced beat locations. The recordings consist of both vocal and instrumental music representative of present-day performance practice. All recordings contain percussion accompaniment, mainly the Mridangam. &lt;br /&gt;
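&lt;br /&gt;
One plausible reading of that interpolation step, sketched in Python under the assumption that annotated cycle boundaries and the nominal beats per cycle are known (both names below are hypothetical, not part of the dataset's documented tooling):&lt;br /&gt;
&lt;br /&gt;
 def uniform_pulse(cycle_starts, beats_per_cycle):&lt;br /&gt;
     # Replace non-uniform beats with equally spaced ones inside each cycle.&lt;br /&gt;
     beats = []&lt;br /&gt;
     for a, b in zip(cycle_starts, cycle_starts[1:]):&lt;br /&gt;
         step = (b - a) / beats_per_cycle&lt;br /&gt;
         beats.extend(a + k * step for k in range(beats_per_cycle))&lt;br /&gt;
     return beats&lt;br /&gt;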
&lt;br /&gt;
'''HJDB''' (to be confirmed)&lt;br /&gt;
The HJDB dataset contains '''236''' excerpts of Hardcore, Jungle and Drum and Bass music between 30s and 2 minutes in length. All excerpts are in 4/4 and have a constant tempo. &lt;br /&gt;
For further information see Hockman et al (2012).&lt;br /&gt;
&lt;br /&gt;
In total this makes '''1354''' excerpts (of which 259 are full-length songs).&lt;br /&gt;
&lt;br /&gt;
=== Audio Formats ===&lt;br /&gt;
&lt;br /&gt;
The data are monophonic sound files:&lt;br /&gt;
&lt;br /&gt;
* CD-quality (PCM, 16-bit, 44100 Hz) for all except Ballroom (originally lower quality, but resampled to 44100 Hz)&lt;br /&gt;
* single channel (mono)&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Submissions to this task will have to conform to a specified format detailed below. Submissions should be packaged and contain at least two files: The algorithm itself and a README containing contact information and detailing, in full, the use of the algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Input Data ===&lt;br /&gt;
Participating algorithms will have to read audio in the following format:&lt;br /&gt;
&lt;br /&gt;
* Sample rate: 44.1 kHz&lt;br /&gt;
* Sample size: 16 bit&lt;br /&gt;
* Number of channels: 1 (mono)&lt;br /&gt;
* Encoding: WAV &lt;br /&gt;
&lt;br /&gt;
=== Output Data ===&lt;br /&gt;
&lt;br /&gt;
The downbeat estimation algorithms will return downbeat times in an ASCII text file for each input .wav audio file. The specification of this output file is immediately below.&lt;br /&gt;
&lt;br /&gt;
=== Output File Format (Audio Downbeat Estimation) ===&lt;br /&gt;
&lt;br /&gt;
The downbeat output file format is an ASCII text format. Each downbeat time is specified, in seconds, on its own line. Specifically, &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;downbeat time (in seconds)&amp;gt;\n&lt;br /&gt;
&lt;br /&gt;
where \n denotes the end of line. The &amp;lt; and &amp;gt; characters are not included. An example output file would look something like:&lt;br /&gt;
&lt;br /&gt;
 0.243&lt;br /&gt;
 1.486&lt;br /&gt;
 2.729&lt;br /&gt;
&lt;br /&gt;
=== Algorithm Calling Format ===&lt;br /&gt;
&lt;br /&gt;
The submitted algorithm must take as arguments a SINGLE .wav file to perform the downbeat estimation on, as well as the full output path and filename of the output file. The ability to specify the output path and file name is essential. Denoting the input .wav file path and name as %input and the output file path and name as %output, a program called foobar could be called from the command-line as follows:&lt;br /&gt;
&lt;br /&gt;
 foobar %input %output&lt;br /&gt;
 foobar -i %input -o %output&lt;br /&gt;
&lt;br /&gt;
Moreover, if your submission takes additional parameters, such as a detection threshold, foobar could be called like:&lt;br /&gt;
&lt;br /&gt;
 foobar .1 %input %output&lt;br /&gt;
 foobar -param1 .1 -i %input -o %output  &lt;br /&gt;
&lt;br /&gt;
If your submission is in MATLAB, it should be submitted as a function. Once again, the function must accept string arguments for the full paths and names of the input and output files. Parameters can also be specified as input arguments of the function. For example: &lt;br /&gt;
&lt;br /&gt;
 foobar('%input','%output')&lt;br /&gt;
 foobar(.1,'%input','%output')&lt;br /&gt;
&lt;br /&gt;
=== README File ===&lt;br /&gt;
&lt;br /&gt;
A README file accompanying each submission should contain explicit instructions on how to run the program (as well as contact information, etc.). In particular, each command line to run should be specified, using %input for the input sound file and %output for the resulting text file.&lt;br /&gt;
&lt;br /&gt;
For instance, to test the program foobar with different values for parameters param1, the README file would look like:&lt;br /&gt;
&lt;br /&gt;
 foobar -param1 .1 -i %input -o %output&lt;br /&gt;
 foobar -param1 .15 -i %input -o %output&lt;br /&gt;
 foobar -param1 .2 -i %input -o %output&lt;br /&gt;
 foobar -param1 .25 -i %input -o %output&lt;br /&gt;
 foobar -param1 .3 -i %input -o %output&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
For a submission using MATLAB, the README file could look like:&lt;br /&gt;
&lt;br /&gt;
 matlab -r &amp;quot;foobar(.1,'%input','%output');quit;&amp;quot;&lt;br /&gt;
 matlab -r &amp;quot;foobar(.15,'%input','%output');quit;&amp;quot;&lt;br /&gt;
 matlab -r &amp;quot;foobar(.2,'%input','%output');quit;&amp;quot; &lt;br /&gt;
 matlab -r &amp;quot;foobar(.25,'%input','%output');quit;&amp;quot;&lt;br /&gt;
 matlab -r &amp;quot;foobar(.3,'%input','%output');quit;&amp;quot;&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The different command lines to evaluate the performance of each parameter set over the whole database will be generated automatically from each line in the README file containing both '%input' and '%output' strings.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
&lt;br /&gt;
For the evaluation procedure we will use:&lt;br /&gt;
*'''F-measure''' - the standard calculation as used in onset and beat tracking evaluation with a +/-70ms window; see Dixon (2007).&lt;br /&gt;
&lt;br /&gt;
Given the high diversity of musical styles included in the task, results will be reported for each individual dataset.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions will be imposed.&lt;br /&gt;
&lt;br /&gt;
A hard limit of 24 hours will be imposed on analysis times. Submissions exceeding this limit may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
Jose R. Zapata / joser.zapata (at) upb.edu.co&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;br /&gt;
&lt;br /&gt;
S. Dixon, F. Gouyon and G. Widmer, [http://ismir2004.ismir.net/proceedings/p093-page-509-paper165.pdf Towards Characterisation of Music via Rhythmic Patterns], In Proceedings of the 5th International Conference on Music Information Retrieval (ISMIR 2004), pp. 509-516.&lt;br /&gt;
&lt;br /&gt;
S. Dixon, [http://www.eecs.qmul.ac.uk/~simond/pub/2007/jnmr07.pdf Evaluation of the Audio Beat Tracking System BeatRoot], Journal of New Music Research, vol. 36, no. 1, pp. 39-51, 2007.&lt;br /&gt;
&lt;br /&gt;
J. A. Hockman, M. E. P. Davies and I. Fujinaga, [http://ismir2012.ismir.net/event/papers/169-ismir-2012.pdf One in the Jungle: Downbeat Detection in Hardcore, Jungle, and Drum and Bass], In Proceedings of the 13th International Society for Music Information Retrieval Conference (ISMIR), Porto, Portugal, pp. 169-174, 2012.&lt;br /&gt;
&lt;br /&gt;
F. Krebs, S. Böck and G. Widmer, [http://www.cp.jku.at/research/papers/Krebs_etal_ISMIR_2013.pdf Rhythmic Pattern Modeling for Beat- and Downbeat Tracking in Musical Audio], In Proceedings of the 14th International Society for Music Information Retrieval Conference (ISMIR), Curitiba, Brazil, 2013.&lt;br /&gt;
&lt;br /&gt;
M. Mauch, C. Cannam, M. E. P. Davies, S. Dixon, C. Harte, S. Kolozali and D. Tidhar, [http://ismir2009.ismir.net/proceedings/LBD-18.pdf OMRAS2 Metadata Project 2009], Late-breaking session at the 10th International Conference on Music Information Retrieval, 2009.&lt;br /&gt;
&lt;br /&gt;
A. Srinivasamurthy, A. Holzapfel and X. Serra, [http://www.tandfonline.com/doi/full/10.1080/09298215.2013.879902 In Search of Automatic Rhythm Analysis Methods for Turkish and Indian Art Music], Journal of New Music Research, vol. 43, no. 1, pp. 94-114, 2014.&lt;/div&gt;</summary>
		<author><name>Jose R. Zapata</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Downbeat_Estimation&amp;diff=10105</id>
		<title>2014:Audio Downbeat Estimation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Downbeat_Estimation&amp;diff=10105"/>
		<updated>2014-06-17T19:56:51Z</updated>

		<summary type="html">&lt;p&gt;Jose R. Zapata: /* Potential Participants */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
'''This task is new for 2014!'''&lt;br /&gt;
&lt;br /&gt;
This text has been adapted from the Audio Beat Tracking Wiki page.  Please add your comments and discussion at the bottom of this page.&lt;br /&gt;
&lt;br /&gt;
The aim of the automatic downbeat estimation task is to identify the locations of downbeats in a collection of sound files. While this is similar to the Audio Beat Tracking task, here the aim is to find the first beat of each bar (measure) rather than all beat times. Algorithms are '''not''' required to estimate beat times or time-signature in addition to downbeats.&lt;br /&gt;
&lt;br /&gt;
Submitted algorithms will be evaluated in terms of their accuracy in finding downbeat locations (only) as annotated by musical experts across several diverse datasets.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
&lt;br /&gt;
=== Collections ===&lt;br /&gt;
'''Ballroom'''&lt;br /&gt;
The ballroom dataset contains eight different dance styles (Cha Cha, Jive, Quickstep, Rumba, Samba, Tango, Viennese Waltz and Waltz). It consists of '''697''' excerpts of 30s in duration.&lt;br /&gt;
The dataset contains two different meters (3/4 and 4/4), but all pieces have a constant meter. For further information see Dixon et al. (2004) and Krebs et al. (2013).&lt;br /&gt;
Note that we are using the ground truth annotations from Krebs et al. (2013), available at https://github.com/CPJKU/BallroomAnnotations&lt;br /&gt;
&lt;br /&gt;
'''Isophonics (Beatles only)'''&lt;br /&gt;
The Beatles dataset from the Centre for Digital Music at Queen Mary, University of London (http://www.isophonics.net/), as also used for Audio Chord Estimation in MIREX for many years. &lt;br /&gt;
This dataset contains '''179''' complete songs (all except Revolution 9), the majority of which are in 4/4.&lt;br /&gt;
For further information see Mauch et al (2009).&lt;br /&gt;
&lt;br /&gt;
'''Turkish Data'''&lt;br /&gt;
The Turkish corpus is an extended version of the annotated data used in Srinivasamurthy et al. (2014). It includes '''82''' excerpts of one minute in length each, and each piece belongs to one of three rhythm classes that are referred to as usul in Turkish Art music: 32 pieces are in the 9/8-usul Aksak, 20 in the 10/8-usul Curcuna, and 30 in the 8/8-usul Düyek.&lt;br /&gt;
&lt;br /&gt;
'''Cretan Data'''&lt;br /&gt;
The corpus of Cretan music consists of '''42''' full-length pieces of Cretan leaping dances. While there are several dances that differ in terms of their steps, the differences in the sound are most noticeable in the melodic content, and all pieces can be considered to belong to one rhythmic style. All these dances are usually notated using a 2/4 time signature, and the accompanying rhythmic patterns are usually played on a Cretan lute. While a variety of rhythmic patterns exist, they do not relate to a specific dance and can be assumed to occur in all of the 42 songs in this corpus.&lt;br /&gt;
&lt;br /&gt;
'''Carnatic Data'''&lt;br /&gt;
The Carnatic music dataset is a subset of the CompMusic [http://compmusic.upf.edu/carnatic-rhythm-dataset Carnatic Music Rhythm Dataset]. It includes '''118''' two-minute excerpts spanning the four most commonly used tālas (the rhythmic framework of Carnatic music, consisting of time cycles). There are 30 examples in each of ādi tāla (8 beats/cycle), rūpaka tāla (3 beats/cycle) and miśra chāpu tāla (7 beats/cycle), and 28 examples in khaṇḍa chāpu tāla (5 beats/cycle). The beats of the tāla in miśra chāpu and khaṇḍa chāpu are non-uniform, but for consistency with the other datasets a uniform beat pulse was obtained by interpolating the non-uniformly spaced beat locations. The recordings consist of both vocal and instrumental music representative of present-day performance practice. All recordings contain percussion accompaniment, mainly the Mridangam. &lt;br /&gt;
&lt;br /&gt;
'''HJDB''' (to be confirmed)&lt;br /&gt;
The HJDB dataset contains '''236''' excerpts of Hardcore, Jungle and Drum and Bass music between 30s and 2 minutes in length. All excerpts are in 4/4 and have a constant tempo. &lt;br /&gt;
For further information see Hockman et al (2012).&lt;br /&gt;
&lt;br /&gt;
In total this makes '''1354''' excerpts (of which 259 are full-length songs).&lt;br /&gt;
&lt;br /&gt;
=== Audio Formats ===&lt;br /&gt;
&lt;br /&gt;
The data are monophonic sound files:&lt;br /&gt;
&lt;br /&gt;
* CD-quality (PCM, 16-bit, 44100 Hz) for all except Ballroom (originally lower quality, but resampled to 44100 Hz)&lt;br /&gt;
* single channel (mono)&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Submissions to this task will have to conform to a specified format detailed below. Submissions should be packaged and contain at least two files: The algorithm itself and a README containing contact information and detailing, in full, the use of the algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Input Data ===&lt;br /&gt;
Participating algorithms will have to read audio in the following format:&lt;br /&gt;
&lt;br /&gt;
* Sample rate: 44.1 kHz&lt;br /&gt;
* Sample size: 16 bit&lt;br /&gt;
* Number of channels: 1 (mono)&lt;br /&gt;
* Encoding: WAV &lt;br /&gt;
&lt;br /&gt;
=== Output Data ===&lt;br /&gt;
&lt;br /&gt;
The downbeat estimation algorithms will return downbeat times in an ASCII text file for each input .wav audio file. The specification of this output file is immediately below.&lt;br /&gt;
&lt;br /&gt;
=== Output File Format (Audio Downbeat Estimation) ===&lt;br /&gt;
&lt;br /&gt;
The downbeat output file format is an ASCII text format. Each downbeat time is specified, in seconds, on its own line. Specifically, &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;downbeat time (in seconds)&amp;gt;\n&lt;br /&gt;
&lt;br /&gt;
where \n denotes the end of line. The &amp;lt; and &amp;gt; characters are not included. An example output file would look something like:&lt;br /&gt;
&lt;br /&gt;
 0.243&lt;br /&gt;
 1.486&lt;br /&gt;
 2.729&lt;br /&gt;
&lt;br /&gt;
=== Algorithm Calling Format ===&lt;br /&gt;
&lt;br /&gt;
The submitted algorithm must take as arguments a SINGLE .wav file to perform the downbeat estimation on, as well as the full output path and filename of the output file. The ability to specify the output path and file name is essential. Denoting the input .wav file path and name as %input and the output file path and name as %output, a program called foobar could be called from the command-line as follows:&lt;br /&gt;
&lt;br /&gt;
 foobar %input %output&lt;br /&gt;
 foobar -i %input -o %output&lt;br /&gt;
&lt;br /&gt;
Moreover, if your submission takes additional parameters, such as a detection threshold, foobar could be called like:&lt;br /&gt;
&lt;br /&gt;
 foobar .1 %input %output&lt;br /&gt;
 foobar -param1 .1 -i %input -o %output  &lt;br /&gt;
&lt;br /&gt;
If your submission is in MATLAB, it should be submitted as a function. Once again, the function must accept string arguments for the full paths and names of the input and output files. Parameters can also be specified as input arguments of the function. For example: &lt;br /&gt;
&lt;br /&gt;
 foobar('%input','%output')&lt;br /&gt;
 foobar(.1,'%input','%output')&lt;br /&gt;
&lt;br /&gt;
=== README File ===&lt;br /&gt;
&lt;br /&gt;
A README file accompanying each submission should contain explicit instructions on how to run the program (as well as contact information, etc.). In particular, each command line to run should be specified, using %input for the input sound file and %output for the resulting text file.&lt;br /&gt;
&lt;br /&gt;
For instance, to test the program foobar with different values for parameters param1, the README file would look like:&lt;br /&gt;
&lt;br /&gt;
 foobar -param1 .1 -i %input -o %output&lt;br /&gt;
 foobar -param1 .15 -i %input -o %output&lt;br /&gt;
 foobar -param1 .2 -i %input -o %output&lt;br /&gt;
 foobar -param1 .25 -i %input -o %output&lt;br /&gt;
 foobar -param1 .3 -i %input -o %output&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
For a submission using MATLAB, the README file could look like:&lt;br /&gt;
&lt;br /&gt;
 matlab -r &amp;quot;foobar(.1,'%input','%output');quit;&amp;quot;&lt;br /&gt;
 matlab -r &amp;quot;foobar(.15,'%input','%output');quit;&amp;quot;&lt;br /&gt;
 matlab -r &amp;quot;foobar(.2,'%input','%output');quit;&amp;quot; &lt;br /&gt;
 matlab -r &amp;quot;foobar(.25,'%input','%output');quit;&amp;quot;&lt;br /&gt;
 matlab -r &amp;quot;foobar(.3,'%input','%output');quit;&amp;quot;&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The different command lines to evaluate the performance of each parameter set over the whole database will be generated automatically from each line in the README file containing both '%input' and '%output' strings.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
&lt;br /&gt;
For the evaluation procedure we will use:&lt;br /&gt;
*'''F-measure''' - the standard calculation as used in onset and beat tracking evaluation with a +/-70ms window; see Dixon (2007).&lt;br /&gt;
&lt;br /&gt;
Given the high diversity of musical styles included in the task, results will be reported for each individual dataset.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions will be imposed.&lt;br /&gt;
&lt;br /&gt;
A hard limit of 24 hours will be imposed on analysis times. Submissions exceeding this limit may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
Jose R. Zapata / joserr.zapata (at) upb.edu.co&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;br /&gt;
&lt;br /&gt;
S. Dixon, F. Gouyon and G. Widmer, [http://ismir2004.ismir.net/proceedings/p093-page-509-paper165.pdf Towards Characterisation of Music via Rhythmic Patterns], In Proceedings of the 5th International Conference on Music Information Retrieval (ISMIR 2004), pp. 509-516.&lt;br /&gt;
&lt;br /&gt;
S. Dixon, [http://www.eecs.qmul.ac.uk/~simond/pub/2007/jnmr07.pdf Evaluation of the Audio Beat Tracking System BeatRoot], Journal of New Music Research, vol. 36, no. 1, pp. 39-51, 2007.&lt;br /&gt;
&lt;br /&gt;
J. A. Hockman, M. E. P. Davies and I. Fujinaga, [http://ismir2012.ismir.net/event/papers/169-ismir-2012.pdf One in the Jungle: Downbeat Detection in Hardcore, Jungle, and Drum and Bass], In Proceedings of the 13th International Society for Music Information Retrieval Conference (ISMIR), Porto, Portugal, pp. 169-174, 2012.&lt;br /&gt;
&lt;br /&gt;
F. Krebs, S. Böck and G. Widmer, [http://www.cp.jku.at/research/papers/Krebs_etal_ISMIR_2013.pdf Rhythmic Pattern Modeling for Beat- and Downbeat Tracking in Musical Audio], In Proceedings of the 14th International Society for Music Information Retrieval Conference (ISMIR), Curitiba, Brazil, 2013.&lt;br /&gt;
&lt;br /&gt;
M. Mauch, C. Cannam, M. E. P. Davies, S. Dixon, C. Harte, S. Kolozali and D. Tidhar, [http://ismir2009.ismir.net/proceedings/LBD-18.pdf OMRAS2 Metadata Project 2009], Late-breaking session at the 10th International Conference on Music Information Retrieval, 2009.&lt;br /&gt;
&lt;br /&gt;
A. Srinivasamurthy, A. Holzapfel and X. Serra, [http://www.tandfonline.com/doi/full/10.1080/09298215.2013.879902 In Search of Automatic Rhythm Analysis Methods for Turkish and Indian Art Music], Journal of New Music Research, vol. 43, no. 1, pp. 94-114, 2014.&lt;/div&gt;</summary>
		<author><name>Jose R. Zapata</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Downbeat_Estimation&amp;diff=10104</id>
		<title>2014:Audio Downbeat Estimation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Downbeat_Estimation&amp;diff=10104"/>
		<updated>2014-06-17T19:56:29Z</updated>

		<summary type="html">&lt;p&gt;Jose R. Zapata: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
'''This task is new for 2014!'''&lt;br /&gt;
&lt;br /&gt;
This text has been adapted from the Audio Beat Tracking Wiki page.  Please add your comments and discussion at the bottom of this page.&lt;br /&gt;
&lt;br /&gt;
The aim of the automatic downbeat estimation task is to identify the locations of downbeats in a collection of sound files. While this is similar to the Audio Beat Tracking task, here the aim is to find the first beat of each bar (measure) rather than all beat times. Algorithms are '''not''' required to estimate beat times or time-signature in addition to downbeats.&lt;br /&gt;
&lt;br /&gt;
Submitted algorithms will be evaluated in terms of their accuracy in finding downbeat locations (only) as annotated by musical experts across several diverse datasets.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
&lt;br /&gt;
=== Collections ===&lt;br /&gt;
'''Ballroom'''&lt;br /&gt;
The ballroom dataset contains eight different dance styles (Cha Cha, Jive, Quickstep, Rumba, Samba, Tango, Viennese Waltz and Waltz). It consists of '''697''' excerpts of 30s in duration.&lt;br /&gt;
The dataset contains two different meters (3/4 and 4/4), but all pieces have a constant meter. For further information see Dixon et al. (2004) and Krebs et al. (2013).&lt;br /&gt;
Note that we are using the ground truth annotations from Krebs et al. (2013), available at https://github.com/CPJKU/BallroomAnnotations&lt;br /&gt;
&lt;br /&gt;
'''Isophonics (Beatles only)'''&lt;br /&gt;
The Beatles dataset from the Centre for Digital Music at Queen Mary, University of London (http://www.isophonics.net/), as also used for Audio Chord Estimation in MIREX for many years. &lt;br /&gt;
This dataset contains '''179''' complete songs (all except Revolution 9), the majority of which are in 4/4.&lt;br /&gt;
For further information see Mauch et al (2009).&lt;br /&gt;
&lt;br /&gt;
'''Turkish Data'''&lt;br /&gt;
The Turkish corpus is an extended version of the annotated data used in Srinivasamurthy et al. (2014). It includes '''82''' excerpts of one minute in length each, and each piece belongs to one of three rhythm classes that are referred to as usul in Turkish Art music: 32 pieces are in the 9/8-usul Aksak, 20 in the 10/8-usul Curcuna, and 30 in the 8/8-usul Düyek.&lt;br /&gt;
&lt;br /&gt;
'''Cretan Data'''&lt;br /&gt;
The corpus of Cretan music consists of '''42''' full-length pieces of Cretan leaping dances. While there are several dances that differ in terms of their steps, the differences in the sound are most noticeable in the melodic content, and all pieces can be considered to belong to one rhythmic style. All these dances are usually notated using a 2/4 time signature, and the accompanying rhythmic patterns are usually played on a Cretan lute. While a variety of rhythmic patterns exist, they do not relate to a specific dance and can be assumed to occur in all of the 42 songs in this corpus.&lt;br /&gt;
&lt;br /&gt;
'''Carnatic Data'''&lt;br /&gt;
The Carnatic music dataset is a subset of the CompMusic [http://compmusic.upf.edu/carnatic-rhythm-dataset Carnatic Music Rhythm Dataset]. It includes '''118''' two-minute excerpts spanning the four most commonly used tālas (the rhythmic framework of Carnatic music, consisting of time cycles). There are 30 examples in each of ādi tāla (8 beats/cycle), rūpaka tāla (3 beats/cycle) and miśra chāpu tāla (7 beats/cycle), and 28 examples in khaṇḍa chāpu tāla (5 beats/cycle). The beats of the tāla in miśra chāpu and khaṇḍa chāpu are non-uniform, but for consistency with the other datasets a uniform beat pulse was obtained by interpolating the non-uniformly spaced beat locations. The recordings consist of both vocal and instrumental music representative of present-day performance practice. All recordings contain percussion accompaniment, mainly the Mridangam. &lt;br /&gt;
&lt;br /&gt;
'''HJDB''' (to be confirmed)&lt;br /&gt;
The HJDB dataset contains '''236''' excerpts of Hardcore, Jungle and Drum and Bass music between 30s and 2 minutes in length. All excerpts are in 4/4 and have a constant tempo. &lt;br /&gt;
For further information see Hockman et al (2012).&lt;br /&gt;
&lt;br /&gt;
In total this makes '''1354''' excerpts (of which 259 are full-length songs).&lt;br /&gt;
&lt;br /&gt;
=== Audio Formats ===&lt;br /&gt;
&lt;br /&gt;
The data are monophonic sound files:&lt;br /&gt;
&lt;br /&gt;
* CD-quality (PCM, 16-bit, 44100 Hz) for all except Ballroom (originally lower quality, but resampled to 44100 Hz)&lt;br /&gt;
* single channel (mono)&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Submissions to this task will have to conform to a specified format detailed below. Submissions should be packaged and contain at least two files: The algorithm itself and a README containing contact information and detailing, in full, the use of the algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Input Data ===&lt;br /&gt;
Participating algorithms will have to read audio in the following format:&lt;br /&gt;
&lt;br /&gt;
* Sample rate: 44.1 kHz&lt;br /&gt;
* Sample size: 16 bit&lt;br /&gt;
* Number of channels: 1 (mono)&lt;br /&gt;
* Encoding: WAV &lt;br /&gt;
&lt;br /&gt;
=== Output Data ===&lt;br /&gt;
&lt;br /&gt;
The downbeat estimation algorithms will return downbeat times in an ASCII text file for each input .wav audio file. The specification of this output file is immediately below.&lt;br /&gt;
&lt;br /&gt;
=== Output File Format (Audio Downbeat Estimation) ===&lt;br /&gt;
&lt;br /&gt;
The downbeat output file format is an ASCII text format. Each downbeat time is specified, in seconds, on its own line. Specifically, &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;downbeat time (in seconds)&amp;gt;\n&lt;br /&gt;
&lt;br /&gt;
where \n denotes the end of line. The &amp;lt; and &amp;gt; characters are not included. An example output file would look something like:&lt;br /&gt;
&lt;br /&gt;
 0.243&lt;br /&gt;
 1.486&lt;br /&gt;
 2.729&lt;br /&gt;
&lt;br /&gt;
=== Algorithm Calling Format ===&lt;br /&gt;
&lt;br /&gt;
The submitted algorithm must take as arguments a SINGLE .wav file to perform the downbeat estimation on, as well as the full output path and filename of the output file. The ability to specify the output path and file name is essential. Denoting the input .wav file path and name as %input and the output file path and name as %output, a program called foobar could be called from the command-line as follows:&lt;br /&gt;
&lt;br /&gt;
 foobar %input %output&lt;br /&gt;
 foobar -i %input -o %output&lt;br /&gt;
&lt;br /&gt;
Moreover, if your submission takes additional parameters, such as a detection threshold, foobar could be called like:&lt;br /&gt;
&lt;br /&gt;
 foobar .1 %input %output&lt;br /&gt;
 foobar -param1 .1 -i %input -o %output  &lt;br /&gt;
&lt;br /&gt;
If your submission is in MATLAB, it should be submitted as a function. Once again, the function must accept string arguments for the full paths and names of the input and output files. Parameters can also be specified as input arguments of the function. For example: &lt;br /&gt;
&lt;br /&gt;
 foobar('%input','%output')&lt;br /&gt;
 foobar(.1,'%input','%output')&lt;br /&gt;
&lt;br /&gt;
=== README File ===&lt;br /&gt;
&lt;br /&gt;
A README file accompanying each submission should contain explicit instructions on how to run the program (as well as contact information, etc.). In particular, each command line to run should be specified, using %input for the input sound file and %output for the resulting text file.&lt;br /&gt;
&lt;br /&gt;
For instance, to test the program foobar with different values for parameters param1, the README file would look like:&lt;br /&gt;
&lt;br /&gt;
 foobar -param1 .1 -i %input -o %output&lt;br /&gt;
 foobar -param1 .15 -i %input -o %output&lt;br /&gt;
 foobar -param1 .2 -i %input -o %output&lt;br /&gt;
 foobar -param1 .25 -i %input -o %output&lt;br /&gt;
 foobar -param1 .3 -i %input -o %output&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
For a submission using MATLAB, the README file could look like:&lt;br /&gt;
&lt;br /&gt;
 matlab -r &amp;quot;foobar(.1,'%input','%output');quit;&amp;quot;&lt;br /&gt;
 matlab -r &amp;quot;foobar(.15,'%input','%output');quit;&amp;quot;&lt;br /&gt;
 matlab -r &amp;quot;foobar(.2,'%input','%output');quit;&amp;quot; &lt;br /&gt;
 matlab -r &amp;quot;foobar(.25,'%input','%output');quit;&amp;quot;&lt;br /&gt;
 matlab -r &amp;quot;foobar(.3,'%input','%output');quit;&amp;quot;&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The different command lines to evaluate the performance of each parameter set over the whole database will be generated automatically from each line in the README file containing both '%input' and '%output' strings.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
&lt;br /&gt;
For the evaluation procedure we will use:&lt;br /&gt;
*'''F-measure''' - the standard calculation as used in onset and beat tracking evaluation with a +/-70ms window; see Dixon (2007).&lt;br /&gt;
&lt;br /&gt;
Given the high diversity of musical styles included in the task, results will be reported for each individual dataset.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions will be imposed.&lt;br /&gt;
&lt;br /&gt;
A hard limit of 24 hours will be imposed on analysis times. Submissions exceeding this limit may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
name / email&lt;br /&gt;
Jose R. Zapata / joserr.zapata (at) upb.edu.co&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;br /&gt;
&lt;br /&gt;
S. Dixon, F. Gouyon and G. Widmer, [http://ismir2004.ismir.net/proceedings/p093-page-509-paper165.pdf Towards Characterisation of Music via Rhythmic Patterns], In Proceedings of the 5th International Conference on Music Information Retrieval (ISMIR 2004), pp. 509-516.&lt;br /&gt;
&lt;br /&gt;
S. Dixon, [http://www.eecs.qmul.ac.uk/~simond/pub/2007/jnmr07.pdf Evaluation of the Audio Beat Tracking System BeatRoot], Journal of New Music Research, vol. 36, no. 1, pp. 39-51, 2007.&lt;br /&gt;
&lt;br /&gt;
J. A. Hockman, M. E. P. Davies and I. Fujinaga, [http://ismir2012.ismir.net/event/papers/169-ismir-2012.pdf One in the Jungle: Downbeat Detection in Hardcore, Jungle, and Drum and Bass], In Proceedings of the 13th International Society for Music Information Retrieval Conference (ISMIR), Porto, Portugal, pp. 169-174, 2012.&lt;br /&gt;
&lt;br /&gt;
F. Krebs, S. Böck and G. Widmer, [http://www.cp.jku.at/research/papers/Krebs_etal_ISMIR_2013.pdf Rhythmic Pattern Modeling for Beat- and Downbeat Tracking in Musical Audio], In Proceedings of the 14th International Society for Music Information Retrieval Conference (ISMIR), Curitiba, Brazil, 2013.&lt;br /&gt;
&lt;br /&gt;
M. Mauch, C. Cannam, M. E. P. Davies, S. Dixon, C. Harte, S. Kolozali and D. Tidhar, [http://ismir2009.ismir.net/proceedings/LBD-18.pdf OMRAS2 Metadata Project 2009], Late-breaking session at the 10th International Conference on Music Information Retrieval, 2009.&lt;br /&gt;
&lt;br /&gt;
A. Srinivasamurthy, A. Holzapfel and X. Serra, [http://www.tandfonline.com/doi/full/10.1080/09298215.2013.879902 In Search of Automatic Rhythm Analysis Methods for Turkish and Indian Art Music], Journal of New Music Research, vol. 43, no. 1, pp. 94-114, 2014.&lt;/div&gt;</summary>
		<author><name>Jose R. Zapata</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Task_Captains&amp;diff=10103</id>
		<title>2014:Task Captains</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Task_Captains&amp;diff=10103"/>
		<updated>2014-06-17T19:55:21Z</updated>

		<summary type="html">&lt;p&gt;Jose R. Zapata: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;As in 2013, we are prepared to improve the distribution of tasks for the upcoming MIREX 2014. To do so, we need leaders to help us organize and run each task.&lt;br /&gt;
&lt;br /&gt;
To volunteer to lead one or more tasks, please add your name in the &amp;quot;Captains&amp;quot; column.&lt;br /&gt;
&lt;br /&gt;
What does it mean to lead a task?&lt;br /&gt;
* Updating wiki pages as needed&lt;br /&gt;
* Communicating with submitters and troubleshooting submissions&lt;br /&gt;
* Executing and evaluating submissions&lt;br /&gt;
* Publishing final results&lt;br /&gt;
&lt;br /&gt;
Due to the proprietary nature of much of the data, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to provide access to task organizers to manage and run submissions on the IMIRSEL systems.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: 20px&amp;quot;&lt;br /&gt;
!ID !! Task !! Captain(s)&lt;br /&gt;
|-&lt;br /&gt;
|abt&lt;br /&gt;
|[[2014:Audio Beat Tracking]]&lt;br /&gt;
|Fu-Hai Frank Wu, Jose R. Zapata&lt;br /&gt;
|-&lt;br /&gt;
|ace&lt;br /&gt;
|[[2014:Audio Chord Estimation]]&lt;br /&gt;
|Johan Pauwels&lt;br /&gt;
|-&lt;br /&gt;
|act&lt;br /&gt;
|[[2014:Audio Classification (Train/Test) Tasks]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|acs&lt;br /&gt;
|[[2014:Audio Cover Song Identification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ade&lt;br /&gt;
|[[2014:Audio Downbeat Estimation]]&lt;br /&gt;
|Matthew Davies, Sebastian Böck, Florian Krebs&lt;br /&gt;
|-&lt;br /&gt;
|akd&lt;br /&gt;
|[[2014:Audio Key Detection]]&lt;br /&gt;
|Johan Pauwels&lt;br /&gt;
|-&lt;br /&gt;
|ame&lt;br /&gt;
|[[2014:Audio Melody Extraction]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|ams&lt;br /&gt;
|[[2014:Audio Music Similarity and Retrieval]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|aod&lt;br /&gt;
|[[2014:Audio Onset Detection]]&lt;br /&gt;
|Sebastian Böck&lt;br /&gt;
|-&lt;br /&gt;
|ate&lt;br /&gt;
|[[2014:Audio Tempo Estimation]]&lt;br /&gt;
|Aggelos Gkiokas&lt;br /&gt;
|-&lt;br /&gt;
|atg&lt;br /&gt;
|[[2014:Audio Tag Classification]]&lt;br /&gt;
|Priya Arora (we need more task captains for this task)&lt;br /&gt;
|-&lt;br /&gt;
|mf0&lt;br /&gt;
|[[2014:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qbsh&lt;br /&gt;
|[[2014:Query by Singing/Humming]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|qbt&lt;br /&gt;
|[[2014:Query by Tapping]]&lt;br /&gt;
| CCRMA&lt;br /&gt;
|-&lt;br /&gt;
|scofo&lt;br /&gt;
|[[2014:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sms&lt;br /&gt;
|[[2014:Symbolic Melodic Similarity]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|struct&lt;br /&gt;
|[[2014:Structural Segmentation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|drts&lt;br /&gt;
|[[2014:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
|Tom Collins&lt;br /&gt;
|-&lt;br /&gt;
|kgc&lt;br /&gt;
|[[2014:Audio K-POP Genre Classification]]&lt;br /&gt;
|IMIRSEL (Kahyun Choi, Peter Organisciak)&lt;br /&gt;
|-&lt;br /&gt;
|kmc&lt;br /&gt;
|[[2014:Audio K-POP Mood Classification]]&lt;br /&gt;
|IMIRSEL (Kahyun Choi, Peter Organisciak)&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Jose R. Zapata</name></author>
		
	</entry>
</feed>