<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Cwillis</id>
	<title>MIREX Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Cwillis"/>
	<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/wiki/Special:Contributions/Cwillis"/>
	<updated>2026-04-30T13:17:19Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.1</generator>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Discovery_of_Repeated_Themes_%26_Sections&amp;diff=9538</id>
		<title>2013:Discovery of Repeated Themes &amp; Sections</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Discovery_of_Repeated_Themes_%26_Sections&amp;diff=9538"/>
		<updated>2013-08-13T19:56:29Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: /* Three-Layer Precision, Three-Layer Recall, and Three-Layer F1 Score */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
'''In brief''': algorithms that take a single piece of music as input, and output a list of patterns repeated within that piece. Also known as ''intra-opus discovery'' (Conklin &amp;amp; Anagnostopoulou, 2001).&lt;br /&gt;
&lt;br /&gt;
We would be happy to receive ideas for improving aspects of this task. Researchers with wiki accounts are able to post comments below or to edit the relevant sections, and researchers without wiki accounts are welcome to email me directly: tom.collins(a)jku.at&lt;br /&gt;
&lt;br /&gt;
'''In more detail''': for understanding and interpreting a musical work, the discovery of repeated patterns within that piece is a crucial step. Meredith, Lemström, and Wiggins (2002) cite Schenker (1954) as claiming repetition to be 'the basis of music as an art' (p. 5), and also Lerdahl and Jackendoff (1983), who observe that 'the importance of parallelism [i.e., repetition] in musical structure cannot be overestimated. The more parallelism one can detect, the more internally coherent an analysis becomes, and the less independent information must be processed and retained in hearing or remembering a piece' (p. 52).&lt;br /&gt;
&lt;br /&gt;
On the very next page Lerdahl and Jackendoff (1983) acknowledge their 'failure to flesh out the notion of parallelism,' which is symptomatic of a more general failure in music psychology and music computing to address the discovery of repetition. Algorithms that take pieces of music as input, and output a list, visualisation, or summary of repeated patterns do exist (Chiu, Shan, Huang, &amp;amp; Li, 2009; Collins, Thurlow, Laney, Willis, &amp;amp; Garthwaite, 2010; Conklin &amp;amp; Anagnostopoulou, 2001; Forth &amp;amp; Wiggins, 2009; Hsu, Liu, &amp;amp; Chen, 2001; Knopke &amp;amp; Jürgensen, 2009; Lartillot, 2005; Meek &amp;amp; Birmingham, 2003; Meredith et al., 2002; Müller &amp;amp; Jiang, 2012; Nieto, Humphrey, &amp;amp; Bello, 2012; Peeters, 2007), but the pattern discovery task has received less attention than many other tasks in MIR. Until now!&lt;br /&gt;
&lt;br /&gt;
===What is a Pattern?===&lt;br /&gt;
&lt;br /&gt;
For the purposes of this task, a pattern is defined as a '''set of ontime-pitch pairs''' that occurs at least twice (i.e., is repeated at least once) in a piece of music. The second, third, etc. occurrences of the pattern will likely be shifted in time and perhaps also transposed, relative to the first occurrence. Ideally an algorithm would be able to discover all exact and inexact occurrences of a pattern within a piece, so in evaluating this task we are interested in both (1) whether an algorithm can discover one occurrence, up to time shift and transposition, and (2) to what extent it can find all occurrences. It has been pointed out by Lartillot and Toiviainen (2007) among others that as well as ontime-pitch patterns, there are various types of repeating pattern (e.g., ontimes alone, duration, contour, harmony, etc.). For the sake of simplicity, the current task is restricted to ontime-pitch pairs.&lt;br /&gt;
&lt;br /&gt;
Some of the most recognisable riffs and motifs in music consist of as few as four ontime-pitch pairs (for example, the opening riff from 'Purple Haze' by Hendrix, or the opening of the first movement of Symphony no.5 in C minor by Beethoven). If, however, an algorithm returned all patterns consisting of four or more notes in a given piece, a lot of these patterns would not be perceptually salient or analytically interesting. Happily, solutions have been proposed for trying to determine which are the most noticeable and/or important patterns, which are of middling importance, and which have occurred by chance (Cambouropoulos, 2006; Conklin, 2010a, 2010b). Collins, Laney, Willis, &amp;amp; Garthwaite (2011) conducted a meta-analysis and experimental validation of many proposed solutions. More information about the differences between motif, theme, and repeated section can be found in answer to Question 6.6.&lt;br /&gt;
&lt;br /&gt;
==Data==&lt;br /&gt;
&lt;br /&gt;
My colleagues and I at the [http://www.cp.jku.at/ Department of Computational Perception], [http://www.jku.at/ Johannes Kepler University], are compiling a database of classical music annotated with repeated themes and sections (mainly from [http://kern.ccarh.org/ KernScores]; see also Flossmann, Goebl, Grachten, Niedermayer, &amp;amp; Widmer, 2010). To encourage participation in the pattern discovery task, we are offering a representative sample called the [https://dl.dropbox.com/u/11997856/JKU/JKUPDD-Aug2013.zip JKU Patterns Development Database (~340 MB, August 2013 version)]. (If you prefer, [https://dl.dropbox.com/u/11997856/JKU/JKUPDD-noAudio-Aug2013.zip here] is a smaller version with no audio, ~40 MB.) Symbolic and audio versions are crossed with monophonic and polyphonic versions, giving up to four versions of the task in total. Researchers are welcome to submit to more than one version of the task.&lt;br /&gt;
&lt;br /&gt;
As a ground truth, we are basing motifs and themes on Barlow and Morgenstern's (1948) ''[http://www.multimedialibrary.com/barlow/index.asp Dictionary of Musical Themes]'', Schoenberg's (1967) ''Fundamentals of Musical Composition'', and Bruhn's (1993) ''[http://www-personal.umich.edu/~siglind/text.htm J. S. Bach’s Well-Tempered Clavier: In-depth Analysis and Interpretation]''. Repeated sections are based on those marked by the composer. For one of the pieces we created our own annotation. A paper that describes our construction of the Development Database and use of the sources is currently under preparation. No ground truth is perfect: we have chosen the sources as being relatively uncontroversial and transparent, but we would welcome ideas and suggestions from other researchers. As a quick example, [http://www.tomcollinsresearch.net/mirex-pattern-discovery-task.html#figure1 Figure 1] is an excerpt from Beethoven's op.2 no.1 mvt.3, with a ground-truth pattern marked as &amp;lt;math&amp;gt;P_1&amp;lt;/math&amp;gt; (first occurrence) and &amp;lt;math&amp;gt;P_2&amp;lt;/math&amp;gt; (second occurrence).&lt;br /&gt;
&lt;br /&gt;
==Submission Format==&lt;br /&gt;
&lt;br /&gt;
===Symbolic Version===&lt;br /&gt;
&lt;br /&gt;
Participants are able to choose from a number of symbolic representations (MIDI, kern, csv with columns for ontime, MIDI note number, staff height, duration, and staff number), as there may be differing opinions about which aspects of a representation are most useful for discovering repeated patterns. This choice also reflects the importance of designing pattern discovery code that functions irrespective of the exact input format (Wiggins, 2007). For the purposes of standardised evaluation, participants will need to convert each occurrence of a discovered pattern to a point set consisting of event ontimes and MIDI note numbers. For instance, the point-set representation for &amp;lt;math&amp;gt;P_1&amp;lt;/math&amp;gt; in [http://www.tomcollinsresearch.net/mirex-pattern-discovery-task.html#figure1 Figure 1] is&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;P_1 = \{(-1, 60),\ (-1, 68),\ (0, 61),\ (0, 70),\ (1, 58),\ (1, 67),\ (2, 53),&amp;lt;/math&amp;gt;&lt;br /&gt;
:::&amp;lt;math&amp;gt;(3, 60),\ (3, 68),\ (4, 56),\ (4, 65),\ (5, 53),\ (5, 56),\ (5, 60),\ (5, 65),&amp;lt;/math&amp;gt;&lt;br /&gt;
:::&amp;lt;math&amp;gt;(6, 55),\ (6, 58),\ (6, 60),\ (6, 64),\ (7, 53),\ (7, 56),\ (7, 60),\ (7, 65),&amp;lt;/math&amp;gt;&lt;br /&gt;
::::&amp;lt;math&amp;gt;(8, 52),\ (8, 55),\ (8, 60),\ (8, 67),&amp;lt;/math&amp;gt;&lt;br /&gt;
:::&amp;lt;math&amp;gt;(9, 53),\ (9, 56),\ (9, 60),\ (9, 70),\ (10, 68)\}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Sectional repetitions are expanded in all pieces, i.e. as the piece would be heard in a performance. In the monophonic version, pieces consisting of voiced polyphony (e.g., a fugue or choral work) are ''unfolded'', meaning each voice is extracted and re-encoded monophonically, one after the other in the order highest staff to lowest. For example, a fugue with upper, middle, and lower voices would be re-encoded with the upper voice heard first in isolation, followed by the middle voice, and then lower voice. In the monophonic version, pieces consisting of unvoiced polyphony are converted to monophony using the ''clipped skyline'' approach.&lt;br /&gt;
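&lt;br /&gt;
By way of illustration only, here is a rough Python sketch of the skyline idea (this is not the exact procedure used to prepare the database; in particular, the ''clipped'' variant also truncates durations so that successive notes do not overlap, which is omitted here):&lt;br /&gt;
&lt;br /&gt;
 def skyline(points):&lt;br /&gt;
     # points is a list of (ontime, MIDI note number) pairs; keep only the&lt;br /&gt;
     # highest-sounding note at each ontime.&lt;br /&gt;
     highest = {}&lt;br /&gt;
     for ontime, midi in points:&lt;br /&gt;
         if ontime not in highest or midi &amp;gt; highest[ontime]:&lt;br /&gt;
             highest[ontime] = midi&lt;br /&gt;
     return sorted(highest.items())&lt;br /&gt;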
&lt;br /&gt;
===Audio Version===&lt;br /&gt;
&lt;br /&gt;
For the audio version of the task, participating algorithms will have to read audio in wav format, sample rate 44.1 kHz, 16 bit, mono. These wav files are rendered (synthesised) in a metronomically exact fashion from the corresponding symbolic data. Beats per minute (BPM) are different for different pieces, but this information is located in the corresponding kern file (e.g., in a kern file '*MM192' means 192 BPM).&lt;br /&gt;
&lt;br /&gt;
As with the symbolic version of the task, for the purposes of standardised evaluation, participants will need to convert each occurrence of a discovered pattern to a point set consisting of event ontimes and MIDI note numbers. Even if your algorithm only returns a time interval &amp;lt;math&amp;gt;[a,\ b]&amp;lt;/math&amp;gt; in seconds for an occurrence of a pattern, this conversion will be easy enough to do: convert &amp;lt;math&amp;gt;[a,\ b]&amp;lt;/math&amp;gt; to an ontime interval &amp;lt;math&amp;gt;[c,\ d]&amp;lt;/math&amp;gt; using the BPM provided, and then use the csv file for the piece to determine which ontime-MIDI pairs are sounding in &amp;lt;math&amp;gt;[c,\ d]&amp;lt;/math&amp;gt;. (A downside to this approach is that the evaluation metrics will be slightly punitive if not all ontime-pitch pairs sounding in &amp;lt;math&amp;gt;[c,\ d]&amp;lt;/math&amp;gt; are part of the ground truth pattern.)&lt;br /&gt;
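&lt;br /&gt;
As a rough illustration of this conversion, here is a minimal Python sketch (the function names and csv path are illustrative; it assumes the csv files have no header row and that the columns are ordered ontime, MIDI note number, morphetic pitch number, duration, staff number, as described in the folder-structure question below):&lt;br /&gt;
&lt;br /&gt;
 import csv&lt;br /&gt;
 &lt;br /&gt;
 def seconds_to_ontime(t, bpm):&lt;br /&gt;
     # One crotchet beat lasts 60/bpm seconds, so an ontime in crotchet beats is t*bpm/60.&lt;br /&gt;
     return t * bpm / 60.0&lt;br /&gt;
 &lt;br /&gt;
 def points_in_interval(csv_path, a, b, bpm):&lt;br /&gt;
     # Return the (ontime, MIDI note number) pairs whose onsets fall in [a, b] seconds.&lt;br /&gt;
     # A stricter version would also use the duration column (row[3]) to include&lt;br /&gt;
     # notes that begin before the interval but are still sounding within it.&lt;br /&gt;
     c, d = seconds_to_ontime(a, bpm), seconds_to_ontime(b, bpm)&lt;br /&gt;
     points = []&lt;br /&gt;
     with open(csv_path) as f:&lt;br /&gt;
         for row in csv.reader(f):&lt;br /&gt;
             ontime, midi = float(row[0]), float(row[1])&lt;br /&gt;
             if c &amp;lt;= ontime &amp;lt;= d:&lt;br /&gt;
                 points.append((ontime, midi))&lt;br /&gt;
     return points&lt;br /&gt;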
&lt;br /&gt;
===Example Algorithm Output for a Ground-Truth Piece===&lt;br /&gt;
&lt;br /&gt;
Regardless of symbolic/audio and polyphonic/monophonic task version, the output of your pattern discovery algorithm for a given piece should adhere to the following text file format:&lt;br /&gt;
&lt;br /&gt;
 pattern1 &lt;br /&gt;
 occurrence1 &lt;br /&gt;
 7.00000, 45.00000 &lt;br /&gt;
 7.00000, 48.00000 &lt;br /&gt;
 ... &lt;br /&gt;
 11.00000, 60.00000 &lt;br /&gt;
 occurrence2 &lt;br /&gt;
 31.00000, 57.00000 &lt;br /&gt;
 31.00000, 60.00000 &lt;br /&gt;
 ... &lt;br /&gt;
 35.00000, 72.00000 &lt;br /&gt;
 occurrence3 &lt;br /&gt;
 59.00000, 57.00000 &lt;br /&gt;
 59.00000, 60.00000 &lt;br /&gt;
 ... &lt;br /&gt;
 63.00000, 72.00000 &lt;br /&gt;
 pattern2 &lt;br /&gt;
 occurrence1 &lt;br /&gt;
 7.00000, 45.00000 &lt;br /&gt;
 7.00000, 48.00000 &lt;br /&gt;
 ... &lt;br /&gt;
 11.00000, 57.00000 &lt;br /&gt;
 occurrence2 &lt;br /&gt;
 27.00000, 48.00000 &lt;br /&gt;
 27.00000, 52.00000 &lt;br /&gt;
 ... &lt;br /&gt;
 59.00000, 60.00000 &lt;br /&gt;
 ... &lt;br /&gt;
 patternM &lt;br /&gt;
 occurrence1 &lt;br /&gt;
 9.00000, 58.00000 &lt;br /&gt;
 9.50000, 52.00000 &lt;br /&gt;
 ... &lt;br /&gt;
 12.00000, 60.0000 &lt;br /&gt;
 ...&lt;br /&gt;
 occurrencem &lt;br /&gt;
 100.00000, 62.00000 &lt;br /&gt;
 100.50000, 55.00000 &lt;br /&gt;
 ...&lt;br /&gt;
 103.00000, 61.00000&lt;br /&gt;
&lt;br /&gt;
That is, ontimes are in the left-hand column and MIDI note numbers are in the right. Each occurrence of a discovered pattern is given before moving on to the next pattern. Occurrences do not have to be of the same length, nor do they have to be constrained to exact or transposed repetition (e.g., variations are permitted). Neither the patterns nor the occurrences of patterns need to be in temporal order: the evaluation metrics are robust to different orders.&lt;br /&gt;
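&lt;br /&gt;
A minimal Python sketch of a parser for this format (the function name is illustrative), returning a list of patterns, each pattern a list of occurrences, and each occurrence a list of (ontime, MIDI note number) pairs:&lt;br /&gt;
&lt;br /&gt;
 def read_algorithm_output(path):&lt;br /&gt;
     # Lines beginning 'pattern' start a new pattern, lines beginning 'occurrence'&lt;br /&gt;
     # start a new occurrence, and every other non-empty line holds an ontime and&lt;br /&gt;
     # a MIDI note number separated by a comma.&lt;br /&gt;
     patterns = []&lt;br /&gt;
     with open(path) as f:&lt;br /&gt;
         for line in f:&lt;br /&gt;
             line = line.strip()&lt;br /&gt;
             if not line:&lt;br /&gt;
                 continue&lt;br /&gt;
             if line.startswith('pattern'):&lt;br /&gt;
                 patterns.append([])&lt;br /&gt;
             elif line.startswith('occurrence'):&lt;br /&gt;
                 patterns[-1].append([])&lt;br /&gt;
             else:&lt;br /&gt;
                 ontime, midi = (float(x) for x in line.split(','))&lt;br /&gt;
                 patterns[-1][-1].append((ontime, midi))&lt;br /&gt;
     return patterns&lt;br /&gt;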
&lt;br /&gt;
Order does matter, however, in the following two respects: if possible (1) place the patterns in decreasing order of predicted perceptual salience/musical importance; (2) define ''occurrence1'' to be the ''prototypical'' occurrence of each pattern. Fulfilling point (1) is not essential (could defer to future work), but it concerns an application of discovery algorithms wherein a user browses the output patterns. It would be convenient for the user to be shown the most important patterns first, and one metric below (called first five target proportion) evaluates this aspect of algorithm performance. Fulfilling point (2) is important if your discovery method is capable of retrieving inexact occurrences. Some metrics below are designed for assessing the capability for retrieving inexact occurrences, but others are simply concerned with whether or not the prototypical occurrence is discovered. The evaluation code will consider ''occurrence1'' to be the ''prototype''.&lt;br /&gt;
&lt;br /&gt;
==Evaluation Procedure==&lt;br /&gt;
&lt;br /&gt;
'''In brief''': An implementation of the evaluation metrics and example code are bundled with the Development Database, to save participants having to implement the evaluation metrics themselves. Participating algorithms will be evaluated against the following metrics:&lt;br /&gt;
* establishment precision, establishment recall, and establishment F1 score (defined by Tom Collins);&lt;br /&gt;
* occurrence precision, occurrence recall, and occurrence F1 score (defined by Tom Collins);&lt;br /&gt;
* three-layer precision, three-layer recall, and three-layer F1 score (defined by David Meredith);&lt;br /&gt;
* runtime, fifth return time, first five target proportion (Tom Collins), and first five precision (David Meredith);&lt;br /&gt;
* standard precision, recall, and F1 score.&lt;br /&gt;
&lt;br /&gt;
===Standard Precision, Recall, and F1 Score===&lt;br /&gt;
&lt;br /&gt;
'''In more detail''': Denote the &amp;lt;math&amp;gt;n_{\mathcal{P}}&amp;lt;/math&amp;gt; patterns in a ground truth by &amp;lt;math&amp;gt;\Pi = \{ \mathcal{P}_1, \mathcal{P}_2,\ldots, \mathcal{P}_{n_\mathcal{P}} \}&amp;lt;/math&amp;gt;, and the &amp;lt;math&amp;gt;n_{\mathcal{Q}}&amp;lt;/math&amp;gt; patterns in an algorithm&amp;amp;rsquo;s output by &amp;lt;math&amp;gt;\Xi = \{ \mathcal{Q}_1, \mathcal{Q}_2,\ldots, \mathcal{Q}_{n_\mathcal{Q}} \}&amp;lt;/math&amp;gt;. If the algorithm discovers &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; of the ground truth patterns, up to translation, then the standard ''precision'' of the algorithm is defined as &amp;lt;math&amp;gt;P = k/n_{\mathcal{Q}}&amp;lt;/math&amp;gt;, the standard ''recall'' of the algorithm is defined as &amp;lt;math&amp;gt;R = k/n_{\mathcal{P}}&amp;lt;/math&amp;gt;, and the standard ''F1 score'' as &amp;lt;math&amp;gt;F_1 = 2PR/(P + R).\ &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The above metrics, which were used by Collins et al. (2010) in one of the first evaluations of a pattern discovery task, are very strict: an output pattern &amp;lt;math&amp;gt;Q&amp;lt;/math&amp;gt; may have only one point different from a large ground truth pattern &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt;, but this will not count as a successful discovery. Therefore, we propose the following new metrics, which are robust to slight differences between output and ground truth patterns.&lt;br /&gt;
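&lt;br /&gt;
By way of illustration, a minimal Python sketch of the standard metrics (assuming each pattern is represented by a single point set of (ontime, MIDI note number) pairs; in practice exact arithmetic, e.g. with fractions, may be preferable to floats):&lt;br /&gt;
&lt;br /&gt;
 def translated(p, q):&lt;br /&gt;
     # True if point set q is a translation of point set p in time and pitch.&lt;br /&gt;
     p, q = sorted(p), sorted(q)&lt;br /&gt;
     if len(p) != len(q):&lt;br /&gt;
         return False&lt;br /&gt;
     dt, dp = q[0][0] - p[0][0], q[0][1] - p[0][1]&lt;br /&gt;
     return all((x + dt, y + dp) == point for (x, y), point in zip(p, q))&lt;br /&gt;
 &lt;br /&gt;
 def standard_prf(ground_truth, output):&lt;br /&gt;
     # k counts ground-truth patterns matched, up to translation, by at least&lt;br /&gt;
     # one output pattern.&lt;br /&gt;
     k = sum(any(translated(p, q) for q in output) for p in ground_truth)&lt;br /&gt;
     precision, recall = k / len(output), k / len(ground_truth)&lt;br /&gt;
     f1 = 2 * precision * recall / (precision + recall) if k else 0.0&lt;br /&gt;
     return precision, recall, f1&lt;br /&gt;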
&lt;br /&gt;
===Robust Versions of Precision, Recall, and F1 score===&lt;br /&gt;
&lt;br /&gt;
====Symbolic Musical Similarity and the Score Matrix====&lt;br /&gt;
&lt;br /&gt;
Suppose that in the ground truth there is a pattern &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; with occurrences &amp;lt;math&amp;gt;\mathcal{P} = \{ P_1, P_2,\ldots, P_{m_P} \}&amp;lt;/math&amp;gt;, and in an algorithm's output there is a pattern &amp;lt;math&amp;gt;Q&amp;lt;/math&amp;gt; with occurrences &amp;lt;math&amp;gt;\mathcal{Q} = \{ Q_1, Q_2,\ldots, Q_{m_Q} \}&amp;lt;/math&amp;gt;. Central to evaluating an algorithm is measuring the extent to which &amp;lt;math&amp;gt;\mathcal{Q}&amp;lt;/math&amp;gt; constitutes the discovery of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt;. In order to measure this, we need to be able to compute the symbolic musical similarity of &amp;lt;math&amp;gt;P_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Q_j&amp;lt;/math&amp;gt;. We can use the simple ''cardinality score'' for symbolic musical similarity,&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
s_c(P_i, Q_j) = |P_i \cap Q_j|/ \max \{ |P_i|, |Q_j| \},&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or the slightly more involved ''normalised matching score'' &amp;lt;math&amp;gt;s_m(P_i, Q_j)&amp;lt;/math&amp;gt;, after Arzt, Böck, and Widmer (2012). Some examples of cardinality and matching scores between original and mutant versions of the theme from Beethoven's op.2 no.2 mvt.3 are given in [http://www.tomcollinsresearch.net/mirex-pattern-discovery-task.html#figure2 Figure 2].&lt;br /&gt;
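&lt;br /&gt;
For instance, a minimal Python sketch of the cardinality score, with occurrences given as collections of (ontime, MIDI note number) pairs:&lt;br /&gt;
&lt;br /&gt;
 def cardinality_score(p_i, q_j):&lt;br /&gt;
     # Size of the intersection divided by the larger of the two cardinalities.&lt;br /&gt;
     p_i, q_j = set(p_i), set(q_j)&lt;br /&gt;
     return len(p_i.intersection(q_j)) / max(len(p_i), len(q_j))&lt;br /&gt;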
&lt;br /&gt;
Either of these similarity measures, denoted &amp;lt;math&amp;gt;s(P_i, Q_j)&amp;lt;/math&amp;gt;, can be recorded in a so-called ''score matrix'',&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
s(\mathcal{P}, \mathcal{Q}) = \left(&lt;br /&gt;
\begin{array}{cccc}&lt;br /&gt;
s(P_1, Q_1) &amp;amp; s(P_1, Q_2) &amp;amp; \cdots &amp;amp; s(P_1, Q_{m_Q}) \\&lt;br /&gt;
s(P_2, Q_1) &amp;amp; s(P_2, Q_2) &amp;amp; \cdots &amp;amp; s(P_2, Q_{m_Q}) \\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots \\&lt;br /&gt;
s(P_{m_P}, Q_1) &amp;amp; s(P_{m_P}, Q_2) &amp;amp; \cdots &amp;amp; s(P_{m_P}, Q_{m_Q})&lt;br /&gt;
\end{array} \right).&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
         &lt;br /&gt;
The score matrix shows how all occurrences &amp;lt;math&amp;gt;\mathcal{Q} = \{ Q_1, Q_2,\ldots, Q_{m_Q} \}&amp;lt;/math&amp;gt; of a pattern in an algorithm's output compare to all occurrences &amp;lt;math&amp;gt;\mathcal{P} = \{ P_1, P_2,\ldots, P_{m_P} \}&amp;lt;/math&amp;gt; of a ground truth pattern.&lt;br /&gt;
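&lt;br /&gt;
A minimal sketch of the score matrix in Python, for any pairwise similarity function s (such as the cardinality score sketched above):&lt;br /&gt;
&lt;br /&gt;
 def score_matrix(occurrences_p, occurrences_q, s):&lt;br /&gt;
     # Row i, column j holds s(P_i, Q_j), comparing the i-th ground-truth&lt;br /&gt;
     # occurrence with the j-th output occurrence.&lt;br /&gt;
     return [[s(p, q) for q in occurrences_q] for p in occurrences_p]&lt;br /&gt;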
&lt;br /&gt;
====Establishment Precision, Establishment Recall, and Establishment F1 Score====&lt;br /&gt;
&lt;br /&gt;
Summaries of the ''score matrix'' will be necessary for evaluating all of an algorithm's output against the whole ground truth for a piece. For instance, we may be interested in whether an algorithm is capable of ''establishing'' that a pattern &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; is repeated at least once during a piece, and less interested in whether the algorithm can retrieve all occurrences of &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; (exact and inexact). In this case, the maximum entry in the score matrix, denoted &amp;lt;math&amp;gt;S(\mathcal{P}, \mathcal{Q})&amp;lt;/math&amp;gt;, is the appropriate summary. For a piece's ground truth &amp;lt;math&amp;gt;\Pi = \{ \mathcal{P}_1, \mathcal{P}_2,\ldots, \mathcal{P}_{n_\mathcal{P}} \}&amp;lt;/math&amp;gt;, and an algorithm's entire output for that piece &amp;lt;math&amp;gt;\Xi = \{ \mathcal{Q}_1, \mathcal{Q}_2,\ldots, \mathcal{Q}_{n_\mathcal{Q}} \}&amp;lt;/math&amp;gt;, it is now possible to record the algorithm's capability for ''establishing'' that patterns in &amp;lt;math&amp;gt;\Pi&amp;lt;/math&amp;gt; are repeated at least once during the piece, using the so-called ''establishment matrix'',&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
S(\Pi, \Xi) = \left(&lt;br /&gt;
\begin{array}{cccc}&lt;br /&gt;
S(\mathcal{P}_1, \mathcal{Q}_1) &amp;amp; S(\mathcal{P}_1, \mathcal{Q}_2) &amp;amp; \cdots &amp;amp; S(\mathcal{P}_1, \mathcal{Q}_{n_Q}) \\&lt;br /&gt;
S(\mathcal{P}_2, \mathcal{Q}_1) &amp;amp; S(\mathcal{P}_2, \mathcal{Q}_2) &amp;amp; \cdots &amp;amp; S(\mathcal{P}_2, \mathcal{Q}_{n_Q}) \\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots \\&lt;br /&gt;
S(\mathcal{P}_{n_P}, \mathcal{Q}_1) &amp;amp; S(\mathcal{P}_{n_P}, \mathcal{Q}_2) &amp;amp; \cdots &amp;amp; S(\mathcal{P}_{n_P}, \mathcal{Q}_{n_Q})&lt;br /&gt;
\end{array} \right).&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The ''establishment precision'' can then be calculated according to&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
P_{\text{est}} = \frac{1}{n_\mathcal{Q}} \sum_{j = 1}^{n_\mathcal{Q}}&lt;br /&gt;
\max \{ S(\mathcal{P}_i, \mathcal{Q}_j) \mid i = 1,\ldots, n_\mathcal{P} \}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; of an algorithm's output patterns exactly match ground-truth patterns, and the remaining &amp;lt;math&amp;gt;n_\mathcal{Q} - k&amp;lt;/math&amp;gt; output patterns match no ground-truth pattern at all, then the establishment precision is equal to standard precision (&amp;lt;math&amp;gt;= k/n_\mathcal{Q}&amp;lt;/math&amp;gt;). The ''establishment recall'' is defined as&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
R_{\text{est}} = \frac{1}{n_\mathcal{P}} \sum_{i = 1}^{n_\mathcal{P}}&lt;br /&gt;
\max \{ S(\mathcal{P}_i, \mathcal{Q}_j) \mid j = 1,\ldots, n_\mathcal{Q} \}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The establishment F1 score is defined as above, but replacing precision with establishment precision, and recall with establishment recall.&lt;br /&gt;
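&lt;br /&gt;
A minimal Python sketch of these summaries, where ground_truth and output are lists of patterns (each pattern a list of occurrences) and s is a pairwise similarity function such as the cardinality score sketched above:&lt;br /&gt;
&lt;br /&gt;
 def establishment_matrix(ground_truth, output, s):&lt;br /&gt;
     # Entry (i, j) is the maximum similarity between any occurrence of&lt;br /&gt;
     # ground-truth pattern i and any occurrence of output pattern j.&lt;br /&gt;
     return [[max(s(p, q) for p in occs_p for q in occs_q)&lt;br /&gt;
              for occs_q in output]&lt;br /&gt;
             for occs_p in ground_truth]&lt;br /&gt;
 &lt;br /&gt;
 def establishment_prf(ground_truth, output, s):&lt;br /&gt;
     S = establishment_matrix(ground_truth, output, s)&lt;br /&gt;
     n_p, n_q = len(S), len(S[0])&lt;br /&gt;
     p_est = sum(max(S[i][j] for i in range(n_p)) for j in range(n_q)) / n_q&lt;br /&gt;
     r_est = sum(max(S[i][j] for j in range(n_q)) for i in range(n_p)) / n_p&lt;br /&gt;
     f_est = 2 * p_est * r_est / (p_est + r_est) if p_est + r_est else 0.0&lt;br /&gt;
     return p_est, r_est, f_est&lt;br /&gt;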
&lt;br /&gt;
====Occurrence Precision, Occurrence Recall, and Occurrence F1 Score====&lt;br /&gt;
&lt;br /&gt;
As mentioned above, there is a difference between a pattern discovery algorithm (or listener) being able to establish the existence of a repeated pattern, and being able to retrieve all occurrences. We showed how to measure the extent to which an algorithm is capable of establishing that a pattern &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; is repeated at least once during a piece. Now we focus on an algorithm's ability to retrieve ''all'' occurrences of &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; (exact and inexact). These metrics will favour an algorithm that is strong at retrieving all occurrences of the patterns it discovers, even if the algorithm fails completely to discover many of the salient patterns in a piece.&lt;br /&gt;
&lt;br /&gt;
The indices &amp;lt;math&amp;gt;I&amp;lt;/math&amp;gt; of the establishment matrix with values greater than or equal to some threshold (default value &amp;lt;math&amp;gt;c = .75&amp;lt;/math&amp;gt;) indicate which ground truth patterns an algorithm is considered to have discovered. We will focus on these indices to define a so-called ''occurrence matrix''. Denoted &amp;lt;math&amp;gt;O(\Pi, \Xi)&amp;lt;/math&amp;gt;, the ''occurrence matrix'' begins as an &amp;lt;math&amp;gt;n_\mathcal{P} \times n_\mathcal{Q}&amp;lt;/math&amp;gt; zero matrix. Then for each index pair &amp;lt;math&amp;gt;(i, j) \in I&amp;lt;/math&amp;gt;, we calculate the precision of the ''score matrix'' &amp;lt;math&amp;gt;s(\mathcal{P}_i, \mathcal{Q}_j)&amp;lt;/math&amp;gt;, and record this scalar as element &amp;lt;math&amp;gt;(i, j)&amp;lt;/math&amp;gt; of &amp;lt;math&amp;gt;O(\Pi, \Xi)&amp;lt;/math&amp;gt;. The precision of &amp;lt;math&amp;gt;s(\mathcal{P}_i, \mathcal{Q}_j)&amp;lt;/math&amp;gt; indicates the precision with which algorithm output &amp;lt;math&amp;gt;\mathcal{Q}_j&amp;lt;/math&amp;gt; retrieved the ground truth item &amp;lt;math&amp;gt;\mathcal{P}_i&amp;lt;/math&amp;gt;. The ''occurrence precision'', denoted &amp;lt;math&amp;gt;P_{\text{occ}}&amp;lt;/math&amp;gt;, is then defined as the precision of the occurrence matrix &amp;lt;math&amp;gt;O(\Pi, \Xi)&amp;lt;/math&amp;gt;, with the sum taken over nonzero columns. The ''occurrence recall'', denoted &amp;lt;math&amp;gt;R_{\text{occ}}&amp;lt;/math&amp;gt;, is defined analogously, but replacing mentions of 'precision' and 'columns' above with 'recall' and 'rows.' The occurrence &amp;lt;math&amp;gt;F_1&amp;lt;/math&amp;gt; score is then defined from occurrence precision and occurrence recall in the usual way.&lt;br /&gt;
&lt;br /&gt;
====Three-Layer Precision, Three-Layer Recall, and Three-Layer F1 Score====&lt;br /&gt;
&lt;br /&gt;
Three-layer precision (&amp;lt;math&amp;gt;P_3&amp;lt;/math&amp;gt;), three-layer recall (&amp;lt;math&amp;gt;R_3&amp;lt;/math&amp;gt;), and three-layer &amp;lt;math&amp;gt;F_1&amp;lt;/math&amp;gt; score (&amp;lt;math&amp;gt;F_3&amp;lt;/math&amp;gt;) are defined as follows (a rendered version of these formulae is also available [http://www.tomcollinsresearch.net/mirex-pattern-discovery-task.html#evaluation here]):&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
F_3(\Pi, \Xi) = \frac{2 P_3(\Pi, \Xi) R_3(\Pi, \Xi)}{P_3(\Pi, \Xi) + R_3(\Pi, \Xi)},&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{align}&lt;br /&gt;
P_3(\Pi, \Xi) &amp;amp;= \frac{1}{n_\mathcal{Q}} \sum_{j = 1}^{n_\mathcal{Q}}&lt;br /&gt;
\max \{ F_2(\mathcal{P}_i, \mathcal{Q}_j) \mid i = 1,\ldots, n_\mathcal{P} \},\\[.2cm]&lt;br /&gt;
R_3(\Pi, \Xi) &amp;amp;= \frac{1}{n_\mathcal{P}} \sum_{i = 1}^{n_\mathcal{P}}&lt;br /&gt;
\max \{ F_2(\mathcal{P}_i, \mathcal{Q}_j) \mid j = 1,\ldots, n_\mathcal{Q} \},\\[.2cm]&lt;br /&gt;
F_2(\mathcal{P}, \mathcal{Q}) &amp;amp;= \frac{2 P_2(\mathcal{P}, \mathcal{Q}) R_2(\mathcal{P}, \mathcal{Q})}&lt;br /&gt;
{P_2(\mathcal{P}, \mathcal{Q}) + R_2(\mathcal{P}, \mathcal{Q})},\\[.2cm]&lt;br /&gt;
P_2(\mathcal{P}, \mathcal{Q}) &amp;amp;= \frac{1}{m_Q} \sum_{l = 1}^{m_Q}&lt;br /&gt;
\max \{ F_1(P_k, Q_l) \mid k = 1,\ldots, m_P \},\\[.2cm]&lt;br /&gt;
R_2(\mathcal{P}, \mathcal{Q}) &amp;amp;= \frac{1}{m_P} \sum_{k = 1}^{m_P}&lt;br /&gt;
\max \{ F_1(P_k, Q_l) \mid l = 1,\ldots, m_Q \},\\[.2cm]&lt;br /&gt;
F_1(P, Q) &amp;amp;= \frac{2 P_1(P, Q) R_1(P, Q)}{P_1(P, Q) + R_1(P, Q)},\\[.2cm]&lt;br /&gt;
P_1(P, Q) &amp;amp;= |P \cap Q|/|Q|,\\[.2cm]&lt;br /&gt;
R_1(P, Q) &amp;amp;= |P \cap Q|/|P|.&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
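&lt;br /&gt;
To make the layering concrete, here is a minimal Python sketch of these definitions (an implementation of all the metrics is bundled with the Development Database; the function names here are illustrative only):&lt;br /&gt;
&lt;br /&gt;
 def layer1_f1(p, q):&lt;br /&gt;
     # F_1 between two occurrences, treated as sets of (ontime, MIDI note number) pairs.&lt;br /&gt;
     p, q = set(p), set(q)&lt;br /&gt;
     inter = len(p.intersection(q))&lt;br /&gt;
     if inter == 0:&lt;br /&gt;
         return 0.0&lt;br /&gt;
     prec, rec = inter / len(q), inter / len(p)&lt;br /&gt;
     return 2 * prec * rec / (prec + rec)&lt;br /&gt;
 &lt;br /&gt;
 def layer2_f1(occs_p, occs_q):&lt;br /&gt;
     # F_2 between two patterns, each given as a list of occurrences.&lt;br /&gt;
     p2 = sum(max(layer1_f1(p, q) for p in occs_p) for q in occs_q) / len(occs_q)&lt;br /&gt;
     r2 = sum(max(layer1_f1(p, q) for q in occs_q) for p in occs_p) / len(occs_p)&lt;br /&gt;
     return 2 * p2 * r2 / (p2 + r2) if p2 + r2 else 0.0&lt;br /&gt;
 &lt;br /&gt;
 def three_layer_prf(ground_truth, output):&lt;br /&gt;
     # ground_truth and output are lists of patterns (lists of occurrences).&lt;br /&gt;
     p3 = sum(max(layer2_f1(gp, q) for gp in ground_truth) for q in output) / len(output)&lt;br /&gt;
     r3 = sum(max(layer2_f1(gp, q) for q in output) for gp in ground_truth) / len(ground_truth)&lt;br /&gt;
     f3 = 2 * p3 * r3 / (p3 + r3) if p3 + r3 else 0.0&lt;br /&gt;
     return p3, r3, f3&lt;br /&gt;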
&lt;br /&gt;
===Runtime, Fifth Return Time, and First Five Target Proportion===&lt;br /&gt;
&lt;br /&gt;
Overall runtime is an important metric. Those wishing to develop pattern discovery algorithms for on-the-fly browsing, however, may find it more relevant to know the time taken to return a smaller number of patterns. (E.g., while the user browses, the algorithm can continue to discover extra patterns.) Fifth return time (FRT) is the time taken for the first five patterns to be output by an algorithm. As these patterns are of little use if none of them are ground truth, we counterbalance FRT with another metric called first five target proportion (FFTP), which is the establishment recall calculation applied to the first five columns only of the establishment matrix &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;. First five precision (FFP) is the three-layer precision calculation applied to the first five output patterns only.&lt;br /&gt;
&lt;br /&gt;
===Friedman Tests for the Pattern Discovery Task===&lt;br /&gt;
&lt;br /&gt;
The Friedman test will be used to investigate whether any algorithms rank consistently higher or lower than the others, with regard to metrics for individual pieces.&lt;br /&gt;
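&lt;br /&gt;
For example, one convenient (though not prescribed) way to run such a test is with scipy in Python; the per-piece scores below are invented numbers, purely to illustrate the call:&lt;br /&gt;
&lt;br /&gt;
 from scipy.stats import friedmanchisquare&lt;br /&gt;
 &lt;br /&gt;
 # One score per piece for each algorithm (same pieces, in the same order);&lt;br /&gt;
 # the values are illustrative only.&lt;br /&gt;
 algorithm_a = [0.61, 0.45, 0.72, 0.58, 0.66, 0.49, 0.53]&lt;br /&gt;
 algorithm_b = [0.55, 0.40, 0.70, 0.52, 0.60, 0.44, 0.50]&lt;br /&gt;
 algorithm_c = [0.30, 0.25, 0.41, 0.38, 0.35, 0.22, 0.31]&lt;br /&gt;
 statistic, p_value = friedmanchisquare(algorithm_a, algorithm_b, algorithm_c)&lt;br /&gt;
 print(statistic, p_value)&lt;br /&gt;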
&lt;br /&gt;
==Available Code==&lt;br /&gt;
&lt;br /&gt;
Entering an existing MIREX task, where results have been improving for up to 7 years, can be a daunting prospect. The pattern discovery task, on the other hand, is new, and so is a great opportunity for Master's and PhD students to make their mark in MIR. To this end, it should be noted that the following code is freely available, and that students/researchers are very welcome to define pattern discovery algorithms by altering/extending this code, or to use it as a point of comparison with their own algorithms. Please feel free to ask questions, either via this wiki, or by email to authors of the relevant papers.&lt;br /&gt;
*[https://code.google.com/p/chromamorph/source/browse/#svn%2Ftrunk%2FPoints%2Fsrc%2Fcom%2Fchromamorph%2Fpoints018 Java implementation] of algorithms from Meredith et al. (2002) and Meredith (2006).&lt;br /&gt;
*[http://www.tomcollinsresearch.net/supporting-material.html Matlab implementation] of algorithms from Collins et al. (2010). (Agree to GNU licence and then download Patterns-Aug2012.zip.)&lt;br /&gt;
*If you would like to participate in the audio version but are missing an F0 estimator, then you could use the [http://mtg.upf.edu/technologies/melodia MELODIA plug-in] as described by Salamon and Gómez (2012).&lt;br /&gt;
*Please add links to more implementations here.&lt;br /&gt;
*...&lt;br /&gt;
&lt;br /&gt;
==Questions and Comments==&lt;br /&gt;
&lt;br /&gt;
===Please Can You Give an Overview of the Development Database's Folder Structure?===&lt;br /&gt;
&lt;br /&gt;
Users are encouraged to run their algorithms on either the text file representations of pieces and their constituent patterns (contained in 'lisp' folders), or the csv file representations (beware rounding errors). The columns represent ontime (measured from zero in crotchet beats), MIDI note number, morphetic pitch number, duration (measured in crotchet beats), and staff number (integers from zero for the top staff). Users are discouraged from running their algorithms on the midi file representations. The midi files were created and included in the distribution for the purposes of mistake checking, but do not necessarily begin in the correct bar position and contain an extra quiet note at the end to avoid clipping.&lt;br /&gt;
&lt;br /&gt;
If you are writing your own code for iterating over the ground truth patterns, the annotation folders to include for the polyphonic version of the task are 'bruhn', 'barlowAndMorgensternRevised', 'sectionalRepetitions', 'schoenberg', and 'tomCollins'; for the monophonic task they are 'bruhn', 'barlowAndMorgenstern', 'barlowAndMorgensternRevised', 'sectionalRepetitions', 'schoenberg', and 'tomCollins'. Please note that a faithful barlowAndMorgenstern folder is included in the polyphonic ground truth for the sake of comparison with the revised folder, but it should/will not be iterated over for the evaluation. This is because the barlowAndMorgenstern originals contain some monophonic patterns that ought to be polyphonic (e.g., because a figure in one voice never occurs independently of a simultaneous figure in another voice) and some patterns have erroneous lengths (e.g., a theme is curtailed at five bars because it fits neatly on the page, but in reality the repetition extends for one or two more bars).&lt;br /&gt;
&lt;br /&gt;
Occurrences of patterns consist of (ontime, MIDI note number) pairs. For example, see bachBWV889Fg -&amp;gt; polyphonic -&amp;gt; repeatedPatterns -&amp;gt; bruhn -&amp;gt; A -&amp;gt; occurrences. Inexact occurrences of a pattern are handled as follows: the prototypical version of a pattern is defined at the top level, e.g., bachBWV889Fg -&amp;gt; polyphonic -&amp;gt; repeatedPatterns -&amp;gt; bruhn -&amp;gt; A -&amp;gt; lisp. This definition may be shifted in time towards the beginning of the piece, but is in the correct bar position. The prototypical version	of a pattern is always defined as 'occ1' in the occurrences folder. All of the definitions in the occurrences folder correspond exactly to (ontime, MIDI note number) pairs from the piece (i.e., none of these are shifted in time).&lt;br /&gt;
&lt;br /&gt;
===How Is Pattern Discovery Different to the [[2012:Structural_Segmentation]] Task?===&lt;br /&gt;
			&lt;br /&gt;
We expect structural segmentation algorithms to be adaptable to pattern discovery, so would really welcome segmentation researchers to submit to the pattern discovery task as well. The two tasks are different as follows: structural segmentation results in a list of labelled time intervals that cover an entire piece of music, such as&lt;br /&gt;
&lt;br /&gt;
   0.000    4.273     A&lt;br /&gt;
   4.273    8.469     A&lt;br /&gt;
   8.469   21.321     B&lt;br /&gt;
  21.321   25.734     A&lt;br /&gt;
     ...      ...     ...&lt;br /&gt;
 175.012  179.108     A&lt;br /&gt;
&lt;br /&gt;
* The output of a pattern discovery algorithm will not necessarily cover an entire piece. A four-bar theme beginning in bar 1 might be the only output of a pattern discovery algorithm, even if the piece is much longer and contains other material.&lt;br /&gt;
* Whereas the output of a structural segmentation algorithm is non-overlapping, the output of a pattern discovery algorithm might be overlapping or even nested (hierarchical). For instance, the four-bar theme mentioned above might be output, as well as a sectional repetition that lasts from bars 1-8.&lt;br /&gt;
&lt;br /&gt;
===How Is Pattern Discovery Different to Pattern Matching, Or the [[2012:Symbolic_Melodic_Similarity]] Task?===&lt;br /&gt;
			&lt;br /&gt;
In a typical pattern matching task, more or less exact instances of a ''given query'' are retrieved from some larger dataset, and ranked by an appropriate measure of relevance to the original query (e.g., Barton, Cambouropoulos, Iliopoulos, &amp;amp; Lipták, 2012). The setup of pattern discovery is fundamentally different: there are no queries given to begin with, just single pieces of music and the requirement to discover repeating patterns within each piece.&lt;br /&gt;
&lt;br /&gt;
The melodic similarity task fits the pattern matching paradigm, and so is also different to pattern discovery. In the melodic similarity task, algorithms are given a melodic query, and retrieve a supposedly relevant melody from the database. The similarity of the query and the algorithm's match is assessed by human listeners.&lt;br /&gt;
&lt;br /&gt;
===Why Not Just Use Optical Music Recognition to Detect Sectional Repetitions?===&lt;br /&gt;
&lt;br /&gt;
One could use optical music recognition instead, although what we are trying to understand and model is a listener's awareness of thematic material and sectional repetitions, which often exists without access to staff notation. It would also be interesting to apply pattern discovery to music for which there is no staff notation.&lt;br /&gt;
&lt;br /&gt;
===This Is Intra-Opus Discovery, But What About Inter-Opus Discovery?===&lt;br /&gt;
&lt;br /&gt;
Inter-opus discovery, the discovery of patterns that recur across multiple pieces of music (Conklin &amp;amp; Anagnostopoulou, 2001), is an interesting problem, and one that we would be interested to see cast as a MIREX task in future. Currently, the lack of an appropriate ground truth is an issue here.&lt;br /&gt;
&lt;br /&gt;
===There Are Some Issues With the MIDI Files, Please Can You Clarify?===&lt;br /&gt;
&lt;br /&gt;
The MIDI files were created and are provided for the purposes of sonifying and checking the symbolic data, and are not intended to be used themselves for input to the pattern discovery algorithms (please see the folders called 'csv' and/or 'lisp' instead). They are not ideal for input for the following reasons: (1) correct pitch spelling is lost, whereas this is maintained by presenting MIDI note number and morphetic pitch number side by side in the 'csv' and 'lisp' folders; (2) each MIDI file is zeroed in the sense that it begins more or less immediately, even if the pattern occurrence it represents occurs halfway through a piece; (3) each MIDI file also contains one extra, very quiet, low note to avoid clipping in the sound file.&lt;br /&gt;
&lt;br /&gt;
===What is the Difference Between a Motif, a Theme, and a Repeated Section?===&lt;br /&gt;
&lt;br /&gt;
Dictionary definitions of '''motif''', '''theme''', and '''repeated section''' are given below. To make the definitions more concrete, I refer to the top system of [http://www.tomcollinsresearch.net/mirex-pattern-discovery-task.html#figure2 Figure 2]. In terms of ontime-pitch pairs, the motif here consists of {(2, C#5), (2.25, A4), (2.5, E5), (2.75, C#5), (3, A5)}, beginning on beat 3 of bar 2 and ending on beat 1 of bar 3. This is repeated an octave lower one bar later, and occurs with a slightly different intervallic configuration at the very beginning. The theme, according to Barlow and Morgenstern (1948), lasts from the upbeat of bar 1, to beat 2 of bar 4. Bars 5-8 are not shown in [http://www.tomcollinsresearch.net/mirex-pattern-discovery-task.html#figure2 Figure 2], but there is a repeated section consisting of bars 1-8. So one might infer from this example that typically a motif lasts less than one bar, a theme 4-8 bars, and a repeated section 8+ bars.&lt;br /&gt;
&lt;br /&gt;
According to Drabkin (2001a), a &amp;quot;motif may be of any size, and is most commonly regarded as the shortest subdivision of a theme or phrase that still maintains its identity as an idea.&amp;quot; A theme is the &amp;quot;musical material on which part or all of a work is based, usually having a recognizable melody and sometimes perceivable as a complete musical expression in itself&amp;quot; Drabkin (2001b). A repeated section is the &amp;quot;restatement of a portion of a musical composition of any length from a single bar to a whole section, or occasionally the whole piece. Since the Classical period, repeated passages have not usually been written out; instead they are enclosed within the signs ||: and :||&amp;quot; (Tilmouth, 2001).&lt;br /&gt;
&lt;br /&gt;
==Time And Hardware Limits==&lt;br /&gt;
&lt;br /&gt;
Try to make sure that your algorithm's runtime for the entire Development Database is 24 hours or less on a standard desktop computer; if so, there should be no need to place further limits on analysis times for the Test Database.&lt;br /&gt;
&lt;br /&gt;
==Potential Participants==&lt;br /&gt;
&lt;br /&gt;
*Please add name and email here.&lt;br /&gt;
*...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Acknowledgments==&lt;br /&gt;
&lt;br /&gt;
Thank you to the following for feedback on this task description: Ashley Burgoyne, Emilios Cambouropoulos, Darrell Conklin, Stephen Downie, Morwaread Farbood, Jamie Forth, Nanzhu Jiang, Ian Knopke, Olivier Lartillot, David Meredith, Oriol Nieto, Eleanor Selfridge-Field, and Geraint Wiggins.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
*Andreas Arzt, Sebastian Böck, and Gerhard Widmer. [http://www.cp.jku.at/research/papers/Arzt_etal_ISMIR_2012.pdf Fast identification of piece and score position via symbolic fingerprinting]. In F. Gouyon, P. Herrera, L.G. Martin, and M. Müller (Eds), ''Proc ISMIR'', pp. 433-438, Porto, 2012.&lt;br /&gt;
&lt;br /&gt;
*Harold Barlow and Sam Morgenstern. ''A dictionary of musical themes''. Crown Publishers, New York, 1948.&lt;br /&gt;
&lt;br /&gt;
*Siglind Bruhn. ''J.S. Bach's Well-Tempered Clavier: in-depth analysis and interpretation''. Mainer International, Hong Kong, 1993.&lt;br /&gt;
&lt;br /&gt;
*Carl Barton, Emilios Cambouropoulos, Costas S. Iliopoulos, and Zsuzsanna Lipták. Melodic string matching via interval consolidation and fragmentation. In L. Iliadis, I. Maglogiannis, H. Papadopoulos, K. Karatzas, and S. Sioutas (Eds), ''Artificial Intelligence Applications and Innovations'', pp. 460-469. Springer, Berlin, 2012.&lt;br /&gt;
&lt;br /&gt;
*Emilios Cambouropoulos. Musical parallelism and melodic segmentation: a computational approach. ''Music Perception'', 23(3):249-267, 2006.&lt;br /&gt;
&lt;br /&gt;
*Shih-Chuan Chiu, Man-Kwan Shan, Jiun-Long Huang, and Hua-Fu Li. Mining polyphonic repeating patterns from music data using bit-string based approaches. In R. Radhakrishnan and R. Yan (Eds), ''Proc IEEE International Conference on Multimedia and Expo'', pp. 1170-1173, New York, 2009.&lt;br /&gt;
&lt;br /&gt;
*Tom Collins, Jeremy Thurlow, Robin Laney, Alistair Willis, and Paul H. Garthwaite. [http://oro.open.ac.uk/21837/ A comparative evaluation of algorithms for discovering translational patterns in Baroque keyboard works]. In J.S. Downie and R. Veltkamp (Eds), ''Proc ISMIR'', pp. 3-8, Utrecht, 2010. [http://www.tomcollinsresearch.net/supporting-material.html Supporting material]&lt;br /&gt;
&lt;br /&gt;
*Tom Collins, Robin Laney, Alistair Willis, and Paul H. Garthwaite. [http://oro.open.ac.uk/24818/ Modeling pattern importance in Chopin's mazurkas]. ''Music Perception'', 28(4):387-414, 2011. [http://www.tomcollinsresearch.net/supporting-material.html Supporting material]&lt;br /&gt;
&lt;br /&gt;
*Darrell Conklin. Discovery of distinctive patterns in music. ''Intelligent Data Analysis'', 14(5):547-554, 2010a.&lt;br /&gt;
&lt;br /&gt;
*Darrell Conklin. Distinctive patterns in the first movement of Brahms' String Quartet in C minor. ''Journal of Mathematics and Music'', 4(2):85-92, 2010b.&lt;br /&gt;
&lt;br /&gt;
*Darrell Conklin and Christina Anagnostopoulou. Representation and discovery of multiple viewpoint patterns. In A. Schloss, R. Dannenberg, and P. Driessen (Eds), ''Proc ICMC'', pp. 479-485, Cuba, 2001.&lt;br /&gt;
&lt;br /&gt;
*William Drabkin. Motif. In S. Sadie and J. Tyrrell (Eds), &amp;quot;The new Grove dictionary of music and musicians&amp;quot;. Macmillan, London, UK, 2nd edition, 2001a.&lt;br /&gt;
&lt;br /&gt;
*William Drabkin. Theme. In S. Sadie and J. Tyrrell (Eds), &amp;quot;The new Grove dictionary of music and musicians&amp;quot;. Macmillan, London, UK, 2nd edition, 2001b.&lt;br /&gt;
&lt;br /&gt;
*Sebastian Flossmann, Werner Goebl, Maarten Grachten, Bernhard Niedermayer, and Gerhard Widmer. [http://www.cp.jku.at/research/papers/flossmann_etal_jnmr_2010.pdf The Magaloff project: an interim report]. ''Journal of New Music Research'', 39(4):363-377, 2010.&lt;br /&gt;
&lt;br /&gt;
*Jamie Forth and Geraint A. Wiggins. An approach for identifying salient repetition in multidimensional representations of polyphonic music. In J. Chan, J. Daykin, and M.S. Rahman (Eds), ''London algorithmics 2008: Theory and practice'', pp. 44-58. College Publications, London, 2009.&lt;br /&gt;
&lt;br /&gt;
*Jia-Lien Hsu, Chih-Chin Liu, and Arbee L.P. Chen. Discovering nontrivial repeating patterns in music data. ''IEEE Transactions on Multimedia'', 3(3):311-325, 2001.&lt;br /&gt;
&lt;br /&gt;
*Ian Knopke and Frauke Jürgensen. A system for identifying common melodic phrases in the masses of Palestrina. ''Journal of New Music Research'', 38(2):171-181, 2009.&lt;br /&gt;
&lt;br /&gt;
*Olivier Lartillot. Efficient extraction of closed motivic patterns in multidimensional symbolic representations of music. In J.D. Reiss and G.A. Wiggins (Eds), ''Proc ISMIR'', pp. 191-198, London, 2005.&lt;br /&gt;
&lt;br /&gt;
*Olivier Lartillot and Petri Toiviainen. Motivic matching strategies for automated pattern extraction. ''Musicae Scientiae'', Discussion Forum 4A:281-314, 2007.&lt;br /&gt;
&lt;br /&gt;
*Fred Lerdahl and Ray Jackendoff. ''A generative theory of tonal music''. MIT Press, Cambridge, MA, 1983.&lt;br /&gt;
&lt;br /&gt;
*Colin Meek and William P. Birmingham. Automatic thematic extractor. ''Journal of Intelligent Information Systems'', 21(1):9-33, 2003.&lt;br /&gt;
&lt;br /&gt;
*David Meredith, Kjell Lemström, and Geraint A. Wiggins. Algorithms for discovering repeated patterns in multidimensional representations of polyphonic music. ''Journal of New Music Research'', 31(4):321-345, 2002.&lt;br /&gt;
&lt;br /&gt;
*David Meredith. Point-set algorithms for pattern discovery and pattern matching in music. In T. Crawford and R. Veltkamp (Eds), ''Proc Dagstuhl Seminar on Content-Based Retrieval'', 23 pp., Dagstuhl, 2006.&lt;br /&gt;
&lt;br /&gt;
*Meinard Müller and Nanzhu Jiang. A scape plot representation for visualizing repetitive structures of music recordings. In F. Gouyon, P. Herrera, L.G. Martin, and M. Müller (Eds), ''Proc ISMIR'', pp. 97-102, Porto, 2012.&lt;br /&gt;
					 &lt;br /&gt;
*Oriol Nieto, Eric J. Humphrey, and Juan Pablo Bello. Compressing music recordings into audio summaries. In F. Gouyon, P. Herrera, L.G. Martin, and M. Müller (Eds), ''Proc ISMIR'', pp. 313-318, Porto, 2012.&lt;br /&gt;
&lt;br /&gt;
*Geoffroy Peeters. Sequence representation of music structure using higher-order similarity matrix and maximum-likelihood approach. In S. Dixon, D. Bainbridge, and R. Typke (Eds), ''Proc ISMIR'', pp. 35-40, Vienna, 2007.&lt;br /&gt;
&lt;br /&gt;
*Justin Salamon and Emilia Gómez. Melody extraction from polyphonic music signals using pitch contour characteristics. ''IEEE Transactions on Audio, Speech and Language Processing'', 20(6):1759-1770, 2012.&lt;br /&gt;
&lt;br /&gt;
*Heinrich Schenker. ''Harmony''. University of Chicago Press, London, 1954. (Translated by Elisabeth Mann Borgese and edited by Oswald Jonas. Original work published 1906 by Cotta, Stuttgart).&lt;br /&gt;
&lt;br /&gt;
*Arnold Schoenberg. ''Fundamentals of Musical Composition''. Faber and Faber, London, 1967.&lt;br /&gt;
&lt;br /&gt;
*Michael Tilmouth. Repeat. In S. Sadie and J. Tyrrell (Eds), &amp;quot;The new Grove dictionary of music and musicians&amp;quot;. Macmillan, London, UK, 2nd edition, 2001.&lt;br /&gt;
&lt;br /&gt;
*Avery Wang. An industrial strength audio search algorithm. In H.H. Hoos and D. Bainbridge (Eds), ''Proc ISMIR'', Baltimore, MD, 2003.&lt;br /&gt;
&lt;br /&gt;
*Geraint A. Wiggins. Computer-representation of music in the research environment. In T. Crawford and L. Gibson (Eds), ''Modern methods for musicology: prospects, proposals and realities'', pp. 7-22. Ashgate, Oxford, UK, 2007.&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9537</id>
		<title>Test</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9537"/>
		<updated>2013-08-13T19:55:48Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: /* Three-Layer Precision, Three-Layer Recall, and Three-Layer F1 Score */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;====Three-Layer Precision, Three-Layer Recall, and Three-Layer F1 Score====&lt;br /&gt;
&lt;br /&gt;
''This section needs fixing, it seems there are some problems with the'' math ''command''. See [http://www.tomcollinsresearch.net/mirex-pattern-discovery-task.html#evaluation here] for a working version.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Three-layer precision (&amp;lt;math&amp;gt;P_3&amp;lt;/math&amp;gt;), three-layer recall (&amp;lt;math&amp;gt;R_3&amp;lt;/math&amp;gt;), and three-layer &amp;lt;math&amp;gt;F_1&amp;lt;/math&amp;gt; score (&amp;lt;math&amp;gt;F_3&amp;lt;/math&amp;gt;) are defined as follows:&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
F_3(\Pi, \Xi) = \frac{2 P_3(\Pi, \Xi) R_3(\Pi, \Xi)}{P_3(\Pi, \Xi) + R_3(\Pi, \Xi)},&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
P_3(\Pi, \Xi) &amp;amp;=&amp;amp; \frac{1}{n_\mathcal{Q}} \sum_{j = 1}^{n_\mathcal{Q}}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9536</id>
		<title>Test</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9536"/>
		<updated>2013-08-13T19:55:20Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: /* Three-Layer Precision, Three-Layer Recall, and Three-Layer F1 Score */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;====Three-Layer Precision, Three-Layer Recall, and Three-Layer F1 Score====&lt;br /&gt;
&lt;br /&gt;
''This section needs fixing, it seems there are some problems with the'' math ''command''. See [http://www.tomcollinsresearch.net/mirex-pattern-discovery-task.html#evaluation here] for a working version.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Three-layer precision (&amp;lt;math&amp;gt;P_3&amp;lt;/math&amp;gt;), three-layer recall (&amp;lt;math&amp;gt;R_3&amp;lt;/math&amp;gt;), and three-layer &amp;lt;math&amp;gt;F_1&amp;lt;/math&amp;gt; score (&amp;lt;math&amp;gt;F_3&amp;lt;/math&amp;gt;) are defined as follows:&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
F_3(\Pi, \Xi) = \frac{2 P_3(\Pi, \Xi) R_3(\Pi, \Xi)}{P_3(\Pi, \Xi) + R_3(\Pi, \Xi)},&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
P_3(\Pi, \Xi) &amp;amp;=&amp;amp; \frac{1}{n_\mathcal{Q}} \sum_{j = 1}^{n_\mathcal{Q}}&lt;br /&gt;
\max \{ F_2(\mathcal{P}_i, \mathcal{Q}_j) \mid i = 1,\ldots, n_\mathcal{P} \},\\[.2cm]&lt;br /&gt;
R_3(\Pi, \Xi) &amp;amp;=&amp;amp; \frac{1}{n_\mathcal{P}} \sum_{i = 1}^{n_\mathcal{P}}&lt;br /&gt;
\max \{ F_2(\mathcal{P}_i, \mathcal{Q}_j) \mid j = 1,\ldots, n_\mathcal{Q} \},\\[.2cm]&lt;br /&gt;
F_2(\mathcal{P}, \mathcal{Q}) &amp;amp;=&amp;amp; \frac{2 P_2(\mathcal{P}, \mathcal{Q}) R_2(\mathcal{P}, \mathcal{Q})}&lt;br /&gt;
{P_2(\mathcal{P}, \mathcal{Q}) + R_2(\mathcal{P}, \mathcal{Q})},\\[.2cm]&lt;br /&gt;
P_2(\mathcal{P}, \mathcal{Q}) &amp;amp;=&amp;amp; \frac{1}{m_Q} \sum_{l = 1}^{m_Q}&lt;br /&gt;
\max \{ F_1(P_k, Q_l) \mid k = 1,\ldots, m_P \},\\[.2cm]&lt;br /&gt;
R_2(\mathcal{P}, \mathcal{Q}) &amp;amp;=&amp;amp; \frac{1}{m_P} \sum_{k = 1}^{n_P}&lt;br /&gt;
\max \{ F_1(P_k, Q_l) \mid l = 1,\ldots, m_Q \},\\[.2cm]&lt;br /&gt;
F_1(P, Q) &amp;amp;=&amp;amp; \frac{2 P_1(P, Q) R_1(P, Q)}{P_1(P, Q) + R_1(P, Q)},\\[.2cm]&lt;br /&gt;
P_1(P, Q) &amp;amp;=&amp;amp; |P \cap Q|/|Q|,\\[.2cm]&lt;br /&gt;
R_1(P, Q) &amp;amp;=&amp;amp; |P \cap Q|/|P|.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9535</id>
		<title>Test</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9535"/>
		<updated>2013-08-13T19:55:10Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: /* Three-Layer Precision, Three-Layer Recall, and Three-Layer F1 Score */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;====Three-Layer Precision, Three-Layer Recall, and Three-Layer F1 Score====&lt;br /&gt;
&lt;br /&gt;
''This section needs fixing, it seems there are some problems with the'' math ''command''. See [http://www.tomcollinsresearch.net/mirex-pattern-discovery-task.html#evaluation here] for a working version.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Three-layer precision (&amp;lt;math&amp;gt;P_3&amp;lt;/math&amp;gt;), three-layer recall (&amp;lt;math&amp;gt;R_3&amp;lt;/math&amp;gt;), and three-layer &amp;lt;math&amp;gt;F_1&amp;lt;/math&amp;gt; score (&amp;lt;math&amp;gt;F_3&amp;lt;/math&amp;gt;) are defined as follows:&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
F_3(\Pi, \Xi) = \frac{2 P_3(\Pi, \Xi) R_3(\Pi, \Xi)}{P_3(\Pi, \Xi) + R_3(\Pi, \Xi)},&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{eqnarray}&lt;br /&gt;
P_3(\Pi, \Xi) &amp;amp;=&amp;amp; \frac{1}{n_\mathcal{Q}} \sum_{j = 1}^{n_\mathcal{Q}}&lt;br /&gt;
\max \{ F_2(\mathcal{P}_i, \mathcal{Q}_j) \mid i = 1,\ldots, n_\mathcal{P} \},\\[.2cm]&lt;br /&gt;
R_3(\Pi, \Xi) &amp;amp;=&amp;amp; \frac{1}{n_\mathcal{P}} \sum_{i = 1}^{n_\mathcal{P}}&lt;br /&gt;
\max \{ F_2(\mathcal{P}_i, \mathcal{Q}_j) \mid j = 1,\ldots, n_\mathcal{Q} \},\\[.2cm]&lt;br /&gt;
F_2(\mathcal{P}, \mathcal{Q}) &amp;amp;=&amp;amp; \frac{2 P_2(\mathcal{P}, \mathcal{Q}) R_2(\mathcal{P}, \mathcal{Q})}&lt;br /&gt;
{P_2(\mathcal{P}, \mathcal{Q}) + R_2(\mathcal{P}, \mathcal{Q})},\\[.2cm]&lt;br /&gt;
P_2(\mathcal{P}, \mathcal{Q}) &amp;amp;=&amp;amp; \frac{1}{m_Q} \sum_{l = 1}^{m_Q}&lt;br /&gt;
\max \{ F_1(P_k, Q_l) \mid k = 1,\ldots, m_P \},\\[.2cm]&lt;br /&gt;
R_2(\mathcal{P}, \mathcal{Q}) &amp;amp;=&amp;amp; \frac{1}{m_P} \sum_{k = 1}^{m_P}&lt;br /&gt;
\max \{ F_1(P_k, Q_l) \mid l = 1,\ldots, m_Q \},\\[.2cm]&lt;br /&gt;
F_1(P, Q) &amp;amp;=&amp;amp; \frac{2 P_1(P, Q) R_1(P, Q)}{P_1(P, Q) + R_1(P, Q)},\\[.2cm]&lt;br /&gt;
P_1(P, Q) &amp;amp;=&amp;amp; |P \cap Q|/|Q|,\\[.2cm]&lt;br /&gt;
R_1(P, Q) &amp;amp;=&amp;amp; |P \cap Q|/|P|.&lt;br /&gt;
\end{eqnarray}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9534</id>
		<title>Test</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9534"/>
		<updated>2013-08-13T19:54:57Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;====Three-Layer Precision, Three-Layer Recall, and Three-Layer F1 Score====&lt;br /&gt;
&lt;br /&gt;
''This section needs fixing; it seems there are some problems with the'' math ''command''. See [http://www.tomcollinsresearch.net/mirex-pattern-discovery-task.html#evaluation here] for a working version.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Three-layer precision (&amp;lt;math&amp;gt;P_3&amp;lt;/math&amp;gt;), three-layer recall (&amp;lt;math&amp;gt;R_3&amp;lt;/math&amp;gt;), and three-layer &amp;lt;math&amp;gt;F_1&amp;lt;/math&amp;gt; score (&amp;lt;math&amp;gt;F_3&amp;lt;/math&amp;gt;) are defined as follows:&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{equation}&lt;br /&gt;
F_3(\Pi, \Xi) = \frac{2 P_3(\Pi, \Xi) R_3(\Pi, \Xi)}{P_3(\Pi, \Xi) + R_3(\Pi, \Xi)},&lt;br /&gt;
\end{equation}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{eqnarray}&lt;br /&gt;
P_3(\Pi, \Xi) &amp;amp;=&amp;amp; \frac{1}{n_\mathcal{Q}} \sum_{j = 1}^{n_\mathcal{Q}}&lt;br /&gt;
\max \{ F_2(\mathcal{P}_i, \mathcal{Q}_j) \mid i = 1,\ldots, n_\mathcal{P} \},\\[.2cm]&lt;br /&gt;
R_3(\Pi, \Xi) &amp;amp;=&amp;amp; \frac{1}{n_\mathcal{P}} \sum_{i = 1}^{n_\mathcal{P}}&lt;br /&gt;
\max \{ F_2(\mathcal{P}_i, \mathcal{Q}_j) \mid j = 1,\ldots, n_\mathcal{Q} \},\\[.2cm]&lt;br /&gt;
F_2(\mathcal{P}, \mathcal{Q}) &amp;amp;=&amp;amp; \frac{2 P_2(\mathcal{P}, \mathcal{Q}) R_2(\mathcal{P}, \mathcal{Q})}&lt;br /&gt;
{P_2(\mathcal{P}, \mathcal{Q}) + R_2(\mathcal{P}, \mathcal{Q})},\\[.2cm]&lt;br /&gt;
P_2(\mathcal{P}, \mathcal{Q}) &amp;amp;=&amp;amp; \frac{1}{m_Q} \sum_{l = 1}^{m_Q}&lt;br /&gt;
\max \{ F_1(P_k, Q_l) \mid k = 1,\ldots, m_P \},\\[.2cm]&lt;br /&gt;
R_2(\mathcal{P}, \mathcal{Q}) &amp;amp;=&amp;amp; \frac{1}{m_P} \sum_{k = 1}^{m_P}&lt;br /&gt;
\max \{ F_1(P_k, Q_l) \mid l = 1,\ldots, m_Q \},\\[.2cm]&lt;br /&gt;
F_1(P, Q) &amp;amp;=&amp;amp; \frac{2 P_1(P, Q) R_1(P, Q)}{P_1(P, Q) + R_1(P, Q)},\\[.2cm]&lt;br /&gt;
P_1(P, Q) &amp;amp;=&amp;amp; |P \cap Q|/|Q|,\\[.2cm]&lt;br /&gt;
R_1(P, Q) &amp;amp;=&amp;amp; |P \cap Q|/|P|.&lt;br /&gt;
\end{eqnarray}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9533</id>
		<title>Test</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9533"/>
		<updated>2013-08-13T19:31:33Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;math&amp;gt;&lt;br /&gt;
F_3(\Pi, \Xi) = \frac{2 P_3(\Pi, \Xi) R_3(\Pi, \Xi)}{P_3(\Pi, \Xi) + R_3(\Pi, \Xi)},&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9532</id>
		<title>Test</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9532"/>
		<updated>2013-08-13T19:30:33Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: /* Three-Layer Precision, Three-Layer Recall, and Three-Layer F1 Score */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{equation}&lt;br /&gt;
F_3(\Pi, \Xi) = \frac{2 P_3(\Pi, \Xi) R_3(\Pi, \Xi)}{P_3(\Pi, \Xi) + R_3(\Pi, \Xi)},&lt;br /&gt;
\end{equation}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9531</id>
		<title>Test</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9531"/>
		<updated>2013-08-13T19:27:30Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;====Three-Layer Precision, Three-Layer Recall, and Three-Layer F1 Score====&lt;br /&gt;
&lt;br /&gt;
''This section needs fixing; it seems there are some problems with the'' math ''command''. See [http://www.tomcollinsresearch.net/mirex-pattern-discovery-task.html#evaluation here] for a working version.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;F_3&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9530</id>
		<title>Test</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9530"/>
		<updated>2013-08-13T19:27:21Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;====Three-Layer Precision, Three-Layer Recall, and Three-Layer F1 Score====&lt;br /&gt;
&lt;br /&gt;
''This section needs fixing; it seems there are some problems with the'' math ''command''. See [http://www.tomcollinsresearch.net/mirex-pattern-discovery-task.html#evaluation here] for a working version.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;F_3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Three-layer precision (&amp;lt;math&amp;gt;P_3&amp;lt;/math&amp;gt;), three-layer recall (&amp;lt;math&amp;gt;R_3&amp;lt;/math&amp;gt;), and three-layer &amp;lt;math&amp;gt;F_1&amp;lt;/math&amp;gt; score (&amp;lt;math&amp;gt;F_3&amp;lt;/math&amp;gt;) are defined as follows:&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{equation}&lt;br /&gt;
F_3(\Pi, \Xi) = \frac{2 P_3(\Pi, \Xi) R_3(\Pi, \Xi)}{P_3(\Pi, \Xi) + R_3(\Pi, \Xi)},&lt;br /&gt;
\end{equation}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9529</id>
		<title>Test</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9529"/>
		<updated>2013-08-13T19:08:30Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: /* Three-Layer Precision, Three-Layer Recall, and Three-Layer F1 Score */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;====Three-Layer Precision, Three-Layer Recall, and Three-Layer F1 Score====&lt;br /&gt;
&lt;br /&gt;
''This section needs fixing; it seems there are some problems with the'' math ''command''. See [http://www.tomcollinsresearch.net/mirex-pattern-discovery-task.html#evaluation here] for a working version.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;F_3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Three-layer precision (&amp;lt;math&amp;gt;P_3&amp;lt;/math&amp;gt;), three-layer recall (&amp;lt;math&amp;gt;R_3&amp;lt;/math&amp;gt;), and three-layer &amp;lt;math&amp;gt;F_1&amp;lt;/math&amp;gt; score (&amp;lt;math&amp;gt;F_3&amp;lt;/math&amp;gt;) are defined as follows:&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{equation}&lt;br /&gt;
F_3(\Pi, \Xi) = \frac{2 P_3(\Pi, \Xi) R_3(\Pi, \Xi)}{P_3(\Pi, \Xi) + R_3(\Pi, \Xi)},&lt;br /&gt;
\end{equation}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{eqnarray}&lt;br /&gt;
P_3(\Pi, \Xi) &amp;amp;=&amp;amp; \frac{1}{n_\mathcal{Q}} \sum_{j = 1}^{n_\mathcal{Q}}&lt;br /&gt;
\max \{ F_2(\mathcal{P}_i, \mathcal{Q}_j) \mid i = 1,\ldots, n_\mathcal{P} \},\\[.2cm]&lt;br /&gt;
R_3(\Pi, \Xi) &amp;amp;=&amp;amp; \frac{1}{n_\mathcal{P}} \sum_{i = 1}^{n_\mathcal{P}}&lt;br /&gt;
\max \{ F_2(\mathcal{P}_i, \mathcal{Q}_j) \mid j = 1,\ldots, n_\mathcal{Q} \},\\[.2cm]&lt;br /&gt;
F_2(\mathcal{P}, \mathcal{Q}) &amp;amp;=&amp;amp; \frac{2 P_2(\mathcal{P}, \mathcal{Q}) R_2(\mathcal{P}, \mathcal{Q})}&lt;br /&gt;
{P_2(\mathcal{P}, \mathcal{Q}) + R_2(\mathcal{P}, \mathcal{Q})},\\[.2cm]&lt;br /&gt;
P_2(\mathcal{P}, \mathcal{Q}) &amp;amp;=&amp;amp; \frac{1}{m_Q} \sum_{l = 1}^{m_Q}&lt;br /&gt;
\max \{ F_1(P_k, Q_l) \mid k = 1,\ldots, m_P \},\\[.2cm]&lt;br /&gt;
R_2(\mathcal{P}, \mathcal{Q}) &amp;amp;=&amp;amp; \frac{1}{m_P} \sum_{k = 1}^{m_P}&lt;br /&gt;
\max \{ F_1(P_k, Q_l) \mid l = 1,\ldots, m_Q \},\\[.2cm]&lt;br /&gt;
F_1(P, Q) &amp;amp;=&amp;amp; \frac{2 P_1(P, Q) R_1(P, Q)}{P_1(P, Q) + R_1(P, Q)},\\[.2cm]&lt;br /&gt;
P_1(P, Q) &amp;amp;=&amp;amp; |P \cap Q|/|Q|,\\[.2cm]&lt;br /&gt;
R_1(P, Q) &amp;amp;=&amp;amp; |P \cap Q|/|P|.&lt;br /&gt;
\end{eqnarray}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9528</id>
		<title>Test</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9528"/>
		<updated>2013-08-13T19:08:15Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: /* Three-Layer Precision, Three-Layer Recall, and Three-Layer F1 Score */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;====Three-Layer Precision, Three-Layer Recall, and Three-Layer F1 Score====&lt;br /&gt;
&lt;br /&gt;
''This section needs fixing; it seems there are some problems with the'' math ''command''. See [http://www.tomcollinsresearch.net/mirex-pattern-discovery-task.html#evaluation here] for a working version.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Three-layer precision (&amp;lt;math&amp;gt;P_3&amp;lt;/math&amp;gt;), three-layer recall (&amp;lt;math&amp;gt;R_3&amp;lt;/math&amp;gt;), and three-layer &amp;lt;math&amp;gt;F_1&amp;lt;/math&amp;gt; score (&amp;lt;math&amp;gt;F_3&amp;lt;/math&amp;gt;) are defined as follows:&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{equation}&lt;br /&gt;
F_3(\Pi, \Xi) = \frac{2 P_3(\Pi, \Xi) R_3(\Pi, \Xi)}{P_3(\Pi, \Xi) + R_3(\Pi, \Xi)},&lt;br /&gt;
\end{equation}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{eqnarray}&lt;br /&gt;
P_3(\Pi, \Xi) &amp;amp;=&amp;amp; \frac{1}{n_\mathcal{Q}} \sum_{j = 1}^{n_\mathcal{Q}}&lt;br /&gt;
\max \{ F_2(\mathcal{P}_i, \mathcal{Q}_j) \mid i = 1,\ldots, n_\mathcal{P} \},\\[.2cm]&lt;br /&gt;
R_3(\Pi, \Xi) &amp;amp;=&amp;amp; \frac{1}{n_\mathcal{P}} \sum_{i = 1}^{n_\mathcal{P}}&lt;br /&gt;
\max \{ F_2(\mathcal{P}_i, \mathcal{Q}_j) \mid j = 1,\ldots, n_\mathcal{Q} \},\\[.2cm]&lt;br /&gt;
F_2(\mathcal{P}, \mathcal{Q}) &amp;amp;=&amp;amp; \frac{2 P_2(\mathcal{P}, \mathcal{Q}) R_2(\mathcal{P}, \mathcal{Q})}&lt;br /&gt;
{P_2(\mathcal{P}, \mathcal{Q}) + R_2(\mathcal{P}, \mathcal{Q})},\\[.2cm]&lt;br /&gt;
P_2(\mathcal{P}, \mathcal{Q}) &amp;amp;=&amp;amp; \frac{1}{m_Q} \sum_{l = 1}^{m_Q}&lt;br /&gt;
\max \{ F_1(P_k, Q_l) \mid k = 1,\ldots, m_P \},\\[.2cm]&lt;br /&gt;
R_2(\mathcal{P}, \mathcal{Q}) &amp;amp;=&amp;amp; \frac{1}{m_P} \sum_{k = 1}^{m_P}&lt;br /&gt;
\max \{ F_1(P_k, Q_l) \mid l = 1,\ldots, m_Q \},\\[.2cm]&lt;br /&gt;
F_1(P, Q) &amp;amp;=&amp;amp; \frac{2 P_1(P, Q) R_1(P, Q)}{P_1(P, Q) + R_1(P, Q)},\\[.2cm]&lt;br /&gt;
P_1(P, Q) &amp;amp;=&amp;amp; |P \cap Q|/|Q|,\\[.2cm]&lt;br /&gt;
R_1(P, Q) &amp;amp;=&amp;amp; |P \cap Q|/|P|.&lt;br /&gt;
\end{eqnarray}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9527</id>
		<title>Test</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9527"/>
		<updated>2013-08-13T19:06:18Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;====Three-Layer Precision, Three-Layer Recall, and Three-Layer F1 Score====&lt;br /&gt;
&lt;br /&gt;
''This section needs fixing; it seems there are some problems with the'' math ''command''. See [http://www.tomcollinsresearch.net/mirex-pattern-discovery-task.html#evaluation here] for a working version.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Three-layer precision (&amp;lt;math&amp;gt;P_3&amp;lt;/math&amp;gt;), three-layer recall (&amp;lt;math&amp;gt;R_3&amp;lt;/math&amp;gt;), and three-layer &amp;lt;math&amp;gt;F_1&amp;lt;/math&amp;gt; score (&amp;lt;math&amp;gt;F_3&amp;lt;/math&amp;gt;) are defined as follows:&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{equation}&lt;br /&gt;
F_3(\Pi, \Xi) = \frac{2 P_3(\Pi, \Xi) R_3(\Pi, \Xi)}{P_3(\Pi, \Xi) + R_3(\Pi, \Xi)},&lt;br /&gt;
\end{equation}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{eqnarray}&lt;br /&gt;
P_3(\Pi, \Xi) &amp;amp;=&amp;amp; \frac{1}{n_\mathcal{Q}} \sum_{j = 1}^{n_\mathcal{Q}}&lt;br /&gt;
\max \{ F_2(\mathcal{P}_i, \mathcal{Q}_j) \mid i = 1,\ldots, n_\mathcal{P} \},\\[.2cm]&lt;br /&gt;
R_3(\Pi, \Xi) &amp;amp;=&amp;amp; \frac{1}{n_\mathcal{P}} \sum_{i = 1}^{n_\mathcal{P}}&lt;br /&gt;
\max \{ F_2(\mathcal{P}_i, \mathcal{Q}_j) \mid j = 1,\ldots, n_\mathcal{Q} \},\\[.2cm]&lt;br /&gt;
F_2(\mathcal{P}, \mathcal{Q}) &amp;amp;=&amp;amp; \frac{2 P_2(\mathcal{P}, \mathcal{Q}) R_2(\mathcal{P}, \mathcal{Q})}&lt;br /&gt;
{P_2(\mathcal{P}, \mathcal{Q}) + R_2(\mathcal{P}, \mathcal{Q})},\\[.2cm]&lt;br /&gt;
P_2(\mathcal{P}, \mathcal{Q}) &amp;amp;=&amp;amp; \frac{1}{m_Q} \sum_{l = 1}^{m_Q}&lt;br /&gt;
\max \{ F_1(P_k, Q_l) \mid k = 1,\ldots, m_P \},\\[.2cm]&lt;br /&gt;
R_2(\mathcal{P}, \mathcal{Q}) &amp;amp;=&amp;amp; \frac{1}{m_P} \sum_{k = 1}^{m_P}&lt;br /&gt;
\max \{ F_1(P_k, Q_l) \mid l = 1,\ldots, m_Q \},\\[.2cm]&lt;br /&gt;
F_1(P, Q) &amp;amp;=&amp;amp; \frac{2 P_1(P, Q) R_1(P, Q)}{P_1(P, Q) + R_1(P, Q)},\\[.2cm]&lt;br /&gt;
P_1(P, Q) &amp;amp;=&amp;amp; |P \cap Q|/|Q|,\\[.2cm]&lt;br /&gt;
R_1(P, Q) &amp;amp;=&amp;amp; |P \cap Q|/|P|.&lt;br /&gt;
\end{eqnarray}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9526</id>
		<title>Test</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9526"/>
		<updated>2013-08-13T19:05:08Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;math&amp;gt;0&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9525</id>
		<title>Test</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9525"/>
		<updated>2013-08-13T19:03:05Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9524</id>
		<title>Test</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9524"/>
		<updated>2013-08-13T19:01:15Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;math&amp;gt;\frac{dy}{dx}\,\!&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9523</id>
		<title>Test</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9523"/>
		<updated>2013-08-13T18:17:59Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;math&amp;gt;x=y&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9522</id>
		<title>Test</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=Test&amp;diff=9522"/>
		<updated>2013-08-13T18:17:38Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: Created page with &amp;quot;&amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt;&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9467</id>
		<title>2013:Task Captains</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9467"/>
		<updated>2013-07-09T20:32:13Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In response to discussions at ISMIR 2012, we are prepared to improve the distribution of tasks for the upcoming MIREX 2013.  To do so, we really need leaders to help us organize and run each task.&lt;br /&gt;
&lt;br /&gt;
To volunteer to lead one or more tasks, please add your name in the &amp;quot;Captains&amp;quot; column.&lt;br /&gt;
&lt;br /&gt;
What does it mean to lead a task?&lt;br /&gt;
* Update wiki pages as needed&lt;br /&gt;
* Communicate with submitters and troubleshoot submissions&lt;br /&gt;
* Execute and evaluate submissions&lt;br /&gt;
* Publish final results&lt;br /&gt;
&lt;br /&gt;
Due to the proprietary nature of much of the data, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to provide access to task organizers to manage and run submissions on the IMIRSEL systems.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: 20px&amp;quot;&lt;br /&gt;
!ID !! Task !! Captain(s)&lt;br /&gt;
|-&lt;br /&gt;
|abt&lt;br /&gt;
|[[2013:Audio Beat Tracking]]&lt;br /&gt;
|Fu-Hai Frank Wu, (Andreas Ehmann?)&lt;br /&gt;
|-&lt;br /&gt;
|ace&lt;br /&gt;
|[[2013:Audio Chord Estimation]]&lt;br /&gt;
|John Ashley Burgoyne, W. Bas de Haas, Johan Pauwels&lt;br /&gt;
|-&lt;br /&gt;
|act&lt;br /&gt;
|[[2013:Audio Classification (Train/Test) Tasks]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|acs&lt;br /&gt;
|[[2013:Audio Cover Song Identification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|akd&lt;br /&gt;
|[[2013:Audio Key Detection]]&lt;br /&gt;
|Johan Pauwels&lt;br /&gt;
|-&lt;br /&gt;
|ame&lt;br /&gt;
|[[2013:Audio Melody Extraction]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|ams&lt;br /&gt;
|[[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|aod&lt;br /&gt;
|[[2013:Audio Onset Detection]]&lt;br /&gt;
|Sebastian Böck, (Andreas Ehmann?)&lt;br /&gt;
|-&lt;br /&gt;
|ate&lt;br /&gt;
|[[2013:Audio Tempo Estimation]]&lt;br /&gt;
|Aggelos Gkiokas, Anders Elowsson&lt;br /&gt;
|-&lt;br /&gt;
|atg&lt;br /&gt;
|[[2013:Audio Tag Classification]]&lt;br /&gt;
|(Xiao Hu?)&lt;br /&gt;
|-&lt;br /&gt;
|mf0&lt;br /&gt;
|[[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
|Mert Bay&lt;br /&gt;
|-&lt;br /&gt;
|qbsh&lt;br /&gt;
|[[2013:Query by Singing/Humming]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|qbt&lt;br /&gt;
|[[2013:Query by Tapping]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|scofo&lt;br /&gt;
|[[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sms&lt;br /&gt;
|[[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|struct&lt;br /&gt;
|[[2013:Structural Segmentation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|drts&lt;br /&gt;
|[[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
|Tom Collins&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=Running_MIREX&amp;diff=9423</id>
		<title>Running MIREX</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=Running_MIREX&amp;diff=9423"/>
		<updated>2013-06-16T20:28:29Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: Created page with &amp;quot;Documentation for Running MIREX tasks&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Documentation for Running MIREX tasks&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Main_Page&amp;diff=9405</id>
		<title>2013:Main Page</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Main_Page&amp;diff=9405"/>
		<updated>2013-06-12T13:27:48Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Welcome to MIREX 2013==&lt;br /&gt;
&lt;br /&gt;
This is the main page for the ninth running of the Music Information Retrieval Evaluation eXchange (MIREX 2013). The International Music Information Retrieval Systems Evaluation Laboratory ([https://music-ir.org/evaluation IMIRSEL]) at the Graduate School of Library and Information Science ([http://www.lis.illinois.edu GSLIS]), University of Illinois at Urbana-Champaign ([http://www.illinois.edu UIUC]) is the principal organizer of MIREX 2013. &lt;br /&gt;
&lt;br /&gt;
The MIREX 2013 community will hold its annual meeting as part of [http://ismir2013.ismir.net/ The 14th International Conference on Music Information Retrieval], ISMIR 2013, which will be held in Curitiba, PR, Brazil, 4-8 November 2013. The MIREX plenary and poster sessions will be held during the conference.&lt;br /&gt;
&lt;br /&gt;
J. Stephen Downie&amp;lt;br&amp;gt;&lt;br /&gt;
Director, IMIRSEL&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==New Task Leadership Model==&lt;br /&gt;
&lt;br /&gt;
In response to discussions at ISMIR 2012, we are prepared to improve the distribution of tasks for the upcoming MIREX 2013.  To do so, we really need leaders to help us organize and run each task.&lt;br /&gt;
&lt;br /&gt;
To volunteer to lead a task, please add your name to the &amp;quot;Captains&amp;quot; column on the new [[2013:Task Captains]] page. Please direct any communication to the [https://mail.lis.illinois.edu/mailman/listinfo/evalfest EvalFest] mailing list.&lt;br /&gt;
&lt;br /&gt;
What does it mean to lead a task?&lt;br /&gt;
* Update wiki pages as needed&lt;br /&gt;
* Communicate with submitters and troubleshoot submissions&lt;br /&gt;
* Execute and evaluate submissions&lt;br /&gt;
* Publish final results&lt;br /&gt;
&lt;br /&gt;
Due to the proprietary nature of much of the data, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to provide access to task organizers to manage and run submissions on the IMIRSEL systems.&lt;br /&gt;
&lt;br /&gt;
We really need leaders to help us this year!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Deadline Dates==&lt;br /&gt;
&lt;br /&gt;
TBA: 2013 dates have not been established.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;&amp;lt;b&amp;gt;Nota Bene:&amp;lt;/b&amp;gt; &amp;lt;/i&amp;gt;In the past we have been rather flexible about deadlines. This year, however, we simply do not have the time flexibility, sorry.&lt;br /&gt;
&lt;br /&gt;
Please, please, please, let's start getting those submissions made. The sooner we have the code, the sooner we can start running the evaluations.&lt;br /&gt;
&lt;br /&gt;
PS: If you have a slower running algorithm, help us help you by getting your code in ASAP. Please do pay attention to runtime limits.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Submission Instructions==&lt;br /&gt;
* Be sure to read through the rest of this page&lt;br /&gt;
* Be sure to read through the task pages for which you are submitting&lt;br /&gt;
* Be sure to follow the [[2009:Best Coding Practices for MIREX | Best Coding Practices for MIREX]]&lt;br /&gt;
* Be sure to follow the  [[MIREX 2013 Submission Instructions]] including both the tutorial video and the text&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Possible Evaluation Tasks==&lt;br /&gt;
&lt;br /&gt;
* [[2013:Audio Classification (Train/Test) Tasks]], incorporating:&lt;br /&gt;
** Audio US Pop Genre Classification&lt;br /&gt;
** Audio Latin Genre Classification&lt;br /&gt;
** Audio Music Mood Classification&lt;br /&gt;
** Audio Classical Composer Identification&lt;br /&gt;
* [[2013:Audio Cover Song Identification]]&lt;br /&gt;
* [[2013:Audio Tag Classification]] &lt;br /&gt;
* [[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
* [[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
* [[2013:Audio Onset Detection]]&lt;br /&gt;
* [[2013:Audio Key Detection]]&lt;br /&gt;
* [[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
* [[2013:Query by Singing/Humming]]&lt;br /&gt;
* [[2013:Audio Melody Extraction]]&lt;br /&gt;
* [[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
* [[2013:Audio Chord Estimation]]&lt;br /&gt;
* [[2013:Query by Tapping]]&lt;br /&gt;
* [[2013:Audio Beat Tracking]]&lt;br /&gt;
* [[2013:Structural Segmentation]]&lt;br /&gt;
* [[2013:Audio Tempo Estimation]]&lt;br /&gt;
* [[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Note to New Participants===&lt;br /&gt;
Please take the time to read the following review articles that explain the history and structure of MIREX.&lt;br /&gt;
&lt;br /&gt;
Downie, J. Stephen (2008). The Music Information Retrieval Evaluation Exchange (2005-2007):&amp;lt;br&amp;gt;&lt;br /&gt;
A window into music information retrieval research. ''Acoustical Science and Technology 29'' (4): 247-255. &amp;lt;br&amp;gt;&lt;br /&gt;
Available at: [http://dx.doi.org/10.1250/ast.29.247 http://dx.doi.org/10.1250/ast.29.247]&lt;br /&gt;
&lt;br /&gt;
Downie, J. Stephen, Andreas F. Ehmann, Mert Bay and M. Cameron Jones. (2010).&amp;lt;br&amp;gt;&lt;br /&gt;
The Music Information Retrieval Evaluation eXchange: Some Observations and Insights.&amp;lt;br&amp;gt;&lt;br /&gt;
''Advances in Music Information Retrieval'' Vol. 274, pp. 93-115&amp;lt;br&amp;gt;&lt;br /&gt;
Available at: [http://bit.ly/KpM5u5 http://bit.ly/KpM5u5]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Runtime Limits===&lt;br /&gt;
&lt;br /&gt;
We reserve the right to stop any process that exceeds runtime limits for each task.  We will do our best to notify you in enough time to allow revisions, but this may not be possible in some cases. Please respect the published runtime limits.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Note to All Participants===&lt;br /&gt;
&lt;br /&gt;
Because MIREX is premised upon the sharing of ideas and results, '''ALL''' MIREX participants are expected to:&lt;br /&gt;
&lt;br /&gt;
# submit a DRAFT 2-3 page extended abstract PDF in the ISMIR format, at the time of submitting their programme(s), describing the submitted programme(s), to help us and the community better understand how the algorithm works.&lt;br /&gt;
# submit a FINALIZED 2-3 page extended abstract PDF in the ISMIR format prior to ISMIR 2013 for posting on the respective results pages (sometimes the same abstract can be used for multiple submissions; in many cases the DRAFT and FINALIZED abstracts are the same)&lt;br /&gt;
# present a poster at the MIREX 2013 poster session at ISMIR 2013&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Software Dependency Requests===&lt;br /&gt;
If you have not submitted to MIREX before, or are unsure whether IMIRSEL currently supports some of the software/architecture dependencies for your submission, a [https://docs.google.com/spreadsheet/viewform?formkey=dFpmNF9PUGdvd1o1OHVhMkZ4cXZvdkE6MA#gid=0 dependency request form is available]. Please submit details of your dependencies on this form and the IMIRSEL team will attempt to satisfy them for you. &lt;br /&gt;
&lt;br /&gt;
Due to the high volume of submissions expected at MIREX 2013, submissions with difficult-to-satisfy dependencies of which the team has not been given sufficient notice may be rejected.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Finally, you will also be expected to detail your software/architecture dependencies in a README file to be provided to the submission system.&lt;br /&gt;
&lt;br /&gt;
==Getting Involved in MIREX 2013==&lt;br /&gt;
MIREX is a community-based endeavour. Be a part of the community and help make MIREX 2013 the best yet.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Mailing List Participation===&lt;br /&gt;
If you are interested in formal MIR evaluation, you should also subscribe to the &amp;quot;MIREX&amp;quot; (aka &amp;quot;EvalFest&amp;quot;) mail list and participate in the community discussions about defining and running MIREX 2013 tasks. Subscription information at: &lt;br /&gt;
[https://mail.lis.illinois.edu/mailman/listinfo/evalfest EvalFest Central]. &lt;br /&gt;
&lt;br /&gt;
If you are participating in MIREX 2013, it is VERY IMPORTANT that you are subscribed to EvalFest. Deadlines, task updates and other important information will be announced via this mailing list. Please use EvalFest for discussion of MIREX task proposals and other MIREX-related issues. This wiki (the MIREX 2013 wiki) will be used to embody and disseminate task proposals; however, task-related discussions should be conducted on the MIREX organization mailing list (EvalFest) rather than on this wiki, and then summarized here. &lt;br /&gt;
&lt;br /&gt;
Where possible, definitions or example code for new evaluation metrics or tasks should be provided to the IMIRSEL team, who will embody them in software as part of the NEMA analytics framework, to be released to the community at or before ISMIR 2013, providing a standardised set of interfaces and outputs for disciplined evaluation procedures across a great many MIR tasks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Wiki Participation===&lt;br /&gt;
If you find that you cannot edit a MIREX wiki page, you will need to create a new account via: [[Special:Userlogin]].&lt;br /&gt;
&lt;br /&gt;
Please note that because of &amp;quot;spam-bots&amp;quot;, MIREX wiki registration requests may be moderated by IMIRSEL members. It might take up to 24 hours for approval (Thank you for your patience!).&lt;br /&gt;
&lt;br /&gt;
==MIREX 2005 - 2012 Wikis==&lt;br /&gt;
Content from MIREX 2005 - 2012 is available at:&lt;br /&gt;
'''[[2012:Main_Page|MIREX 2012]]''' &lt;br /&gt;
'''[[2011:Main_Page|MIREX 2011]]''' &lt;br /&gt;
'''[[2010:Main_Page|MIREX 2010]]''' &lt;br /&gt;
'''[[2009:Main_Page|MIREX 2009]]''' &lt;br /&gt;
'''[[2008:Main_Page|MIREX 2008]]''' &lt;br /&gt;
'''[[2007:Main_Page|MIREX 2007]]''' &lt;br /&gt;
'''[[2006:Main_Page|MIREX 2006]]''' &lt;br /&gt;
'''[[2005:Main_Page|MIREX 2005]]'''&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Main_Page&amp;diff=9404</id>
		<title>2013:Main Page</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Main_Page&amp;diff=9404"/>
		<updated>2013-06-12T13:26:43Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: /* New Task Leadership Model */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Welcome to MIREX 2013==&lt;br /&gt;
&lt;br /&gt;
This is the main page for the ninth running of the Music Information Retrieval Evaluation eXchange (MIREX 2013). The International Music Information Retrieval Systems Evaluation Laboratory ([https://music-ir.org/evaluation IMIRSEL]) at the Graduate School of Library and Information Science ([http://www.lis.illinois.edu GSLIS]), University of Illinois at Urbana-Champaign ([http://www.illinois.edu UIUC]) is the principal organizer of MIREX 2013. &lt;br /&gt;
&lt;br /&gt;
The MIREX 2013 community will hold its annual meeting as part of [http://ismir2013.ismir.net/ The 14th International Conference on Music Information Retrieval], ISMIR 2013, which will be held in Curitiba, PR, Brazil, 4-8 November 2013. The MIREX plenary and poster sessions will be held during the conference.&lt;br /&gt;
&lt;br /&gt;
J. Stephen Downie&amp;lt;br&amp;gt;&lt;br /&gt;
Director, IMIRSEL&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==New Task Leadership Model==&lt;br /&gt;
&lt;br /&gt;
In response to discussions at ISMIR 2012, we are prepared to improve the distribution of tasks for the upcoming MIREX 2013.  To do so, we really need leaders to help us organize and run each task.&lt;br /&gt;
&lt;br /&gt;
To volunteer to lead a task, please add your name to the &amp;quot;Captains&amp;quot; column on the new [[2013:Task Captains]] page. Please direct any communication to the [https://mail.lis.illinois.edu/mailman/listinfo/evalfest EvalFest] mailing list.&lt;br /&gt;
&lt;br /&gt;
What does it mean to lead a task?&lt;br /&gt;
* Update wiki pages as needed&lt;br /&gt;
* Communicate with submitters and troubleshoot submissions&lt;br /&gt;
* Execute and evaluate submissions&lt;br /&gt;
* Publish final results&lt;br /&gt;
&lt;br /&gt;
Due to the proprietary nature of much of the data, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to provide access to task organizers to manage and run submissions on the IMIRSEL systems.&lt;br /&gt;
&lt;br /&gt;
We really need leaders to help us this year!&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Deadline Dates==&lt;br /&gt;
&lt;br /&gt;
TBA: 2013 dates have not been established.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;&amp;lt;b&amp;gt;Nota Bene:&amp;lt;/b&amp;gt; &amp;lt;/i&amp;gt;In the past we have been rather flexible about deadlines. This year, however, we simply do not have the time flexibility, sorry.&lt;br /&gt;
&lt;br /&gt;
Please, please, please, let's start getting those submissions made. The sooner we have the code, the sooner we can start running the evaluations.&lt;br /&gt;
&lt;br /&gt;
PS: If you have a slower running algorithm, help us help you by getting your code in ASAP. Please do pay attention to runtime limits.&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Submission Instructions==&lt;br /&gt;
* Be sure to read through the rest of this page&lt;br /&gt;
* Be sure to read through the task pages for which you are submitting&lt;br /&gt;
* Be sure to follow the [[2009:Best Coding Practices for MIREX | Best Coding Practices for MIREX]]&lt;br /&gt;
* Be sure to follow the  [[MIREX 2013 Submission Instructions]] including both the tutorial video and the text&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Possible Evaluation Tasks==&lt;br /&gt;
&lt;br /&gt;
* [[2013:Audio Classification (Train/Test) Tasks]], incorporating:&lt;br /&gt;
** Audio US Pop Genre Classification&lt;br /&gt;
** Audio Latin Genre Classification&lt;br /&gt;
** Audio Music Mood Classification&lt;br /&gt;
** Audio Classical Composer Identification&lt;br /&gt;
* [[2013:Audio Cover Song Identification]]&lt;br /&gt;
* [[2013:Audio Tag Classification]] &lt;br /&gt;
* [[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
* [[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
* [[2013:Audio Onset Detection]]&lt;br /&gt;
* [[2013:Audio Key Detection]]&lt;br /&gt;
* [[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
* [[2013:Query by Singing/Humming]]&lt;br /&gt;
* [[2013:Audio Melody Extraction]]&lt;br /&gt;
* [[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
* [[2013:Audio Chord Estimation]]&lt;br /&gt;
* [[2013:Query by Tapping]]&lt;br /&gt;
* [[2013:Audio Beat Tracking]]&lt;br /&gt;
* [[2013:Structural Segmentation]]&lt;br /&gt;
* [[2013:Audio Tempo Estimation]]&lt;br /&gt;
* [[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
&lt;br /&gt;
===Note to New Participants===&lt;br /&gt;
Please take the time to read the following review articles that explain the history and structure of MIREX.&lt;br /&gt;
&lt;br /&gt;
Downie, J. Stephen (2008). The Music Information Retrieval Evaluation Exchange (2005-2007):&amp;lt;br&amp;gt;&lt;br /&gt;
A window into music information retrieval research. ''Acoustical Science and Technology 29'' (4): 247-255. &amp;lt;br&amp;gt;&lt;br /&gt;
Available at: [http://dx.doi.org/10.1250/ast.29.247 http://dx.doi.org/10.1250/ast.29.247]&lt;br /&gt;
&lt;br /&gt;
Downie, J. Stephen, Andreas F. Ehmann, Mert Bay and M. Cameron Jones. (2010).&amp;lt;br&amp;gt;&lt;br /&gt;
The Music Information Retrieval Evaluation eXchange: Some Observations and Insights.&amp;lt;br&amp;gt;&lt;br /&gt;
''Advances in Music Information Retrieval'' Vol. 274, pp. 93-115&amp;lt;br&amp;gt;&lt;br /&gt;
Available at: [http://bit.ly/KpM5u5 http://bit.ly/KpM5u5]&lt;br /&gt;
&lt;br /&gt;
===Runtime Limits===&lt;br /&gt;
&lt;br /&gt;
We reserve the right to stop any process that exceeds runtime limits for each task.  We will do our best to notify you in enough time to allow revisions, but this may not be possible in some cases. Please respect the published runtime limits.&lt;br /&gt;
&lt;br /&gt;
===Note to All Participants===&lt;br /&gt;
&lt;br /&gt;
Because MIREX is premised upon the sharing of ideas and results, '''ALL''' MIREX participants are expected to:&lt;br /&gt;
&lt;br /&gt;
# submit a DRAFT 2-3 page extended abstract PDF in the ISMIR format, at the time of submitting their programme(s), describing the submitted programme(s), to help us and the community better understand how the algorithm works.&lt;br /&gt;
# submit a FINALIZED 2-3 page extended abstract PDF in the ISMIR format prior to ISMIR 2013 for posting on the respective results pages (sometimes the same abstract can be used for multiple submissions; in many cases the DRAFT and FINALIZED abstracts are the same)&lt;br /&gt;
# present a poster at the MIREX 2013 poster session at ISMIR 2013&lt;br /&gt;
&lt;br /&gt;
===Software Dependency Requests===&lt;br /&gt;
If you have not submitted to MIREX before, or are unsure whether IMIRSEL/NEMA currently supports some of the software/architecture dependencies for your submission, a [https://docs.google.com/spreadsheet/viewform?formkey=dFpmNF9PUGdvd1o1OHVhMkZ4cXZvdkE6MA#gid=0 dependency request form is available]. Please submit details of your dependencies on this form and the IMIRSEL team will attempt to satisfy them for you. &lt;br /&gt;
&lt;br /&gt;
Due to the high volume of submissions expected at MIREX 2013, submissions with difficult-to-satisfy dependencies of which the team has not been given sufficient notice may be rejected.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Finally, you will also be expected to detail your software/architecture dependencies in a README file to be provided to the submission system.&lt;br /&gt;
&lt;br /&gt;
==Getting Involved in MIREX 2013==&lt;br /&gt;
MIREX is a community-based endeavour. Be a part of the community and help make MIREX 2013 the best yet.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Mailing List Participation===&lt;br /&gt;
If you are interested in formal MIR evaluation, you should also subscribe to the &amp;quot;MIREX&amp;quot; (aka &amp;quot;EvalFest&amp;quot;) mail list and participate in the community discussions about defining and running MIREX 2013 tasks. Subscription information at: &lt;br /&gt;
[https://mail.lis.illinois.edu/mailman/listinfo/evalfest EvalFest Central]. &lt;br /&gt;
&lt;br /&gt;
If you are participating in MIREX 2013, it is VERY IMPORTANT that you are subscribed to EvalFest. Deadlines, task updates and other important information will be announced via this mailing list. Please use EvalFest for discussion of MIREX task proposals and other MIREX-related issues. This wiki (the MIREX 2013 wiki) will be used to embody and disseminate task proposals; however, task-related discussions should be conducted on the MIREX organization mailing list (EvalFest) rather than on this wiki, and then summarized here. &lt;br /&gt;
&lt;br /&gt;
Where possible, definitions or example code for new evaluation metrics or tasks should be provided to the IMIRSEL team, who will embody them in software as part of the NEMA analytics framework, to be released to the community at or before ISMIR 2013, providing a standardised set of interfaces and outputs for disciplined evaluation procedures across a great many MIR tasks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Wiki Participation===&lt;br /&gt;
If you find that you cannot edit a MIREX wiki page, you will need to create a new account via: [[Special:Userlogin]].&lt;br /&gt;
&lt;br /&gt;
Please note that because of &amp;quot;spam-bots&amp;quot;, MIREX wiki registration requests may be moderated by IMIRSEL members. It might take up to 24 hours for approval (Thank you for your patience!).&lt;br /&gt;
&lt;br /&gt;
==MIREX 2005 - 2012 Wikis==&lt;br /&gt;
Content from MIREX 2005 - 2012 is available at:&lt;br /&gt;
'''[[2012:Main_Page|MIREX 2012]]''' &lt;br /&gt;
'''[[2011:Main_Page|MIREX 2011]]''' &lt;br /&gt;
'''[[2010:Main_Page|MIREX 2010]]''' &lt;br /&gt;
'''[[2009:Main_Page|MIREX 2009]]''' &lt;br /&gt;
'''[[2008:Main_Page|MIREX 2008]]''' &lt;br /&gt;
'''[[2007:Main_Page|MIREX 2007]]''' &lt;br /&gt;
'''[[2006:Main_Page|MIREX 2006]]''' &lt;br /&gt;
'''[[2005:Main_Page|MIREX 2005]]'''&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Main_Page&amp;diff=9403</id>
		<title>2013:Main Page</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Main_Page&amp;diff=9403"/>
		<updated>2013-06-12T13:24:36Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: /* MIREX 2013 Deadline Dates */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Welcome to MIREX 2013==&lt;br /&gt;
&lt;br /&gt;
This is the main page for the ninth running of the Music Information Retrieval Evaluation eXchange (MIREX 2013). The International Music Information Retrieval Systems Evaluation Laboratory ([https://music-ir.org/evaluation IMIRSEL]) at the Graduate School of Library and Information Science ([http://www.lis.illinois.edu GSLIS]), University of Illinois at Urbana-Champaign ([http://www.illinois.edu UIUC]) is the principal organizer of MIREX 2013. &lt;br /&gt;
&lt;br /&gt;
The MIREX 2013 community will hold its annual meeting as part of [http://ismir2013.ismir.net/ The 14th International Conference on Music Information Retrieval], ISMIR 2013, which will be held in Curitiba, PR, Brazil, 4-8 November 2013. The MIREX plenary and poster sessions will be held during the conference.&lt;br /&gt;
&lt;br /&gt;
J. Stephen Downie&amp;lt;br&amp;gt;&lt;br /&gt;
Director, IMIRSEL&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==New Task Leadership Model==&lt;br /&gt;
&lt;br /&gt;
In response to discussions at ISMIR 2012, we are prepared to improve the distribution of tasks for the upcoming MIREX 2013.  To do so, we really need leaders to help us organize and run each task.&lt;br /&gt;
&lt;br /&gt;
To volunteer to lead a task, please add your name to the &amp;quot;Captains&amp;quot; column on the new [[2013:Task Captains]] page. Please direct any communication to the [https://mail.lis.illinois.edu/mailman/listinfo/evalfest EvalFest] mailing list.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Subscription information at: EvalFest Central. &lt;br /&gt;
&lt;br /&gt;
What does it mean to lead a task?&lt;br /&gt;
* Update wiki pages as needed&lt;br /&gt;
* Communicate with submitters and troubleshoot submissions&lt;br /&gt;
* Execute and evaluate submissions&lt;br /&gt;
* Publish final results&lt;br /&gt;
&lt;br /&gt;
Due to the proprietary nature of much of the data, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to provide access to task organizers to manage and run submissions on the IMIRSEL systems.&lt;br /&gt;
&lt;br /&gt;
We really need leaders to help us this year!&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Deadline Dates==&lt;br /&gt;
&lt;br /&gt;
TBA: 2013 dates have not been established.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;&amp;lt;b&amp;gt;Nota Bene:&amp;lt;/b&amp;gt; &amp;lt;/i&amp;gt;In the past we have been rather flexible about deadlines. This year, however, we simply do not have the time flexibility, sorry.&lt;br /&gt;
&lt;br /&gt;
Please, please, please, let's start getting those submissions made. The sooner we have the code, the sooner we can start running the evaluations.&lt;br /&gt;
&lt;br /&gt;
PS: If you have a slower running algorithm, help us help you by getting your code in ASAP. Please do pay attention to runtime limits.&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Submission Instructions==&lt;br /&gt;
* Be sure to read through the rest of this page&lt;br /&gt;
* Be sure to read through the task pages for which you are submitting&lt;br /&gt;
* Be sure to follow the [[2009:Best Coding Practices for MIREX | Best Coding Practices for MIREX]]&lt;br /&gt;
* Be sure to follow the  [[MIREX 2013 Submission Instructions]] including both the tutorial video and the text&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Possible Evaluation Tasks==&lt;br /&gt;
&lt;br /&gt;
* [[2013:Audio Classification (Train/Test) Tasks]], incorporating:&lt;br /&gt;
** Audio US Pop Genre Classification&lt;br /&gt;
** Audio Latin Genre Classification&lt;br /&gt;
** Audio Music Mood Classification&lt;br /&gt;
** Audio Classical Composer Identification&lt;br /&gt;
* [[2013:Audio Cover Song Identification]]&lt;br /&gt;
* [[2013:Audio Tag Classification]] &lt;br /&gt;
* [[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
* [[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
* [[2013:Audio Onset Detection]]&lt;br /&gt;
* [[2013:Audio Key Detection]]&lt;br /&gt;
* [[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
* [[2013:Query by Singing/Humming]]&lt;br /&gt;
* [[2013:Audio Melody Extraction]]&lt;br /&gt;
* [[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
* [[2013:Audio Chord Estimation]]&lt;br /&gt;
* [[2013:Query by Tapping]]&lt;br /&gt;
* [[2013:Audio Beat Tracking]]&lt;br /&gt;
* [[2013:Structural Segmentation]]&lt;br /&gt;
* [[2013:Audio Tempo Estimation]]&lt;br /&gt;
* [[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
&lt;br /&gt;
===Note to New Participants===&lt;br /&gt;
Please take the time to read the following review articles that explain the history and structure of MIREX.&lt;br /&gt;
&lt;br /&gt;
Downie, J. Stephen (2008). The Music Information Retrieval Evaluation Exchange (2005-2007):&amp;lt;br&amp;gt;&lt;br /&gt;
A window into music information retrieval research. ''Acoustical Science and Technology 29'' (4): 247-255. &amp;lt;br&amp;gt;&lt;br /&gt;
Available at: [http://dx.doi.org/10.1250/ast.29.247 http://dx.doi.org/10.1250/ast.29.247]&lt;br /&gt;
&lt;br /&gt;
Downie, J. Stephen, Andreas F. Ehmann, Mert Bay and M. Cameron Jones. (2010).&amp;lt;br&amp;gt;&lt;br /&gt;
The Music Information Retrieval Evaluation eXchange: Some Observations and Insights.&amp;lt;br&amp;gt;&lt;br /&gt;
''Advances in Music Information Retrieval'' Vol. 274, pp. 93-115&amp;lt;br&amp;gt;&lt;br /&gt;
Available at: [http://bit.ly/KpM5u5 http://bit.ly/KpM5u5]&lt;br /&gt;
&lt;br /&gt;
===Runtime Limits===&lt;br /&gt;
&lt;br /&gt;
We reserve the right to stop any process that exceeds runtime limits for each task.  We will do our best to notify you in enough time to allow revisions, but this may not be possible in some cases. Please respect the published runtime limits.&lt;br /&gt;
&lt;br /&gt;
===Note to All Participants===&lt;br /&gt;
&lt;br /&gt;
Because MIREX is premised upon the sharing of ideas and results, '''ALL''' MIREX participants are expected to:&lt;br /&gt;
&lt;br /&gt;
# submit a DRAFT 2-3 page extended abstract PDF in the ISMIR format, at the time of submitting their programme(s), describing the submitted programme(s), to help us and the community better understand how the algorithm works.&lt;br /&gt;
# submit a FINALIZED 2-3 page extended abstract PDF in the ISMIR format prior to ISMIR 2013 for posting on the respective results pages (sometimes the same abstract can be used for multiple submissions; in many cases the DRAFT and FINALIZED abstracts are the same)&lt;br /&gt;
# present a poster at the MIREX 2013 poster session at ISMIR 2013&lt;br /&gt;
&lt;br /&gt;
===Software Dependency Requests===&lt;br /&gt;
If you have not submitted to MIREX before, or are unsure whether IMIRSEL/NEMA currently supports some of the software/architecture dependencies for your submission, a [https://docs.google.com/spreadsheet/viewform?formkey=dFpmNF9PUGdvd1o1OHVhMkZ4cXZvdkE6MA#gid=0 dependency request form is available]. Please submit details of your dependencies on this form and the IMIRSEL team will attempt to satisfy them for you. &lt;br /&gt;
&lt;br /&gt;
Due to the high volume of submissions expected at MIREX 2013, submissions with difficult-to-satisfy dependencies of which the team has not been given sufficient notice may be rejected.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Finally, you will also be expected to detail your software/architecture dependencies in a README file to be provided to the submission system.&lt;br /&gt;
&lt;br /&gt;
==Getting Involved in MIREX 2013==&lt;br /&gt;
MIREX is a community-based endeavour. Be a part of the community and help make MIREX 2013 the best yet.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Mailing List Participation===&lt;br /&gt;
If you are interested in formal MIR evaluation, you should also subscribe to the &amp;quot;MIREX&amp;quot; (aka &amp;quot;EvalFest&amp;quot;) mail list and participate in the community discussions about defining and running MIREX 2013 tasks. Subscription information at: &lt;br /&gt;
[https://mail.lis.illinois.edu/mailman/listinfo/evalfest EvalFest Central]. &lt;br /&gt;
&lt;br /&gt;
If you are participating in MIREX 2013, it is VERY IMPORTANT that you are subscribed to EvalFest. Deadlines, task updates and other important information will be announced via this mailing list. Please use EvalFest for discussion of MIREX task proposals and other MIREX-related issues. This wiki (the MIREX 2013 wiki) will be used to embody and disseminate task proposals; however, task-related discussions should be conducted on the MIREX organization mailing list (EvalFest) rather than on this wiki, and then summarized here. &lt;br /&gt;
&lt;br /&gt;
Where possible, definitions or example code for new evaluation metrics or tasks should be provided to the IMIRSEL team, who will embody them in software as part of the NEMA analytics framework, to be released to the community at or before ISMIR 2013, providing a standardised set of interfaces and outputs for disciplined evaluation procedures across a great many MIR tasks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Wiki Participation===&lt;br /&gt;
If you find that you cannot edit a MIREX wiki page, you will need to create a new account via: [[Special:Userlogin]].&lt;br /&gt;
&lt;br /&gt;
Please note that because of &amp;quot;spam-bots&amp;quot;, MIREX wiki registration requests may be moderated by IMIRSEL members. It might take up to 24 hours for approval (Thank you for your patience!).&lt;br /&gt;
&lt;br /&gt;
==MIREX 2005 - 2012 Wikis==&lt;br /&gt;
Content from MIREX 2005 - 2012 is available at:&lt;br /&gt;
'''[[2012:Main_Page|MIREX 2012]]''' &lt;br /&gt;
'''[[2011:Main_Page|MIREX 2011]]''' &lt;br /&gt;
'''[[2010:Main_Page|MIREX 2010]]''' &lt;br /&gt;
'''[[2009:Main_Page|MIREX 2009]]''' &lt;br /&gt;
'''[[2008:Main_Page|MIREX 2008]]''' &lt;br /&gt;
'''[[2007:Main_Page|MIREX 2007]]''' &lt;br /&gt;
'''[[2006:Main_Page|MIREX 2006]]''' &lt;br /&gt;
'''[[2005:Main_Page|MIREX 2005]]'''&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Main_Page&amp;diff=9402</id>
		<title>2013:Main Page</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Main_Page&amp;diff=9402"/>
		<updated>2013-06-12T13:24:03Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Welcome to MIREX 2013==&lt;br /&gt;
&lt;br /&gt;
This is the main page for the ninth running of the Music Information Retrieval Evaluation eXchange (MIREX 2013). The International Music Information Retrieval Systems Evaluation Laboratory ([https://music-ir.org/evaluation IMIRSEL]) at the Graduate School of Library and Information Science ([http://www.lis.illinois.edu GSLIS]), University of Illinois at Urbana-Champaign ([http://www.illinois.edu UIUC]) is the principal organizer of MIREX 2013. &lt;br /&gt;
&lt;br /&gt;
The MIREX 2013 community will hold its annual meeting as part of [http://ismir2013.ismir.net/ The 14th International Conference on Music Information Retrieval], ISMIR 2013, which will be held in Curitiba, PR, Brazil, 4-8 November 2013. The MIREX plenary and poster sessions will be held during the conference.&lt;br /&gt;
&lt;br /&gt;
J. Stephen Downie&amp;lt;br&amp;gt;&lt;br /&gt;
Director, IMIRSEL&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==New Task Leadership Model==&lt;br /&gt;
&lt;br /&gt;
In response to discussions at ISMIR 2012, we are prepared to improve the distribution of tasks for the upcoming MIREX 2013.  To do so, we really need leaders to help us organize and run each task.&lt;br /&gt;
&lt;br /&gt;
To volunteer to lead a task, please add your name to the &amp;quot;Captains&amp;quot; column on the new [[2013:Task Captains]] page. Please direct any communication to the [https://mail.lis.illinois.edu/mailman/listinfo/evalfest EvalFest] mailing list.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Subscription information at: EvalFest Central. &lt;br /&gt;
&lt;br /&gt;
What does it mean to lead a task?&lt;br /&gt;
* Update wiki pages as needed&lt;br /&gt;
* Communicate with submitters and troubleshoot submissions&lt;br /&gt;
* Execute and evaluate submissions&lt;br /&gt;
* Publish final results&lt;br /&gt;
&lt;br /&gt;
Due to the proprietary nature of much of the data, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to provide access to task organizers to manage and run submissions on the IMIRSEL systems.&lt;br /&gt;
&lt;br /&gt;
We really need leaders to help us this year!&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Deadline Dates==&lt;br /&gt;
&lt;br /&gt;
TBA&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;&amp;lt;b&amp;gt;Nota Bene:&amp;lt;/b&amp;gt; &amp;lt;/i&amp;gt;In the past we have been rather flexible about deadlines. This year, however, we simply do not have the time flexibility, sorry.&lt;br /&gt;
&lt;br /&gt;
Please, please, please, let's start getting those submissions made. The sooner we have the code, the sooner we can start running the evaluations.&lt;br /&gt;
&lt;br /&gt;
PS: If you have a slower running algorithm, help us help you by getting your code in ASAP. Please do pay attention to runtime limits.&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Submission Instructions==&lt;br /&gt;
* Be sure to read through the rest of this page&lt;br /&gt;
* Be sure to read through the task pages for which you are submitting&lt;br /&gt;
* Be sure to follow the [[2009:Best Coding Practices for MIREX | Best Coding Practices for MIREX]]&lt;br /&gt;
* Be sure to follow the  [[MIREX 2013 Submission Instructions]] including both the tutorial video and the text&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Possible Evaluation Tasks==&lt;br /&gt;
&lt;br /&gt;
* [[2013:Audio Classification (Train/Test) Tasks]], incorporating:&lt;br /&gt;
** Audio US Pop Genre Classification&lt;br /&gt;
** Audio Latin Genre Classification&lt;br /&gt;
** Audio Music Mood Classification&lt;br /&gt;
** Audio Classical Composer Identification&lt;br /&gt;
* [[2013:Audio Cover Song Identification]]&lt;br /&gt;
* [[2013:Audio Tag Classification]] &lt;br /&gt;
* [[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
* [[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
* [[2013:Audio Onset Detection]]&lt;br /&gt;
* [[2013:Audio Key Detection]]&lt;br /&gt;
* [[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
* [[2013:Query by Singing/Humming]]&lt;br /&gt;
* [[2013:Audio Melody Extraction]]&lt;br /&gt;
* [[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
* [[2013:Audio Chord Estimation]]&lt;br /&gt;
* [[2013:Query by Tapping]]&lt;br /&gt;
* [[2013:Audio Beat Tracking]]&lt;br /&gt;
* [[2013:Structural Segmentation]]&lt;br /&gt;
* [[2013:Audio Tempo Estimation]]&lt;br /&gt;
* [[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
&lt;br /&gt;
===Note to New Participants===&lt;br /&gt;
Please take the time to read the following review articles that explain the history and structure of MIREX.&lt;br /&gt;
&lt;br /&gt;
Downie, J. Stephen (2008). The Music Information Retrieval Evaluation Exchange (2005-2007):&amp;lt;br&amp;gt;&lt;br /&gt;
A window into music information retrieval research. ''Acoustical Science and Technology 29'' (4): 247-255. &amp;lt;br&amp;gt;&lt;br /&gt;
Available at: [http://dx.doi.org/10.1250/ast.29.247 http://dx.doi.org/10.1250/ast.29.247]&lt;br /&gt;
&lt;br /&gt;
Downie, J. Stephen, Andreas F. Ehmann, Mert Bay and M. Cameron Jones. (2010).&amp;lt;br&amp;gt;&lt;br /&gt;
The Music Information Retrieval Evaluation eXchange: Some Observations and Insights.&amp;lt;br&amp;gt;&lt;br /&gt;
''Advances in Music Information Retrieval'' Vol. 274, pp. 93-115&amp;lt;br&amp;gt;&lt;br /&gt;
Available at: [http://bit.ly/KpM5u5 http://bit.ly/KpM5u5]&lt;br /&gt;
&lt;br /&gt;
===Runtime Limits===&lt;br /&gt;
&lt;br /&gt;
We reserve the right to stop any process that exceeds runtime limits for each task.  We will do our best to notify you in enough time to allow revisions, but this may not be possible in some cases. Please respect the published runtime limits.&lt;br /&gt;
&lt;br /&gt;
===Note to All Participants===&lt;br /&gt;
&lt;br /&gt;
Because MIREX is premised upon the sharing of ideas and results, '''ALL''' MIREX participants are expected to:&lt;br /&gt;
&lt;br /&gt;
# submit a DRAFT 2-3 page extended abstract PDF in the ISMIR format about the submitted programme(s) to help us and the community better understand how the algorithm works when submitting their programme(s).&lt;br /&gt;
# submit a FINALIZED 2-3 page extended abstract PDF in the ISMIR format prior to ISMIR 2013 for posting on the respective results pages (sometimes the same abstract can be used for multiple submissions; in many cases the DRAFT and FINALIZED abstracts are the same)&lt;br /&gt;
# present a poster at the MIREX 2013 poster session at ISMIR 2013&lt;br /&gt;
&lt;br /&gt;
===Software Dependency Requests===&lt;br /&gt;
If you have not submitted to MIREX before, or are unsure whether IMIRSEL/NEMA currently supports some of the software/architecture dependencies for your submission, a [https://docs.google.com/spreadsheet/viewform?formkey=dFpmNF9PUGdvd1o1OHVhMkZ4cXZvdkE6MA#gid=0 dependency request form is available]. Please submit details of your dependencies on this form and the IMIRSEL team will attempt to satisfy them for you. &lt;br /&gt;
&lt;br /&gt;
Due to the high volume of submissions expected at MIREX 2013, submissions with difficult-to-satisfy dependencies, of which the team has not been given sufficient notice, may be rejected.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Finally, you will also be expected to detail your software/architecture dependencies in a README file to be provided to the submission system.&lt;br /&gt;
&lt;br /&gt;
==Getting Involved in MIREX 2013==&lt;br /&gt;
MIREX is a community-based endeavour. Be a part of the community and help make MIREX 2013 the best yet.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Mailing List Participation===&lt;br /&gt;
If you are interested in formal MIR evaluation, you should also subscribe to the &amp;quot;MIREX&amp;quot; (aka &amp;quot;EvalFest&amp;quot;) mail list and participate in the community discussions about defining and running MIREX 2013 tasks. Subscription information at: &lt;br /&gt;
[https://mail.lis.illinois.edu/mailman/listinfo/evalfest EvalFest Central]. &lt;br /&gt;
&lt;br /&gt;
If you are participating in MIREX 2013, it is VERY IMPORTANT that you are subscribed to EvalFest. Deadlines, task updates and other important information will be announced via this mailing list. Please use EvalFest for discussion of MIREX task proposals and other MIREX-related issues. This wiki (the MIREX 2013 wiki) will be used to embody and disseminate task proposals; however, task-related discussions should be conducted on the MIREX organization mailing list (EvalFest) rather than on this wiki, and then summarized here. &lt;br /&gt;
&lt;br /&gt;
Where possible, definitions or example code for new evaluation metrics or tasks should be provided to the IMIRSEL team who will embody them in software as part of the NEMA analytics framework, which will be released to the community at or before ISMIR 2013 - providing a standardised set of interfaces and output to disciplined evaluation procedures for a great many MIR tasks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Wiki Participation===&lt;br /&gt;
If you find that you cannot edit a MIREX wiki page, you will need to create a new account via: [[Special:Userlogin]].&lt;br /&gt;
&lt;br /&gt;
Please note that because of &amp;quot;spam-bots&amp;quot;, MIREX wiki registration requests may be moderated by IMIRSEL members. It might take up to 24 hours for approval (Thank you for your patience!).&lt;br /&gt;
&lt;br /&gt;
==MIREX 2005 - 2012 Wikis==&lt;br /&gt;
Content from MIREX 2005 - 2012 is available at:&lt;br /&gt;
'''[[2012:Main_Page|MIREX 2012]]''' &lt;br /&gt;
'''[[2011:Main_Page|MIREX 2011]]''' &lt;br /&gt;
'''[[2010:Main_Page|MIREX 2010]]''' &lt;br /&gt;
'''[[2009:Main_Page|MIREX 2009]]''' &lt;br /&gt;
'''[[2008:Main_Page|MIREX 2008]]''' &lt;br /&gt;
'''[[2007:Main_Page|MIREX 2007]]''' &lt;br /&gt;
'''[[2006:Main_Page|MIREX 2006]]''' &lt;br /&gt;
'''[[2005:Main_Page|MIREX 2005]]'''&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Main_Page&amp;diff=9401</id>
		<title>2013:Main Page</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Main_Page&amp;diff=9401"/>
		<updated>2013-06-12T13:23:19Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: /* Note to All Participants */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Welcome to MIREX 2013==&lt;br /&gt;
&lt;br /&gt;
This is the main page for the ninth running of the Music Information Retrieval Evaluation eXchange (MIREX 2013). The International Music Information Retrieval Systems Evaluation Laboratory ([https://music-ir.org/evaluation IMIRSEL]) at the Graduate School of Library and Information Science ([http://www.lis.illinois.edu GSLIS]), University of Illinois at Urbana-Champaign ([http://www.illinois.edu UIUC]) is the principal organizer of MIREX 2013. &lt;br /&gt;
&lt;br /&gt;
The MIREX 2013 community will hold its annual meeting as part of [http://ismir2013.ismir.net/ The 14th International Conference on Music Information Retrieval], ISMIR 2013, which will be held in Curitiba, PR, Brazil, 4-8 November 2013. The MIREX plenary and poster sessions will be held during the conference.&lt;br /&gt;
&lt;br /&gt;
J. Stephen Downie&amp;lt;br&amp;gt;&lt;br /&gt;
Director, IMIRSEL&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==New Task Leadership Model==&lt;br /&gt;
&lt;br /&gt;
In response to discussions at ISMIR 2012, we are prepared to improve the distribution of tasks for the upcoming MIREX 2013.  To do so, we really need leaders to help us organize and run each task.&lt;br /&gt;
&lt;br /&gt;
To volunteer to lead a task, please add your name to the &amp;quot;Captains&amp;quot; column on the new [[2013:Task Captains]] page. Please direct any communication to the [https://mail.lis.illinois.edu/mailman/listinfo/evalfest EvalFest] mailing list.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Subscription information at: EvalFest Central. &lt;br /&gt;
&lt;br /&gt;
What does it mean to lead a task?&lt;br /&gt;
* Update wiki pages as needed&lt;br /&gt;
* Communicate with submitters and troubleshoot submissions&lt;br /&gt;
* Execute and evaluate submissions&lt;br /&gt;
* Publish final results&lt;br /&gt;
&lt;br /&gt;
Because much of the data is proprietary, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to give task organizers access to manage and run submissions on the IMIRSEL systems.&lt;br /&gt;
&lt;br /&gt;
We really need leaders to help us this year!&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Deadline Dates==&lt;br /&gt;
&lt;br /&gt;
TBA&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;&amp;lt;b&amp;gt;Nota Bene:&amp;lt;/b&amp;gt; &amp;lt;/i&amp;gt;In the past we have been rather flexible about deadlines. This year, however, we simply do not have the time flexibility, sorry.&lt;br /&gt;
&lt;br /&gt;
Please, please, please, let's start getting those submissions made. The sooner we have the code, the sooner we can start running the evaluations.&lt;br /&gt;
&lt;br /&gt;
PS: If you have a slower running algorithm, help us help you by getting your code in ASAP. Please do pay attention to runtime limits.&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Submission Instructions==&lt;br /&gt;
* Be sure to read through the rest of this page&lt;br /&gt;
* Be sure to read through the task pages for which you are submitting&lt;br /&gt;
* Be sure to follow the [[2009:Best Coding Practices for MIREX | Best Coding Practices for MIREX]]&lt;br /&gt;
* Be sure to follow the  [[MIREX 2013 Submission Instructions]] including both the tutorial video and the text&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Possible Evaluation Tasks==&lt;br /&gt;
&lt;br /&gt;
* [[2013:Audio Classification (Train/Test) Tasks]], incorporating:&lt;br /&gt;
** Audio US Pop Genre Classification&lt;br /&gt;
** Audio Latin Genre Classification&lt;br /&gt;
** Audio Music Mood Classification&lt;br /&gt;
** Audio Classical Composer Identification&lt;br /&gt;
* [[2013:Audio Cover Song Identification]]&lt;br /&gt;
* [[2013:Audio Tag Classification]] &lt;br /&gt;
* [[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
* [[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
* [[2013:Audio Onset Detection]]&lt;br /&gt;
* [[2013:Audio Key Detection]]&lt;br /&gt;
* [[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
* [[2013:Query by Singing/Humming]]&lt;br /&gt;
* [[2013:Audio Melody Extraction]]&lt;br /&gt;
* [[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
* [[2013:Audio Chord Estimation]]&lt;br /&gt;
* [[2013:Query by Tapping]]&lt;br /&gt;
* [[2013:Audio Beat Tracking]]&lt;br /&gt;
* [[2013:Structural Segmentation]]&lt;br /&gt;
* [[2013:Audio Tempo Estimation]]&lt;br /&gt;
* [[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
&lt;br /&gt;
===Note to New Participants===&lt;br /&gt;
Please take the time to read the following review articles that explain the history and structure of MIREX.&lt;br /&gt;
&lt;br /&gt;
Downie, J. Stephen (2008). The Music Information Retrieval Evaluation Exchange (2005-2007):&amp;lt;br&amp;gt;&lt;br /&gt;
A window into music information retrieval research. ''Acoustical Science and Technology 29'' (4): 247-255. &amp;lt;br&amp;gt;&lt;br /&gt;
Available at: [http://dx.doi.org/10.1250/ast.29.247 http://dx.doi.org/10.1250/ast.29.247]&lt;br /&gt;
&lt;br /&gt;
Downie, J. Stephen, Andreas F. Ehmann, Mert Bay and M. Cameron Jones. (2010).&amp;lt;br&amp;gt;&lt;br /&gt;
The Music Information Retrieval Evaluation eXchange: Some Observations and Insights.&amp;lt;br&amp;gt;&lt;br /&gt;
''Advances in Music Information Retrieval'' Vol. 274, pp. 93-115&amp;lt;br&amp;gt;&lt;br /&gt;
Available at: [http://bit.ly/KpM5u5 http://bit.ly/KpM5u5]&lt;br /&gt;
&lt;br /&gt;
===Note to All Participants===&lt;br /&gt;
&lt;br /&gt;
'''A note about runtime limits:  We reserve the right to stop any process that exceeds runtime limits for each task.  We will do our best to notify you in enough time to allow revisions, but this may not be possible in some cases. Please respect the published runtime limits.'''&lt;br /&gt;
&lt;br /&gt;
Because MIREX is premised upon the sharing of ideas and results, '''ALL''' MIREX participants are expected to:&lt;br /&gt;
&lt;br /&gt;
# submit a DRAFT 2-3 page extended abstract PDF in the ISMIR format about the submitted programme(s) to help us and the community better understand how the algorithm works when submitting their programme(s).&lt;br /&gt;
# submit a FINALIZED 2-3 page extended abstract PDF in the ISMIR format prior to ISMIR 2013 for posting on the respective results pages (sometimes the same abstract can be used for multiple submissions; in many cases the DRAFT and FINALIZED abstracts are the same)&lt;br /&gt;
# present a poster at the MIREX 2013 poster session at ISMIR 2013&lt;br /&gt;
&lt;br /&gt;
===Software Dependency Requests===&lt;br /&gt;
If you have not submitted to MIREX before, or are unsure whether IMIRSEL/NEMA currently supports some of the software/architecture dependencies for your submission, a [https://docs.google.com/spreadsheet/viewform?formkey=dFpmNF9PUGdvd1o1OHVhMkZ4cXZvdkE6MA#gid=0 dependency request form is available]. Please submit details of your dependencies on this form and the IMIRSEL team will attempt to satisfy them for you. &lt;br /&gt;
&lt;br /&gt;
Due to the high volume of submissions expected at MIREX 2013, submissions with difficult-to-satisfy dependencies, of which the team has not been given sufficient notice, may be rejected.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Finally, you will also be expected to detail your software/architecture dependencies in a README file to be provided to the submission system.&lt;br /&gt;
&lt;br /&gt;
==Getting Involved in MIREX 2013==&lt;br /&gt;
MIREX is a community-based endeavour. Be a part of the community and help make MIREX 2013 the best yet.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Mailing List Participation===&lt;br /&gt;
If you are interested in formal MIR evaluation, you should also subscribe to the &amp;quot;MIREX&amp;quot; (aka &amp;quot;EvalFest&amp;quot;) mail list and participate in the community discussions about defining and running MIREX 2013 tasks. Subscription information at: &lt;br /&gt;
[https://mail.lis.illinois.edu/mailman/listinfo/evalfest EvalFest Central]. &lt;br /&gt;
&lt;br /&gt;
If you are participating in MIREX 2013, it is VERY IMPORTANT that you are subscribed to EvalFest. Deadlines, task updates and other important information will be announced via this mailing list. Please use EvalFest for discussion of MIREX task proposals and other MIREX-related issues. This wiki (the MIREX 2013 wiki) will be used to embody and disseminate task proposals; however, task-related discussions should be conducted on the MIREX organization mailing list (EvalFest) rather than on this wiki, and then summarized here. &lt;br /&gt;
&lt;br /&gt;
Where possible, definitions or example code for new evaluation metrics or tasks should be provided to the IMIRSEL team who will embody them in software as part of the NEMA analytics framework, which will be released to the community at or before ISMIR 2013 - providing a standardised set of interfaces and output to disciplined evaluation procedures for a great many MIR tasks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Wiki Participation===&lt;br /&gt;
If you find that you cannot edit a MIREX wiki page, you will need to create a new account via: [[Special:Userlogin]].&lt;br /&gt;
&lt;br /&gt;
Please note that because of &amp;quot;spam-bots&amp;quot;, MIREX wiki registration requests may be moderated by IMIRSEL members. It might take up to 24 hours for approval (Thank you for your patience!).&lt;br /&gt;
&lt;br /&gt;
==MIREX 2005 - 2012 Wikis==&lt;br /&gt;
Content from MIREX 2005 - 2012 is available at:&lt;br /&gt;
'''[[2012:Main_Page|MIREX 2012]]''' &lt;br /&gt;
'''[[2011:Main_Page|MIREX 2011]]''' &lt;br /&gt;
'''[[2010:Main_Page|MIREX 2010]]''' &lt;br /&gt;
'''[[2009:Main_Page|MIREX 2009]]''' &lt;br /&gt;
'''[[2008:Main_Page|MIREX 2008]]''' &lt;br /&gt;
'''[[2007:Main_Page|MIREX 2007]]''' &lt;br /&gt;
'''[[2006:Main_Page|MIREX 2006]]''' &lt;br /&gt;
'''[[2005:Main_Page|MIREX 2005]]'''&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Main_Page&amp;diff=9400</id>
		<title>2013:Main Page</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Main_Page&amp;diff=9400"/>
		<updated>2013-06-12T13:19:26Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: /* New Task Leadership Model */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Welcome to MIREX 2013==&lt;br /&gt;
&lt;br /&gt;
This is the main page for the ninth running of the Music Information Retrieval Evaluation eXchange (MIREX 2013). The International Music Information Retrieval Systems Evaluation Laboratory ([https://music-ir.org/evaluation IMIRSEL]) at the Graduate School of Library and Information Science ([http://www.lis.illinois.edu GSLIS]), University of Illinois at Urbana-Champaign ([http://www.illinois.edu UIUC]) is the principal organizer of MIREX 2013. &lt;br /&gt;
&lt;br /&gt;
The MIREX 2013 community will hold its annual meeting as part of [http://ismir2013.ismir.net/ The 14th International Conference on Music Information Retrieval], ISMIR 2013, which will be held in Curitiba, PR, Brazil, 4-8 November 2013. The MIREX plenary and poster sessions will be held during the conference.&lt;br /&gt;
&lt;br /&gt;
J. Stephen Downie&amp;lt;br&amp;gt;&lt;br /&gt;
Director, IMIRSEL&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==New Task Leadership Model==&lt;br /&gt;
&lt;br /&gt;
In response to discussions at ISMIR 2012, we are prepared to improve the distribution of tasks for the upcoming MIREX 2013.  To do so, we really need leaders to help us organize and run each task.&lt;br /&gt;
&lt;br /&gt;
To volunteer to lead a task, please add your name to the &amp;quot;Captains&amp;quot; column on the new [[2013:Task Captains]] page. Please direct any communication to the [https://mail.lis.illinois.edu/mailman/listinfo/evalfest EvalFest] mailing list.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Subscription information at: EvalFest Central. &lt;br /&gt;
&lt;br /&gt;
What does it mean to lead a task?&lt;br /&gt;
* Update wiki pages as needed&lt;br /&gt;
* Communicate with submitters and troubleshoot submissions&lt;br /&gt;
* Execute and evaluate submissions&lt;br /&gt;
* Publish final results&lt;br /&gt;
&lt;br /&gt;
Because much of the data is proprietary, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to give task organizers access to manage and run submissions on the IMIRSEL systems.&lt;br /&gt;
&lt;br /&gt;
We really need leaders to help us this year!&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Deadline Dates==&lt;br /&gt;
&lt;br /&gt;
TBA&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;&amp;lt;b&amp;gt;Nota Bene:&amp;lt;/b&amp;gt; &amp;lt;/i&amp;gt;In the past we have been rather flexible about deadlines. This year, however, we simply do not have the time flexibility, sorry.&lt;br /&gt;
&lt;br /&gt;
Please, please, please, let's start getting those submissions made. The sooner we have the code, the sooner we can start running the evaluations.&lt;br /&gt;
&lt;br /&gt;
PS: If you have a slower running algorithm, help us help you by getting your code in ASAP. Please do pay attention to runtime limits.&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Submission Instructions==&lt;br /&gt;
* Be sure to read through the rest of this page&lt;br /&gt;
* Be sure to read through the task pages for which you are submitting&lt;br /&gt;
* Be sure to follow the [[2009:Best Coding Practices for MIREX | Best Coding Practices for MIREX]]&lt;br /&gt;
* Be sure to follow the  [[MIREX 2013 Submission Instructions]] including both the tutorial video and the text&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Possible Evaluation Tasks==&lt;br /&gt;
&lt;br /&gt;
* [[2013:Audio Classification (Train/Test) Tasks]], incorporating:&lt;br /&gt;
** Audio US Pop Genre Classification&lt;br /&gt;
** Audio Latin Genre Classification&lt;br /&gt;
** Audio Music Mood Classification&lt;br /&gt;
** Audio Classical Composer Identification&lt;br /&gt;
* [[2013:Audio Cover Song Identification]]&lt;br /&gt;
* [[2013:Audio Tag Classification]] &lt;br /&gt;
* [[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
* [[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
* [[2013:Audio Onset Detection]]&lt;br /&gt;
* [[2013:Audio Key Detection]]&lt;br /&gt;
* [[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
* [[2013:Query by Singing/Humming]]&lt;br /&gt;
* [[2013:Audio Melody Extraction]]&lt;br /&gt;
* [[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
* [[2013:Audio Chord Estimation]]&lt;br /&gt;
* [[2013:Query by Tapping]]&lt;br /&gt;
* [[2013:Audio Beat Tracking]]&lt;br /&gt;
* [[2013:Structural Segmentation]]&lt;br /&gt;
* [[2013:Audio Tempo Estimation]]&lt;br /&gt;
* [[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
&lt;br /&gt;
===Note to New Participants===&lt;br /&gt;
Please take the time to read the following review articles that explain the history and structure of MIREX.&lt;br /&gt;
&lt;br /&gt;
Downie, J. Stephen (2008). The Music Information Retrieval Evaluation Exchange (2005-2007):&amp;lt;br&amp;gt;&lt;br /&gt;
A window into music information retrieval research. ''Acoustical Science and Technology 29'' (4): 247-255. &amp;lt;br&amp;gt;&lt;br /&gt;
Available at: [http://dx.doi.org/10.1250/ast.29.247 http://dx.doi.org/10.1250/ast.29.247]&lt;br /&gt;
&lt;br /&gt;
Downie, J. Stephen, Andreas F. Ehmann, Mert Bay and M. Cameron Jones. (2010).&amp;lt;br&amp;gt;&lt;br /&gt;
The Music Information Retrieval Evaluation eXchange: Some Observations and Insights.&amp;lt;br&amp;gt;&lt;br /&gt;
''Advances in Music Information Retrieval'' Vol. 274, pp. 93-115&amp;lt;br&amp;gt;&lt;br /&gt;
Available at: [http://bit.ly/KpM5u5 http://bit.ly/KpM5u5]&lt;br /&gt;
&lt;br /&gt;
===Note to All Participants===&lt;br /&gt;
Because MIREX is premised upon the sharing of ideas and results, '''ALL''' MIREX participants are expected to:&lt;br /&gt;
&lt;br /&gt;
# submit a DRAFT 2-3 page extended abstract PDF in the ISMIR format about the submitted programme(s) to help us and the community better understand how the algorithm works when submitting their programme(s).&lt;br /&gt;
# submit a FINALIZED 2-3 page extended abstract PDF in the ISMIR format prior to ISMIR 2013 for posting on the respective results pages (sometimes the same abstract can be used for multiple submissions; in many cases the DRAFT and FINALIZED abstracts are the same)&lt;br /&gt;
# present a poster at the MIREX 2013 poster session at ISMIR 2013&lt;br /&gt;
&lt;br /&gt;
===Software Dependency Requests===&lt;br /&gt;
If you have not submitted to MIREX before, or are unsure whether IMIRSEL/NEMA currently supports some of the software/architecture dependencies for your submission, a [https://docs.google.com/spreadsheet/viewform?formkey=dFpmNF9PUGdvd1o1OHVhMkZ4cXZvdkE6MA#gid=0 dependency request form is available]. Please submit details of your dependencies on this form and the IMIRSEL team will attempt to satisfy them for you. &lt;br /&gt;
&lt;br /&gt;
Due to the high volume of submissions expected at MIREX 2013, submissions with difficult-to-satisfy dependencies, of which the team has not been given sufficient notice, may be rejected.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Finally, you will also be expected to detail your software/architecture dependencies in a README file to be provided to the submission system.&lt;br /&gt;
&lt;br /&gt;
==Getting Involved in MIREX 2013==&lt;br /&gt;
MIREX is a community-based endeavour. Be a part of the community and help make MIREX 2013 the best yet.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Mailing List Participation===&lt;br /&gt;
If you are interested in formal MIR evaluation, you should also subscribe to the &amp;quot;MIREX&amp;quot; (aka &amp;quot;EvalFest&amp;quot;) mail list and participate in the community discussions about defining and running MIREX 2013 tasks. Subscription information at: &lt;br /&gt;
[https://mail.lis.illinois.edu/mailman/listinfo/evalfest EvalFest Central]. &lt;br /&gt;
&lt;br /&gt;
If you are participating in MIREX 2013, it is VERY IMPORTANT that you are subscribed to EvalFest. Deadlines, task updates and other important information will be announced via this mailing list. Please use EvalFest for discussion of MIREX task proposals and other MIREX-related issues. This wiki (the MIREX 2013 wiki) will be used to embody and disseminate task proposals; however, task-related discussions should be conducted on the MIREX organization mailing list (EvalFest) rather than on this wiki, and then summarized here. &lt;br /&gt;
&lt;br /&gt;
Where possible, definitions or example code for new evaluation metrics or tasks should be provided to the IMIRSEL team who will embody them in software as part of the NEMA analytics framework, which will be released to the community at or before ISMIR 2013 - providing a standardised set of interfaces and output to disciplined evaluation procedures for a great many MIR tasks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Wiki Participation===&lt;br /&gt;
If you find that you cannot edit a MIREX wiki page, you will need to create a new account via: [[Special:Userlogin]].&lt;br /&gt;
&lt;br /&gt;
Please note that because of &amp;quot;spam-bots&amp;quot;, MIREX wiki registration requests may be moderated by IMIRSEL members. It might take up to 24 hours for approval (Thank you for your patience!).&lt;br /&gt;
&lt;br /&gt;
==MIREX 2005 - 2012 Wikis==&lt;br /&gt;
Content from MIREX 2005 - 2012 is available at:&lt;br /&gt;
'''[[2012:Main_Page|MIREX 2012]]''' &lt;br /&gt;
'''[[2011:Main_Page|MIREX 2011]]''' &lt;br /&gt;
'''[[2010:Main_Page|MIREX 2010]]''' &lt;br /&gt;
'''[[2009:Main_Page|MIREX 2009]]''' &lt;br /&gt;
'''[[2008:Main_Page|MIREX 2008]]''' &lt;br /&gt;
'''[[2007:Main_Page|MIREX 2007]]''' &lt;br /&gt;
'''[[2006:Main_Page|MIREX 2006]]''' &lt;br /&gt;
'''[[2005:Main_Page|MIREX 2005]]'''&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Main_Page&amp;diff=9399</id>
		<title>2013:Main Page</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Main_Page&amp;diff=9399"/>
		<updated>2013-06-12T13:17:32Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Welcome to MIREX 2013==&lt;br /&gt;
&lt;br /&gt;
This is the main page for the ninth running of the Music Information Retrieval Evaluation eXchange (MIREX 2013). The International Music Information Retrieval Systems Evaluation Laboratory ([https://music-ir.org/evaluation IMIRSEL]) at the Graduate School of Library and Information Science ([http://www.lis.illinois.edu GSLIS]), University of Illinois at Urbana-Champaign ([http://www.illinois.edu UIUC]) is the principal organizer of MIREX 2013. &lt;br /&gt;
&lt;br /&gt;
The MIREX 2013 community will hold its annual meeting as part of [http://ismir2013.ismir.net/ The 14th International Conference on Music Information Retrieval], ISMIR 2013, which will be held in Curitiba, PR, Brazil, 4-8 November 2013. The MIREX plenary and poster sessions will be held during the conference.&lt;br /&gt;
&lt;br /&gt;
J. Stephen Downie&amp;lt;br&amp;gt;&lt;br /&gt;
Director, IMIRSEL&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==New Task Leadership Model==&lt;br /&gt;
&lt;br /&gt;
In response to discussions at ISMIR 2012, we are prepared to improve the distribution of tasks for the upcoming MIREX 2013.  To do so, we really need leaders to help us organize and run each task.&lt;br /&gt;
&lt;br /&gt;
To volunteer to lead a task, please add your name to the &amp;quot;Captains&amp;quot; column on the new [[2013:Task Captains]] page.&lt;br /&gt;
&lt;br /&gt;
What does it mean to lead a task?&lt;br /&gt;
* Update wiki pages as needed&lt;br /&gt;
* Communicate with submitters and troubleshoot submissions&lt;br /&gt;
* Execute and evaluate submissions&lt;br /&gt;
* Publish final results&lt;br /&gt;
&lt;br /&gt;
Because much of the data is proprietary, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to give task organizers access to manage and run submissions on the IMIRSEL systems.&lt;br /&gt;
&lt;br /&gt;
We really need leaders to help us this year!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Deadline Dates==&lt;br /&gt;
&lt;br /&gt;
TBA&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;&amp;lt;b&amp;gt;Nota Bene:&amp;lt;/b&amp;gt; &amp;lt;/i&amp;gt;In the past we have been rather flexible about deadlines. This year, however, we simply do not have the time flexibility, sorry.&lt;br /&gt;
&lt;br /&gt;
Please, please, please, let's start getting those submissions made. The sooner we have the code, the sooner we can start running the evaluations.&lt;br /&gt;
&lt;br /&gt;
PS: If you have a slower running algorithm, help us help you by getting your code in ASAP. Please do pay attention to runtime limits.&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Submission Instructions==&lt;br /&gt;
* Be sure to read through the rest of this page&lt;br /&gt;
* Be sure to read through the task pages for which you are submitting&lt;br /&gt;
* Be sure to follow the [[2009:Best Coding Practices for MIREX | Best Coding Practices for MIREX]]&lt;br /&gt;
* Be sure to follow the  [[MIREX 2013 Submission Instructions]] including both the tutorial video and the text&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Possible Evaluation Tasks==&lt;br /&gt;
&lt;br /&gt;
* [[2013:Audio Classification (Train/Test) Tasks]], incorporating:&lt;br /&gt;
** Audio US Pop Genre Classification&lt;br /&gt;
** Audio Latin Genre Classification&lt;br /&gt;
** Audio Music Mood Classification&lt;br /&gt;
** Audio Classical Composer Identification&lt;br /&gt;
* [[2013:Audio Cover Song Identification]]&lt;br /&gt;
* [[2013:Audio Tag Classification]] &lt;br /&gt;
* [[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
* [[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
* [[2013:Audio Onset Detection]]&lt;br /&gt;
* [[2013:Audio Key Detection]]&lt;br /&gt;
* [[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
* [[2013:Query by Singing/Humming]]&lt;br /&gt;
* [[2013:Audio Melody Extraction]]&lt;br /&gt;
* [[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
* [[2013:Audio Chord Estimation]]&lt;br /&gt;
* [[2013:Query by Tapping]]&lt;br /&gt;
* [[2013:Audio Beat Tracking]]&lt;br /&gt;
* [[2013:Structural Segmentation]]&lt;br /&gt;
* [[2013:Audio Tempo Estimation]]&lt;br /&gt;
* [[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
&lt;br /&gt;
===Note to New Participants===&lt;br /&gt;
Please take the time to read the following review articles that explain the history and structure of MIREX.&lt;br /&gt;
&lt;br /&gt;
Downie, J. Stephen (2008). The Music Information Retrieval Evaluation Exchange (2005-2007):&amp;lt;br&amp;gt;&lt;br /&gt;
A window into music information retrieval research. ''Acoustical Science and Technology 29'' (4): 247-255. &amp;lt;br&amp;gt;&lt;br /&gt;
Available at: [http://dx.doi.org/10.1250/ast.29.247 http://dx.doi.org/10.1250/ast.29.247]&lt;br /&gt;
&lt;br /&gt;
Downie, J. Stephen, Andreas F. Ehmann, Mert Bay and M. Cameron Jones. (2010).&amp;lt;br&amp;gt;&lt;br /&gt;
The Music Information Retrieval Evaluation eXchange: Some Observations and Insights.&amp;lt;br&amp;gt;&lt;br /&gt;
''Advances in Music Information Retrieval'' Vol. 274, pp. 93-115&amp;lt;br&amp;gt;&lt;br /&gt;
Available at: [http://bit.ly/KpM5u5 http://bit.ly/KpM5u5]&lt;br /&gt;
&lt;br /&gt;
===Note to All Participants===&lt;br /&gt;
Because MIREX is premised upon the sharing of ideas and results, '''ALL''' MIREX participants are expected to:&lt;br /&gt;
&lt;br /&gt;
# submit a DRAFT 2-3 page extended abstract PDF in the ISMIR format about the submitted programme(s) to help us and the community better understand how the algorithm works when submitting their programme(s).&lt;br /&gt;
# submit a FINALIZED 2-3 page extended abstract PDF in the ISMIR format prior to ISMIR 2013 for posting on the respective results pages (sometimes the same abstract can be used for multiple submissions; in many cases the DRAFT and FINALIZED abstracts are the same)&lt;br /&gt;
# present a poster at the MIREX 2013 poster session at ISMIR 2013&lt;br /&gt;
&lt;br /&gt;
===Software Dependency Requests===&lt;br /&gt;
If you have not submitted to MIREX before, or are unsure whether IMIRSEL/NEMA currently supports some of the software/architecture dependencies for your submission, a [https://docs.google.com/spreadsheet/viewform?formkey=dFpmNF9PUGdvd1o1OHVhMkZ4cXZvdkE6MA#gid=0 dependency request form is available]. Please submit details of your dependencies on this form and the IMIRSEL team will attempt to satisfy them for you. &lt;br /&gt;
&lt;br /&gt;
Due to the high volume of submissions expected at MIREX 2013, submissions with difficult-to-satisfy dependencies, of which the team has not been given sufficient notice, may be rejected.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Finally, you will also be expected to detail your software/architecture dependencies in a README file to be provided to the submission system.&lt;br /&gt;
&lt;br /&gt;
==Getting Involved in MIREX 2013==&lt;br /&gt;
MIREX is a community-based endeavour. Be a part of the community and help make MIREX 2013 the best yet.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Mailing List Participation===&lt;br /&gt;
If you are interested in formal MIR evaluation, you should also subscribe to the &amp;quot;MIREX&amp;quot; (aka &amp;quot;EvalFest&amp;quot;) mail list and participate in the community discussions about defining and running MIREX 2013 tasks. Subscription information at: &lt;br /&gt;
[https://mail.lis.illinois.edu/mailman/listinfo/evalfest EvalFest Central]. &lt;br /&gt;
&lt;br /&gt;
If you are participating in MIREX 2013, it is VERY IMPORTANT that you are subscribed to EvalFest. Deadlines, task updates and other important information will be announced via this mailing list. Please use EvalFest for discussion of MIREX task proposals and other MIREX-related issues. This wiki (the MIREX 2013 wiki) will be used to embody and disseminate task proposals; however, task-related discussions should be conducted on the MIREX organization mailing list (EvalFest) rather than on this wiki, and then summarized here. &lt;br /&gt;
&lt;br /&gt;
Where possible, definitions or example code for new evaluation metrics or tasks should be provided to the IMIRSEL team who will embody them in software as part of the NEMA analytics framework, which will be released to the community at or before ISMIR 2013 - providing a standardised set of interfaces and output to disciplined evaluation procedures for a great many MIR tasks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Wiki Participation===&lt;br /&gt;
If you find that you cannot edit a MIREX wiki page, you will need to create a new account via: [[Special:Userlogin]].&lt;br /&gt;
&lt;br /&gt;
Please note that because of &amp;quot;spam-bots&amp;quot;, MIREX wiki registration requests may be moderated by IMIRSEL members. It might take up to 24 hours for approval (Thank you for your patience!).&lt;br /&gt;
&lt;br /&gt;
==MIREX 2005 - 2012 Wikis==&lt;br /&gt;
Content from MIREX 2005 - 2012 is available at:&lt;br /&gt;
'''[[2012:Main_Page|MIREX 2012]]''' &lt;br /&gt;
'''[[2011:Main_Page|MIREX 2011]]''' &lt;br /&gt;
'''[[2010:Main_Page|MIREX 2010]]''' &lt;br /&gt;
'''[[2009:Main_Page|MIREX 2009]]''' &lt;br /&gt;
'''[[2008:Main_Page|MIREX 2008]]''' &lt;br /&gt;
'''[[2007:Main_Page|MIREX 2007]]''' &lt;br /&gt;
'''[[2006:Main_Page|MIREX 2006]]''' &lt;br /&gt;
'''[[2005:Main_Page|MIREX 2005]]'''&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Main_Page&amp;diff=9398</id>
		<title>2013:Main Page</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Main_Page&amp;diff=9398"/>
		<updated>2013-06-12T13:13:26Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Welcome to MIREX 2013==&lt;br /&gt;
&lt;br /&gt;
This is the main page for the eighth running of the Music Information Retrieval Evaluation eXchange (MIREX 2013). The International Music Information Retrieval Systems Evaluation Laboratory ([https://music-ir.org/evaluation IMIRSEL]) at the Graduate School of Library and Information Science ([http://www.lis.illinois.edu GSLIS]), University of Illinois at Urbana-Champaign ([http://www.illinois.edu UIUC]) is the principal organizer of MIREX 2013. &lt;br /&gt;
&lt;br /&gt;
The MIREX 2013 community will hold its annual meeting as part of [http://ismir2013.ismir.net/ The 14th International Conference on Music Information Retrieval], ISMIR 2013, which will be held in Curitiba, PR, Brazil, 4-8 November 2013. The MIREX plenary and poster sessions will be held during the conference.&lt;br /&gt;
&lt;br /&gt;
J. Stephen Downie&amp;lt;br&amp;gt;&lt;br /&gt;
Director, IMIRSEL&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==New Task Leadership Model==&lt;br /&gt;
&lt;br /&gt;
In response to discussions at ISMIR 2012, we are prepared to improve the distribution of tasks for the upcoming MIREX 2013.  To do so, we really need leaders to help us organize and run each task.&lt;br /&gt;
&lt;br /&gt;
To volunteer to lead a task, please add your name to the &amp;quot;Captains&amp;quot; column on the new [[2013:Task Captains]] page.&lt;br /&gt;
&lt;br /&gt;
What does it mean to lead a task?&lt;br /&gt;
* Update wiki pages as needed&lt;br /&gt;
* Communicate with submitters and troubleshoot submissions&lt;br /&gt;
* Execute and evaluate submissions&lt;br /&gt;
* Publish final results&lt;br /&gt;
&lt;br /&gt;
Because much of the data is proprietary, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to give task organizers access to manage and run submissions on the IMIRSEL systems.&lt;br /&gt;
&lt;br /&gt;
We really need leaders to help us this year!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Deadline Dates==&lt;br /&gt;
&lt;br /&gt;
TBA&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;&amp;lt;b&amp;gt;Nota Bene:&amp;lt;/b&amp;gt; &amp;lt;/i&amp;gt;In the past we have been rather flexible about deadlines. This year, however, we simply do not have the time flexibility, sorry.&lt;br /&gt;
&lt;br /&gt;
Please, please, please, let's start getting those submissions made. The sooner we have the code, the sooner we can start running the evaluations.&lt;br /&gt;
&lt;br /&gt;
PS: If you have a slower running algorithm, help us help you by getting your code in ASAP. Please do pay attention to runtime limits.&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Submission Instructions==&lt;br /&gt;
* Be sure to read through the rest of this page&lt;br /&gt;
* Be sure to read through the task pages for which you are submitting&lt;br /&gt;
* Be sure to follow the [[2009:Best Coding Practices for MIREX | Best Coding Practices for MIREX]]&lt;br /&gt;
* Be sure to follow the  [[MIREX 2013 Submission Instructions]] including both the tutorial video and the text&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Possible Evaluation Tasks==&lt;br /&gt;
&lt;br /&gt;
* [[2013:Audio Classification (Train/Test) Tasks]], incorporating:&lt;br /&gt;
** Audio US Pop Genre Classification&lt;br /&gt;
** Audio Latin Genre Classification&lt;br /&gt;
** Audio Music Mood Classification&lt;br /&gt;
** Audio Classical Composer Identification&lt;br /&gt;
* [[2013:Audio Cover Song Identification]]&lt;br /&gt;
* [[2013:Audio Tag Classification]] &lt;br /&gt;
* [[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
* [[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
* [[2013:Audio Onset Detection]]&lt;br /&gt;
* [[2013:Audio Key Detection]]&lt;br /&gt;
* [[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
* [[2013:Query by Singing/Humming]]&lt;br /&gt;
* [[2013:Audio Melody Extraction]]&lt;br /&gt;
* [[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
* [[2013:Audio Chord Estimation]]&lt;br /&gt;
* [[2013:Query by Tapping]]&lt;br /&gt;
* [[2013:Audio Beat Tracking]]&lt;br /&gt;
* [[2013:Structural Segmentation]]&lt;br /&gt;
* [[2013:Audio Tempo Estimation]]&lt;br /&gt;
* [[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
&lt;br /&gt;
===Note to New Participants===&lt;br /&gt;
Please take the time to read the following review articles that explain the history and structure of MIREX.&lt;br /&gt;
&lt;br /&gt;
Downie, J. Stephen (2008). The Music Information Retrieval Evaluation Exchange (2005-2007):&amp;lt;br&amp;gt;&lt;br /&gt;
A window into music information retrieval research. ''Acoustical Science and Technology 29'' (4): 247-255. &amp;lt;br&amp;gt;&lt;br /&gt;
Available at: [http://dx.doi.org/10.1250/ast.29.247 http://dx.doi.org/10.1250/ast.29.247]&lt;br /&gt;
&lt;br /&gt;
Downie, J. Stephen, Andreas F. Ehmann, Mert Bay and M. Cameron Jones. (2010).&amp;lt;br&amp;gt;&lt;br /&gt;
The Music Information Retrieval Evaluation eXchange: Some Observations and Insights.&amp;lt;br&amp;gt;&lt;br /&gt;
''Advances in Music Information Retrieval'' Vol. 274, pp. 93-115&amp;lt;br&amp;gt;&lt;br /&gt;
Available at: [http://bit.ly/KpM5u5 http://bit.ly/KpM5u5]&lt;br /&gt;
&lt;br /&gt;
===Note to All Participants===&lt;br /&gt;
Because MIREX is premised upon the sharing of ideas and results, '''ALL''' MIREX participants are expected to:&lt;br /&gt;
&lt;br /&gt;
# submit a DRAFT 2-3 page extended abstract PDF in the ISMIR format about the submitted programme(s) to help us and the community better understand how the algorithm works when submitting their programme(s).&lt;br /&gt;
# submit a FINALIZED 2-3 page extended abstract PDF in the ISMIR format prior to ISMIR 2013 for posting on the respective results pages (sometimes the same abstract can be used for multiple submissions; in many cases the DRAFT and FINALIZED abstracts are the same)&lt;br /&gt;
# present a poster at the MIREX 2013 poster session at ISMIR 2013&lt;br /&gt;
&lt;br /&gt;
===Software Dependency Requests===&lt;br /&gt;
If you have not submitted to MIREX before, or are unsure whether IMIRSEL/NEMA currently supports some of the software/architecture dependencies for your submission, a [https://docs.google.com/spreadsheet/viewform?formkey=dFpmNF9PUGdvd1o1OHVhMkZ4cXZvdkE6MA#gid=0 dependency request form is available]. Please submit details of your dependencies on this form and the IMIRSEL team will attempt to satisfy them for you. &lt;br /&gt;
&lt;br /&gt;
Due to the high volume of submissions expected at MIREX 2013, submissions with difficult-to-satisfy dependencies, of which the team has not been given sufficient notice, may be rejected.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Finally, you will also be expected to detail your software/architecture dependencies in a README file to be provided to the submission system.&lt;br /&gt;
&lt;br /&gt;
==Getting Involved in MIREX 2013==&lt;br /&gt;
MIREX is a community-based endeavour. Be a part of the community and help make MIREX 2013 the best yet.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Mailing List Participation===&lt;br /&gt;
If you are interested in formal MIR evaluation, you should also subscribe to the &amp;quot;MIREX&amp;quot; (aka &amp;quot;EvalFest&amp;quot;) mail list and participate in the community discussions about defining and running MIREX 2013 tasks. Subscription information at: &lt;br /&gt;
[https://mail.lis.illinois.edu/mailman/listinfo/evalfest EvalFest Central]. &lt;br /&gt;
&lt;br /&gt;
If you are participating in MIREX 2013, it is VERY IMPORTANT that you are subscribed to EvalFest. Deadlines, task updates and other important information will be announced via this mailing list. Please use EvalFest for discussion of MIREX task proposals and other MIREX-related issues. This wiki (the MIREX 2013 wiki) will be used to embody and disseminate task proposals; however, task-related discussions should be conducted on the MIREX organization mailing list (EvalFest) rather than on this wiki, and then summarized here. &lt;br /&gt;
&lt;br /&gt;
Where possible, definitions or example code for new evaluation metrics or tasks should be provided to the IMIRSEL team who will embody them in software as part of the NEMA analytics framework, which will be released to the community at or before ISMIR 2013 - providing a standardised set of interfaces and output to disciplined evaluation procedures for a great many MIR tasks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Wiki Participation===&lt;br /&gt;
If you find that you cannot edit a MIREX wiki page, you will need to create a new account via: [[Special:Userlogin]].&lt;br /&gt;
&lt;br /&gt;
Please note that because of &amp;quot;spam-bots&amp;quot;, MIREX wiki registration requests may be moderated by IMIRSEL members. It might take up to 24 hours for approval (Thank you for your patience!).&lt;br /&gt;
&lt;br /&gt;
==MIREX 2005 - 2012 Wikis==&lt;br /&gt;
Content from MIREX 2005 - 2012 is available at:&lt;br /&gt;
'''[[2012:Main_Page|MIREX 2012]]''' &lt;br /&gt;
'''[[2011:Main_Page|MIREX 2011]]''' &lt;br /&gt;
'''[[2010:Main_Page|MIREX 2010]]''' &lt;br /&gt;
'''[[2009:Main_Page|MIREX 2009]]''' &lt;br /&gt;
'''[[2008:Main_Page|MIREX 2008]]''' &lt;br /&gt;
'''[[2007:Main_Page|MIREX 2007]]''' &lt;br /&gt;
'''[[2006:Main_Page|MIREX 2006]]''' &lt;br /&gt;
'''[[2005:Main_Page|MIREX 2005]]'''&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9397</id>
		<title>2013:Task Captains</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9397"/>
		<updated>2013-06-12T13:11:53Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In response to discussions at ISMIR 2012, we are prepared to improve the distribution of tasks for the upcoming MIREX 2013.  To do so, we really need leaders to help us organize and run each task.&lt;br /&gt;
&lt;br /&gt;
To volunteer to lead one or more tasks, please add your name to the &amp;quot;Captains&amp;quot; column.&lt;br /&gt;
&lt;br /&gt;
What does it mean to lead a task?&lt;br /&gt;
* Update wiki pages as needed&lt;br /&gt;
* Communicate with submitters and troubleshoot submissions&lt;br /&gt;
* Execute and evaluate submissions&lt;br /&gt;
* Publish final results&lt;br /&gt;
&lt;br /&gt;
Because much of the data is proprietary, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to give task organizers access to manage and run submissions on the IMIRSEL systems.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: 20px&amp;quot;&lt;br /&gt;
!ID !! Task !! Captain(s)&lt;br /&gt;
|-&lt;br /&gt;
|abt&lt;br /&gt;
|[[2013:Audio Beat Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ace&lt;br /&gt;
|[[2013:Audio Chord Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|act&lt;br /&gt;
|[[2013:Audio Classification (Train/Test) Tasks]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|acs&lt;br /&gt;
|[[2013:Audio Cover Song Identification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|akd&lt;br /&gt;
|[[2013:Audio Key Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ame&lt;br /&gt;
|[[2013:Audio Melody Extraction]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|ams&lt;br /&gt;
|[[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|aod&lt;br /&gt;
|[[2013:Audio Onset Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ate&lt;br /&gt;
|[[2013:Audio Tempo Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|atg&lt;br /&gt;
|[[2013:Audio Tag Classification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mf0&lt;br /&gt;
|[[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
|Mert Bay?&lt;br /&gt;
|-&lt;br /&gt;
|qbsh&lt;br /&gt;
|[[2013:Query by Singing/Humming]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|qbt&lt;br /&gt;
|[[2013:Query by Tapping]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|scofo&lt;br /&gt;
|[[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sms&lt;br /&gt;
|[[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|struct&lt;br /&gt;
|[[2013:Structural Segmentation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|drts&lt;br /&gt;
|[[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
|Tom Collins?&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Main_Page&amp;diff=9396</id>
		<title>2013:Main Page</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Main_Page&amp;diff=9396"/>
		<updated>2013-06-12T13:11:04Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Welcome to MIREX 2013==&lt;br /&gt;
&lt;br /&gt;
This is the main page for the eighth running of the Music Information Retrieval Evaluation eXchange (MIREX 2013). The International Music Information Retrieval Systems Evaluation Laboratory ([https://music-ir.org/evaluation IMIRSEL]) at the Graduate School of Library and Information Science ([http://www.lis.illinois.edu GSLIS]), University of Illinois at Urbana-Champaign ([http://www.illinois.edu UIUC]) is the principal organizer of MIREX 2013. &lt;br /&gt;
&lt;br /&gt;
The MIREX 2013 community will hold its annual meeting as part of [http://ismir2013.ismir.net/ The 14th International Conference on Music Information Retrieval], ISMIR 2013, which will be held in Curitiba, PR, Brazil, 4-8 November 2013. The MIREX plenary and poster sessions will be held during the conference.&lt;br /&gt;
&lt;br /&gt;
J. Stephen Downie&amp;lt;br&amp;gt;&lt;br /&gt;
Director, IMIRSEL&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==New Task Leadership Model==&lt;br /&gt;
&lt;br /&gt;
In response to discussions at ISMIR 2012, we are prepared to improve the distribution of tasks for the upcoming MIREX 2013.  To do so, we really need leaders to help us organize and run each task.&lt;br /&gt;
&lt;br /&gt;
To volunteer to lead a task, please add your name to the &amp;quot;Captains&amp;quot; column on the new [[2013:Task Captains]] page.&lt;br /&gt;
&lt;br /&gt;
What does it mean to lead a task?&lt;br /&gt;
* Update wiki pages as needed&lt;br /&gt;
* Communicate with submitters and troubleshoot submissions&lt;br /&gt;
* Execute and evaluate submissions&lt;br /&gt;
* Publish final results&lt;br /&gt;
&lt;br /&gt;
Because much of the data is proprietary, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to give task organizers access to manage and run submissions on the IMIRSEL systems.&lt;br /&gt;
&lt;br /&gt;
We really need leaders to help us with each task!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Announcing the KETI/Illinois  ''K-MIREX Collaboration''==&lt;br /&gt;
&lt;br /&gt;
The MIREX team at the University of Illinois is very proud to announce its new &amp;lt;b&amp;gt;&amp;lt;i&amp;gt;K-MIREX Collaboration&amp;lt;/i&amp;gt;&amp;lt;/b&amp;gt; with the research team led by Chai-Jong Song of the Digital Media Research Center at the Korea Electronics Technology Institute (KETI) [http://www.keti.re.kr/e-keti/ http://www.keti.re.kr/e-keti/]. Song and his KETI colleagues will be taking the lead on running the 2013 &amp;lt;b&amp;gt;Query by Singing/Humming (QBSH)&amp;lt;/b&amp;gt; and &amp;lt;b&amp;gt;Audio Melody Extraction (AME) Tasks &amp;lt;/b&amp;gt;. We do not foresee any special deviations from traditional MIREX submission procedures for these two tasks. Should they arise, participants will be informed.&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Deadline Dates==&lt;br /&gt;
&lt;br /&gt;
TBA&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;&amp;lt;b&amp;gt;Nota Bene:&amp;lt;/b&amp;gt; &amp;lt;/i&amp;gt;In the past we have been rather flexible about deadlines. This year, however, we simply do not have the time flexibility, sorry.&lt;br /&gt;
&lt;br /&gt;
Please, please, please, let's start getting those submissions made. The sooner we have the code, the sooner we can start running the evaluations.&lt;br /&gt;
&lt;br /&gt;
PS: If you have a slower running algorithm, help us help you by getting your code in ASAP. Please do pay attention to runtime limits.&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Submission Instructions==&lt;br /&gt;
* Be sure to read through the rest of this page&lt;br /&gt;
* Be sure to read through the task pages for which you are submitting&lt;br /&gt;
* Be sure to follow the [[2009:Best Coding Practices for MIREX | Best Coding Practices for MIREX]]&lt;br /&gt;
* Be sure to follow the  [[MIREX 2013 Submission Instructions]] including both the tutorial video and the text&lt;br /&gt;
&lt;br /&gt;
==MIREX 2013 Possible Evaluation Tasks==&lt;br /&gt;
&lt;br /&gt;
* [[2013:Audio Classification (Train/Test) Tasks]], incorporating:&lt;br /&gt;
** Audio US Pop Genre Classification&lt;br /&gt;
** Audio Latin Genre Classification&lt;br /&gt;
** Audio Music Mood Classification&lt;br /&gt;
** Audio Classical Composer Identification&lt;br /&gt;
* [[2013:Audio Cover Song Identification]]&lt;br /&gt;
* [[2013:Audio Tag Classification]] &lt;br /&gt;
* [[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
* [[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
* [[2013:Audio Onset Detection]]&lt;br /&gt;
* [[2013:Audio Key Detection]]&lt;br /&gt;
* [[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
* [[2013:Query by Singing/Humming]]&lt;br /&gt;
* [[2013:Audio Melody Extraction]]&lt;br /&gt;
* [[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
* [[2013:Audio Chord Estimation]]&lt;br /&gt;
* [[2013:Query by Tapping]]&lt;br /&gt;
* [[2013:Audio Beat Tracking]]&lt;br /&gt;
* [[2013:Structural Segmentation]]&lt;br /&gt;
* [[2013:Audio Tempo Estimation]]&lt;br /&gt;
* [[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
&lt;br /&gt;
===Note to New Participants===&lt;br /&gt;
Please take the time to read the following review articles that explain the history and structure of MIREX.&lt;br /&gt;
&lt;br /&gt;
Downie, J. Stephen (2008). The Music Information Retrieval Evaluation Exchange (2005-2007):&amp;lt;br&amp;gt;&lt;br /&gt;
A window into music information retrieval research. ''Acoustical Science and Technology 29'' (4): 247-255. &amp;lt;br&amp;gt;&lt;br /&gt;
Available at: [http://dx.doi.org/10.1250/ast.29.247 http://dx.doi.org/10.1250/ast.29.247]&lt;br /&gt;
&lt;br /&gt;
Downie, J. Stephen, Andreas F. Ehmann, Mert Bay and M. Cameron Jones. (2010).&amp;lt;br&amp;gt;&lt;br /&gt;
The Music Information Retrieval Evaluation eXchange: Some Observations and Insights.&amp;lt;br&amp;gt;&lt;br /&gt;
''Advances in Music Information Retrieval'' Vol. 274, pp. 93-115&amp;lt;br&amp;gt;&lt;br /&gt;
Available at: [http://bit.ly/KpM5u5 http://bit.ly/KpM5u5]&lt;br /&gt;
&lt;br /&gt;
===Note to All Participants===&lt;br /&gt;
Because MIREX is premised upon the sharing of ideas and results, '''ALL''' MIREX participants are expected to:&lt;br /&gt;
&lt;br /&gt;
# submit a DRAFT 2-3 page extended abstract PDF in the ISMIR format about the submitted programme(s) to help us and the community better understand how the algorithm works when submitting their programme(s).&lt;br /&gt;
# submit a FINALIZED 2-3 page extended abstract PDF in the ISMIR format prior to ISMIR 2013 for posting on the respective results pages (sometimes the same abstract can be used for multiple submissions; in many cases the DRAFT and FINALIZED abstracts are the same)&lt;br /&gt;
# present a poster at the MIREX 2013 poster session at ISMIR 2013&lt;br /&gt;
&lt;br /&gt;
===Software Dependency Requests===&lt;br /&gt;
If you have not submitted to MIREX before, or are unsure whether IMIRSEL/NEMA currently supports some of the software/architecture dependencies for your submission, a [https://docs.google.com/spreadsheet/viewform?formkey=dFpmNF9PUGdvd1o1OHVhMkZ4cXZvdkE6MA#gid=0 dependency request form is available]. Please submit details of your dependencies on this form and the IMIRSEL team will attempt to satisfy them for you. &lt;br /&gt;
&lt;br /&gt;
Due to the high volume of submissions expected at MIREX 2013, submissions with difficult-to-satisfy dependencies of which the team has not been given sufficient notice may be rejected.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Finally, you will also be expected to detail your software/architecture dependencies in a README file to be provided to the submission system.&lt;br /&gt;
&lt;br /&gt;
==Getting Involved in MIREX 2013==&lt;br /&gt;
MIREX is a community-based endeavour. Be a part of the community and help make MIREX 2013 the best yet.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Mailing List Participation===&lt;br /&gt;
If you are interested in formal MIR evaluation, you should also subscribe to the &amp;quot;MIREX&amp;quot; (aka &amp;quot;EvalFest&amp;quot;) mail list and participate in the community discussions about defining and running MIREX 2013 tasks. Subscription information at: &lt;br /&gt;
[https://mail.lis.illinois.edu/mailman/listinfo/evalfest EvalFest Central]. &lt;br /&gt;
&lt;br /&gt;
If you are participating in MIREX 2013, it is VERY IMPORTANT that you are subscribed to EvalFest. Deadlines, task updates and other important information will be announced via this mailing list. Please use EvalFest for discussion of MIREX task proposals and other MIREX-related issues. This wiki (the MIREX 2013 wiki) will be used to embody and disseminate task proposals; however, task-related discussions should be conducted on the EvalFest mailing list rather than on this wiki, and then summarized here. &lt;br /&gt;
&lt;br /&gt;
Where possible, definitions or example code for new evaluation metrics or tasks should be provided to the IMIRSEL team, who will embody them in software as part of the NEMA analytics framework. The framework will be released to the community at or before ISMIR 2013, providing a standardised set of interfaces and outputs for disciplined evaluation procedures across a great many MIR tasks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Wiki Participation===&lt;br /&gt;
If you find that you cannot edit a MIREX wiki page, you will need to create a new account via: [[Special:Userlogin]].&lt;br /&gt;
&lt;br /&gt;
Please note that because of &amp;quot;spam-bots&amp;quot;, MIREX wiki registration requests may be moderated by IMIRSEL members. It might take up to 24 hours for approval (Thank you for your patience!).&lt;br /&gt;
&lt;br /&gt;
==MIREX 2005 - 2012 Wikis==&lt;br /&gt;
Content from MIREX 2005 - 2012 is available at:&lt;br /&gt;
'''[[2012:Main_Page|MIREX 2012]]''' &lt;br /&gt;
'''[[2011:Main_Page|MIREX 2011]]''' &lt;br /&gt;
'''[[2010:Main_Page|MIREX 2010]]''' &lt;br /&gt;
'''[[2009:Main_Page|MIREX 2009]]''' &lt;br /&gt;
'''[[2008:Main_Page|MIREX 2008]]''' &lt;br /&gt;
'''[[2007:Main_Page|MIREX 2007]]''' &lt;br /&gt;
'''[[2006:Main_Page|MIREX 2006]]''' &lt;br /&gt;
'''[[2005:Main_Page|MIREX 2005]]'''&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9395</id>
		<title>2013:Task Captains</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9395"/>
		<updated>2013-06-12T13:08:16Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Below are the MIREX 2013 candidate tasks. To volunteer to lead one or more tasks, please add your name in the &amp;quot;Captains&amp;quot; column.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: 20px&amp;quot;&lt;br /&gt;
!ID !! Task !! Captain(s)&lt;br /&gt;
|-&lt;br /&gt;
|abt&lt;br /&gt;
|[[2013:Audio Beat Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ace&lt;br /&gt;
|[[2013:Audio Chord Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|act&lt;br /&gt;
|[[2013:Audio Classification (Train/Test) Tasks]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|acs&lt;br /&gt;
|[[2013:Audio Cover Song Identification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|akd&lt;br /&gt;
|[[2013:Audio Key Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ame&lt;br /&gt;
|[[2013:Audio Melody Extraction]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|ams&lt;br /&gt;
|[[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|aod&lt;br /&gt;
|[[2013:Audio Onset Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ate&lt;br /&gt;
|[[2013:Audio Tempo Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|atg&lt;br /&gt;
|[[2013:Audio Tag Classification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mf0&lt;br /&gt;
|[[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
|Mert Bay?&lt;br /&gt;
|-&lt;br /&gt;
|qbsh&lt;br /&gt;
|[[2013:Query by Singing/Humming]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|qbt&lt;br /&gt;
|[[2013:Query by Tapping]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|scofo&lt;br /&gt;
|[[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sms&lt;br /&gt;
|[[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|struct&lt;br /&gt;
|[[2013:Structural Segmentation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|drts&lt;br /&gt;
|[[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
|Tom Collins?&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9394</id>
		<title>2013:Task Captains</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9394"/>
		<updated>2013-06-12T13:06:20Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: 20px&amp;quot;&lt;br /&gt;
!ID !! Task !! Captains&lt;br /&gt;
|-&lt;br /&gt;
|abt&lt;br /&gt;
|[[2013:Audio Beat Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ace&lt;br /&gt;
|[[2013:Audio Chord Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|act&lt;br /&gt;
|[[2013:Audio Classification (Train/Test) Tasks]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|acs&lt;br /&gt;
|[[2013:Audio Cover Song Identification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|akd&lt;br /&gt;
|[[2013:Audio Key Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ame&lt;br /&gt;
|[[2013:Audio Melody Extraction]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|ams&lt;br /&gt;
|[[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|aod&lt;br /&gt;
|[[2013:Audio Onset Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ate&lt;br /&gt;
|[[2013:Audio Tempo Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|atg&lt;br /&gt;
|[[2013:Audio Tag Classification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mf0&lt;br /&gt;
|[[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
|Mert Bay?&lt;br /&gt;
|-&lt;br /&gt;
|qbsh&lt;br /&gt;
|[[2013:Query by Singing/Humming]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|qbt&lt;br /&gt;
|[[2013:Query by Tapping]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|scofo&lt;br /&gt;
|[[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sms&lt;br /&gt;
|[[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|struct&lt;br /&gt;
|[[2013:Structural Segmentation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|drts&lt;br /&gt;
|[[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
|Tom Collins?&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9393</id>
		<title>2013:Task Captains</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9393"/>
		<updated>2013-06-12T13:05:33Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: 20px&amp;quot;&lt;br /&gt;
!ID !! Task !! Captain&lt;br /&gt;
|-&lt;br /&gt;
|abt&lt;br /&gt;
|[[2013:Audio Beat Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ace&lt;br /&gt;
|[[2013:Audio Chord Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|act&lt;br /&gt;
|[[2013:Audio Classification (Train/Test) Tasks]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|acs&lt;br /&gt;
|[[2013:Audio Cover Song Identification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|akd&lt;br /&gt;
|[[2013:Audio Key Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ame&lt;br /&gt;
|[[2013:Audio Melody Extraction]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|ams&lt;br /&gt;
|[[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|aod&lt;br /&gt;
|[[2013:Audio Onset Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ate&lt;br /&gt;
|[[2013:Audio Tempo Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|atg&lt;br /&gt;
|[[2013:Audio Tag Classification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mf0&lt;br /&gt;
|[[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
|Mert Bay?&lt;br /&gt;
|-&lt;br /&gt;
|qbsh&lt;br /&gt;
|[[2013:Query by Singing/Humming]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|qbt&lt;br /&gt;
|[[2013:Query by Tapping]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|scofo&lt;br /&gt;
|[[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sms&lt;br /&gt;
|[[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|struct&lt;br /&gt;
|[[2013:Structural Segmentation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|drts&lt;br /&gt;
|[[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
|Tom Collins?&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9392</id>
		<title>2013:Task Captains</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9392"/>
		<updated>2013-06-12T12:58:08Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: 20px&amp;quot;&lt;br /&gt;
!ID !! Task !! Captain&lt;br /&gt;
|-&lt;br /&gt;
|abt&lt;br /&gt;
|[[2013:Audio Beat Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ace&lt;br /&gt;
|[[2013:Audio Chord Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|act&lt;br /&gt;
|[[2013:Audio Classification (Train/Test) Tasks]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|acs&lt;br /&gt;
|[[2013:Audio Cover Song Identification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|akd&lt;br /&gt;
|[[2013:Audio Key Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ame&lt;br /&gt;
|[[2013:Audio Melody Extraction]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ams&lt;br /&gt;
|[[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|aod&lt;br /&gt;
|[[2013:Audio Onset Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ate&lt;br /&gt;
|[[2013:Audio Tempo Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|atg&lt;br /&gt;
|[[2013:Audio Tag Classification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mf0&lt;br /&gt;
|[[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qbsh&lt;br /&gt;
|[[2013:Query by Singing/Humming]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qbt&lt;br /&gt;
|[[2013:Query by Tapping]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|scofo&lt;br /&gt;
|[[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sms&lt;br /&gt;
|[[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|struct&lt;br /&gt;
|[[2013:Structural Segmentation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|???&lt;br /&gt;
|[[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
|&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9391</id>
		<title>2013:Task Captains</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9391"/>
		<updated>2013-06-12T12:57:46Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: 20px&amp;quot;&lt;br /&gt;
!ID !! Task !! Captain&lt;br /&gt;
|-&lt;br /&gt;
|abt&lt;br /&gt;
|[[2013:Audio Beat Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ace&lt;br /&gt;
|[[2013:Audio Chord Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|act&lt;br /&gt;
|[[2013:Audio Classification (Train/Test) Tasks]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|acs&lt;br /&gt;
|[[2013:Audio Cover Song Identification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|akd&lt;br /&gt;
|[[2013:Audio Key Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ame&lt;br /&gt;
|[[2013:Audio Melody Extraction]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ams&lt;br /&gt;
|[[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|aod&lt;br /&gt;
|[[2013:Audio Onset Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ate&lt;br /&gt;
|[[2013:Audio Tempo Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|atg&lt;br /&gt;
|[[2013:Audio Tag Classification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mf0&lt;br /&gt;
|[[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qbsh&lt;br /&gt;
|[[2013:Query by Singing/Humming]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qbt&lt;br /&gt;
|[[2013:Query by Tapping]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|scofo&lt;br /&gt;
|[[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]|-&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|sms&lt;br /&gt;
|[[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|struct&lt;br /&gt;
|[[2013:Structural Segmentation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|???&lt;br /&gt;
|[[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
|&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9390</id>
		<title>2013:Task Captains</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9390"/>
		<updated>2013-06-12T12:56:23Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: 20px&amp;quot;&lt;br /&gt;
!ID !! Task !! Captain&lt;br /&gt;
|-&lt;br /&gt;
|abt&lt;br /&gt;
|[[2013:Audio Beat Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ace&lt;br /&gt;
|[[2013:Audio Chord Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|act&lt;br /&gt;
|[[2013:Audio Classification (Train/Test) Tasks]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|acs&lt;br /&gt;
|[[2013:Audio Cover Song Identification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|akd&lt;br /&gt;
|[[2013:Audio Key Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ams&lt;br /&gt;
|[[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|aod&lt;br /&gt;
|[[2013:Audio Onset Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|atg&lt;br /&gt;
|[[2013:Audio Tag Classification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sms&lt;br /&gt;
|[[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|scofo&lt;br /&gt;
|[[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qbsh&lt;br /&gt;
|[[2013:Query by Singing/Humming]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ame&lt;br /&gt;
|[[2013:Audio Melody Extraction]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mf0&lt;br /&gt;
|[[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qbt&lt;br /&gt;
|[[2013:Query by Tapping]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|struct&lt;br /&gt;
|[[2013:Structural Segmentation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ate&lt;br /&gt;
|[[2013:Audio Tempo Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|???&lt;br /&gt;
|[[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
|&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9389</id>
		<title>2013:Task Captains</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9389"/>
		<updated>2013-06-12T12:55:28Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: 20px&amp;quot;&lt;br /&gt;
!ID !! Task !! Captain&lt;br /&gt;
|-&lt;br /&gt;
|act&lt;br /&gt;
|[[2013:Audio Classification (Train/Test) Tasks]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|acs&lt;br /&gt;
|[[2013:Audio Cover Song Identification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|akd&lt;br /&gt;
|[[2013:Audio Key Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ams&lt;br /&gt;
|[[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|aod&lt;br /&gt;
|[[2013:Audio Onset Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|atg&lt;br /&gt;
|[[2013:Audio Tag Classification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sms&lt;br /&gt;
|[[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|scofo&lt;br /&gt;
|[[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qbsh&lt;br /&gt;
|[[2013:Query by Singing/Humming]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ame&lt;br /&gt;
|[[2013:Audio Melody Extraction]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mf0&lt;br /&gt;
|[[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ace&lt;br /&gt;
|[[2013:Audio Chord Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qbt&lt;br /&gt;
|[[2013:Query by Tapping]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|abt&lt;br /&gt;
|[[2013:Audio Beat Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|struct&lt;br /&gt;
|[[2013:Structural Segmentation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ate&lt;br /&gt;
|[[2013:Audio Tempo Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|???&lt;br /&gt;
|[[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
|&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9388</id>
		<title>2013:Task Captains</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9388"/>
		<updated>2013-06-12T12:54:54Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: 20px&amp;quot;&lt;br /&gt;
|!!ID&lt;br /&gt;
|!!Task&lt;br /&gt;
|!!Captain&lt;br /&gt;
|-&lt;br /&gt;
|act&lt;br /&gt;
|[[2013:Audio Classification (Train/Test) Tasks]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|acs&lt;br /&gt;
|[[2013:Audio Cover Song Identification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|akd&lt;br /&gt;
|[[2013:Audio Key Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ams&lt;br /&gt;
|[[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|aod&lt;br /&gt;
|[[2013:Audio Onset Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|atg&lt;br /&gt;
|[[2013:Audio Tag Classification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sms&lt;br /&gt;
|[[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|scofo&lt;br /&gt;
|[[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qbsh&lt;br /&gt;
|[[2013:Query by Singing/Humming]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ame&lt;br /&gt;
|[[2013:Audio Melody Extraction]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mf0&lt;br /&gt;
|[[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ace&lt;br /&gt;
|[[2013:Audio Chord Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qbt&lt;br /&gt;
|[[2013:Query by Tapping]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|abt&lt;br /&gt;
|[[2013:Audio Beat Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|struct&lt;br /&gt;
|[[2013:Structural Segmentation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ate&lt;br /&gt;
|[[2013:Audio Tempo Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|???&lt;br /&gt;
|[[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
|&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9387</id>
		<title>2013:Task Captains</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9387"/>
		<updated>2013-06-12T12:54:24Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: 20px&amp;quot;&lt;br /&gt;
|!ID&lt;br /&gt;
|!Task&lt;br /&gt;
|!Captain&lt;br /&gt;
|-&lt;br /&gt;
|act&lt;br /&gt;
|[[2013:Audio Classification (Train/Test) Tasks]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|acs&lt;br /&gt;
|[[2013:Audio Cover Song Identification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|akd&lt;br /&gt;
|[[2013:Audio Key Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ams&lt;br /&gt;
|[[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|aod&lt;br /&gt;
|[[2013:Audio Onset Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|atg&lt;br /&gt;
|[[2013:Audio Tag Classification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sms&lt;br /&gt;
|[[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|scofo&lt;br /&gt;
|[[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qbsh&lt;br /&gt;
|[[2013:Query by Singing/Humming]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ame&lt;br /&gt;
|[[2013:Audio Melody Extraction]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mf0&lt;br /&gt;
|[[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ace&lt;br /&gt;
|[[2013:Audio Chord Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qbt&lt;br /&gt;
|[[2013:Query by Tapping]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|abt&lt;br /&gt;
|[[2013:Audio Beat Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|struct&lt;br /&gt;
|[[2013:Structural Segmentation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ate&lt;br /&gt;
|[[2013:Audio Tempo Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|???&lt;br /&gt;
|[[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
|&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9386</id>
		<title>2013:Task Captains</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9386"/>
		<updated>2013-06-12T12:53:35Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: 20px&amp;quot;&lt;br /&gt;
|!ID&lt;br /&gt;
|!Task&lt;br /&gt;
|!Captain&lt;br /&gt;
|-&lt;br /&gt;
|act&lt;br /&gt;
|[[2013:Audio Classification (Train/Test) Tasks]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|acs&lt;br /&gt;
|[[2013:Audio Cover Song Identification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|atg&lt;br /&gt;
|[[2013:Audio Tag Classification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ams&lt;br /&gt;
|[[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sms&lt;br /&gt;
|[[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|aod&lt;br /&gt;
|[[2013:Audio Onset Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|akd&lt;br /&gt;
|[[2013:Audio Key Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|scofo&lt;br /&gt;
|[[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qbsh&lt;br /&gt;
|[[2013:Query by Singing/Humming]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ame&lt;br /&gt;
|[[2013:Audio Melody Extraction]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mf0&lt;br /&gt;
|[[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ace&lt;br /&gt;
|[[2013:Audio Chord Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qbt&lt;br /&gt;
|[[2013:Query by Tapping]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|abt&lt;br /&gt;
|[[2013:Audio Beat Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|struct&lt;br /&gt;
|[[2013:Structural Segmentation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ate&lt;br /&gt;
|[[2013:Audio Tempo Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|???&lt;br /&gt;
|[[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
|&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9385</id>
		<title>2013:Task Captains</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9385"/>
		<updated>2013-06-12T12:53:22Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: 20px&amp;quot;&lt;br /&gt;
|!ID&lt;br /&gt;
|!Task&lt;br /&gt;
|!Captain&lt;br /&gt;
|-&lt;br /&gt;
|act&lt;br /&gt;
|[[2013:Audio Classification (Train/Test) Tasks]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|acs&lt;br /&gt;
|[[2013:Audio Cover Song Identification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|atg&lt;br /&gt;
|[[2013:Audio Tag Classification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ams&lt;br /&gt;
|[[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sms&lt;br /&gt;
|[[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|aod&lt;br /&gt;
|[[2013:Audio Onset Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|akd&lt;br /&gt;
|[[2013:Audio Key Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|scofo&lt;br /&gt;
|[[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qbsh&lt;br /&gt;
|[[2013:Query by Singing/Humming]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ame&lt;br /&gt;
|[[2013:Audio Melody Extraction]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mf0&lt;br /&gt;
|[[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ace&lt;br /&gt;
|[[2013:Audio Chord Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qbt&lt;br /&gt;
|[[2013:Query by Tapping]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|abt&lt;br /&gt;
|[[2013:Audio Beat Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|struct&lt;br /&gt;
|[[2013:Structural Segmentation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ate&lt;br /&gt;
|[[2013:Audio Tempo Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|[[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
|&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9384</id>
		<title>2013:Task Captains</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9384"/>
		<updated>2013-06-12T12:51:27Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{|&lt;br /&gt;
|ID&lt;br /&gt;
|Task&lt;br /&gt;
|Captain&lt;br /&gt;
|-&lt;br /&gt;
|act&lt;br /&gt;
|[[2013:Audio Classification (Train/Test) Tasks]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|acs&lt;br /&gt;
|[[2013:Audio Cover Song Identification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|atg&lt;br /&gt;
|[[2013:Audio Tag Classification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ams&lt;br /&gt;
|[[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sms&lt;br /&gt;
|[[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|aod&lt;br /&gt;
|[[2013:Audio Onset Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|akd&lt;br /&gt;
|[[2013:Audio Key Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|scofo&lt;br /&gt;
|[[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qbsh&lt;br /&gt;
|[[2013:Query by Singing/Humming]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ame&lt;br /&gt;
|[[2013:Audio Melody Extraction]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mf0&lt;br /&gt;
|[[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ace&lt;br /&gt;
|[[2013:Audio Chord Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qbt&lt;br /&gt;
|[[2013:Query by Tapping]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|abt&lt;br /&gt;
|[[2013:Audio Beat Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|struct&lt;br /&gt;
|[[2013:Structural Segmentation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ate&lt;br /&gt;
|[[2013:Audio Tempo Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|[[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
|&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9383</id>
		<title>2013:Task Captains</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Task_Captains&amp;diff=9383"/>
		<updated>2013-06-12T12:49:54Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: Created page with &amp;quot;  {| |Task |Captain |- |2013:Audio Classification (Train/Test) Tasks | |- |2013:Audio Cover Song Identification | |- |2013:Audio Tag Classification | |- |[[2013:Audio...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
|Task&lt;br /&gt;
|Captain&lt;br /&gt;
|-&lt;br /&gt;
|[[2013:Audio Classification (Train/Test) Tasks]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|[[2013:Audio Cover Song Identification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|[[2013:Audio Tag Classification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|[[2013:Audio Music Similarity and Retrieval]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|[[2013:Symbolic Melodic Similarity]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|[[2013:Audio Onset Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|[[2013:Audio Key Detection]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|[[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|[[2013:Query by Singing/Humming]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|[[2013:Audio Melody Extraction]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|[[2013:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|[[2013:Audio Chord Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|[[2013:Query by Tapping]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|[[2013:Audio Beat Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|[[2013:Structural Segmentation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|[[2013:Audio Tempo Estimation]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|[[2013:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
|&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Structural_Segmentation&amp;diff=9357</id>
		<title>2013:Structural Segmentation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Structural_Segmentation&amp;diff=9357"/>
		<updated>2013-06-10T16:17:26Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
The aim of the MIREX structural segmentation evaluation is to identify the key structural sections in musical audio. The segment structure (or form) is one of the most important musical parameters. It is furthermore special because musical structure -- especially in popular music genres (e.g. verse, chorus, etc.) -- is accessible to everybody: it needs no particular musical knowledge. This task was first run in 2009.&lt;br /&gt;
&lt;br /&gt;
== Data == &lt;br /&gt;
&lt;br /&gt;
=== Collections ===&lt;br /&gt;
* The MIREX 2009 Collection: 297 pieces, most of them derived from the work of the Beatles.&lt;br /&gt;
&lt;br /&gt;
* MIREX 2010 RWC collection: 100 pieces of popular music, with two sets of ground truth. The first is the set originally included with the RWC dataset. The second set of annotations, explained at http://hal.inria.fr/docs/00/47/34/79/PDF/PI-1948.pdf, contains no labels for segments but rather provides an annotation of segment boundaries.&lt;br /&gt;
&lt;br /&gt;
* MIREX 2012 dataset. The new data set contains over 1,000 annotated pieces covering a range of musical styles. The majority of the pieces have been annotated by two independent annotators. &lt;br /&gt;
&lt;br /&gt;
=== Audio Formats ===&lt;br /&gt;
&lt;br /&gt;
* CD-quality (PCM, 16-bit, 44100 Hz)&lt;br /&gt;
* single channel (mono)&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
&lt;br /&gt;
Submissions to this task will have to conform to a specified format detailed below. Submissions should be packaged and contain at least two files: The algorithm itself and a README containing contact information and detailing, in full, the use of the algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Input Data ===&lt;br /&gt;
Participating algorithms will have to read audio in the following format:&lt;br /&gt;
&lt;br /&gt;
* Sample rate: 44.1 KHz&lt;br /&gt;
* Sample size: 16 bit&lt;br /&gt;
* Number of channels: 1 (mono)&lt;br /&gt;
* Encoding: WAV &lt;br /&gt;
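&lt;br /&gt;
For a quick self-check (illustrative only, not part of any official submission kit), a few lines of Python 3 with the standard-library wave module can confirm that a file matches this format; the file name query.wav below is just a placeholder:&lt;br /&gt;
&lt;br /&gt;
 import wave&lt;br /&gt;
 # Sanity-check an input file against the expected format:&lt;br /&gt;
 # 44.1 kHz, 16-bit, mono PCM WAV. 'query.wav' is a placeholder name.&lt;br /&gt;
 with wave.open('query.wav', 'rb') as w:&lt;br /&gt;
     assert w.getframerate() == 44100, 'expected a 44.1 kHz sample rate'&lt;br /&gt;
     assert w.getsampwidth() == 2, 'expected 16-bit samples'&lt;br /&gt;
     assert w.getnchannels() == 1, 'expected a single (mono) channel'&lt;br /&gt;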
&lt;br /&gt;
=== Output Data ===&lt;br /&gt;
&lt;br /&gt;
The structural segmentation algorithms will return the segmentation in an ASCII text file for each input .wav audio file. The specification of this output file is immediately below.&lt;br /&gt;
&lt;br /&gt;
=== Output File Format (Structural Segmentation) ===&lt;br /&gt;
&lt;br /&gt;
The Structural Segmentation output file format is a tab-delimited ASCII text format. This is the same format as Chris Harte's chord labelling files (.lab), and the ground truth is provided in the same format. Onset and offset times are given in seconds, and the labels are simply letters: 'A', 'B', ... with segments referring to the same structural element having the same label.&lt;br /&gt;
&lt;br /&gt;
Three column text file of the format&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;onset_time(sec)&amp;gt;\t&amp;lt;offset_time(sec)&amp;gt;\t&amp;lt;label&amp;gt;\n&lt;br /&gt;
 &amp;lt;onset_time(sec)&amp;gt;\t&amp;lt;offset_time(sec)&amp;gt;\t&amp;lt;label&amp;gt;\n&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
where \t denotes a tab, \n denotes the end of line. The &amp;lt; and &amp;gt; characters are not included. An example output file would look something like:&lt;br /&gt;
&lt;br /&gt;
 0.000    5.223    A&lt;br /&gt;
 5.223    15.101   B&lt;br /&gt;
 15.101   20.334   A&lt;br /&gt;
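&lt;br /&gt;
Purely as an illustration (this is not required by the task), a minimal Python 3 sketch for writing such a file could look as follows; the segments list and out_path are placeholders standing in for an algorithm's results and the %output argument:&lt;br /&gt;
&lt;br /&gt;
 # Write (onset, offset, label) segments in the tab-delimited format above.&lt;br /&gt;
 # 'segments' and 'out_path' are placeholders for real results and %output.&lt;br /&gt;
 segments = [(0.000, 5.223, 'A'), (5.223, 15.101, 'B'), (15.101, 20.334, 'A')]&lt;br /&gt;
 out_path = 'example_output.txt'&lt;br /&gt;
 with open(out_path, 'w') as f:&lt;br /&gt;
     for onset, offset, label in segments:&lt;br /&gt;
         f.write('%.3f\t%.3f\t%s\n' % (onset, offset, label))&lt;br /&gt;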
&lt;br /&gt;
=== Algorithm Calling Format ===&lt;br /&gt;
&lt;br /&gt;
The submitted algorithm must take as arguments a SINGLE .wav file to perform the structural segmentation on as well as the full output path and filename of the output file. The ability to specify the output path and file name is essential. Denoting the input .wav file path and name as %input and the output file path and name as %output, a program called foobar could be called from the command-line as follows:&lt;br /&gt;
&lt;br /&gt;
 foobar %input %output&lt;br /&gt;
 foobar -i %input -o %output&lt;br /&gt;
&lt;br /&gt;
Moreover, if your submission takes additional parameters, foobar could be called like:&lt;br /&gt;
&lt;br /&gt;
 foobar .1 %input %output&lt;br /&gt;
 foobar -param1 .1 -i %input -o %output  &lt;br /&gt;
&lt;br /&gt;
If your submission is in MATLAB, it should be submitted as a function. Once again, the function must contain String inputs for the full path and names of the input and output files. Parameters could also be specified as input arguments of the function. For example: &lt;br /&gt;
&lt;br /&gt;
 foobar('%input','%output')&lt;br /&gt;
 foobar(.1,'%input','%output')&lt;br /&gt;
&lt;br /&gt;
=== README File ===&lt;br /&gt;
&lt;br /&gt;
A README file accompanying each submission should contain explicit instructions on how to run the program (as well as contact information, etc.). In particular, each command line to run should be specified, using %input for the input sound file and %output for the resulting text file.&lt;br /&gt;
&lt;br /&gt;
For instance, to test the program foobar with a specific value for parameter param1, the README file would look like:&lt;br /&gt;
&lt;br /&gt;
 foobar -param1 .1 -i %input -o %output&lt;br /&gt;
&lt;br /&gt;
For a submission using MATLAB, the README file could look like:&lt;br /&gt;
&lt;br /&gt;
 matlab -r &amp;quot;foobar(.1,'%input','%output');quit;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
At ISMIR 2008, [http://ismir2008.ismir.net/papers/ISMIR2008_219.pdf Lukashevich] proposed a measure for segmentation evaluation. Because of the complexity of the structural segmentation task definition, several different evaluation measures will be employed to address different aspects. It should be noted that none of the evaluation measures cares about the true labels of the sections: they only denote the clustering. This means that it does not matter whether the systems produce true labels such as &amp;quot;chorus&amp;quot; and &amp;quot;verse&amp;quot;, or arbitrary labels such as &amp;quot;A&amp;quot; and &amp;quot;B&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Boundary retrieval ===&lt;br /&gt;
'''Hit rate''' Found segment boundaries are accepted as correct if they are within 0.5s ([http://ismir2007.ismir.net/proceedings/ISMIR2007_p051_turnbull.pdf Turnbull et al. ISMIR2007]) or 3s ([http://dx.doi.org/10.1109/TASL.2007.910781 Levy &amp;amp; Sandler TASLP2008]) of a boundary in the ground truth. Based on the matched hits, ''boundary retrieval recall rate'', ''boundary retrieval precision rate'', and ''boundary retrieval F-measure'' are calculated.&lt;br /&gt;
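&lt;br /&gt;
To make the hit-rate computation concrete, here is a rough Python 3 sketch of one simple greedy matching scheme. It is only an illustration under the assumption that each ground-truth boundary may account for at most one estimated boundary; the official evaluation code may match boundaries differently:&lt;br /&gt;
&lt;br /&gt;
 def boundary_scores(est, ref, window=0.5):&lt;br /&gt;
     # Greedily match estimated boundaries (est) to ground-truth boundaries&lt;br /&gt;
     # (ref); a hit is a match within the tolerance window (0.5 or 3 seconds).&lt;br /&gt;
     unmatched = list(ref)&lt;br /&gt;
     hits = 0&lt;br /&gt;
     for b in est:&lt;br /&gt;
         close = [r for r in unmatched if abs(b - r) &amp;lt;= window]&lt;br /&gt;
         if close:&lt;br /&gt;
             unmatched.remove(min(close, key=lambda r: abs(b - r)))&lt;br /&gt;
             hits += 1&lt;br /&gt;
     precision = hits / len(est) if est else 0.0&lt;br /&gt;
     recall = hits / len(ref) if ref else 0.0&lt;br /&gt;
     if precision + recall == 0:&lt;br /&gt;
         return 0.0, 0.0, 0.0&lt;br /&gt;
     return precision, recall, 2 * precision * recall / (precision + recall)&lt;br /&gt;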
&lt;br /&gt;
'''Median deviation''' Two median deviation measures between boundaries in the result and the ground truth are calculated: ''median true-to-guess'' is the median time from boundaries in the ground truth to the closest boundaries in the result, and ''median guess-to-true'' is similarly the median time from boundaries in the result to the closest boundaries in the ground truth. ([http://ismir2007.ismir.net/proceedings/ISMIR2007_p051_turnbull.pdf Turnbull et al. ISMIR2007])&lt;br /&gt;
&lt;br /&gt;
=== Frame clustering ===&lt;br /&gt;
Both the result and the ground truth are handled in short frames (e.g., beat or fixed 100ms). All frame pairs in a structure description are handled. The pairs in which both frames are assigned to the same cluster (i.e., have the same label) form the sets &amp;lt;math&amp;gt;P_E&amp;lt;/math&amp;gt; (for the system result) and &amp;lt;math&amp;gt;P_A&amp;lt;/math&amp;gt; (for the ground truth). The ''pairwise precision rate'' can be calculated by &amp;lt;math&amp;gt;P = \frac{|P_E \cap P_A|}{|P_E|}&amp;lt;/math&amp;gt;, ''pairwise recall rate'' by &amp;lt;math&amp;gt;R = \frac{|P_E \cap P_A|}{|P_A|}&amp;lt;/math&amp;gt;, and ''pairwise F-measure'' by &amp;lt;math&amp;gt;F=\frac{2 P R}{P + R}&amp;lt;/math&amp;gt;. ([http://dx.doi.org/10.1109/TASL.2007.910781 Levy &amp;amp; Sandler TASLP2008])&lt;br /&gt;
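&lt;br /&gt;
As an illustration of this definition, the following Python 3 sketch forms the label-agreement pair sets directly (a quadratic-time transcription of the formulas, not the official implementation); the two label lists are assumed to be sampled on the same frame grid:&lt;br /&gt;
&lt;br /&gt;
 from itertools import combinations&lt;br /&gt;
 &lt;br /&gt;
 def same_label_pairs(labels):&lt;br /&gt;
     # Frame-index pairs whose two frames carry the same label.&lt;br /&gt;
     return set(p for p in combinations(range(len(labels)), 2)&lt;br /&gt;
                if labels[p[0]] == labels[p[1]])&lt;br /&gt;
 &lt;br /&gt;
 def pairwise_scores(est_labels, ref_labels):&lt;br /&gt;
     # est_labels, ref_labels: one label per frame, on a common frame grid.&lt;br /&gt;
     p_e = same_label_pairs(est_labels)  # pairs agreeing in the result&lt;br /&gt;
     p_a = same_label_pairs(ref_labels)  # pairs agreeing in the ground truth&lt;br /&gt;
     both = len(p_e.intersection(p_a))&lt;br /&gt;
     precision = both / len(p_e) if p_e else 0.0&lt;br /&gt;
     recall = both / len(p_a) if p_a else 0.0&lt;br /&gt;
     f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0&lt;br /&gt;
     return precision, recall, f&lt;br /&gt;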
&lt;br /&gt;
=== Normalised conditional entropies ===&lt;br /&gt;
Over- and under-segmentation based evaluation measures proposed in [http://ismir2008.ismir.net/papers/ISMIR2008_219.pdf Lukashevich ISMIR2008].&lt;br /&gt;
Structure descriptions are represented as frame sequences with the associated cluster information (similar to the frame clustering measure). A confusion matrix between the labels in the ground truth and the result is calculated. The matrix C is of size |L_A| * |L_E|, i.e., the number of unique labels in the ground truth times the number of unique labels in the result. From the confusion matrix, the joint distribution is calculated by normalising the values with the total number of frames F:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;p_{i,j} = C_{i,j} / F&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Similarly, the two marginals are calculated:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;p_i^a = \sum_{j=1}^{|L_E|} C_{i,j}/F&amp;lt;/math&amp;gt;, and&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;p_j^e = \sum_{i=1}^{|L_A|} C_{i,j}/F&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Conditional distributions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;p_{i,j}^{a|e} = C_{i,j} / \sum_{i=1}^{|L_A|} C_{i,j}&amp;lt;/math&amp;gt;, and&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;p_{i,j}^{e|a} = C_{i,j} / \sum_{j=1}^{|L_E|} C_{i,j}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The conditional entropies will then be&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;H(E|A) = - \sum_{i=1}^{|L_A|} p_i^a \sum_{j=1}^{|L_E|} p_{i,j}^{e|a} \log_2(p_{i,j}^{e|a})&amp;lt;/math&amp;gt;, and&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;H(A|E) = - \sum_{j=1}^{|L_E|} p_j^e \sum_{i=1}^{|L_A|} p_{i,j}^{a|e} \log_2(p_{i,j}^{a|e})&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The final evaluation measures will then be the oversegmentation score&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;S_O = 1 - \frac{H(E|A)}{\log_2(|L_E|)}&amp;lt;/math&amp;gt; , and the undersegmentation score&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;S_U = 1 - \frac{H(A|E)}{\log_2(|L_A|)}&amp;lt;/math&amp;gt;&lt;br /&gt;
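&lt;br /&gt;
Putting the formulas above together, a small Python 3 sketch (illustrative only; the layout of the confusion matrix, with ground-truth labels as rows and result labels as columns, is an assumption) could compute both scores as follows:&lt;br /&gt;
&lt;br /&gt;
 import math&lt;br /&gt;
 &lt;br /&gt;
 def over_under_segmentation(C):&lt;br /&gt;
     # C[i][j]: number of frames labelled i in the ground truth (|L_A| rows)&lt;br /&gt;
     # and j in the system result (|L_E| columns).&lt;br /&gt;
     n_a, n_e = len(C), len(C[0])&lt;br /&gt;
     total = float(sum(sum(row) for row in C))&lt;br /&gt;
     row_sums = [sum(row) for row in C]&lt;br /&gt;
     col_sums = [sum(C[i][j] for i in range(n_a)) for j in range(n_e)]&lt;br /&gt;
     h_e_given_a = 0.0&lt;br /&gt;
     h_a_given_e = 0.0&lt;br /&gt;
     for i in range(n_a):&lt;br /&gt;
         for j in range(n_e):&lt;br /&gt;
             if C[i][j] == 0:&lt;br /&gt;
                 continue&lt;br /&gt;
             p_ij = C[i][j] / total&lt;br /&gt;
             # p(e|a) = C_ij / row sum, p(a|e) = C_ij / column sum&lt;br /&gt;
             h_e_given_a -= p_ij * math.log(C[i][j] / row_sums[i], 2)&lt;br /&gt;
             h_a_given_e -= p_ij * math.log(C[i][j] / col_sums[j], 2)&lt;br /&gt;
     # degenerate single-label descriptions are scored as 1.0 in this sketch&lt;br /&gt;
     s_o = 1.0 - h_e_given_a / math.log(n_e, 2) if n_e &amp;gt; 1 else 1.0&lt;br /&gt;
     s_u = 1.0 - h_a_given_e / math.log(n_a, 2) if n_a &amp;gt; 1 else 1.0&lt;br /&gt;
     return s_o, s_u&lt;br /&gt;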
&lt;br /&gt;
== Relevant Development Collections == &lt;br /&gt;
*Jouni Paulus's [http://www.cs.tut.fi/sgn/arg/paulus/structure.html structure analysis page] links to a corpus of 177 Beatles songs ([http://www.cs.tut.fi/sgn/arg/paulus/beatles_sections_TUT.zip zip file]). The Beatles annotations are not a part of the TUTstructure07 dataset. That dataset contains 557 songs, a list of which is available [http://www.cs.tut.fi/sgn/arg/paulus/TUTstructure07_files.html here].&lt;br /&gt;
&lt;br /&gt;
*Ewald Peiszer's [http://www.ifs.tuwien.ac.at/mir/audiosegmentation.html thesis page] links to a portion of the corpus he used: 43 non-Beatles pop songs (including 10 J-pop songs) ([http://www.ifs.tuwien.ac.at/mir/audiosegmentation/dl/ep_groundtruth_excl_Paulus.zip zip file]).&lt;br /&gt;
&lt;br /&gt;
These public corpora give a combined 220 songs.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions will be imposed.&lt;br /&gt;
&lt;br /&gt;
A hard limit of 24 hours will be imposed on analysis times. Submissions exceeding this limit may not receive a result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Audio_Beat_Tracking&amp;diff=9356</id>
		<title>2013:Audio Beat Tracking</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Audio_Beat_Tracking&amp;diff=9356"/>
		<updated>2013-06-10T16:16:45Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
The text of this section was copied from the 2012 Wiki.  Please add your comments and discussion at the bottom of this page.&lt;br /&gt;
&lt;br /&gt;
The aim of the automatic beat tracking task is to track the beat locations in a collection of sound files. Unlike the Audio Tempo Extraction task, whose aim is to detect the tempo of each file, the beat tracking task aims at detecting all beat locations in recordings. The algorithms will be evaluated in terms of their accuracy in predicting beat locations annotated by a group of listeners. &lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Collections ===&lt;br /&gt;
The original 2006 dataset contains 160 30-second excerpts (WAV format) used for the Audio Tempo and Beat contests in 2006. Beat locations have been annotated in each excerpt by 40 different listeners (39 listeners for a few excerpts). These audio recordings were selected to provide a stable tempo value, a wide distribution of tempo values, and a large variety of instrumentation and musical styles. About 20% of the files contain non-binary meters, and a small number of examples contain changing meters. One disadvantage of using this set for beat tracking is that the tempi are rather stable, so this set will not test beat-tracking algorithms in their ability to track tempo changes.&lt;br /&gt;
&lt;br /&gt;
The second collection comprises 367 Chopin Mazurkas, represented as full audio tracks (WAV format). The Mazurka dataset contains tempo changes, so it will evaluate the ability of algorithms to track them.&lt;br /&gt;
&lt;br /&gt;
The third collection was assembled and donated in 2012. This dataset contains 217 excerpts around 40s each, of which 19 are &amp;quot;easy&amp;quot; and the remaining 198 are &amp;quot;hard&amp;quot;. The harder excerpts were drawn from the following musical styles: Romantic music, film soundtracks, blues, chanson and solo guitar. &lt;br /&gt;
&lt;br /&gt;
This dataset has been designed for radically new techniques which can contend with challenging beat tracking situations like: quiet accompaniment, expressive timing, changes in time signature, slow tempo, poor sound quality etc. So, if your beat tracker likes a 4/4 time-signature with a steady tempo and needs clear percussive onsets, don't expect it to do very well!&lt;br /&gt;
But don't be deterred, this is for the good of beat tracking. &lt;br /&gt;
&lt;br /&gt;
You can read in detail about how the dataset was made here:&lt;br /&gt;
[http://dx.doi.org/10.1109/TASL.2012.2205244 ''Selective Sampling for Beat Tracking Evaluation'']&lt;br /&gt;
&lt;br /&gt;
=== Audio Formats ===&lt;br /&gt;
&lt;br /&gt;
The data are monophonic sound files, with the associated onset times and data about the annotation robustness.&lt;br /&gt;
&lt;br /&gt;
* CD-quality (PCM, 16-bit, 44100 Hz)&lt;br /&gt;
* single channel (mono)&lt;br /&gt;
* file length between 2 and 36 seconds (total time: 14 minutes) &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Submissions to this task will have to conform to a specified format detailed below. Submissions should be packaged and contain at least two files: The algorithm itself and a README containing contact information and detailing, in full, the use of the algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Input Data ===&lt;br /&gt;
Participating algorithms will have to read audio in the following format:&lt;br /&gt;
&lt;br /&gt;
* Sample rate: 44.1 KHz&lt;br /&gt;
* Sample size: 16 bit&lt;br /&gt;
* Number of channels: 1 (mono)&lt;br /&gt;
* Encoding: WAV &lt;br /&gt;
&lt;br /&gt;
=== Output Data ===&lt;br /&gt;
&lt;br /&gt;
The beat tracking algorithms will return beat-times in an ASCII text file for each input .wav audio file. The specification of this output file is immediately below.&lt;br /&gt;
&lt;br /&gt;
=== Output File Format (Audio Beat tracking) ===&lt;br /&gt;
&lt;br /&gt;
The Beat Tracking output file format is an ASCII text format. Each beat time is specified, in seconds, on its own line. Specifically, &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;beat time(in seconds)&amp;gt;\n&lt;br /&gt;
&lt;br /&gt;
where \n denotes the end of line. The &amp;lt; and &amp;gt; characters are not included. An example output file would look something like:&lt;br /&gt;
&lt;br /&gt;
 0.243&lt;br /&gt;
 0.486&lt;br /&gt;
 0.729&lt;br /&gt;
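&lt;br /&gt;
Purely as an illustration, writing this format takes only a couple of lines of Python 3; the beats list and out_path below are placeholders for an algorithm's results and the %output argument:&lt;br /&gt;
&lt;br /&gt;
 # Write estimated beat times (in seconds), one per line.&lt;br /&gt;
 # 'beats' and 'out_path' are placeholders for real results and %output.&lt;br /&gt;
 beats = [0.243, 0.486, 0.729]&lt;br /&gt;
 out_path = 'example_output.txt'&lt;br /&gt;
 with open(out_path, 'w') as f:&lt;br /&gt;
     for t in beats:&lt;br /&gt;
         f.write('%.3f\n' % t)&lt;br /&gt;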
&lt;br /&gt;
=== Algorithm Calling Format ===&lt;br /&gt;
&lt;br /&gt;
The submitted algorithm must take as arguments a SINGLE .wav file to perform the beat tracking on as well as the full output path and filename of the output file. The ability to specify the output path and file name is essential. Denoting the input .wav file path and name as %input and the output file path and name as %output, a program called foobar could be called from the command-line as follows:&lt;br /&gt;
&lt;br /&gt;
 foobar %input %output&lt;br /&gt;
 foobar -i %input -o %output&lt;br /&gt;
&lt;br /&gt;
Moreover, if your submission takes additional parameters, such as a detection threshold, foobar could be called like:&lt;br /&gt;
&lt;br /&gt;
 foobar .1 %input %output&lt;br /&gt;
 foobar -param1 .1 -i %input -o %output  &lt;br /&gt;
&lt;br /&gt;
If your submission is in MATLAB, it should be submitted as a function. Once again, the function must contain String inputs for the full path and names of the input and output files. Parameters could also be specified as input arguments of the function. For example: &lt;br /&gt;
&lt;br /&gt;
 foobar('%input','%output')&lt;br /&gt;
 foobar(.1,'%input','%output')&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== README File ===&lt;br /&gt;
&lt;br /&gt;
A README file accompanying each submission should contain explicit instructions on how to run the program (as well as contact information, etc.). In particular, each command line to run should be specified, using %input for the input sound file and %output for the resulting text file.&lt;br /&gt;
&lt;br /&gt;
For instance, to test the program foobar with different values for parameters param1, the README file would look like:&lt;br /&gt;
&lt;br /&gt;
 foobar -param1 .1 -i %input -o %output&lt;br /&gt;
 foobar -param1 .15 -i %input -o %output&lt;br /&gt;
 foobar -param1 .2 -i %input -o %output&lt;br /&gt;
 foobar -param1 .25 -i %input -o %output&lt;br /&gt;
 foobar -param1 .3 -i %input -o %output&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
For a submission using MATLAB, the README file could look like:&lt;br /&gt;
&lt;br /&gt;
 matlab -r &amp;quot;foobar(.1,'%input','%output');quit;&amp;quot;&lt;br /&gt;
 matlab -r &amp;quot;foobar(.15,'%input','%output');quit;&amp;quot;&lt;br /&gt;
 matlab -r &amp;quot;foobar(.2,'%input','%output');quit;&amp;quot; &lt;br /&gt;
 matlab -r &amp;quot;foobar(.25,'%input','%output');quit;&amp;quot;&lt;br /&gt;
 matlab -r &amp;quot;foobar(.3,'%input','%output');quit;&amp;quot;&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The different command lines to evaluate the performance of each parameter set over the whole database will be generated automatically from each line in the README file containing both '%input' and '%output' strings.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
&lt;br /&gt;
The evaluation methods are taken from the beat evaluation toolbox and&lt;br /&gt;
are described in the following technical report: &lt;br /&gt;
&lt;br /&gt;
 M. E. P. Davies, N. Degara and M. D. Plumbley. &amp;quot;Evaluation methods for musical audio beat tracking algorithms&amp;quot;. [http://www.elec.qmul.ac.uk/people/markp/2009/DaviesDegaraPlumbley09-evaluation-tr.pdf ''Technical Report C4DM-TR-09-06'']. This link now works! :)&lt;br /&gt;
&lt;br /&gt;
For further details on the specifics of the methods please refer to the&lt;br /&gt;
paper. However, here is a brief summary with appropriate references:&lt;br /&gt;
&lt;br /&gt;
*'''F-measure''' - the standard calculation as used in onset evaluation but&lt;br /&gt;
with a 70ms window (a rough sketch of this windowed matching is given after this list of measures). &lt;br /&gt;
&lt;br /&gt;
 S. Dixon, &amp;quot;Onset detection revisited,&amp;quot; in ''Proceedings of 9th&lt;br /&gt;
 International Conference on Digital Audio Effects (DAFx)'', Montreal,&lt;br /&gt;
 Canada, pp. 133-137, 2006.&lt;br /&gt;
&lt;br /&gt;
 S. Dixon, &amp;quot;Evaluation of audio beat tracking system beatroot,&amp;quot; ''Journal&lt;br /&gt;
 of New Music Research'', vol. 36, no. 1, pp. 39-51, 2007.&lt;br /&gt;
&lt;br /&gt;
*'''Cemgil''' - beat accuracy is calculated using a Gaussian error function&lt;br /&gt;
with 40ms standard deviation.&lt;br /&gt;
&lt;br /&gt;
 A. T. Cemgil, B. Kappen, P. Desain, and H. Honing, &amp;quot;On tempo tracking:&lt;br /&gt;
 Tempogram representation and Kalman filtering,&amp;quot; ''Journal Of New Music&lt;br /&gt;
 Research'', vol. 28, no. 4, pp. 259-273, 2001&lt;br /&gt;
 &lt;br /&gt;
*'''Goto''' - binary decision of correct or incorrect tracking based on&lt;br /&gt;
statistical properties of a beat error sequence.&lt;br /&gt;
&lt;br /&gt;
 M. Goto and Y. Muraoka, &amp;quot;Issues in evaluating beat tracking systems,&amp;quot; in&lt;br /&gt;
 ''Working Notes of the IJCAI-97 Workshop on Issues in AI and Music -&lt;br /&gt;
 Evaluation and Assessment'', 1997, pp. 9-16.&lt;br /&gt;
&lt;br /&gt;
*'''PScore''' - McKinney's impulse train cross-correlation method as used in&lt;br /&gt;
2006.&lt;br /&gt;
&lt;br /&gt;
 M. F. McKinney, D. Moelants, M. E. P. Davies, and A. Klapuri,&lt;br /&gt;
 &amp;quot;Evaluation of audio beat tracking and music tempo extraction&lt;br /&gt;
 algorithms,&amp;quot; ''Journal of New Music Research'', vol. 36, no. 1, pp. 1-16,&lt;br /&gt;
 2007.&lt;br /&gt;
&lt;br /&gt;
*'''CMLc''', '''CMLt''', '''AMLc''', '''AMLt''' - continuity-based evaluation methods based on&lt;br /&gt;
the longest continuously correctly tracked section. &lt;br /&gt;
&lt;br /&gt;
 S. Hainsworth, &amp;quot;Techniques for the automated analysis of musical audio,&amp;quot;&lt;br /&gt;
 Ph.D. dissertation, Department of Engineering, Cambridge University,&lt;br /&gt;
 2004.&lt;br /&gt;
&lt;br /&gt;
 A. P. Klapuri, A. Eronen, and J. Astola, &amp;quot;Analysis of the meter of&lt;br /&gt;
 acoustic musical signals,&amp;quot; IEEE Transactions on Audio, Speech and&lt;br /&gt;
 Language Processing, vol. 14, no. 1, pp. 342-355, 2006.&lt;br /&gt;
&lt;br /&gt;
*'''D''', '''Dg''' - information based criteria based on analysis of a beat error&lt;br /&gt;
histogram (note the results are measured in 'bits' and not percentages),&lt;br /&gt;
see the technical report for a description.&lt;br /&gt;
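&lt;br /&gt;
For readers who want a concrete starting point, below is a rough Python 3 sketch of the windowed matching behind the F-measure entry above. It is a simplified stand-in, not the beat evaluation toolbox itself; it assumes a +/- 70 ms tolerance and that each annotated beat can account for at most one estimated beat (consult the technical report for the exact conventions):&lt;br /&gt;
&lt;br /&gt;
 def beat_f_measure(est, ref, window=0.07):&lt;br /&gt;
     # Greedily match estimated beats (est) to annotated beats (ref)&lt;br /&gt;
     # within the tolerance window, one annotation per estimate at most.&lt;br /&gt;
     unmatched = list(ref)&lt;br /&gt;
     hits = 0&lt;br /&gt;
     for b in est:&lt;br /&gt;
         close = [r for r in unmatched if abs(b - r) &amp;lt;= window]&lt;br /&gt;
         if close:&lt;br /&gt;
             unmatched.remove(min(close, key=lambda r: abs(b - r)))&lt;br /&gt;
             hits += 1&lt;br /&gt;
     p = hits / len(est) if est else 0.0&lt;br /&gt;
     r = hits / len(ref) if ref else 0.0&lt;br /&gt;
     return 2 * p * r / (p + r) if p + r else 0.0&lt;br /&gt;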
&lt;br /&gt;
== Relevant Development Collections ==&lt;br /&gt;
You can find it here:&lt;br /&gt;
&lt;br /&gt;
https://www.music-ir.org/evaluation/MIREX/data/2006/beat/&lt;br /&gt;
&lt;br /&gt;
User: beattrack Password: b34trx&lt;br /&gt;
&lt;br /&gt;
https://www.music-ir.org/evaluation/MIREX/data/2006/tempo/&lt;br /&gt;
&lt;br /&gt;
User: tempo Password: t3mp0&lt;br /&gt;
&lt;br /&gt;
Data has been uploaded in both .tgz and .zip format.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions will be imposed.&lt;br /&gt;
&lt;br /&gt;
A hard limit of 12 hours will be imposed on analysis times. Submissions exceeding this limit may not receive a result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Query_by_Tapping&amp;diff=9355</id>
		<title>2013:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Query_by_Tapping&amp;diff=9355"/>
		<updated>2013-06-10T16:15:17Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2012 page. Please add your comments and discussions for 2013. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems that retrieve ground-truth MIDI files from queries in which the user taps the note onsets of a melody into a microphone. This task provides query files in wave format as well as the corresponding human-labelled onset times in symbolic format. For this year's QBT task, we have two corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 text files of onset times used to retrieve the target MIDIs in MIR-QBT. These onset files let participants concentrate on similarity matching instead of onset detection. Note that the onset files are not guaranteed to be perfect transcriptions of the original wav query files.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate).&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate).&lt;br /&gt;
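&lt;br /&gt;
As a minimal illustration of the Top-10 hit rate used in both subtasks (the data structures and names below are assumptions made for the sake of the example):&lt;br /&gt;
&lt;br /&gt;
 def top10_hit_rate(ranked_lists, ground_truth):&lt;br /&gt;
     # ranked_lists: dict mapping each query id to its candidate list, best first&lt;br /&gt;
     # ground_truth: dict mapping each query id to the correct MIDI id&lt;br /&gt;
     hits = sum(1 for q, cands in ranked_lists.items()&lt;br /&gt;
                if ground_truth[q] in cands[:10])&lt;br /&gt;
     return hits / max(len(ranked_lists), 1)&lt;br /&gt;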
&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Indexing the MIDIs collection ===&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database midi files named as uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (Note that this step is not required unless you want to index or preprocess the midi database.)&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For instance, for subtask 1 (onset queries), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
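&lt;br /&gt;
For illustration, a small sketch of the expected I/O (the retrieval function match_query is a hypothetical placeholder; the file formats follow the examples above):&lt;br /&gt;
&lt;br /&gt;
 def run_qbt(query_file_list, result_file, match_query):&lt;br /&gt;
     # query_file_list: lines of the form 'query_path   target.mid'&lt;br /&gt;
     # match_query: callable returning the ten best database keys for one query path&lt;br /&gt;
     with open(query_file_list) as fin, open(result_file, 'w') as fout:&lt;br /&gt;
         for line in fin:&lt;br /&gt;
             if not line.strip():&lt;br /&gt;
                 continue&lt;br /&gt;
             query_path = line.split()[0]&lt;br /&gt;
             top10 = match_query(query_path)&lt;br /&gt;
             fout.write(query_path + ': ' + ' '.join(top10) + '\n')&lt;br /&gt;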
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2013 ==&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Query_by_Tapping&amp;diff=9354</id>
		<title>2013:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Query_by_Tapping&amp;diff=9354"/>
		<updated>2013-06-10T16:14:53Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2012 page. Please add your comments and discussions for 2013. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems that retrieve ground-truth MIDI files from queries in which the user taps the note onsets of a melody into a microphone. This task provides query files in wave format as well as the corresponding human-labelled onset times in symbolic format. For this year's QBT task, we have two corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 text files of onset times used to retrieve the target MIDIs in MIR-QBT. These onset files let participants concentrate on similarity matching instead of onset detection. Note that the onset files are not guaranteed to be perfect transcriptions of the original wav query files.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate).&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate).&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2013 ==&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Indexing the MIDIs collection ===&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database midi files named as uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (Note that this step is not required unless you want to index or preprocess the midi database.)&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For instance, for subtask 1 (onset queries), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
name / email&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Audio_Tempo_Estimation&amp;diff=9353</id>
		<title>2013:Audio Tempo Estimation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Audio_Tempo_Estimation&amp;diff=9353"/>
		<updated>2013-06-10T16:13:08Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: Created page with &amp;quot;== Description == This task compares current methods for the extraction of tempo from musical audio. We distinguish between notated tempo and perceptual tempo and will test for t...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task compares current methods for the extraction of tempo from musical audio. We distinguish between notated tempo and perceptual tempo and will test for the extraction of perceptual tempo. &lt;br /&gt;
&lt;br /&gt;
We differentiate between notated tempo and perceived tempo. If you have the notated tempo (e.g., from the score) it is straightforward to attach a tempo annotation to an excerpt and run a contest for algorithms to predict the notated tempo. For excerpts for which we have no &amp;quot;official&amp;quot; tempo annotation, we can also annotate the *perceived* tempo. This is not a straightforward task and needs to be done carefully. If you ask a group of listeners (including skilled musicians) to annotate the tempo of music excerpts, they can give you different answers (they tap at different metrical levels) if they are unfamiliar with the piece. For some excerpts the perceived pulse or tempo is less ambiguous and everyone taps at the same metrical level, but for other excerpts the tempo can be quite ambiguous and you get a complete split across listeners.&lt;br /&gt;
&lt;br /&gt;
The annotation of perceptual tempo can take several forms: a probability density function as a function of tempo; a series of tempos, ranked by their respective perceptual salience; etc. These measures of perceptual tempo can be used as a ground truth on which to test algorithms for tempo extraction. The dominant perceived tempo is sometimes the same as the notated tempo but not always. A piece of music can &amp;quot;feel&amp;quot; faster or slower than its notated tempo in that the dominant perceived pulse can be a metrical level higher or lower than the notated tempo.&lt;br /&gt;
&lt;br /&gt;
There are several reasons to examine the perceptual tempo, either in place of or in addition to the notated tempo. For many applications of automatic tempo extractors, the perceived tempo of the music is more relevant than the notated tempo. An automatic playlist generator or music navigator, for instance, might allow listeners to select or filter music by its (automatically extracted) tempo. In this case, the &amp;quot;feel&amp;quot;, or perceptual tempo may be more relevant than the notated tempo. An automatic DJ apparatus might also perform better with a representation of perceived tempo rather than notated tempo.&lt;br /&gt;
&lt;br /&gt;
A more pragmatic reason for using perceptual tempo rather than notated tempo as a ground truth for our contest is that we simply do not have the notated tempo of our test set. If we notate it by having a panel of expert listeners tap along and label the excerpts, we are by default dealing with the perceived tempo. The handling of this data as ground truth must be done with care.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Collections ===&lt;br /&gt;
MIREX 2006 Tempo dataset collected by Martin F. McKinney (Philips) and Dirk Moelants (IPEM, Ghent University). Composed of 160 30-second clips in WAV format with annotated tempos. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Audio Formats ===&lt;br /&gt;
The data are monophonic sound files, with the associated onset times and data about the annotation robustness.&lt;br /&gt;
&lt;br /&gt;
* CD-quality (PCM, 16-bit, 44100 Hz)&lt;br /&gt;
* single channel (mono)&lt;br /&gt;
* 30 second clips&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Submissions to this task will have to conform to a specified format detailed below. Submissions should be packaged and contain at least two files: The algorithm itself and a README containing contact information and detailing, in full, the use of the algorithm.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Input data ===&lt;br /&gt;
Individual audio files in WAV format (30-second clips drawn from the 140 unseen tracks in the dataset). The audio recordings were selected to provide a stable tempo value, a wide distribution of tempi values, and a large variety of instrumentation and musical styles. About 20% of the files contain non-binary meters, and a small number of examples contain changing meters.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Output Data ===&lt;br /&gt;
Submitted programs should output two tempi (a slower tempo, T1, and a faster tempo, T2) as well as the strength of T1 relative to T2, ST1, a value between 0 and 1. The relative strength of T2, ST2, is not output; it is simply 1 - ST1. The tempo estimates from each algorithm should be written to a text file in the following format:&lt;br /&gt;
&lt;br /&gt;
 T1&amp;lt;tab&amp;gt;T2&amp;lt;tab&amp;gt;ST1&lt;br /&gt;
&lt;br /&gt;
E.g.&lt;br /&gt;
 60	180	0.7&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Algorithm Calling Format ===&lt;br /&gt;
&lt;br /&gt;
The submitted algorithm must take as arguments a SINGLE .wav file to perform the tempo estimation on as well as the full output path and filename of the output file. The ability to specify the output path and file name is essential. Denoting the input .wav file path and name as ''%input'' and the output file path and name as ''%output'', a program called foobar could be called from the command-line as follows:&lt;br /&gt;
&lt;br /&gt;
 foobar %input %output&lt;br /&gt;
or&lt;br /&gt;
 foobar -i %input -o %output&lt;br /&gt;
&lt;br /&gt;
Moreover, if your submission takes additional parameters, foobar could be called like:&lt;br /&gt;
&lt;br /&gt;
 foobar .1 %input %output&lt;br /&gt;
 foobar -param1 .1 -i %input -o %output  &lt;br /&gt;
&lt;br /&gt;
If your submission is in MATLAB, it should be submitted as a function. Once again, the function must contain String inputs for the full path and names of the input and output files. Parameters could also be specified as input arguments of the function. For example: &lt;br /&gt;
&lt;br /&gt;
 foobar('%input','%output')&lt;br /&gt;
 foobar(.1,'%input','%output')&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== README File ===&lt;br /&gt;
&lt;br /&gt;
A README file accompanying each submission should contain explicit instructions on how to run the program (as well as contact information, etc.). In particular, each command line to run should be specified, using %input for the input sound file and %output for the resulting text file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
&lt;br /&gt;
This section focuses on the mechanics of the method while we discuss the data (music excerpts and perceptual data) in the next section. There are two general steps to the method: 1) collection of perceptual tempo annotations; and 2) evaluation of tempo extraction algorithms.&lt;br /&gt;
&lt;br /&gt;
=== Perceptual tempo data collection ===&lt;br /&gt;
&lt;br /&gt;
The following procedure is described in more detail in McKinney and Moelants (2004) and Moelants and McKinney (2004). Listeners were asked to tap to the beat of a series of musical excerpts. Responses were collected and their perceived tempo was calculated. For each excerpt, a distribution of perceived tempo was generated. A relatively simple form of perceived tempo was proposed for this contest: The two highest peaks in the perceived tempo distribution for each excerpt were taken, along with their respective heights (normalized to sum to 1.0) as the two tempo candidates for that particular excerpt. The height of a peak in the distribution is assumed to represent the perceptual salience of that tempo. &lt;br /&gt;
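&lt;br /&gt;
As a purely illustrative sketch of this ground-truth construction (the histogram bin width and peak picking below are assumptions, not the authors' exact procedure):&lt;br /&gt;
&lt;br /&gt;
 import numpy as np&lt;br /&gt;
 &lt;br /&gt;
 def two_tempo_candidates(tapped_bpm, bin_width=2.0):&lt;br /&gt;
     # histogram of per-listener tapped tempi (bin width in BPM is an assumption)&lt;br /&gt;
     counts, edges = np.histogram(tapped_bpm, bins=np.arange(20.0, 301.0, bin_width))&lt;br /&gt;
     order = np.argsort(counts)[::-1]          # bins sorted by height, highest first&lt;br /&gt;
     i1, i2 = int(order[0]), int(order[1])&lt;br /&gt;
     cands = sorted([(edges[i1], counts[i1]), (edges[i2], counts[i2])])  # slower tempo first&lt;br /&gt;
     (t1, h1), (t2, h2) = cands&lt;br /&gt;
     st1 = h1 / float(h1 + h2)                 # peak heights normalised to sum to 1.0&lt;br /&gt;
     return t1, t2, st1&lt;br /&gt;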
&lt;br /&gt;
==== References ====&lt;br /&gt;
* McKinney, M.F. and Moelants, D. (2004), Deviations from the resonance theory of tempo induction, Conference on Interdisciplinary Musicology, Graz. URL: http://www-gewi.uni-graz.at/staff/parncutt/cim04/CIM04_paper_pdf/McKinney_Moelants_CIM04_proceedings_t.pdf&lt;br /&gt;
* Moelants, D. and McKinney, M.F. (2004), Tempo perception and musical content: What makes a piece slow, fast, or temporally ambiguous? International Conference on Music Perception &amp;amp; Cognition, Evanston, IL. URL: http://icmpc8.umn.edu/proceedings/ICMPC8/PDF/AUTHOR/MP040237.PDF &lt;br /&gt;
&lt;br /&gt;
=== Evaluation of tempo extraction algorithms ===&lt;br /&gt;
Algorithms will process musical excerpts and return the following data: Two tempi in BPM (T1 and T2, where T1 is the slower of the two tempi).  For a given algorithm, the performance, P, for each audio excerpt will be given by the following equation:&lt;br /&gt;
&lt;br /&gt;
 P = ST1 * TT1 + (1 - ST1) * TT2&lt;br /&gt;
&lt;br /&gt;
where ST1 is the relative perceptual strength of T1 (given by the ground-truth data; it varies from 0 to 1.0), TT1 indicates whether the algorithm identified T1 to within 8%, and TT2 indicates whether the algorithm identified T2 to within 8%. No credit will be given for tempi other than T1 and T2.&lt;br /&gt;
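&lt;br /&gt;
A minimal sketch of the per-excerpt P score under the definition above (here TT1 and TT2 are treated as binary hits, and either of the algorithm's two reported tempi is allowed to match each ground-truth tempo; this matching convention is an assumption):&lt;br /&gt;
&lt;br /&gt;
 import numpy as np&lt;br /&gt;
 &lt;br /&gt;
 def p_score(est_t1, est_t2, gt_t1, gt_t2, gt_st1, tol=0.08):&lt;br /&gt;
     # a ground-truth tempo counts as identified if some estimate is within 8% of it&lt;br /&gt;
     def hit(gt):&lt;br /&gt;
         errs = np.abs(np.array([est_t1, est_t2]) - gt) / gt&lt;br /&gt;
         return 1.0 if np.any(np.less_equal(errs, tol)) else 0.0&lt;br /&gt;
     tt1, tt2 = hit(gt_t1), hit(gt_t2)&lt;br /&gt;
     return gt_st1 * tt1 + (1.0 - gt_st1) * tt2&lt;br /&gt;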
&lt;br /&gt;
The algorithm with the best average P-score will achieve the highest rank in the task. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Relevant Test Collections ==&lt;br /&gt;
We will use a collection of 160 musical excerpts for the evaluation procedure. 40 of the excerpts have been taken from one of McKinney and Moelants' previous experiments (see the McKinney/Moelants ICMPC paper above).&lt;br /&gt;
&lt;br /&gt;
Excerpts were selected to provide:&lt;br /&gt;
&lt;br /&gt;
* stable tempo within each excerpt&lt;br /&gt;
* a good distribution of tempi across excerpts&lt;br /&gt;
* a large variety of instrumentation and beat strengths (with and without percussion)&lt;br /&gt;
* a variation of musical styles, including many non-western styles&lt;br /&gt;
* the presence of non-binary meters (about 20% have a ternary element and there are a few examples with odd or changing meter). &lt;br /&gt;
&lt;br /&gt;
We will provide 20 excerpts with ground truth data for participants to try/tune their algorithms before submission. The remaining 140 excerpts will be novel to all participants.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Practice Data===&lt;br /&gt;
The practice data can be found here:&lt;br /&gt;
&lt;br /&gt;
https://www.music-ir.org/evaluation/MIREX/data/2006/beat/&lt;br /&gt;
&lt;br /&gt;
User: beattrack Password: b34trx&lt;br /&gt;
&lt;br /&gt;
https://www.music-ir.org/evaluation/MIREX/data/2006/tempo/&lt;br /&gt;
&lt;br /&gt;
User: tempo Password: t3mp0&lt;br /&gt;
&lt;br /&gt;
Data has been uploaded in both .tgz and .zip format.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions will be imposed.&lt;br /&gt;
&lt;br /&gt;
A hard limit of 8 hours will be imposed on analysis times. Submissions exceeding this limit may not receive a result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
name / email&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Structural_Segmentation&amp;diff=9352</id>
		<title>2013:Structural Segmentation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Structural_Segmentation&amp;diff=9352"/>
		<updated>2013-06-10T16:12:48Z</updated>

		<summary type="html">&lt;p&gt;Cwillis: Created page with &amp;quot;== Description ==  The aim of the MIREX structural segmentation evaluation is to identify the key structural sections in musical audio. The segment structure (or form) is one of ...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
The aim of the MIREX structural segmentation evaluation is to identify the key structural sections in musical audio. The segment structure (or form) is one of the most important musical parameters. It is furthermore special because musical structure -- especially in popular music genres (e.g. verse, chorus, etc.) -- is accessible to everybody: it needs no particular musical knowledge. This task was first run in 2009.&lt;br /&gt;
&lt;br /&gt;
== Data == &lt;br /&gt;
&lt;br /&gt;
=== Collections ===&lt;br /&gt;
* The MIREX 2009 Collection: 297 pieces, most of them derived from the work of the Beatles.&lt;br /&gt;
&lt;br /&gt;
* MIREX 2010 RWC collection. 100 pieces of popular music. There are two ground truths. The first is the one originally included with the RWC dataset. The explanation of the second set of annotations can be found at http://hal.inria.fr/docs/00/47/34/79/PDF/PI-1948.pdf. The second set of annotations contains no labels for segments, but rather provides an annotation of segment boundaries.&lt;br /&gt;
&lt;br /&gt;
* MIREX 2012 dataset. The new data set contains over 1,000 annotated pieces covering a range of musical styles. The majority of the pieces have been annotated by two independent annotators. &lt;br /&gt;
&lt;br /&gt;
=== Audio Formats ===&lt;br /&gt;
&lt;br /&gt;
* CD-quality (PCM, 16-bit, 44100 Hz)&lt;br /&gt;
* single channel (mono)&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
&lt;br /&gt;
Submissions to this task will have to conform to a specified format detailed below. Submissions should be packaged and contain at least two files: The algorithm itself and a README containing contact information and detailing, in full, the use of the algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Input Data ===&lt;br /&gt;
Participating algorithms will have to read audio in the following format:&lt;br /&gt;
&lt;br /&gt;
* Sample rate: 44.1 KHz&lt;br /&gt;
* Sample size: 16 bit&lt;br /&gt;
* Number of channels: 1 (mono)&lt;br /&gt;
* Encoding: WAV &lt;br /&gt;
&lt;br /&gt;
=== Output Data ===&lt;br /&gt;
&lt;br /&gt;
The structural segmentation algorithms will return the segmentation in an ASCII text file for each input .wav audio file. The specification of this output file is immediately below.&lt;br /&gt;
&lt;br /&gt;
=== Output File Format (Structural Segmentation) ===&lt;br /&gt;
&lt;br /&gt;
The Structural Segmentation output file format is a tab-delimited ASCII text format. This is the same as Chris Harte's chord labelling files (.lab), and so is the same format as the ground truth as well. Onset and offset times are given in seconds, and the labels are simply letters: 'A', 'B', ... with segments referring to the same structural element having the same label.&lt;br /&gt;
&lt;br /&gt;
Three column text file of the format&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;onset_time(sec)&amp;gt;\t&amp;lt;offset_time(sec)&amp;gt;\t&amp;lt;label&amp;gt;\n&lt;br /&gt;
 &amp;lt;onset_time(sec)&amp;gt;\t&amp;lt;offset_time(sec)&amp;gt;\t&amp;lt;label&amp;gt;\n&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
where \t denotes a tab, \n denotes the end of line. The &amp;lt; and &amp;gt; characters are not included. An example output file would look something like:&lt;br /&gt;
&lt;br /&gt;
 0.000    5.223    A&lt;br /&gt;
 5.223    15.101   B&lt;br /&gt;
 15.101   20.334   A&lt;br /&gt;
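&lt;br /&gt;
For illustration, a small sketch that writes a segmentation in this three-column format (the segment list and output path are assumed inputs, not part of the task specification):&lt;br /&gt;
&lt;br /&gt;
 def write_segments(segments, out_path):&lt;br /&gt;
     # segments: list of (onset_sec, offset_sec, label) tuples, e.g. (0.0, 5.223, 'A')&lt;br /&gt;
     with open(out_path, 'w') as f:&lt;br /&gt;
         for onset, offset, label in segments:&lt;br /&gt;
             f.write('%.3f\t%.3f\t%s\n' % (onset, offset, label))&lt;br /&gt;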
&lt;br /&gt;
=== Algorithm Calling Format ===&lt;br /&gt;
&lt;br /&gt;
The submitted algorithm must take as arguments a SINGLE .wav file to perform the structural segmentation on as well as the full output path and filename of the output file. The ability to specify the output path and file name is essential. Denoting the input .wav file path and name as %input and the output file path and name as %output, a program called foobar could be called from the command-line as follows:&lt;br /&gt;
&lt;br /&gt;
 foobar %input %output&lt;br /&gt;
 foobar -i %input -o %output&lt;br /&gt;
&lt;br /&gt;
Moreover, if your submission takes additional parameters, foobar could be called like:&lt;br /&gt;
&lt;br /&gt;
 foobar .1 %input %output&lt;br /&gt;
 foobar -param1 .1 -i %input -o %output  &lt;br /&gt;
&lt;br /&gt;
If your submission is in MATLAB, it should be submitted as a function. Once again, the function must contain String inputs for the full path and names of the input and output files. Parameters could also be specified as input arguments of the function. For example: &lt;br /&gt;
&lt;br /&gt;
 foobar('%input','%output')&lt;br /&gt;
 foobar(.1,'%input','%output')&lt;br /&gt;
&lt;br /&gt;
=== README File ===&lt;br /&gt;
&lt;br /&gt;
A README file accompanying each submission should contain explicit instructions on how to run the program (as well as contact information, etc.). In particular, each command line to run should be specified, using %input for the input sound file and %output for the resulting text file.&lt;br /&gt;
&lt;br /&gt;
For instance, to test the program foobar with a specific value for parameter param1, the README file would look like:&lt;br /&gt;
&lt;br /&gt;
 foobar -param1 .1 -i %input -o %output&lt;br /&gt;
&lt;br /&gt;
For a submission using MATLAB, the README file could look like:&lt;br /&gt;
&lt;br /&gt;
 matlab -r &amp;quot;foobar(.1,'%input','%output');quit;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
At ISMIR 2008, [http://ismir2008.ismir.net/papers/ISMIR2008_219.pdf Lukashevich] proposed a measure for segmentation evaluation. Because of the complexity of the structural segmentation task definition, several different evaluation measures will be employed to address different aspects. It should be noted that none of the evaluation measures cares about the true labels of the sections: they only denote the clustering. This means that it does not matter whether the systems produce true labels such as &amp;quot;chorus&amp;quot; and &amp;quot;verse&amp;quot;, or arbitrary labels such as &amp;quot;A&amp;quot; and &amp;quot;B&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Boundary retrieval ===&lt;br /&gt;
'''Hit rate''' Found segment boundaries are accepted as correct if they are within 0.5s ([http://ismir2007.ismir.net/proceedings/ISMIR2007_p051_turnbull.pdf Turnbull et al. ISMIR2007]) or 3s ([http://dx.doi.org/10.1109/TASL.2007.910781 Levy &amp;amp; Sandler TASLP2008]) of a boundary in the ground truth. Based on the matched hits, the ''boundary retrieval recall rate'', ''boundary retrieval precision rate'', and ''boundary retrieval F-measure'' are calculated.&lt;br /&gt;
&lt;br /&gt;
'''Median deviation''' Two median deviation measures between boundaries in the result and the ground truth are calculated: ''median true-to-guess'' is the median time from boundaries in the ground truth to the closest boundaries in the result, and ''median guess-to-true'' is similarly the median time from boundaries in the result to the closest boundaries in the ground truth. ([http://ismir2007.ismir.net/proceedings/ISMIR2007_p051_turnbull.pdf Turnbull et al. ISMIR2007])&lt;br /&gt;
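&lt;br /&gt;
A sketch of the boundary retrieval measures (greedy one-to-one matching is an assumption; the window is 0.5 s or 3 s as stated above):&lt;br /&gt;
&lt;br /&gt;
 import numpy as np&lt;br /&gt;
 &lt;br /&gt;
 def boundary_measures(est, ann, window=0.5):&lt;br /&gt;
     # est, ann: boundary times in seconds&lt;br /&gt;
     est, ann = np.sort(np.asarray(est, float)), np.sort(np.asarray(ann, float))&lt;br /&gt;
     used, hits = np.zeros(len(est), bool), 0&lt;br /&gt;
     for a in ann:&lt;br /&gt;
         d = np.abs(est - a)&lt;br /&gt;
         k = int(np.argmin(d)) if len(est) else -1&lt;br /&gt;
         if len(est) and (not used[k]) and np.less_equal(d[k], window):&lt;br /&gt;
             used[k], hits = True, hits + 1&lt;br /&gt;
     p = hits / len(est) if len(est) else 0.0&lt;br /&gt;
     r = hits / len(ann) if len(ann) else 0.0&lt;br /&gt;
     f = 2 * p * r / (p + r) if (p + r) else 0.0&lt;br /&gt;
     # median deviations (true-to-guess and guess-to-true; assumes both lists are non-empty)&lt;br /&gt;
     t2g = float(np.median([np.min(np.abs(est - a)) for a in ann]))&lt;br /&gt;
     g2t = float(np.median([np.min(np.abs(ann - e)) for e in est]))&lt;br /&gt;
     return p, r, f, t2g, g2t&lt;br /&gt;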
&lt;br /&gt;
=== Frame clustering ===&lt;br /&gt;
Both the result and the ground truth are represented as short frames (e.g., beat-synchronous or fixed 100 ms frames). All frame pairs in a structure description are considered. The pairs in which both frames are assigned to the same cluster (i.e., have the same label) form the sets &amp;lt;math&amp;gt;P_E&amp;lt;/math&amp;gt; (for the system result) and &amp;lt;math&amp;gt;P_A&amp;lt;/math&amp;gt; (for the ground truth). The ''pairwise precision rate'' can be calculated by &amp;lt;math&amp;gt;P = \frac{|P_E \cap P_A|}{|P_E|}&amp;lt;/math&amp;gt;, ''pairwise recall rate'' by &amp;lt;math&amp;gt;R = \frac{|P_E \cap P_A|}{|P_A|}&amp;lt;/math&amp;gt;, and ''pairwise F-measure'' by &amp;lt;math&amp;gt;F=\frac{2 P R}{P + R}&amp;lt;/math&amp;gt;. ([http://dx.doi.org/10.1109/TASL.2007.910781 Levy &amp;amp; Sandler TASLP2008])&lt;br /&gt;
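&lt;br /&gt;
A sketch of the pairwise frame-clustering measures (the two label sequences are assumed to be pre-aligned, one label per frame, and of equal length):&lt;br /&gt;
&lt;br /&gt;
 from itertools import combinations&lt;br /&gt;
 &lt;br /&gt;
 def pairwise_prf(est_labels, ann_labels):&lt;br /&gt;
     # est_labels, ann_labels: one segment label per frame&lt;br /&gt;
     n = len(ann_labels)&lt;br /&gt;
     pairs_e = set(ij for ij in combinations(range(n), 2) if est_labels[ij[0]] == est_labels[ij[1]])&lt;br /&gt;
     pairs_a = set(ij for ij in combinations(range(n), 2) if ann_labels[ij[0]] == ann_labels[ij[1]])&lt;br /&gt;
     both = len(pairs_e.intersection(pairs_a))&lt;br /&gt;
     p = both / len(pairs_e) if pairs_e else 0.0&lt;br /&gt;
     r = both / len(pairs_a) if pairs_a else 0.0&lt;br /&gt;
     f = 2 * p * r / (p + r) if (p + r) else 0.0&lt;br /&gt;
     return p, r, f&lt;br /&gt;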
&lt;br /&gt;
=== Normalised conditional entropies ===&lt;br /&gt;
Over- and under segmentation based evaluation measures proposed in [http://ismir2008.ismir.net/papers/ISMIR2008_219.pdf Lukashevich ISMIR2008].&lt;br /&gt;
Structure descriptions are represented as frame sequences with the associated cluster information (similar to the Frame clustering measure). A confusion matrix between the labels in the ground truth and the result is calculated. The matrix C is of size |L_A| * |L_E|, i.e., the number of unique labels in the ground truth times the number of unique labels in the result. From the confusion matrix, the joint distribution is calculated by normalising the values with the total number of frames F:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;p_{i,j} = C_{i,j} / F&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Similarly, the two marginals are calculated:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;p_i^a = \sum_{j=1}^{|L_E|} C_{i,j}/F&amp;lt;/math&amp;gt;, and&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;p_j^e = \sum_{i=1}^{|L_A|} C_{i,j}/F&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Conditional distributions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;p_{i,j}^{a|e} = C_{i,j} / \sum_{i=1}^{|L_A|} C_{i,j}&amp;lt;/math&amp;gt;, and&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;p_{i,j}^{e|a} = C_{i,j} / \sum_{j=1}^{|L_E|} C_{i,j}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The conditional entropies will then be&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;H(E|A) = - \sum_{i=1}^{|L_A|} p_i^a \sum_{j=1}^{|L_E|} p_{i,j}^{e|a} \log_2(p_{i,j}^{e|a})&amp;lt;/math&amp;gt;, and&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;H(A|E) = - \sum_{j=1}^{|L_E|} p_j^e \sum_{i=1}^{|L_A|} p_{i,j}^{a|e} \log_2(p_{i,j}^{a|e})&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The final evaluation measures will then be the oversegmentation score&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;S_O = 1 - \frac{H(E|A)}{\log_2(|L_E|)}&amp;lt;/math&amp;gt; , and the undersegmentation score&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;S_U = 1 - \frac{H(A|E)}{\log_2(|L_A|)}&amp;lt;/math&amp;gt;&lt;br /&gt;
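&lt;br /&gt;
A sketch of these over- and undersegmentation scores computed directly from two frame-level label sequences, following the equations above (the small epsilon guarding the logarithms and the guard against single-label descriptions are implementation assumptions):&lt;br /&gt;
&lt;br /&gt;
 import numpy as np&lt;br /&gt;
 &lt;br /&gt;
 def over_under_segmentation(est_labels, ann_labels, eps=1e-12):&lt;br /&gt;
     la = sorted(set(ann_labels))              # unique ground-truth labels L_A&lt;br /&gt;
     le = sorted(set(est_labels))              # unique estimated labels L_E&lt;br /&gt;
     c = np.zeros((len(la), len(le)))&lt;br /&gt;
     for a, e in zip(ann_labels, est_labels):  # confusion matrix over frames&lt;br /&gt;
         c[la.index(a), le.index(e)] += 1&lt;br /&gt;
     f = c.sum()&lt;br /&gt;
     p_a = c.sum(axis=1) / f                   # marginal over ground-truth labels&lt;br /&gt;
     p_e = c.sum(axis=0) / f                   # marginal over estimated labels&lt;br /&gt;
     p_e_given_a = c / (c.sum(axis=1, keepdims=True) + eps)&lt;br /&gt;
     p_a_given_e = c / (c.sum(axis=0, keepdims=True) + eps)&lt;br /&gt;
     h_e_a = -np.sum(p_a * np.sum(p_e_given_a * np.log2(p_e_given_a + eps), axis=1))&lt;br /&gt;
     h_a_e = -np.sum(p_e * np.sum(p_a_given_e * np.log2(p_a_given_e + eps), axis=0))&lt;br /&gt;
     s_o = 1.0 - h_e_a / np.log2(max(len(le), 2))   # oversegmentation score S_O&lt;br /&gt;
     s_u = 1.0 - h_a_e / np.log2(max(len(la), 2))   # undersegmentation score S_U&lt;br /&gt;
     return s_o, s_u&lt;br /&gt;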
&lt;br /&gt;
== Relevant Development Collections == &lt;br /&gt;
*Jouni Paulus's [http://www.cs.tut.fi/sgn/arg/paulus/structure.html structure analysis page] links to a corpus of 177 Beatles songs ([http://www.cs.tut.fi/sgn/arg/paulus/beatles_sections_TUT.zip zip file]). The Beatles annotations are not a part of the TUTstructure07 dataset. That dataset contains 557 songs, a list of which is available [http://www.cs.tut.fi/sgn/arg/paulus/TUTstructure07_files.html here].&lt;br /&gt;
&lt;br /&gt;
*Ewald Peiszer's [http://www.ifs.tuwien.ac.at/mir/audiosegmentation.html thesis page] links to a portion of the corpus he used: 43 non-Beatles pop songs (including 10 J-pop songs) ([http://www.ifs.tuwien.ac.at/mir/audiosegmentation/dl/ep_groundtruth_excl_Paulus.zip zip file]).&lt;br /&gt;
&lt;br /&gt;
These public corpora give a combined 220 songs.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions will be imposed.&lt;br /&gt;
&lt;br /&gt;
A hard limit of 24 hours will be imposed on analysis times. Submissions exceeding this limit may not receive a result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
*Joan Serrà / jserra@iiia.csic.es&lt;/div&gt;</summary>
		<author><name>Cwillis</name></author>
		
	</entry>
</feed>