<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Blair+Kaneshiro</id>
	<title>MIREX Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Blair+Kaneshiro"/>
	<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/wiki/Special:Contributions/Blair_Kaneshiro"/>
	<updated>2026-04-13T20:13:28Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.1</generator>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Task_Captains&amp;diff=10886</id>
		<title>2015:Task Captains</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Task_Captains&amp;diff=10886"/>
		<updated>2015-04-14T16:36:25Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;As at ISMIR 2014, we are preparing to improve the distribution of tasks for the upcoming MIREX 2015. To do so, we need leaders to help us organize and run each task.&lt;br /&gt;
&lt;br /&gt;
To volunteer to lead one or more tasks, please add your name in the &amp;quot;Captains&amp;quot; column.&lt;br /&gt;
&lt;br /&gt;
What does it mean to lead a task?&lt;br /&gt;
* Updating wiki pages as needed&lt;br /&gt;
* Communicating with submitters and troubleshooting submissions&lt;br /&gt;
* Running and evaluating submissions&lt;br /&gt;
* Publishing final results&lt;br /&gt;
&lt;br /&gt;
Due to the proprietary nature of much of the data, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to give task organizers access to manage and run submissions on the IMIRSEL systems.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: 20px&amp;quot;&lt;br /&gt;
!ID !! Task !! Captain(s)&lt;br /&gt;
|-&lt;br /&gt;
|abt&lt;br /&gt;
|[[2015:Audio Beat Tracking]]&lt;br /&gt;
|Sebastian Böck&lt;br /&gt;
|-&lt;br /&gt;
|ace&lt;br /&gt;
|[[2015:Audio Chord Estimation]]&lt;br /&gt;
|Johan Pauwels&lt;br /&gt;
|-&lt;br /&gt;
|act&lt;br /&gt;
|[[2015:Audio Classification (Train/Test) Tasks]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|acs&lt;br /&gt;
|[[2015:Audio Cover Song Identification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ade&lt;br /&gt;
|[[2015:Audio Downbeat Estimation]]&lt;br /&gt;
|Florian Krebs, Sebastian Böck&lt;br /&gt;
|-&lt;br /&gt;
|akd&lt;br /&gt;
|[[2015:Audio Key Detection]]&lt;br /&gt;
|Johan Pauwels&lt;br /&gt;
|-&lt;br /&gt;
|ame&lt;br /&gt;
|[[2015:Audio Melody Extraction]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|ams&lt;br /&gt;
|[[2015:Audio Music Similarity and Retrieval]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|aod&lt;br /&gt;
|[[2015:Audio Onset Detection]]&lt;br /&gt;
|Sebastian Böck&lt;br /&gt;
|-&lt;br /&gt;
|ate&lt;br /&gt;
|[[2015:Audio Tempo Estimation]]&lt;br /&gt;
|Aggelos Gkiokas&lt;br /&gt;
|-&lt;br /&gt;
|atg&lt;br /&gt;
|[[2015:Audio Tag Classification]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mf0&lt;br /&gt;
|[[2015:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qbsh&lt;br /&gt;
|[[2015:Query by Singing/Humming]]&lt;br /&gt;
|KETI&lt;br /&gt;
|-&lt;br /&gt;
|qbt&lt;br /&gt;
|[[2015:Query by Tapping]]&lt;br /&gt;
| CCRMA&lt;br /&gt;
|-&lt;br /&gt;
|scofo&lt;br /&gt;
|[[2015:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sms&lt;br /&gt;
|[[2015:Symbolic Melodic Similarity]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|struct&lt;br /&gt;
|[[2015:Structural Segmentation]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|drts&lt;br /&gt;
|[[2015:Discovery of Repeated Themes &amp;amp; Sections]]&lt;br /&gt;
|Tom Collins&lt;br /&gt;
|-&lt;br /&gt;
|afp&lt;br /&gt;
|[[2015:Audio_Fingerprinting]]&lt;br /&gt;
|Chung-Che Wang&lt;br /&gt;
|-&lt;br /&gt;
|svs&lt;br /&gt;
|[[2015:Singing_Voice_Separation]]&lt;br /&gt;
|Tak-Shing Chan, Yi-Hsuan Yang, Li Su&lt;br /&gt;
|-&lt;br /&gt;
|kgc&lt;br /&gt;
|[[2015:Audio K-POP Genre Classification]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|-&lt;br /&gt;
|kmc&lt;br /&gt;
|[[2015:Audio K-POP Mood Classification]]&lt;br /&gt;
|IMIRSEL&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10504</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10504"/>
		<updated>2014-10-08T18:31:55Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries made by tapping the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset times).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from users tapping on a keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Submission deadline is September 16.&lt;br /&gt;
&lt;br /&gt;
Questions or suggestions can be added directly here, or emailed to us at qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
=== New for 2014 ===&lt;br /&gt;
* Added QBT-Extended dataset&lt;br /&gt;
* New subtask combining rhythm and pitch contours (QBT-Extended only)&lt;br /&gt;
* Minor modifications to the command format, moving more toward a training/test paradigm that accommodates hidden test datasets&lt;br /&gt;
* Please make sure your list of candidates is ranked, as we will be assigning variable points for Top-10, 5, and 1 matches.&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection; note, however, that onset files derived from .wav files are not guaranteed to reflect perfect onset detection of the original wav queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Indexing the MIDIs collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the database, your code should do so using the following command-line format. (This step is optional; it is only needed if you want to index or otherwise preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing &amp;lt;dbMidi.list&amp;gt; &amp;lt;dir_workspace_root&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;code&amp;gt;&amp;lt;dbMidi.list&amp;gt;&amp;lt;/code&amp;gt; is the input list of database MIDI files, each named &amp;lt;code&amp;gt;uniq_key.mid&amp;lt;/code&amp;gt;. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into &amp;lt;code&amp;gt;&amp;lt;dir_workspace_root&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Training ===&lt;br /&gt;
&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTraining &amp;lt;dbMidi_list&amp;gt; &amp;lt;query_file_list_train&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;code&amp;gt;&amp;lt;dbMidi_list&amp;gt;&amp;lt;/code&amp;gt; is a list of the MIDI files in the database to match against (see Step 0), and &amp;lt;code&amp;gt;&amp;lt;query_file_list_train&amp;gt;&amp;lt;/code&amp;gt; maps each query to its associated ground truth. You can use &amp;lt;code&amp;gt;[dir_workspace_root]&amp;lt;/code&amp;gt; to store any temporary indexing/database structures. (You can omit &amp;lt;code&amp;gt;[dir_workspace_root]&amp;lt;/code&amp;gt; if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
==== Per-task input specification ====&lt;br /&gt;
If the input query files are onset files (for subtask 1), then the format of &amp;lt;code&amp;gt;&amp;lt;query_file_list_train&amp;gt;&amp;lt;/code&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See [[#Onset files format|Onset files format]] for details.&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of &amp;lt;code&amp;gt;&amp;lt;query_file_list_train&amp;gt;&amp;lt;/code&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
If the input query files are &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt; files (for subtask 3), then the format of &amp;lt;code&amp;gt;&amp;lt;query_file_list_train&amp;gt;&amp;lt;/code&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	qbtQuery/query_00001.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	qbtQuery/query_00002.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	qbtQuery/query_00003.y_onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See [[#Onset files format|Onset files format]] for details.&lt;br /&gt;
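As an illustration only (&amp;lt;code&amp;gt;parse_query_list&amp;lt;/code&amp;gt; is a hypothetical helper, not part of the official submission kit), the tab-separated query lists above could be parsed along these lines:&lt;br /&gt;

```python
# Hypothetical sketch, not part of the official QBT tooling: parse a
# tab-separated query file list into a dict mapping each query file
# to its ground-truth MIDI name.
def parse_query_list(text):
    pairs = {}
    for line in text.strip().splitlines():
        fields = line.split("\t")
        # Subtask 3 rows carry an extra .y_onset column; the ground-truth
        # MIDI name is always the last field.
        pairs[fields[0]] = fields[-1]
    return pairs

example = (
    "qbtQuery/query_00001.onset\t00001.mid\n"
    "qbtQuery/query_00002.onset\t00001.mid\n"
    "qbtQuery/query_00003.onset\t00002.mid"
)
assert parse_query_list(example)["qbtQuery/query_00003.onset"] == "00002.mid"
```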
&lt;br /&gt;
&lt;br /&gt;
===== Onset files format  =====&lt;br /&gt;
To preserve compatibility with the original task, the QBT-E query files share the same &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; file extension as previous symbolic input query datasets.&lt;br /&gt;
&lt;br /&gt;
An &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; file is a space-separated text file of elapsed onset times (in milliseconds) from the first onset, which is always 0.0. Example of 5 onsets: &lt;br /&gt;
&lt;br /&gt;
 0.0 479.922 720.069 976.071 1215.694&lt;br /&gt;
&lt;br /&gt;
For subtask 3, the additional dimension (position/pitch/contour) is provided in a file with the same name but the extension &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt;. Like &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; files, &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt; files are space-separated text files, of the same length as the corresponding &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; file, that list the absolute vertical position of each tap on the touchscreen. Example (corresponding to the above):&lt;br /&gt;
&lt;br /&gt;
 291.000000 293.500000 305.500000 302.000000 239.000000&lt;br /&gt;
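A minimal reading sketch (&amp;lt;code&amp;gt;parse_onsets&amp;lt;/code&amp;gt; is a hypothetical helper, not part of the task infrastructure) might look like this:&lt;br /&gt;

```python
# Hypothetical sketch: load an .onset file's contents, and optionally the
# matching .y_onset contents, into parallel lists of floats.
def parse_onsets(onset_text, y_onset_text=None):
    times = [float(v) for v in onset_text.split()]
    assert times[0] == 0.0  # the first onset is always 0.0 by convention
    if y_onset_text is None:
        return times, None
    ys = [float(v) for v in y_onset_text.split()]
    assert len(ys) == len(times)  # one vertical position per onset
    return times, ys

times, ys = parse_onsets(
    "0.0 479.922 720.069 976.071 1215.694",
    "291.000000 293.500000 305.500000 302.000000 239.000000",
)
```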
&lt;br /&gt;
=== Step 2: Testing ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTesting &amp;lt;query_file_list_test&amp;gt; &amp;lt;result_file&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;code&amp;gt;&amp;lt;query_file_list_test&amp;gt;&amp;lt;/code&amp;gt; is a single-column text file of input queries (the &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; files only, not the &amp;lt;code&amp;gt;.mid&amp;lt;/code&amp;gt; files), and &amp;lt;code&amp;gt;&amp;lt;result_file&amp;gt;&amp;lt;/code&amp;gt; is the filename where your script should store results. You can use &amp;lt;code&amp;gt;[dir_workspace_root]&amp;lt;/code&amp;gt; to store any temporary indexing/database structures. (You can omit &amp;lt;code&amp;gt;[dir_workspace_root]&amp;lt;/code&amp;gt; if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;query_file_list_test&amp;lt;/code&amp;gt; thus has the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset&lt;br /&gt;
 qbtQuery/query_00002.onset&lt;br /&gt;
 qbtQuery/query_00003.onset&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;result_file&amp;gt;&amp;lt;/code&amp;gt; gives the ranked top-10 candidates for each query (note that ranking of the candidates is new for 2014). For subtasks 1 and 3, &amp;lt;code&amp;gt;&amp;lt;result_file&amp;gt;&amp;lt;/code&amp;gt; should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Where 00025 is the top-ranked (closest match) MIDI file for query_00001, followed by 01003, 02200, etc. Note that the output entries should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
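For illustration (&amp;lt;code&amp;gt;result_line&amp;lt;/code&amp;gt; is a hypothetical helper, not part of the submission kit), one line of such a result file could be produced like this:&lt;br /&gt;

```python
# Hypothetical sketch: format one ranked-results line for a query, with the
# closest-matching MIDI key first and at most 10 candidates.
def result_line(query_path, ranked_midi_keys):
    # ranked_midi_keys are MIDI file names without the .mid extension,
    # ordered from best to worst match.
    return query_path + ": " + " ".join(ranked_midi_keys[:10])

line = result_line("qbtQuery/query_00001.onset", ["00025", "01003", "02200"])
assert line == "qbtQuery/query_00001.onset: 00025 01003 02200"
```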
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, qbt at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009). Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884. &lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10390</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10390"/>
		<updated>2014-08-28T05:22:08Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Step 0: Indexing the MIDIs collection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries made by tapping the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset times).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from users tapping on a keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Submission deadline is September 16.&lt;br /&gt;
&lt;br /&gt;
Questions or suggestions can be added directly here, or emailed to us at qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
=== New for 2014 ===&lt;br /&gt;
* Added QBT-Extended dataset&lt;br /&gt;
* New subtask combining rhythm and pitch contours (QBT-Extended only)&lt;br /&gt;
* Minor modifications to the command format, moving more toward a training/test paradigm that accommodates hidden test datasets&lt;br /&gt;
* Please make sure your list of candidates is ranked, as we will be assigning variable points for Top-10, 5, and 1 matches.&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection; note, however, that onset files derived from .wav files are not guaranteed to reflect perfect onset detection of the original wav queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Indexing the MIDIs collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the database, your code should do so using the following command-line format. (This step is optional; it is only needed if you want to index or otherwise preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing &amp;lt;dbMidi.list&amp;gt; &amp;lt;dir_workspace_root&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;code&amp;gt;&amp;lt;dbMidi.list&amp;gt;&amp;lt;/code&amp;gt; is the input list of database MIDI files, each named &amp;lt;code&amp;gt;uniq_key.mid&amp;lt;/code&amp;gt;. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into &amp;lt;code&amp;gt;&amp;lt;dir_workspace_root&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Training ===&lt;br /&gt;
&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTraining &amp;lt;dbMidi_list&amp;gt; &amp;lt;query_file_list&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;code&amp;gt;&amp;lt;dbMidi_list&amp;gt;&amp;lt;/code&amp;gt; is a list of the MIDI files in the database to match against (see Step 0), and &amp;lt;code&amp;gt;&amp;lt;query_file_list&amp;gt;&amp;lt;/code&amp;gt; maps each query to its associated ground truth. You can use &amp;lt;code&amp;gt;[dir_workspace_root]&amp;lt;/code&amp;gt; to store any temporary indexing/database structures. (You can omit &amp;lt;code&amp;gt;[dir_workspace_root]&amp;lt;/code&amp;gt; if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
==== Per-task input specification ====&lt;br /&gt;
If the input query files are onset files (for subtask 1), then the format of &amp;lt;code&amp;gt;&amp;lt;query_file_list&amp;gt;&amp;lt;/code&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See [[#Onset files format|Onset files format]] for details.&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of &amp;lt;code&amp;gt;&amp;lt;query_file_list&amp;gt;&amp;lt;/code&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
If the input query files are &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt; files (for subtask 3), then the format of &amp;lt;code&amp;gt;&amp;lt;query_file_list&amp;gt;&amp;lt;/code&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	qbtQuery/query_00001.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	qbtQuery/query_00002.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	qbtQuery/query_00003.y_onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See [[#Onset files format|Onset files format]] for details.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===== Onset files format  =====&lt;br /&gt;
To preserve compatibility with the original task, the QBT-E query files share the same &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; file extension as previous symbolic input query datasets.&lt;br /&gt;
&lt;br /&gt;
An &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; file is a space-separated text file of elapsed onset times (in milliseconds) from the first onset, which is always 0.0. Example of 5 onsets: &lt;br /&gt;
&lt;br /&gt;
 0.0 479.922 720.069 976.071 1215.694&lt;br /&gt;
&lt;br /&gt;
For subtask 3, the additional dimension (position/pitch/contour) is provided in a file with the same name but the extension &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt;. Like &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; files, &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt; files are space-separated text files, of the same length as the corresponding &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; file, that list the absolute vertical position of each tap on the touchscreen. Example (corresponding to the above):&lt;br /&gt;
&lt;br /&gt;
 291.000000 293.500000 305.500000 302.000000 239.000000&lt;br /&gt;
&lt;br /&gt;
=== Step 2: Testing ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTesting &amp;lt;query_file_list&amp;gt; &amp;lt;result_file&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;code&amp;gt;&amp;lt;query_file_list&amp;gt;&amp;lt;/code&amp;gt; is the list of input queries, and &amp;lt;code&amp;gt;&amp;lt;result_file&amp;gt;&amp;lt;/code&amp;gt; is the filename where your script should store results. You can use &amp;lt;code&amp;gt;[dir_workspace_root]&amp;lt;/code&amp;gt; to store any temporary indexing/database structures. (You can omit &amp;lt;code&amp;gt;[dir_workspace_root]&amp;lt;/code&amp;gt; if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;result_file&amp;gt;&amp;lt;/code&amp;gt; gives the ranked top-10 candidates for each query (note that ranking of the candidates is new for 2014). For subtasks 1 and 3, &amp;lt;code&amp;gt;&amp;lt;result_file&amp;gt;&amp;lt;/code&amp;gt; should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Where 00025 is the top-ranked (closest match) MIDI file for query_00001, followed by 01003, 02200, etc. Note that the output entries should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, qbt at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009). Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884. &lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10389</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10389"/>
		<updated>2014-08-28T05:20:37Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Step 2: Testing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries made by tapping the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset times).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from users tapping on a keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Submission deadline is September 16.&lt;br /&gt;
&lt;br /&gt;
Any questions or suggestions can be added directly here, or you can send us an email, qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
=== New for 2014 ===&lt;br /&gt;
* Added QBT-Extended dataset&lt;br /&gt;
* New subtask combining rhythm and pitch contours (QBT-Extended only)&lt;br /&gt;
* Minor modifications to the command format, moving more toward a training/test paradigm that accommodates hidden test datasets&lt;br /&gt;
* Please make sure your list of candidates is ranked, as we will be assigning variable points for Top-10, 5, and 1 matches.&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note that onset files derived from .wav files are not guaranteed to contain perfect onset-detection results for the original wav queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
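The Top-10 hit rate used above can be sketched in a few lines of Python (an illustrative sketch only, not the official MIREX evaluation code; the query and MIDI ids below are hypothetical):&lt;br /&gt;
&lt;br /&gt;
```python
# Illustrative Top-N hit-rate scoring (not the official MIREX evaluator).
# ground_truth maps each query to its target MIDI id; results maps each
# query to its ranked candidate list.
def top_n_hit_rate(results, ground_truth, n=10):
    hits = sum(
        1 for query, target in ground_truth.items()
        if target in results.get(query, [])[:n]
    )
    return hits / len(ground_truth)

ground_truth = {"query_00001": "00025", "query_00002": "00031"}
results = {
    "query_00001": ["00025", "01003", "02200"],  # hit at rank 1
    "query_00002": ["00007", "00099"],           # miss
}
print(top_n_hit_rate(results, ground_truth))  # 0.5
```
&lt;br /&gt;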
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Indexing the MIDIs collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the database, your code should do so using the following command-line format. (This step is optional and only needed if you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing &amp;lt;dbMidi.list&amp;gt; &amp;lt;dir_workspace_root&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;dbMidi.list&amp;gt; is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into &amp;lt;code&amp;gt;&amp;lt;dir_workspace_root&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Training ===&lt;br /&gt;
&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTraining &amp;lt;dbMidi_list&amp;gt; &amp;lt;query_file_list&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;code&amp;gt;&amp;lt;dbMidi_list&amp;gt;&amp;lt;/code&amp;gt; is a list of the MIDI files in the database to match against (see Step 0), and &amp;lt;code&amp;gt;&amp;lt;query_file_list&amp;gt;&amp;lt;/code&amp;gt; maps each query to its associated ground truth. You can use &amp;lt;code&amp;gt;[dir_workspace_root]&amp;lt;/code&amp;gt; to store any temporary indexing/database structures. (You can omit &amp;lt;code&amp;gt;[dir_workspace_root]&amp;lt;/code&amp;gt; if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
==== Per-task input specification ====&lt;br /&gt;
If the input query files are onset files (for subtask 1), then the format of &amp;lt;code&amp;gt;&amp;lt;query_file_list&amp;gt;&amp;lt;/code&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See details of [[#Onset files format|Onset files format]]&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of &amp;lt;code&amp;gt;&amp;lt;query_file_list&amp;gt;&amp;lt;/code&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
If the input query files are &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt; files (for subtask 3), then the format of &amp;lt;code&amp;gt;&amp;lt;query_file_list&amp;gt;&amp;lt;/code&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	qbtQuery/query_00001.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	qbtQuery/query_00002.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	qbtQuery/query_00003.y_onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See details of [[#Onset files format|Onset files format]]&lt;br /&gt;
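&lt;br /&gt;
In all three layouts above, the ground-truth MIDI is the last tab-separated field, so &amp;lt;code&amp;gt;&amp;lt;query_file_list&amp;gt;&amp;lt;/code&amp;gt; can be parsed uniformly; a minimal sketch (hypothetical helper name, example data taken from above):&lt;br /&gt;
&lt;br /&gt;
```python
# Parse a query file list: each line holds one or two query paths followed
# by the ground-truth MIDI file, all tab-separated.
def parse_query_list(text):
    pairs = []
    for line in text.strip().splitlines():
        fields = line.split("\t")
        pairs.append((tuple(fields[:-1]), fields[-1]))  # (query paths, target)
    return pairs

pairs = parse_query_list(
    "qbtQuery/query_00001.onset\t00001.mid\n"
    "qbtQuery/query_00002.onset\t00001.mid"
)
```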
&lt;br /&gt;
&lt;br /&gt;
===== Onset files format  =====&lt;br /&gt;
To preserve compatibility with the original task, the QBT-E query files share the same &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; file extension as previous symbolic input query datasets.&lt;br /&gt;
&lt;br /&gt;
An &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; file is a space-separated text file of elapsed onset times (in milliseconds) from the first onset, which is always 0.0. Example of 5 onsets: &lt;br /&gt;
&lt;br /&gt;
 0.0 479.922 720.069 976.071 1215.694&lt;br /&gt;
&lt;br /&gt;
For subtask 3, the additional dimension (position/pitch/contour) is provided in a file with the same name but with extension &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt;. Like an &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; file, a &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt; file is a space-separated text file, with the same number of entries as the corresponding &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; file, listing the absolute vertical position of each tap on the touchscreen. Example (corresponding to the above):&lt;br /&gt;
&lt;br /&gt;
 291.000000 293.500000 305.500000 302.000000 239.000000&lt;br /&gt;
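&lt;br /&gt;
Reading these two formats back into paired taps takes only a few lines; a minimal sketch (hypothetical function name; data taken from the examples above):&lt;br /&gt;
&lt;br /&gt;
```python
# Parse a space-separated .onset file, optionally pairing it with the
# matching .y_onset file (both must have the same number of entries).
def parse_onsets(onset_text, y_onset_text=None):
    times = [float(v) for v in onset_text.split()]
    if y_onset_text is None:
        return times
    ys = [float(v) for v in y_onset_text.split()]
    if len(times) != len(ys):
        raise ValueError(".onset and .y_onset files must have equal length")
    return list(zip(times, ys))

taps = parse_onsets(
    "0.0 479.922 720.069 976.071 1215.694",
    "291.000000 293.500000 305.500000 302.000000 239.000000",
)
print(taps[0])  # (0.0, 291.0)
```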
&lt;br /&gt;
=== Step 2: Testing ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTesting &amp;lt;query_file_list&amp;gt; &amp;lt;result_file&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;code&amp;gt;&amp;lt;query_file_list&amp;gt;&amp;lt;/code&amp;gt; is the list of input queries, and &amp;lt;code&amp;gt;&amp;lt;result_file&amp;gt;&amp;lt;/code&amp;gt; is the filename where your script should store its results. You can use &amp;lt;code&amp;gt;[dir_workspace_root]&amp;lt;/code&amp;gt; to store any temporary indexing/database structures. (You can omit &amp;lt;code&amp;gt;[dir_workspace_root]&amp;lt;/code&amp;gt; if you do not need it.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;result_file&amp;gt;&amp;lt;/code&amp;gt; gives the ranked top-10 candidates for each query (note that ranking of the candidates is new for 2014). For subtasks 1 and 3, &amp;lt;code&amp;gt;&amp;lt;result_file&amp;gt;&amp;lt;/code&amp;gt; should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Where 00025 is the top-ranked (closest-match) MIDI file for query_00001, followed by 01003, 02200, etc. Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
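&lt;br /&gt;
Writing a result file in this format can be sketched as follows (illustrative only; the helper name and candidate ids are hypothetical):&lt;br /&gt;
&lt;br /&gt;
```python
# Write one "query: id1 id2 ..." line per query, keeping at most the
# top 10 ranked candidates.
def write_results(ranked, path):
    with open(path, "w") as f:
        for query, candidates in ranked.items():
            f.write(query + ": " + " ".join(candidates[:10]) + "\n")

write_results(
    {"qbtQuery/query_00001.onset": ["00025", "01003", "02200"]},
    "results.txt",
)
```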
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, qbt at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009). Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884.&lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10388</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10388"/>
		<updated>2014-08-28T02:12:31Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Onset files format */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries made by tapping the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset times).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from users tapping on a keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Submission deadline is September 16.&lt;br /&gt;
&lt;br /&gt;
Any questions or suggestions can be added directly here, or you can send us an email, qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
=== New for 2014 ===&lt;br /&gt;
* Added QBT-Extended dataset&lt;br /&gt;
* New subtask combining rhythm and pitch contours (QBT-Extended only)&lt;br /&gt;
* Minor modifications to the command format, moving more toward a training/test paradigm that accommodates hidden test datasets&lt;br /&gt;
* Please make sure your list of candidates is ranked, as we will be assigning variable points for Top-10, 5, and 1 matches.&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note that onset files derived from .wav files are not guaranteed to contain perfect onset-detection results for the original wav queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
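The Top-10 hit rate used above can be sketched in a few lines of Python (an illustrative sketch only, not the official MIREX evaluation code; the query and MIDI ids below are hypothetical):&lt;br /&gt;
&lt;br /&gt;
```python
# Illustrative Top-N hit-rate scoring (not the official MIREX evaluator).
# ground_truth maps each query to its target MIDI id; results maps each
# query to its ranked candidate list.
def top_n_hit_rate(results, ground_truth, n=10):
    hits = sum(
        1 for query, target in ground_truth.items()
        if target in results.get(query, [])[:n]
    )
    return hits / len(ground_truth)

ground_truth = {"query_00001": "00025", "query_00002": "00031"}
results = {
    "query_00001": ["00025", "01003", "02200"],  # hit at rank 1
    "query_00002": ["00007", "00099"],           # miss
}
print(top_n_hit_rate(results, ground_truth))  # 0.5
```
&lt;br /&gt;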
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Indexing the MIDIs collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the database, your code should do so using the following command-line format. (This step is optional and only needed if you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing &amp;lt;dbMidi.list&amp;gt; &amp;lt;dir_workspace_root&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;dbMidi.list&amp;gt; is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into &amp;lt;code&amp;gt;&amp;lt;dir_workspace_root&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Training ===&lt;br /&gt;
&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTraining &amp;lt;dbMidi_list&amp;gt; &amp;lt;query_file_list&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;code&amp;gt;&amp;lt;dbMidi_list&amp;gt;&amp;lt;/code&amp;gt; is a list of the MIDI files in the database to match against (see Step 0), and &amp;lt;code&amp;gt;&amp;lt;query_file_list&amp;gt;&amp;lt;/code&amp;gt; maps each query to its associated ground truth. You can use &amp;lt;code&amp;gt;[dir_workspace_root]&amp;lt;/code&amp;gt; to store any temporary indexing/database structures. (You can omit &amp;lt;code&amp;gt;[dir_workspace_root]&amp;lt;/code&amp;gt; if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
==== Per-task input specification ====&lt;br /&gt;
If the input query files are onset files (for subtask 1), then the format of &amp;lt;code&amp;gt;&amp;lt;query_file_list&amp;gt;&amp;lt;/code&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See details of [[#Onset files format|Onset files format]]&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of &amp;lt;code&amp;gt;&amp;lt;query_file_list&amp;gt;&amp;lt;/code&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
If the input query files are &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt; files (for subtask 3), then the format of &amp;lt;code&amp;gt;&amp;lt;query_file_list&amp;gt;&amp;lt;/code&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	qbtQuery/query_00001.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	qbtQuery/query_00002.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	qbtQuery/query_00003.y_onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See details of [[#Onset files format|Onset files format]]&lt;br /&gt;
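&lt;br /&gt;
In all three layouts above, the ground-truth MIDI is the last tab-separated field, so &amp;lt;code&amp;gt;&amp;lt;query_file_list&amp;gt;&amp;lt;/code&amp;gt; can be parsed uniformly; a minimal sketch (hypothetical helper name, example data taken from above):&lt;br /&gt;
&lt;br /&gt;
```python
# Parse a query file list: each line holds one or two query paths followed
# by the ground-truth MIDI file, all tab-separated.
def parse_query_list(text):
    pairs = []
    for line in text.strip().splitlines():
        fields = line.split("\t")
        pairs.append((tuple(fields[:-1]), fields[-1]))  # (query paths, target)
    return pairs

pairs = parse_query_list(
    "qbtQuery/query_00001.onset\t00001.mid\n"
    "qbtQuery/query_00002.onset\t00001.mid"
)
```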
&lt;br /&gt;
&lt;br /&gt;
===== Onset files format  =====&lt;br /&gt;
To preserve compatibility with the original task, the QBT-E query files share the same &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; file extension as previous symbolic input query datasets.&lt;br /&gt;
&lt;br /&gt;
An &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; file is a space-separated text file of elapsed onset times (in milliseconds) from the first onset, which is always 0.0. Example of 5 onsets: &lt;br /&gt;
&lt;br /&gt;
 0.0 479.922 720.069 976.071 1215.694&lt;br /&gt;
&lt;br /&gt;
For subtask 3, the additional dimension (position/pitch/contour) is provided in a file with the same name but with extension &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt;. Like an &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; file, a &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt; file is a space-separated text file, with the same number of entries as the corresponding &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; file, listing the absolute vertical position of each tap on the touchscreen. Example (corresponding to the above):&lt;br /&gt;
&lt;br /&gt;
 291.000000 293.500000 305.500000 302.000000 239.000000&lt;br /&gt;
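&lt;br /&gt;
Reading these two formats back into paired taps takes only a few lines; a minimal sketch (hypothetical function name; data taken from the examples above):&lt;br /&gt;
&lt;br /&gt;
```python
# Parse a space-separated .onset file, optionally pairing it with the
# matching .y_onset file (both must have the same number of entries).
def parse_onsets(onset_text, y_onset_text=None):
    times = [float(v) for v in onset_text.split()]
    if y_onset_text is None:
        return times
    ys = [float(v) for v in y_onset_text.split()]
    if len(times) != len(ys):
        raise ValueError(".onset and .y_onset files must have equal length")
    return list(zip(times, ys))

taps = parse_onsets(
    "0.0 479.922 720.069 976.071 1215.694",
    "291.000000 293.500000 305.500000 302.000000 239.000000",
)
print(taps[0])  # (0.0, 291.0)
```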
&lt;br /&gt;
=== Step 2: Testing ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTesting &amp;lt;query_file_list&amp;gt; &amp;lt;resultFile&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;code&amp;gt;&amp;lt;query_file_list&amp;gt;&amp;lt;/code&amp;gt; is the list of input queries, and &amp;lt;code&amp;gt;&amp;lt;resultFile&amp;gt;&amp;lt;/code&amp;gt; is the filename where your script should store its results. You can use &amp;lt;code&amp;gt;[dir_workspace_root]&amp;lt;/code&amp;gt; to store any temporary indexing/database structures. (You can omit &amp;lt;code&amp;gt;[dir_workspace_root]&amp;lt;/code&amp;gt; if you do not need it.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;resultFile&amp;gt;&amp;lt;/code&amp;gt; gives the ranked top-10 candidates for each query (note that ranking of the candidates is new for 2014). For subtasks 1 and 3, &amp;lt;code&amp;gt;&amp;lt;resultFile&amp;gt;&amp;lt;/code&amp;gt; should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Where 00025 is the top-ranked (closest-match) MIDI file for query_00001, followed by 01003, 02200, etc. Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
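&lt;br /&gt;
Writing a result file in this format can be sketched as follows (illustrative only; the helper name and candidate ids are hypothetical):&lt;br /&gt;
&lt;br /&gt;
```python
# Write one "query: id1 id2 ..." line per query, keeping at most the
# top 10 ranked candidates.
def write_results(ranked, path):
    with open(path, "w") as f:
        for query, candidates in ranked.items():
            f.write(query + ": " + " ".join(candidates[:10]) + "\n")

write_results(
    {"qbtQuery/query_00001.onset": ["00025", "01003", "02200"]},
    "results.txt",
)
```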
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, qbt at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009). Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884.&lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10387</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10387"/>
		<updated>2014-08-28T02:11:41Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Step 2: Testing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries made by tapping the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset times).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from users tapping on a keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Submission deadline is September 16.&lt;br /&gt;
&lt;br /&gt;
Any questions or suggestions can be added directly here, or you can send us an email, qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
=== New for 2014 ===&lt;br /&gt;
* Added QBT-Extended dataset&lt;br /&gt;
* New subtask combining rhythm and pitch contours (QBT-Extended only)&lt;br /&gt;
* Minor modifications to the command format, moving more toward a training/test paradigm that accommodates hidden test datasets&lt;br /&gt;
* Please make sure your list of candidates is ranked, as we will be assigning variable points for Top-10, 5, and 1 matches.&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note that onset files derived from .wav files are not guaranteed to contain perfect onset-detection results for the original wav queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Indexing the MIDIs collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the MIDI database, your code should do so using the following command-line format. (This step is optional; skip it if your algorithm does not index or otherwise preprocess the database.)&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing &amp;lt;dbMidi.list&amp;gt; &amp;lt;dir_workspace_root&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;dbMidi.list&amp;gt; is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into &amp;lt;code&amp;gt;&amp;lt;dir_workspace_root&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Training ===&lt;br /&gt;
&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTraining &amp;lt;dbMidi_list&amp;gt; &amp;lt;query_file_list&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;code&amp;gt;&amp;lt;dbMidi_list&amp;gt;&amp;lt;/code&amp;gt; is a list of the MIDI files in the database to match against (see Step 0), and &amp;lt;code&amp;gt;&amp;lt;query_file_list&amp;gt;&amp;lt;/code&amp;gt; maps each query to its associated ground truth. You can use &amp;lt;code&amp;gt;[dir_workspace_root]&amp;lt;/code&amp;gt; to store any temporary indexing/database structures. (You can omit &amp;lt;code&amp;gt;[dir_workspace_root]&amp;lt;/code&amp;gt; if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
==== Per-task input specification ====&lt;br /&gt;
If the input query files are onset files (for subtask 1), then the format of &amp;lt;code&amp;gt;&amp;lt;query_file_list&amp;gt;&amp;lt;/code&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See [[#Onset files format|Onset files format]] for details.&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of &amp;lt;code&amp;gt;&amp;lt;query_file_list&amp;gt;&amp;lt;/code&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
If the input query files are &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt; files (for subtask 3), then the format of &amp;lt;code&amp;gt;&amp;lt;query_file_list&amp;gt;&amp;lt;/code&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	qbtQuery/query_00001.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	qbtQuery/query_00002.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	qbtQuery/query_00003.y_onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See [[#Onset files format|Onset files format]] for details.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===== Onset files format  =====&lt;br /&gt;
To preserve compatibility with the original task, the QBT-E query files share the same &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; file extension as previous symbolic input query datasets.&lt;br /&gt;
&lt;br /&gt;
An &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; file is a space-separated text file of elapsed onset times (in milliseconds) from the first onset, which is always 0.0. Example of 5 onsets: &lt;br /&gt;
&lt;br /&gt;
 0.0 479.922 720.069 976.071 1215.694&lt;br /&gt;
&lt;br /&gt;
For subtask 3, the additional dimension (position/pitch/contour) is provided in a file with the same name, but with extension &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt;. Like an .onset file, a &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt; file is a space-separated text file, the same length as the corresponding &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; file, that lists the absolute vertical position of each tap on the touchscreen. Example (corresponding to the .onset example above):&lt;br /&gt;
&lt;br /&gt;
 291.000000 293.500000 305.500000 302.000000 239.000000&lt;br /&gt;
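As an illustration only (not part of the submission spec; all function names here are hypothetical), the two file formats above can be read and paired with a few lines of Python:&lt;br /&gt;

```python
# Hypothetical helpers (names are not part of the MIREX spec): pair each
# tap's elapsed onset time (ms) with its vertical touchscreen position.
def parse_values(text):
    # Both .onset and .y_onset files are a single line of space-separated floats.
    return [float(v) for v in text.split()]

def pair_query(onset_text, y_onset_text):
    times = parse_values(onset_text)   # first entry is always 0.0
    ys = parse_values(y_onset_text)    # absolute vertical tap positions
    if len(times) != len(ys):
        raise ValueError("onset and y_onset files must list the same number of taps")
    return list(zip(times, ys))
```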
&lt;br /&gt;
=== Step 2: Testing ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTesting &amp;lt;query_file_list&amp;gt; &amp;lt;resultFile&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;code&amp;gt;&amp;lt;query_file_list&amp;gt;&amp;lt;/code&amp;gt; are input queries, and &amp;lt;code&amp;gt;&amp;lt;resultFile&amp;gt;&amp;lt;/code&amp;gt; is the filename where your script should store results. You can use &amp;lt;code&amp;gt;[dir_workspace_root]&amp;lt;/code&amp;gt; to store any temporary indexing/database structures. (You can omit &amp;lt;code&amp;gt;[dir_workspace_root]&amp;lt;/code&amp;gt; if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;resultFile&amp;gt;&amp;lt;/code&amp;gt; gives ranked top-10 candidates for each query (note that ranking of the candidates is new for 2014). For instance &amp;lt;code&amp;gt;&amp;lt;resultFile&amp;gt;&amp;lt;/code&amp;gt; should have the following format for subtasks 1 and 3:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Here 00025 is the top-ranked (closest-match) MIDI file for query_00001, followed by 01003, 02200, etc. Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
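For illustration, here is a minimal sketch of how a Top-N hit rate could be computed from such a result file (the official evaluation is run by the task organizers; the function names here are hypothetical):&lt;br /&gt;

```python
# Hypothetical scoring sketch; not the official MIREX evaluator.
def parse_result_line(line):
    # e.g. "qbtQuery/query_00001.onset: 00025 01003 02200"
    query, _, candidates = line.partition(":")
    return query.strip(), candidates.split()

def hit_rate(result_lines, ground_truth, top_n=10):
    # ground_truth maps each query file to its target MIDI, e.g. "00025.mid";
    # candidates in the result file omit the .mid extension.
    hits = 0
    for line in result_lines:
        query, candidates = parse_result_line(line)
        target = ground_truth[query].removesuffix(".mid")
        if target in candidates[:top_n]:
            hits += 1
    return hits / len(result_lines)
```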
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, qbt at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009). Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884.&lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10386</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10386"/>
		<updated>2014-08-28T02:09:53Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Step 1: Training */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries in which the onsets of music notes are tapped into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from users tapping on a keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Submission deadline is September 16.&lt;br /&gt;
&lt;br /&gt;
Any questions or suggestions can be added directly here, or you can email us at qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
=== New for 2014 ===&lt;br /&gt;
* Added QBT-Extended dataset&lt;br /&gt;
* New subtask combining rhythm and pitch contours (QBT-Extended only)&lt;br /&gt;
* Minor modifications to the command format, moving more toward a training/test paradigm that accommodates hidden test datasets&lt;br /&gt;
* Please make sure your list of candidates is ranked, as we will be assigning variable points for Top-10, 5, and 1 matches.&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note that onset files derived from .wav files are not guaranteed to be perfect transcriptions of the original wav queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Indexing the MIDIs collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the MIDI database, your code should do so using the following command-line format. (This step is optional; skip it if your algorithm does not index or otherwise preprocess the database.)&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing &amp;lt;dbMidi.list&amp;gt; &amp;lt;dir_workspace_root&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;dbMidi.list&amp;gt; is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into &amp;lt;code&amp;gt;&amp;lt;dir_workspace_root&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Training ===&lt;br /&gt;
&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTraining &amp;lt;dbMidi_list&amp;gt; &amp;lt;query_file_list&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;code&amp;gt;&amp;lt;dbMidi_list&amp;gt;&amp;lt;/code&amp;gt; is a list of the MIDI files in the database to match against (see Step 0), and &amp;lt;code&amp;gt;&amp;lt;query_file_list&amp;gt;&amp;lt;/code&amp;gt; maps each query to its associated ground truth. You can use &amp;lt;code&amp;gt;[dir_workspace_root]&amp;lt;/code&amp;gt; to store any temporary indexing/database structures. (You can omit &amp;lt;code&amp;gt;[dir_workspace_root]&amp;lt;/code&amp;gt; if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
==== Per-task input specification ====&lt;br /&gt;
If the input query files are onset files (for subtask 1), then the format of &amp;lt;code&amp;gt;&amp;lt;query_file_list&amp;gt;&amp;lt;/code&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See [[#Onset files format|Onset files format]] for details.&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of &amp;lt;code&amp;gt;&amp;lt;query_file_list&amp;gt;&amp;lt;/code&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
If the input query files are &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt; files (for subtask 3), then the format of &amp;lt;code&amp;gt;&amp;lt;query_file_list&amp;gt;&amp;lt;/code&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	qbtQuery/query_00001.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	qbtQuery/query_00002.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	qbtQuery/query_00003.y_onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See [[#Onset files format|Onset files format]] for details.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===== Onset files format  =====&lt;br /&gt;
To preserve compatibility with the original task, the QBT-E query files share the same &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; file extension as previous symbolic input query datasets.&lt;br /&gt;
&lt;br /&gt;
An &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; file is a space-separated text file of elapsed onset times (in milliseconds) from the first onset, which is always 0.0. Example of 5 onsets: &lt;br /&gt;
&lt;br /&gt;
 0.0 479.922 720.069 976.071 1215.694&lt;br /&gt;
&lt;br /&gt;
For subtask 3, the additional dimension (position/pitch/contour) is provided in a file with the same name, but with extension &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt;. Like an .onset file, a &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt; file is a space-separated text file, the same length as the corresponding &amp;lt;code&amp;gt;.onset&amp;lt;/code&amp;gt; file, that lists the absolute vertical position of each tap on the touchscreen. Example (corresponding to the .onset example above):&lt;br /&gt;
&lt;br /&gt;
 291.000000 293.500000 305.500000 302.000000 239.000000&lt;br /&gt;
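As an illustration only (not part of the submission spec; all function names here are hypothetical), the two file formats above can be read and paired with a few lines of Python:&lt;br /&gt;

```python
# Hypothetical helpers (names are not part of the MIREX spec): pair each
# tap's elapsed onset time (ms) with its vertical touchscreen position.
def parse_values(text):
    # Both .onset and .y_onset files are a single line of space-separated floats.
    return [float(v) for v in text.split()]

def pair_query(onset_text, y_onset_text):
    times = parse_values(onset_text)   # first entry is always 0.0
    ys = parse_values(y_onset_text)    # absolute vertical tap positions
    if len(times) != len(ys):
        raise ValueError("onset and y_onset files must list the same number of taps")
    return list(zip(times, ys))
```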
&lt;br /&gt;
=== Step 2: Testing ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTesting &amp;lt;query_file_list&amp;gt; &amp;lt;resultFile&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;query_file_list&amp;gt; are input queries, and &amp;lt;resultFile&amp;gt; is the filename where your script should store results. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;resultFile&amp;gt; gives ranked top-10 candidates for each query (note that ranking of the candidates is new for 2014). For instance &amp;lt;resultFile&amp;gt; should have the following format for subtasks 1 and 3:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Here 00025 is the top-ranked (closest-match) MIDI file for query_00001, followed by 01003, 02200, etc. Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
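For illustration, here is a minimal sketch of how a Top-N hit rate could be computed from such a result file (the official evaluation is run by the task organizers; the function names here are hypothetical):&lt;br /&gt;

```python
# Hypothetical scoring sketch; not the official MIREX evaluator.
def parse_result_line(line):
    # e.g. "qbtQuery/query_00001.onset: 00025 01003 02200"
    query, _, candidates = line.partition(":")
    return query.strip(), candidates.split()

def hit_rate(result_lines, ground_truth, top_n=10):
    # ground_truth maps each query file to its target MIDI, e.g. "00025.mid";
    # candidates in the result file omit the .mid extension.
    hits = 0
    for line in result_lines:
        query, candidates = parse_result_line(line)
        target = ground_truth[query].removesuffix(".mid")
        if target in candidates[:top_n]:
            hits += 1
    return hits / len(result_lines)
```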
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, qbt at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009). Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884.&lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10385</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10385"/>
		<updated>2014-08-28T02:05:28Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Step 0: Indexing the MIDIs collection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries in which the onsets of music notes are tapped into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from users tapping on a keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Submission deadline is September 16.&lt;br /&gt;
&lt;br /&gt;
Any questions or suggestions can be added directly here, or you can email us at qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
=== New for 2014 ===&lt;br /&gt;
* Added QBT-Extended dataset&lt;br /&gt;
* New subtask combining rhythm and pitch contours (QBT-Extended only)&lt;br /&gt;
* Minor modifications to the command format, moving more toward a training/test paradigm that accommodates hidden test datasets&lt;br /&gt;
* Please make sure your list of candidates is ranked, as we will be assigning variable points for Top-10, 5, and 1 matches.&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note that onset files derived from .wav files are not guaranteed to be perfect transcriptions of the original wav queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Indexing the MIDIs collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the MIDI database, your code should do so using the following command-line format. (This step is optional; skip it if your algorithm does not index or otherwise preprocess the database.)&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing &amp;lt;dbMidi.list&amp;gt; &amp;lt;dir_workspace_root&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;dbMidi.list&amp;gt; is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into &amp;lt;code&amp;gt;&amp;lt;dir_workspace_root&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Training ===&lt;br /&gt;
&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTraining &amp;lt;dbMidi_list&amp;gt; &amp;lt;query_file_list&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;dbMidi_list&amp;gt; is a list of the MIDI files in the database to match against (see Step 0), and &amp;lt;query_file_list&amp;gt; maps each query to its associated ground truth. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
==== Per-task input specification ====&lt;br /&gt;
If the input query files are onset files (for subtask 1), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See [[#Onset files format|Onset files format]] for details.&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
If the input query files are .onset and .y_onset files (for subtask 3), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	qbtQuery/query_00001.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	qbtQuery/query_00002.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	qbtQuery/query_00003.y_onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See [[#Onset files format|Onset files format]] for details.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===== Onset files format  =====&lt;br /&gt;
To preserve compatibility with the original task, the QBT-E query files share the same .onset file extension as previous symbolic input query datasets.&lt;br /&gt;
&lt;br /&gt;
An .onset file is a space-separated text file of elapsed onset times (in milliseconds) from the first onset, which is always 0.0. Example of 5 onsets: &lt;br /&gt;
&lt;br /&gt;
 0.0 479.922 720.069 976.071 1215.694&lt;br /&gt;
&lt;br /&gt;
For subtask 3, the additional dimension (position/pitch/contour) is provided in a file with the same name, but with extension &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt;. Like an .onset file, a &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt; file is a space-separated text file, the same length as the corresponding .onset file, that lists the absolute vertical position of each tap on the touchscreen. Example (corresponding to the .onset example above):&lt;br /&gt;
&lt;br /&gt;
 291.000000 293.500000 305.500000 302.000000 239.000000&lt;br /&gt;
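As an illustration only (not part of the submission spec; all function names here are hypothetical), the two file formats above can be read and paired with a few lines of Python:&lt;br /&gt;

```python
# Hypothetical helpers (names are not part of the MIREX spec): pair each
# tap's elapsed onset time (ms) with its vertical touchscreen position.
def parse_values(text):
    # Both .onset and .y_onset files are a single line of space-separated floats.
    return [float(v) for v in text.split()]

def pair_query(onset_text, y_onset_text):
    times = parse_values(onset_text)   # first entry is always 0.0
    ys = parse_values(y_onset_text)    # absolute vertical tap positions
    if len(times) != len(ys):
        raise ValueError("onset and y_onset files must list the same number of taps")
    return list(zip(times, ys))
```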
&lt;br /&gt;
=== Step 2: Testing ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTesting &amp;lt;query_file_list&amp;gt; &amp;lt;resultFile&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;query_file_list&amp;gt; are input queries, and &amp;lt;resultFile&amp;gt; is the filename where your script should store results. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;resultFile&amp;gt; gives ranked top-10 candidates for each query (note that ranking of the candidates is new for 2014). For instance &amp;lt;resultFile&amp;gt; should have the following format for subtasks 1 and 3:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Where 00025 is the top-ranked (closest match) MIDI file for query_00001, followed by 01003, 02200, etc. Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, qbt at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009). Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884.&lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10286</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10286"/>
		<updated>2014-07-17T22:11:23Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Overview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems that retrieve ground-truth MIDI files from queries in which the user taps the onsets of a melody's notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from users tapping on a keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Submission deadline is September 16.&lt;br /&gt;
&lt;br /&gt;
Any questions or suggestions can be added directly here, or you can send us an email at qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
=== New for 2014 ===&lt;br /&gt;
* Added QBT-Extended dataset&lt;br /&gt;
* New subtask combining rhythm and pitch contours (QBT-Extended only)&lt;br /&gt;
* Minor modifications to the command format, moving more toward a training/test paradigm that accommodates hidden test datasets&lt;br /&gt;
* Please make sure your list of candidates is ranked, as we will be assigning variable points for Top-10, 5, and 1 matches.&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve target MIDIs from all of the datasets listed above. These onset files let participants concentrate on similarity matching rather than onset detection. Note that onset files derived from .wav files are not guaranteed to be perfect transcriptions of the original wav queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and the vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
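As an informal illustration (not the official MIREX evaluation code; all names here are hypothetical), the Top-10 hit rate used by these subtasks can be sketched in Python as:&lt;br /&gt;

```python
def top10_hit_rate(results, ground_truth):
    """Fraction of queries whose ground-truth MIDI appears among the
    top 10 ranked candidates (1 point for a hit, 0 otherwise).

    results: dict mapping query name to a ranked list of candidate IDs
    ground_truth: dict mapping query name to the correct MIDI ID
    """
    hits = 0
    for query, candidates in results.items():
        if ground_truth[query] in candidates[:10]:
            hits += 1
    return hits / len(results)
```

Top-5 and Top-1 scoring would simply shorten the candidate slice.&lt;br /&gt;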
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Indexing the MIDI collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the MIDI database, your code should do so using the following command-line format. (This step is optional: skip it if you do not need to index or preprocess the database.)&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing &amp;lt;dbMidi.list&amp;gt; &amp;lt;dir_workspace_root&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;dbMidi.list&amp;gt; is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into &amp;lt;dir_workspace_root&amp;gt;.&lt;br /&gt;
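A minimal sketch of such an indexing entry point (hypothetical Python; it only records a key-to-path mapping in the workspace and leaves any real feature extraction to the submission):&lt;br /&gt;

```python
import json
import os
import sys

def indexing(db_midi_list, dir_workspace_root):
    """Read the list of database MIDI paths and write a simple index
    (unique key to path) into the workspace directory."""
    os.makedirs(dir_workspace_root, exist_ok=True)
    index = {}
    with open(db_midi_list) as f:
        for line in f:
            path = line.strip()
            if path:
                # "QBT/database/00001.mid" yields the key "00001"
                key = os.path.splitext(os.path.basename(path))[0]
                index[key] = path
    with open(os.path.join(dir_workspace_root, "index.json"), "w") as out:
        json.dump(index, out)

if __name__ == "__main__" and len(sys.argv) == 3:
    indexing(sys.argv[1], sys.argv[2])
```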
&lt;br /&gt;
=== Step 1: Training ===&lt;br /&gt;
&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTraining &amp;lt;dbMidi_list&amp;gt; &amp;lt;query_file_list&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;dbMidi_list&amp;gt; is a list of the MIDI files in the database to match against (see Step 0), and &amp;lt;query_file_list&amp;gt; maps each query to its associated ground truth. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
==== Per-task input specification ====&lt;br /&gt;
If the input query files are onset files (for subtask 1), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See [[#Onset files format|Onset files format]] for details.&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
If the input query files are .onset and .y_onset files (for subtask 3), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	qbtQuery/query_00001.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	qbtQuery/query_00002.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	qbtQuery/query_00003.y_onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See [[#Onset files format|Onset files format]] for details.&lt;br /&gt;
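For illustration, a hypothetical Python sketch (not required code) of parsing a tab-separated query_file_list that covers all three subtasks:&lt;br /&gt;

```python
def parse_query_file_list(path):
    """Parse a tab-separated query_file_list into a list of
    (query_file, y_onset_file_or_None, ground_truth_midi) tuples.

    Each line has 2 fields (subtasks 1 and 2) or 3 fields (subtask 3,
    where the middle field is the .y_onset file).
    """
    entries = []
    with open(path) as f:
        for line in f:
            fields = line.rstrip("\n").split("\t")
            if len(fields) == 2:
                entries.append((fields[0], None, fields[1]))
            elif len(fields) == 3:
                entries.append((fields[0], fields[1], fields[2]))
    return entries
```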
&lt;br /&gt;
&lt;br /&gt;
===== Onset files format  =====&lt;br /&gt;
To preserve compatibility with the original task, the QBT-E query files share the same .onset file extension as previous symbolic input query datasets.&lt;br /&gt;
&lt;br /&gt;
An .onset file is a space-separated text file of elapsed onset times (in milliseconds) from the first onset, which is always 0.0. Example of 5 onsets: &lt;br /&gt;
&lt;br /&gt;
 0.0 479.922 720.069 976.071 1215.694&lt;br /&gt;
&lt;br /&gt;
For subtask 3, the additional dimension (position/pitch/contour) is provided in a file with the same name but with the extension &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt;. Like an .onset file, a &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt; file is a space-separated text file, of the same length as the corresponding .onset file, that lists the absolute vertical position of each tap on the touchscreen. Example (corresponding to the above):&lt;br /&gt;
&lt;br /&gt;
 291.000000 293.500000 305.500000 302.000000 239.000000&lt;br /&gt;
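Loading an .onset file (and, for subtask 3, its matching .y_onset file) might look like this hypothetical Python sketch; the inter-onset intervals it also derives are a common rhythm-matching feature, not a task requirement:&lt;br /&gt;

```python
def load_query(onset_path, y_onset_path=None):
    """Load an .onset file (elapsed times in ms, first value 0.0) and,
    for subtask 3, the matching .y_onset file of vertical tap positions."""
    with open(onset_path) as f:
        onsets = [float(v) for v in f.read().split()]
    ys = None
    if y_onset_path is not None:
        with open(y_onset_path) as f:
            ys = [float(v) for v in f.read().split()]
        assert len(ys) == len(onsets), "files must have the same length"
    # inter-onset intervals, often used as the rhythm feature
    iois = [b - a for a, b in zip(onsets, onsets[1:])]
    return onsets, ys, iois
```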
&lt;br /&gt;
=== Step 2: Testing ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTesting &amp;lt;query_file_list&amp;gt; &amp;lt;resultFile&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;query_file_list&amp;gt; is the list of input queries, and &amp;lt;resultFile&amp;gt; is the filename where your script should store its results. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;resultFile&amp;gt; gives the ranked top-10 candidates for each query (note that ranking of the candidates is new for 2014). For subtasks 1 and 3, &amp;lt;resultFile&amp;gt; should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Where 00025 is the top-ranked (closest match) MIDI file for query_00001, followed by 01003, 02200, etc. Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
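Writing a resultFile in the format above can be sketched as follows (a hypothetical Python helper, not part of the submission requirements):&lt;br /&gt;

```python
def write_result_file(path, ranked_results):
    """Write a resultFile mapping each query file to its ranked top-10
    candidate MIDI names (without the .mid extension).

    ranked_results: iterable of (query_name, ranked_candidate_list) pairs
    """
    with open(path, "w") as out:
        for query, candidates in ranked_results:
            out.write(query + ": " + " ".join(candidates[:10]) + "\n")
```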
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, qbt at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009). Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884.&lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10285</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10285"/>
		<updated>2014-07-17T22:11:08Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Discussions for 2014 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems that retrieve ground-truth MIDI files from queries in which the user taps the onsets of a melody's notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from users tapping on a keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Submission deadline is September 16.&lt;br /&gt;
&lt;br /&gt;
Any questions or suggestions can be added directly here, or you can send us an email at qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
=== New for 2014 ===&lt;br /&gt;
* Added QBT-Extended dataset&lt;br /&gt;
* New subtask combining rhythm and pitch contours (QBT-Extended only)&lt;br /&gt;
* Minor modifications to the command format, moving more toward a training/test paradigm that accommodates hidden test datasets&lt;br /&gt;
* Please make sure your list of candidates is ranked, as we will be assigning variable points for Top-10, 5, and 1 matches.&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve target MIDIs from all of the datasets listed above. These onset files let participants concentrate on similarity matching rather than onset detection. Note that onset files derived from .wav files are not guaranteed to be perfect transcriptions of the original wav queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and the vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Indexing the MIDI collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the MIDI database, your code should do so using the following command-line format. (This step is optional: skip it if you do not need to index or preprocess the database.)&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing &amp;lt;dbMidi.list&amp;gt; &amp;lt;dir_workspace_root&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;dbMidi.list&amp;gt; is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into &amp;lt;dir_workspace_root&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Training ===&lt;br /&gt;
&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTraining &amp;lt;dbMidi_list&amp;gt; &amp;lt;query_file_list&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;dbMidi_list&amp;gt; is a list of the MIDI files in the database to match against (see Step 0), and &amp;lt;query_file_list&amp;gt; maps each query to its associated ground truth. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
==== Per-task input specification ====&lt;br /&gt;
If the input query files are onset files (for subtask 1), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See [[#Onset files format|Onset files format]] for details.&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
If the input query files are .onset and .y_onset files (for subtask 3), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	qbtQuery/query_00001.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	qbtQuery/query_00002.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	qbtQuery/query_00003.y_onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See [[#Onset files format|Onset files format]] for details.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===== Onset files format  =====&lt;br /&gt;
To preserve compatibility with the original task, the QBT-E query files share the same .onset file extension as previous symbolic input query datasets.&lt;br /&gt;
&lt;br /&gt;
An .onset file is a space-separated text file of elapsed onset times (in milliseconds) from the first onset, which is always 0.0. Example of 5 onsets: &lt;br /&gt;
&lt;br /&gt;
 0.0 479.922 720.069 976.071 1215.694&lt;br /&gt;
&lt;br /&gt;
For subtask 3, the additional dimension (position/pitch/contour) is provided in a file with the same name but with the extension &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt;. Like an .onset file, a &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt; file is a space-separated text file, of the same length as the corresponding .onset file, that lists the absolute vertical position of each tap on the touchscreen. Example (corresponding to the above):&lt;br /&gt;
&lt;br /&gt;
 291.000000 293.500000 305.500000 302.000000 239.000000&lt;br /&gt;
&lt;br /&gt;
=== Step 2: Testing ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTesting &amp;lt;query_file_list&amp;gt; &amp;lt;resultFile&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;query_file_list&amp;gt; is the list of input queries, and &amp;lt;resultFile&amp;gt; is the filename where your script should store its results. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;resultFile&amp;gt; gives the ranked top-10 candidates for each query (note that ranking of the candidates is new for 2014). For subtasks 1 and 3, &amp;lt;resultFile&amp;gt; should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Where 00025 is the top-ranked (closest match) MIDI file for query_00001, followed by 01003, 02200, etc. Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, qbt at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009). Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884.&lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10284</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10284"/>
		<updated>2014-07-15T00:25:49Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Potential Participants */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems that retrieve ground-truth MIDI files from queries in which the user taps the onsets of a melody's notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from users tapping on a keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Any questions or suggestions can be added directly here, or you can send us an email at qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve target MIDIs from all of the datasets listed above. These onset files let participants concentrate on similarity matching rather than onset detection. Note that onset files derived from .wav files are not guaranteed to be perfect transcriptions of the original wav queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Indexing the MIDIs collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the MIDI database, your code should do so using the following command-line format. (This step is optional: it is only required if you index or preprocess the database.)&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing &amp;lt;dbMidi.list&amp;gt; &amp;lt;dir_workspace_root&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;dbMidi.list&amp;gt; is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into &amp;lt;dir_workspace_root&amp;gt;.&lt;br /&gt;
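As an illustration, a minimal Python entry point following this command format might look like the sketch below. The actual index contents are entirely up to each submission; the helper name &amp;lt;code&amp;gt;build_index&amp;lt;/code&amp;gt; and the output file &amp;lt;code&amp;gt;midi_index.json&amp;lt;/code&amp;gt; are hypothetical, not part of the task specification:&lt;br /&gt;

```python
import json
import os
import sys

def build_index(db_list_path, workspace_root):
    """Map each uniq_key (the file name minus .mid) to its database path."""
    index = {}
    with open(db_list_path) as f:
        for line in f:
            path = line.strip()
            if path:
                key = os.path.splitext(os.path.basename(path))[0]
                index[key] = path
    os.makedirs(workspace_root, exist_ok=True)
    # Persist whatever pre-processing the submission needs into the workspace.
    with open(os.path.join(workspace_root, "midi_index.json"), "w") as out:
        json.dump(index, out)
    return index

if __name__ == "__main__" and len(sys.argv) == 3:
    # invoked as: indexing dbMidi.list dir_workspace_root
    build_index(sys.argv[1], sys.argv[2])
```

A real submission would of course store melody or rhythm features rather than only the key-to-path mapping.&lt;br /&gt;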
&lt;br /&gt;
=== Step 1: Training ===&lt;br /&gt;
&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTraining &amp;lt;dbMidi_list&amp;gt; &amp;lt;query_file_list&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;dbMidi_list&amp;gt; is a list of the MIDI files in the database to match against (see Step 0), and &amp;lt;query_file_list&amp;gt; maps each query to its associated ground truth. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
==== Per-task input specification ====&lt;br /&gt;
If the input query files are onset files (for subtask 1), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See [[#Onset files format|Onset files format]] for details.&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
If the input query files are .onset and .y_onset files (for subtask 3), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	qbtQuery/query_00001.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	qbtQuery/query_00002.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	qbtQuery/query_00003.y_onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See [[#Onset files format|Onset files format]] for details.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===== Onset files format  =====&lt;br /&gt;
To preserve compatibility with the original task, the QBT-E query files share the same .onset file extension as previous symbolic input query datasets.&lt;br /&gt;
&lt;br /&gt;
An .onset file is a space-separated text file of elapsed onset times (in milliseconds) from the first onset, which is always 0.0. Example of 5 onsets: &lt;br /&gt;
&lt;br /&gt;
 0.0 479.922 720.069 976.071 1215.694&lt;br /&gt;
&lt;br /&gt;
For subtask 3, the additional dimension (position/pitch/contour) is provided in a file with the same name but with extension &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt;. Like an .onset file, a &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt; file is a space-separated text file, of the same length as the corresponding .onset file, that lists the absolute vertical position of each tap on the touchscreen. Example (corresponding to the above):&lt;br /&gt;
&lt;br /&gt;
 291.000000 293.500000 305.500000 302.000000 239.000000&lt;br /&gt;
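To make the two file formats concrete, a query could be loaded with a small Python sketch like the following (the function names are ours, not part of the specification):&lt;br /&gt;

```python
def read_onsets(path):
    """Read a space-separated text file of floats (.onset or .y_onset)."""
    with open(path) as f:
        return [float(tok) for tok in f.read().split()]

def read_query(onset_path, y_onset_path=None):
    """Return (times_ms, y_positions); y_positions is None for subtasks 1 and 2."""
    times = read_onsets(onset_path)
    ys = None
    if y_onset_path is not None:
        ys = read_onsets(y_onset_path)
        # A .y_onset file has exactly one value per tap in the .onset file.
        assert len(ys) == len(times)
    return times, ys
```

For subtask 3 the two lists can then be combined into whatever joint time/position representation the matcher uses.&lt;br /&gt;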
&lt;br /&gt;
=== Step 2: Testing ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTesting &amp;lt;query_file_list&amp;gt; &amp;lt;resultFile&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;query_file_list&amp;gt; is the list of input queries, and &amp;lt;resultFile&amp;gt; is the filename where your script should store its results. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;resultFile&amp;gt; gives the ranked top-10 candidates for each query (note that ranking of the candidates is new for 2014). For subtasks 1 and 3, &amp;lt;resultFile&amp;gt; should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Where 00025 is the top-ranked (closest match) MIDI file for query_00001, followed by 01003, 02200, etc. Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
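Given a &amp;lt;resultFile&amp;gt; in this format and the ground-truth mapping from &amp;lt;query_file_list&amp;gt;, the Top-10 hit rate described above can be computed as in the following sketch (the helper names are ours):&lt;br /&gt;

```python
def parse_result_file(text):
    """Parse 'query: key1 key2 ...' lines into a dict of ranked key lists."""
    results = {}
    for line in text.splitlines():
        line = line.strip()
        if line:
            query, _, ranked = line.partition(":")
            results[query.strip()] = ranked.split()
    return results

def top_n_hit_rate(results, ground_truth, n=10):
    """ground_truth maps each query name to its target key, e.g. '00025'.

    A query scores 1 if its target appears among its top n candidates,
    and 0 otherwise; the hit rate is the mean score over all queries.
    """
    hits = sum(1 for query, target in ground_truth.items()
               if target in results.get(query, [])[:n])
    return hits / len(ground_truth)
```

The same function with n=5 or n=1 gives the optional Top-5 and Top-1 scores.&lt;br /&gt;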
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, qbt at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009) Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884. &lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10283</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10283"/>
		<updated>2014-07-15T00:24:00Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Per-task input specification */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries in which users tap the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from users tapping on a keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Any questions or suggestions can be added directly here, or you can send us an email at qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve the target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note, however, that onset files derived from .wav files are not guaranteed to be a perfect transcription of the original wave queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Indexing the MIDIs collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the MIDI database, your code should do so using the following command-line format. (This step is optional: it is only required if you index or preprocess the database.)&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing &amp;lt;dbMidi.list&amp;gt; &amp;lt;dir_workspace_root&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;dbMidi.list&amp;gt; is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into &amp;lt;dir_workspace_root&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Training ===&lt;br /&gt;
&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTraining &amp;lt;dbMidi_list&amp;gt; &amp;lt;query_file_list&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;dbMidi_list&amp;gt; is a list of the MIDI files in the database to match against (see Step 0), and &amp;lt;query_file_list&amp;gt; maps each query to its associated ground truth. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
==== Per-task input specification ====&lt;br /&gt;
If the input query files are onset files (for subtask 1), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See [[#Onset files format|Onset files format]] for details.&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
If the input query files are .onset and .y_onset files (for subtask 3), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	qbtQuery/query_00001.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	qbtQuery/query_00002.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	qbtQuery/query_00003.y_onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See [[#Onset files format|Onset files format]] for details.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===== Onset files format  =====&lt;br /&gt;
To preserve compatibility with the original task, the QBT-E query files share the same .onset file extension as previous symbolic input query datasets.&lt;br /&gt;
&lt;br /&gt;
An .onset file is a space-separated text file of elapsed onset times (in milliseconds) from the first onset, which is always 0.0. Example of 5 onsets: &lt;br /&gt;
&lt;br /&gt;
 0.0 479.922 720.069 976.071 1215.694&lt;br /&gt;
&lt;br /&gt;
For subtask 3, the additional dimension (position/pitch/contour) is provided in a file with the same name but with extension &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt;. Like an .onset file, a &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt; file is a space-separated text file, of the same length as the corresponding .onset file, that lists the absolute vertical position of each tap on the touchscreen. Example (corresponding to the above):&lt;br /&gt;
&lt;br /&gt;
 291.000000 293.500000 305.500000 302.000000 239.000000&lt;br /&gt;
&lt;br /&gt;
=== Step 2: Testing ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTesting &amp;lt;query_file_list&amp;gt; &amp;lt;resultFile&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;query_file_list&amp;gt; is the list of input queries, and &amp;lt;resultFile&amp;gt; is the filename where your script should store its results. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;resultFile&amp;gt; gives the ranked top-10 candidates for each query (note that ranking of the candidates is new for 2014). For subtasks 1 and 3, &amp;lt;resultFile&amp;gt; should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Where 00025 is the top-ranked (closest match) MIDI file for query_00001, followed by 01003, 02200, etc. Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, jorgeh at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009) Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884. &lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10282</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10282"/>
		<updated>2014-07-15T00:23:01Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Per-task input specification */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries in which users tap the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from users tapping on a keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Any questions or suggestions can be added directly here, or you can send us an email at qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve the target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note, however, that onset files derived from .wav files are not guaranteed to be a perfect transcription of the original wave queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Indexing the MIDIs collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the MIDI database, your code should do so using the following command-line format. (This step is optional: it is only required if you index or preprocess the database.)&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing &amp;lt;dbMidi.list&amp;gt; &amp;lt;dir_workspace_root&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;dbMidi.list&amp;gt; is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into &amp;lt;dir_workspace_root&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Training ===&lt;br /&gt;
&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTraining &amp;lt;dbMidi_list&amp;gt; &amp;lt;query_file_list&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;dbMidi_list&amp;gt; is a list of the MIDI files in the database to match against (see Step 0), and &amp;lt;query_file_list&amp;gt; maps each query to its associated ground truth. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
==== Per-task input specification ====&lt;br /&gt;
If the input query files are onset files (for subtask 1), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See [[#Onset files format|Onset files format]] for details.&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
If the input query files are .onset and .y_onset files (for subtask 3), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	qbtQuery/query_00001.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	qbtQuery/query_00002.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	qbtQuery/query_00003.y_onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See [[#Onset-files-format|Onset files format]] for details.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;Onset-files-format&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
===== Onset files format  =====&lt;br /&gt;
To preserve compatibility with the original task, the QBT-E query files share the same .onset file extension as previous symbolic input query datasets.&lt;br /&gt;
&lt;br /&gt;
An .onset file is a space-separated text file of elapsed onset times (in milliseconds) from the first onset, which is always 0.0. Example of 5 onsets: &lt;br /&gt;
&lt;br /&gt;
 0.0 479.922 720.069 976.071 1215.694&lt;br /&gt;
&lt;br /&gt;
For subtask 3, the additional dimension (position/pitch/contour) is provided in a file with the same name but with extension &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt;. Like an .onset file, a &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt; file is a space-separated text file, of the same length as the corresponding .onset file, that lists the absolute vertical position of each tap on the touchscreen. Example (corresponding to the above):&lt;br /&gt;
&lt;br /&gt;
 291.000000 293.500000 305.500000 302.000000 239.000000&lt;br /&gt;
&lt;br /&gt;
=== Step 2: Testing ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTesting &amp;lt;query_file_list&amp;gt; &amp;lt;resultFile&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;query_file_list&amp;gt; is the list of input queries, and &amp;lt;resultFile&amp;gt; is the filename where your script should store its results. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;resultFile&amp;gt; gives the ranked top-10 candidates for each query (note that ranking of the candidates is new for 2014). For subtasks 1 and 3, &amp;lt;resultFile&amp;gt; should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Where 00025 is the top-ranked (closest match) MIDI file for query_00001, followed by 01003, 02200, etc. Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, jorgeh at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009) Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884. &lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10281</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10281"/>
		<updated>2014-07-15T00:22:37Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Per-task input specification */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries in which users tap the note onsets of a melody into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected, from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Any questions or suggestions can be added directly here, or you can send us an email: qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note, however, that onset files derived from .wav files are not guaranteed to be perfect transcriptions of the original wav queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
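&lt;br /&gt;
The Top-10 hit-rate scoring above can be sketched as follows. This is a minimal illustration; the function and variable names are ours, not part of the official evaluation framework.&lt;br /&gt;
&lt;br /&gt;

```python
# Minimal sketch of the Top-k hit-rate scoring described above.
# All names here are illustrative, not part of the MIREX framework.
def top_k_hit_rate(results, ground_truth, k=10):
    """results maps each query to its ranked candidate MIDI keys;
    ground_truth maps each query to its correct MIDI key."""
    hits = sum(1 for query, ranked in results.items()
               if ground_truth[query] in ranked[:k])
    return hits / len(results)

# One query whose correct answer is ranked first:
results = {"qbtQuery/query_00001.onset": ["00025", "01003", "02200"]}
truth = {"qbtQuery/query_00001.onset": "00025"}
print(top_k_hit_rate(results, truth, k=10))  # 1.0
```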
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Indexing the MIDI collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the database, your code should do so using the following command-line format. (This step is optional: skip it if you do not need to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing &amp;lt;dbMidi.list&amp;gt; &amp;lt;dir_workspace_root&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;dbMidi.list&amp;gt; is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output index files are placed in &amp;lt;dir_workspace_root&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Training ===&lt;br /&gt;
&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTraining &amp;lt;dbMidi_list&amp;gt; &amp;lt;query_file_list&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;dbMidi_list&amp;gt; is a list of the MIDI files in the database to match against (see Step 0), and &amp;lt;query_file_list&amp;gt; maps each query to its associated ground truth. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
==== Per-task input specification ====&lt;br /&gt;
If the input query files are onset files (for subtask 1), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See the [[#Onset-files-format|Onset files format]] section for details.&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
If the input query files are .onset and .y_onset files (for subtask 3), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	qbtQuery/query_00001.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	qbtQuery/query_00002.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	qbtQuery/query_00003.y_onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See the [[#Onset-files-format|Onset files format]] section for details.&lt;br /&gt;
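&lt;br /&gt;
The tab-separated &amp;lt;query_file_list&amp;gt; layouts above (two columns for subtasks 1 and 2, three for subtask 3) can be parsed uniformly. The sketch below is illustrative only; the names are our own, not part of the task specification.&lt;br /&gt;
&lt;br /&gt;

```python
# Hypothetical parser for the tab-separated query_file_list formats above.
# The last field is always the ground-truth MIDI; any earlier fields are
# query file paths (one for subtasks 1 and 2, two for subtask 3).
def parse_query_file_list(lines):
    """Return a list of (query_paths, ground_truth_midi) pairs."""
    entries = []
    for line in lines:
        fields = line.rstrip("\n").split("\t")
        entries.append((fields[:-1], fields[-1]))
    return entries

rows = parse_query_file_list([
    "qbtQuery/query_00001.onset\tqbtQuery/query_00001.y_onset\t00001.mid",
])
print(rows[0][1])  # 00001.mid
```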
&lt;br /&gt;
&amp;lt;span id=&amp;quot;Onset-files-format&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
===== Onset files format  =====&lt;br /&gt;
To preserve compatibility with the original task, the QBT-E query files share the same .onset file extension as previous symbolic input query datasets.&lt;br /&gt;
&lt;br /&gt;
An .onset file is a space-separated text file of elapsed onset times (in milliseconds) from the first onset, which is always 0.0. Example of 5 onsets: &lt;br /&gt;
&lt;br /&gt;
 0.0 479.922 720.069 976.071 1215.694&lt;br /&gt;
&lt;br /&gt;
For subtask 3, the additional dimension (position/pitch/contour) is provided in a file with the same name, but with extension &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt;. Like .onset files, &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt; files are space-separated text files of the same length as the corresponding .onset file, listing the absolute vertical position of each tap on the touchscreen. Example (corresponding to the above):&lt;br /&gt;
&lt;br /&gt;
 291.000000 293.500000 305.500000 302.000000 239.000000&lt;br /&gt;
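&lt;br /&gt;
The .onset and .y_onset formats above are simple enough to read in a few lines. This sketch uses the example values from this section rather than actual dataset files.&lt;br /&gt;
&lt;br /&gt;

```python
# Minimal sketch of reading the query files described above.
def parse_onsets(text):
    """Parse one space-separated .onset or .y_onset line into floats."""
    return [float(value) for value in text.split()]

onsets = parse_onsets("0.0 479.922 720.069 976.071 1215.694")
y_onsets = parse_onsets("291.000000 293.500000 305.500000 302.000000 239.000000")
assert len(onsets) == len(y_onsets)  # the two files must have the same length

# Inter-onset intervals, a common starting feature for rhythm matching:
iois = [later - earlier for earlier, later in zip(onsets, onsets[1:])]
print(iois[0])  # 479.922
```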
&lt;br /&gt;
=== Step 2: Testing ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTesting &amp;lt;query_file_list&amp;gt; &amp;lt;resultFile&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;query_file_list&amp;gt; is the list of input queries, and &amp;lt;resultFile&amp;gt; is the filename where your script should store results. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;resultFile&amp;gt; gives the ranked top-10 candidates for each query (note that ranking of the candidates is new for 2014). For instance, &amp;lt;resultFile&amp;gt; should have the following format for subtasks 1 and 3:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Where 00025 is the top-ranked (closest match) MIDI file for query_00001, followed by 01003, 02200, etc. Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
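&lt;br /&gt;
Writing a &amp;lt;resultFile&amp;gt; in this format can be sketched as below; the query and candidate names are the examples from this page, not real data.&lt;br /&gt;
&lt;br /&gt;

```python
# Illustrative sketch of emitting a resultFile in the format shown above.
def write_results(path, ranked):
    """ranked maps each query file name to its ranked candidate MIDI keys."""
    with open(path, "w") as out:
        for query, candidates in ranked.items():
            out.write(query + ": " + " ".join(candidates[:10]) + "\n")

write_results("resultFile.txt",
              {"qbtQuery/query_00001.onset": ["00025", "01003", "02200"]})
```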
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, jorgeh at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009) Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884. &lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10280</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10280"/>
		<updated>2014-07-15T00:19:39Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Step 2: Testing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries in which users tap the note onsets of a melody into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected, from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Any questions or suggestions can be added directly here, or you can send us an email: qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note, however, that onset files derived from .wav files are not guaranteed to be perfect transcriptions of the original wav queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Indexing the MIDI collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the database, your code should do so using the following command-line format. (This step is optional: skip it if you do not need to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing &amp;lt;dbMidi.list&amp;gt; &amp;lt;dir_workspace_root&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;dbMidi.list&amp;gt; is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output index files are placed in &amp;lt;dir_workspace_root&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Training ===&lt;br /&gt;
&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTraining &amp;lt;dbMidi_list&amp;gt; &amp;lt;query_file_list&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;dbMidi_list&amp;gt; is a list of the MIDI files in the database to match against (see Step 0), and &amp;lt;query_file_list&amp;gt; maps each query to its associated ground truth. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
==== Per-task input specification ====&lt;br /&gt;
If the input query files are onset files (for subtask 1), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See the [[#Onset-files-format|Onset files format]] section for details.&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
If the input query files are .onset and .y_onset files (for subtask 3), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	qbtQuery/query_00001.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	qbtQuery/query_00002.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	qbtQuery/query_00003.y_onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See the [[#Onset-files-format|Onset files format]] section for details.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;Onset-files-format&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
===== Onset files format  =====&lt;br /&gt;
To preserve compatibility with the original task, the QBT-E query files share the same .onset file extension as previous symbolic input query datasets.&lt;br /&gt;
&lt;br /&gt;
An .onset file is a space-separated text file of elapsed onset times (in milliseconds) from the first onset, which is always 0.0. Example of 5 onsets: &lt;br /&gt;
&lt;br /&gt;
 0.0 479.922 720.069 976.071 1215.694&lt;br /&gt;
&lt;br /&gt;
For subtask 3, the additional dimension (position/pitch/contour) is provided in a file with the same name, but with extension &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt;. Like .onset files, &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt; files are space-separated text files of the same length as the corresponding .onset file, listing the absolute vertical position of each tap on the touchscreen. Example (corresponding to the above):&lt;br /&gt;
&lt;br /&gt;
 291.000000 293.500000 305.500000 302.000000 239.000000&lt;br /&gt;
&lt;br /&gt;
=== Step 2: Testing ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTesting &amp;lt;query_file_list&amp;gt; &amp;lt;resultFile&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;query_file_list&amp;gt; is the list of input queries, and &amp;lt;resultFile&amp;gt; is the filename where your script should store results. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;resultFile&amp;gt; gives the ranked top-10 candidates for each query (note that ranking of the candidates is new for 2014). For instance, &amp;lt;resultFile&amp;gt; should have the following format for subtasks 1 and 3:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Where 00025 is the top-ranked (closest match) MIDI file for query_00001, followed by 01003, 02200, etc. Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, jorgeh at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009) Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884. &lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10279</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10279"/>
		<updated>2014-07-15T00:19:02Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Per-task input specification */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries in which users tap the note onsets of a melody into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected, from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Any questions or suggestions can be added directly here, or you can send us an email: qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note, however, that onset files derived from .wav files are not guaranteed to be perfect transcriptions of the original wav queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Indexing the MIDI collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the database, your code should do so using the following command-line format. (This step is optional: skip it if you do not need to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing &amp;lt;dbMidi.list&amp;gt; &amp;lt;dir_workspace_root&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;dbMidi.list&amp;gt; is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output index files are placed in &amp;lt;dir_workspace_root&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Training ===&lt;br /&gt;
&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTraining &amp;lt;dbMidi_list&amp;gt; &amp;lt;query_file_list&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;dbMidi_list&amp;gt; is a list of the MIDI files in the database to match against (see Step 0), and &amp;lt;query_file_list&amp;gt; maps each query to its associated ground truth. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
==== Per-task input specification ====&lt;br /&gt;
If the input query files are onset files (for subtask 1), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See [[#Onset-files-format|Onset files format]] for details.&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
If the input query files are .onset and .y_onset files (for subtask 3), then the format of &amp;lt;query_file_list&amp;gt; is like this (tab-separated):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset	qbtQuery/query_00001.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset	qbtQuery/query_00002.y_onset	00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset	qbtQuery/query_00003.y_onset	00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See [[#Onset-files-format|Onset files format]] for details.&lt;br /&gt;
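All three &amp;lt;query_file_list&amp;gt; variants above can be handled by one parser, since the target MIDI is always the last tab-separated field (a hedged sketch; the function name is illustrative):&lt;br /&gt;

```python
def parse_query_list(path):
    # Parse a tab-separated query_file_list into (query files, target) pairs.
    # Rows have two fields for subtasks 1 and 2, or three for subtask 3
    # (.onset file, .y_onset file, target MIDI); the target is always last.
    pairs = []
    with open(path) as f:
        for line in f:
            fields = line.rstrip("\n").split("\t")
            if fields == [""]:
                continue  # skip blank lines
            pairs.append((fields[:-1], fields[-1]))
    return pairs
```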
&lt;br /&gt;
&amp;lt;span id=&amp;quot;Onset-files-format&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
===== Onset files format  =====&lt;br /&gt;
To preserve compatibility with the original task, the QBT-E query files share the same .onset file extension as previous symbolic input query datasets.&lt;br /&gt;
&lt;br /&gt;
An .onset file is a space-separated text file of elapsed onset times (in milliseconds) from the first onset, which is always 0.0. Example of 5 onsets: &lt;br /&gt;
&lt;br /&gt;
 0.0 479.922 720.069 976.071 1215.694&lt;br /&gt;
&lt;br /&gt;
For subtask 3, the additional dimension (position/pitch/contour) is provided in a file with the same name but with extension &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt;. Like .onset files, &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt; files are space-separated text files, the same length as the corresponding .onset file, listing the absolute vertical position of each tap on the touchscreen. Example (corresponding to the above):&lt;br /&gt;
&lt;br /&gt;
 291.000000 293.500000 305.500000 302.000000 239.000000&lt;br /&gt;
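As an illustration, .onset and .y_onset files of this form can be loaded and converted to inter-onset intervals, a common starting point for tempo-invariant matching (a sketch under the format above; helper names are hypothetical):&lt;br /&gt;

```python
def load_values(path):
    # Read a space-separated .onset or .y_onset file into a list of floats.
    with open(path) as f:
        return [float(tok) for tok in f.read().split()]

def to_ioi(onset_times):
    # Convert absolute onset times (ms; the first is always 0.0) into
    # inter-onset intervals, one per consecutive pair of taps.
    return [b - a for a, b in zip(onset_times, onset_times[1:])]
```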
&lt;br /&gt;
=== Step 2: Testing ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTesting &amp;lt;query_file_list&amp;gt; &amp;lt;resultFile&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;query_file_list&amp;gt; is the list of input queries, and &amp;lt;resultFile&amp;gt; is the filename where your script should store results. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;resultFile&amp;gt; gives the ranked top-10 candidates for each query (note that ranking of the candidates is new for 2014). For instance, &amp;lt;resultFile&amp;gt; should have the following format for subtasks 1 and 3:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Where 00025 is the top-ranked MIDI file for query_00001, followed by 01003, 02200, etc. Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
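The Top-10 hit rate described in the task description could then be computed from a &amp;lt;resultFile&amp;gt; of this shape roughly as follows (a sketch; the ground-truth mapping is assumed to come from &amp;lt;query_file_list&amp;gt; with the .mid extension stripped):&lt;br /&gt;

```python
def top_n_hit_rate(result_lines, ground_truth, n=10):
    # result_lines: lines of the resultFile, "query: cand1 cand2 ...".
    # ground_truth: dict mapping each query path to its target MIDI key,
    # e.g. "qbtQuery/query_00001.onset" mapped to "00001".
    hits = 0
    total = 0
    for line in result_lines:
        query, _, cands = line.partition(":")
        candidates = cands.split()[:n]  # keep only the top n candidates
        total += 1
        if ground_truth[query.strip()] in candidates:
            hits += 1
    return hits / total
```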
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, jorgeh at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009). Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884.&lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10278</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10278"/>
		<updated>2014-07-15T00:12:44Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Step 2: Testing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems that retrieve ground-truth MIDI files from queries made by tapping the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected, from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Any questions or suggestions can be added directly here, or you can send us an email, qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note that onset files derived from .wav files are not guaranteed to be perfect transcriptions of the original wav queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Indexing the MIDIs collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the database, your code should do so using the following command-line format. (This step is optional; skip it if you do not need to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing &amp;lt;dbMidi.list&amp;gt; &amp;lt;dir_workspace_root&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;dbMidi.list&amp;gt; is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into &amp;lt;dir_workspace_root&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Training ===&lt;br /&gt;
&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTraining &amp;lt;dbMidi_list&amp;gt; &amp;lt;query_file_list&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;dbMidi_list&amp;gt; is a list of the MIDI files in the database to match against (see Step 0), and &amp;lt;query_file_list&amp;gt; maps each query to its associated ground truth. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
==== Per-task input specification ====&lt;br /&gt;
If the input query files are onset files (for subtasks 1 and 3), then the format of &amp;lt;query_file_list&amp;gt; is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See [[#Onset-files-format|Onset files format]] for details.&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of &amp;lt;query_file_list&amp;gt; is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;Onset-files-format&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
===== Onset files format  =====&lt;br /&gt;
To preserve compatibility with the original task, the QBT-E query files share the same .onset file extension as previous symbolic input query datasets.&lt;br /&gt;
&lt;br /&gt;
An .onset file is a space-separated text file of elapsed onset times (in milliseconds) from the first onset, which is always 0.0. Example of 5 onsets: &lt;br /&gt;
&lt;br /&gt;
 0.0 479.922 720.069 976.071 1215.694&lt;br /&gt;
&lt;br /&gt;
For subtask 3, the additional dimension (position/pitch/contour) is provided in a file with the same name but with extension &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt;. Like .onset files, &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt; files are space-separated text files, the same length as the corresponding .onset file, listing the absolute vertical position of each tap on the touchscreen. Example (corresponding to the above):&lt;br /&gt;
&lt;br /&gt;
 291.000000 293.500000 305.500000 302.000000 239.000000&lt;br /&gt;
&lt;br /&gt;
=== Step 2: Testing ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTesting &amp;lt;query_file_list&amp;gt; &amp;lt;resultFile&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;query_file_list&amp;gt; is the list of input queries, and &amp;lt;resultFile&amp;gt; is the filename where your script should store results. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;resultFile&amp;gt; gives the ranked top-10 candidates for each query (note that ranking of the candidates is new for 2014). For instance, &amp;lt;resultFile&amp;gt; should have the following format for subtasks 1 and 3:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Where 00025 is the top-ranked MIDI file for query_00001, followed by 01003, 02200, etc. Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, jorgeh at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009). Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884.&lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10277</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10277"/>
		<updated>2014-07-15T00:12:26Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Step 1: Training */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems that retrieve ground-truth MIDI files from queries made by tapping the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected, from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Any questions or suggestions can be added directly here, or you can send us an email, qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note that onset files derived from .wav files are not guaranteed to be perfect transcriptions of the original wav queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Indexing the MIDIs collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the database, your code should do so using the following command-line format. (This step is optional; skip it if you do not need to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing &amp;lt;dbMidi.list&amp;gt; &amp;lt;dir_workspace_root&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;dbMidi.list&amp;gt; is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into &amp;lt;dir_workspace_root&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Training ===&lt;br /&gt;
&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtTraining &amp;lt;dbMidi_list&amp;gt; &amp;lt;query_file_list&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;dbMidi_list&amp;gt; is a list of the MIDI files in the database to match against (see Step 0), and &amp;lt;query_file_list&amp;gt; maps each query to its associated ground truth. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
==== Per-task input specification ====&lt;br /&gt;
If the input query files are onset files (for subtasks 1 and 3), then the format of &amp;lt;query_file_list&amp;gt; is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See [[#Onset-files-format|Onset files format]] for details.&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of &amp;lt;query_file_list&amp;gt; is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;Onset-files-format&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
===== Onset files format  =====&lt;br /&gt;
To preserve compatibility with the original task, the QBT-E query files share the same .onset file extension as previous symbolic input query datasets.&lt;br /&gt;
&lt;br /&gt;
An .onset file is a space-separated text file of elapsed onset times (in milliseconds) from the first onset, which is always 0.0. Example of 5 onsets: &lt;br /&gt;
&lt;br /&gt;
 0.0 479.922 720.069 976.071 1215.694&lt;br /&gt;
&lt;br /&gt;
For subtask 3, the additional dimension (position/pitch/contour) is provided in a file with the same name but with extension &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt;. Like .onset files, &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt; files are space-separated text files, the same length as the corresponding .onset file, listing the absolute vertical position of each tap on the touchscreen. Example (corresponding to the above):&lt;br /&gt;
&lt;br /&gt;
 291.000000 293.500000 305.500000 302.000000 239.000000&lt;br /&gt;
&lt;br /&gt;
=== Step 2: Testing ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram &amp;lt;query_file_list&amp;gt; &amp;lt;resultFile&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;query_file_list&amp;gt; is the list of input queries, and &amp;lt;resultFile&amp;gt; is the filename where your script should store results. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;resultFile&amp;gt; gives the ranked top-10 candidates for each query (note that ranking of the candidates is new for 2014). For instance, &amp;lt;resultFile&amp;gt; should have the following format for subtasks 1 and 3:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Where 00025 is the top-ranked MIDI file for query_00001, followed by 01003, 02200, etc. Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, jorgeh at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009). Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884.&lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10276</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10276"/>
		<updated>2014-07-15T00:10:29Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Step 1: Training */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems that retrieve ground-truth MIDI files from queries made by tapping the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected, from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Any questions or suggestions can be added directly here, or you can send us an email, qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note, however, that onset files derived from .wav files are not guaranteed to be perfect transcriptions of the original wave queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and the vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
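The Top-N hit rate used by all three subtasks can be sketched as follows. This is a hypothetical illustration, not the official evaluation code; the function name &lt;code&gt;top_n_hit_rate&lt;/code&gt; and the data layout are our own:&lt;br /&gt;
&lt;br /&gt;
```python
# Hypothetical sketch of the Top-N hit rate: a query scores 1 if its
# ground-truth MIDI appears among the first N returned candidates,
# 0 otherwise; the reported rate is the mean over all queries.
def top_n_hit_rate(results, ground_truth, n=10):
    # results: query name mapped to a ranked list of MIDI keys (no .mid)
    # ground_truth: query name mapped to its correct MIDI key
    hits = sum(1 for q, ranked in results.items()
               if ground_truth[q] in ranked[:n])
    return hits / len(results)

# Two example queries: the first is a Top-10 hit, the second a miss.
results = {"query_00001.onset": ["00025", "01003", "02200"],
           "query_00002.onset": ["01547", "02313", "07653"]}
truth = {"query_00001.onset": "01003", "query_00002.onset": "99999"}
print(top_n_hit_rate(results, truth))       # 0.5 (Top-10)
print(top_n_hit_rate(results, truth, n=1))  # 0.0 (Top-1)
```
&lt;br /&gt;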
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Indexing the MIDI collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the database, your code should do so using the following command-line format. (This step is optional; skip it if you do not need to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing &amp;lt;dbMidi.list&amp;gt; &amp;lt;dir_workspace_root&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;dbMidi.list&amp;gt; is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into &amp;lt;dir_workspace_root&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Training ===&lt;br /&gt;
&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram &amp;lt;dbMidi_list&amp;gt; &amp;lt;query_file_list&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;dbMidi_list&amp;gt; is a list of the MIDI files in the database to match against (see Step 0), and &amp;lt;query_file_list&amp;gt; maps each query to its associated ground truth. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
==== Per-task input specification ====&lt;br /&gt;
If the input query files are onset files (for subtasks 1 and 3), then the format of &amp;lt;query_file_list&amp;gt; is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See [[#Onset-files-format|Onset files format]] for details.&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of &amp;lt;query_file_list&amp;gt; is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;Onset-files-format&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
===== Onset files format  =====&lt;br /&gt;
To preserve compatibility with the original task, the QBT-E query files share the same .onset file extension as previous symbolic input query datasets.&lt;br /&gt;
&lt;br /&gt;
An .onset file is a space-separated text file of elapsed onset times (in milliseconds) from the first onset, which is always 0.0. Example of 5 onsets: &lt;br /&gt;
&lt;br /&gt;
 0.0 479.922 720.069 976.071 1215.694&lt;br /&gt;
&lt;br /&gt;
For subtask 3, the additional dimension (position/pitch/contour) is provided in a file with the same name, but with extension &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt;. Like an .onset file, a &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt; file is a space-separated text file, the same length as the corresponding .onset file, that lists the absolute vertical position of each tap on the touchscreen. Example (corresponding to the above):&lt;br /&gt;
&lt;br /&gt;
 291.000000 293.500000 305.500000 302.000000 239.000000&lt;br /&gt;
&lt;br /&gt;
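Reading the two formats above can be sketched as follows. This is a hypothetical snippet for illustration; the function name &lt;code&gt;parse_onset_line&lt;/code&gt; is our own:&lt;br /&gt;
&lt;br /&gt;
```python
# Hypothetical sketch of reading the two formats described above. Both
# are space-separated lists of floats; a .y_onset file (subtask 3) must
# have the same length as its corresponding .onset file.
def parse_onset_line(text):
    return [float(tok) for tok in text.split()]

onsets = parse_onset_line("0.0 479.922 720.069 976.071 1215.694")
y_pos = parse_onset_line(
    "291.000000 293.500000 305.500000 302.000000 239.000000")
assert len(onsets) == len(y_pos)

# Inter-onset intervals (ms), a common starting point for matching:
iois = [b - a for a, b in zip(onsets, onsets[1:])]
```
&lt;br /&gt;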
=== Step 2: Testing ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram &amp;lt;query_file_list&amp;gt; &amp;lt;resultFile&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;query_file_list&amp;gt; is the list of input queries, and &amp;lt;resultFile&amp;gt; is the filename where your script should store results. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;resultFile&amp;gt; lists the ranked top-10 candidates for each query (note that ranking of the candidates is new for 2014). For instance, &amp;lt;resultFile&amp;gt; should have the following format for subtasks 1 and 3:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Where 00025 is the top-ranked MIDI file for query_00001, followed by 01003, 02200, etc. Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
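Emitting this format can be sketched as follows. This is a hypothetical snippet; the function name &lt;code&gt;write_result_file&lt;/code&gt; and the output path are our own:&lt;br /&gt;
&lt;br /&gt;
```python
# Hypothetical sketch of writing the required resultFile: one line per
# query, "query_path: key1 key2 ...", with MIDI keys in ranked order and
# given as file names without the .mid extension.
def write_result_file(ranked_by_query, path):
    with open(path, "w") as f:
        for query, ranked in ranked_by_query.items():
            # At most 10 candidates per query, best match first.
            f.write(query + ": " + " ".join(ranked[:10]) + "\n")

write_result_file({"qbtQuery/query_00001.onset": ["00025", "01003", "02200"]},
                  "resultFile")
```
&lt;br /&gt;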
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, jorgeh at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009) Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884. &lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10275</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10275"/>
		<updated>2014-07-15T00:09:12Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Step 1: Training */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems that retrieve ground-truth MIDI files from queries in which a user taps the onsets of music notes into a microphone. The task provides query files in wave format, as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected, from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Any questions or suggestions can be added directly here, or you can send us an email, qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note, however, that onset files derived from .wav files are not guaranteed to be perfect transcriptions of the original wave queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and the vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Indexing the MIDI collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the database, your code should do so using the following command-line format. (This step is optional; skip it if you do not need to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing &amp;lt;dbMidi.list&amp;gt; &amp;lt;dir_workspace_root&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;dbMidi.list&amp;gt; is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into &amp;lt;dir_workspace_root&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Training ===&lt;br /&gt;
&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram &amp;lt;dbMidi_list&amp;gt; &amp;lt;query_file_list&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;dbMidi_list&amp;gt; is a list of the MIDI files in the database to match against (see Step 0), and &amp;lt;query_file_list&amp;gt; maps each query to its associated ground truth. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
==== Per-task input specification ====&lt;br /&gt;
If the input query files are onset files (for subtasks 1 and 3), then the format of &amp;lt;query_file_list&amp;gt; is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
See [[#Onset-files-format|Onset files format]] for details.&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of &amp;lt;query_file_list&amp;gt; is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
===== &amp;lt;span id=&amp;quot;Onset-files-format&amp;quot;&amp;gt;Onset files format&amp;lt;/span&amp;gt; =====&lt;br /&gt;
To preserve compatibility with the original task, the QBT-E query files share the same .onset file extension as previous symbolic input query datasets.&lt;br /&gt;
&lt;br /&gt;
An .onset file is a space-separated text file of elapsed onset times (in milliseconds) from the first onset, which is always 0.0. Example of 5 onsets: &lt;br /&gt;
&lt;br /&gt;
 0.0 479.922 720.069 976.071 1215.694&lt;br /&gt;
&lt;br /&gt;
For subtask 3, the additional dimension (position/pitch/contour) is provided in a file with the same name, but with extension &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt;. Like an .onset file, a &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt; file is a space-separated text file, the same length as the corresponding .onset file, that lists the absolute vertical position of each tap on the touchscreen. Example (corresponding to the above):&lt;br /&gt;
&lt;br /&gt;
 291.000000 293.500000 305.500000 302.000000 239.000000&lt;br /&gt;
&lt;br /&gt;
=== Step 2: Testing ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram &amp;lt;query_file_list&amp;gt; &amp;lt;resultFile&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;query_file_list&amp;gt; is the list of input queries, and &amp;lt;resultFile&amp;gt; is the filename where your script should store results. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;resultFile&amp;gt; lists the ranked top-10 candidates for each query (note that ranking of the candidates is new for 2014). For instance, &amp;lt;resultFile&amp;gt; should have the following format for subtasks 1 and 3:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Where 00025 is the top-ranked MIDI file for query_00001, followed by 01003, 02200, etc. Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, jorgeh at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009) Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884. &lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10274</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10274"/>
		<updated>2014-07-15T00:04:51Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Per-task input specification */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems that retrieve ground-truth MIDI files from queries in which a user taps the onsets of music notes into a microphone. The task provides query files in wave format, as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected, from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Any questions or suggestions can be added directly here, or you can send us an email, qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note, however, that onset files derived from .wav files are not guaranteed to be perfect transcriptions of the original wave queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and the vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Indexing the MIDI collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the database, your code should do so using the following command-line format. (This step is optional; skip it if you do not need to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing &amp;lt;dbMidi.list&amp;gt; &amp;lt;dir_workspace_root&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;dbMidi.list&amp;gt; is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into &amp;lt;dir_workspace_root&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Training ===&lt;br /&gt;
&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram &amp;lt;dbMidi_list&amp;gt; &amp;lt;query_file_list&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;dbMidi_list&amp;gt; is a list of the MIDI files in the database to match against (see Step 0), and &amp;lt;query_file_list&amp;gt; maps each query to its associated ground truth. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
==== Per-task input specification ====&lt;br /&gt;
If the input query files are onset files (for subtasks 1 and 3), then the format of &amp;lt;query_file_list&amp;gt; is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
An .onset file is a space-separated text file of elapsed onset times (in milliseconds) from the first onset, which is always 0.0. Example of 5 onsets: &lt;br /&gt;
&lt;br /&gt;
 0.0 479.922 720.069 976.071 1215.694&lt;br /&gt;
&lt;br /&gt;
For subtask 3, the additional dimension (position/pitch/contour) is provided in a file with the same name, but with extension &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt;. Like an .onset file, a &amp;lt;code&amp;gt;.y_onset&amp;lt;/code&amp;gt; file is a space-separated text file, the same length as the corresponding .onset file, that lists the absolute vertical position of each tap on the touchscreen. Example (corresponding to the above):&lt;br /&gt;
&lt;br /&gt;
 291.000000 293.500000 305.500000 302.000000 239.000000&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of &amp;lt;query_file_list&amp;gt; is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
===== Onset files format =====&lt;br /&gt;
To preserve compatibility with the original task, the QBT-E query files share the same .onset file extension as previous symbolic input query datasets.&lt;br /&gt;
&lt;br /&gt;
=== Step 2: Testing ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram &amp;lt;query_file_list&amp;gt; &amp;lt;resultFile&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;query_file_list&amp;gt; is the list of input queries, and &amp;lt;resultFile&amp;gt; is the filename where your script should store results. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;resultFile&amp;gt; lists the ranked top-10 candidates for each query (note that ranking of the candidates is new for 2014). For instance, &amp;lt;resultFile&amp;gt; should have the following format for subtasks 1 and 3:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Where 00025 is the top-ranked MIDI file for query_00001, followed by 01003, 02200, etc. Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, jorgeh at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009). Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884.&lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10273</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10273"/>
		<updated>2014-07-14T23:49:16Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Step 1: Test the query files */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries in which users tap the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from users tapping on a keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Any questions or suggestions can be added directly here, or you can send us an email, qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note that onset files derived from .wav files are not guaranteed to reflect perfect onset detection of the original wav queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
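To make the symbolic-matching setting concrete, here is one possible (hypothetical, not prescribed by the task) similarity measure: compare tempo-normalized inter-onset intervals (IOIs), in the spirit of the rhythm- and alignment-based approaches cited in the references below:&lt;br /&gt;

```python
# Illustrative sketch only: a tempo-invariant rhythm distance between a
# tapped query and a candidate's note onsets. Real submissions may use
# alignment algorithms, edit distance, etc.

def iois(onsets):
    """Successive differences of a sorted onset-time sequence."""
    return [b - a for a, b in zip(onsets, onsets[1:])]

def normalized(seq):
    """Scale a sequence so its values sum to 1 (removes tempo)."""
    total = sum(seq)
    return [x / total for x in seq] if total else seq

def ioi_distance(query_onsets, candidate_onsets):
    """Mean absolute difference of normalized IOIs over the overlapping length."""
    q = normalized(iois(query_onsets))
    c = normalized(iois(candidate_onsets))
    n = min(len(q), len(c))
    if n == 0:
        return float("inf")
    return sum(abs(a - b) for a, b in zip(q, c)) / n
```

Ranking candidates by ascending distance then yields the top-10 list required by the evaluation.&lt;br /&gt;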
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Indexing the MIDIs collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the database, your code should do so using the following command-line format. (This step is required only if you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing &amp;lt;dbMidi.list&amp;gt; &amp;lt;dir_workspace_root&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;dbMidi.list&amp;gt; is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into &amp;lt;dir_workspace_root&amp;gt;.&lt;br /&gt;
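A minimal sketch of how a submission might consume &amp;lt;dbMidi.list&amp;gt; (the index format itself is up to the submitter; the helper name here is ours, not part of the task specification):&lt;br /&gt;

```python
# Hypothetical helper: read <dbMidi.list> and map each unique key
# (the basename without '.mid') to its file path, ready for indexing
# into <dir_workspace_root>.
import os

def read_db_list(list_path):
    """Return {uniq_key: midi_path} for lines like 'QBT/database/00001.mid'."""
    keys = {}
    with open(list_path) as f:
        for line in f:
            path = line.strip()
            if path:
                key = os.path.splitext(os.path.basename(path))[0]
                keys[key] = path
    return keys
```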
&lt;br /&gt;
=== Step 1: Training ===&lt;br /&gt;
&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram &amp;lt;dbMidi_list&amp;gt; &amp;lt;query_file_list&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;dbMidi_list&amp;gt; is a list of the MIDI files in the database to match against (see Step 0), and &amp;lt;query_file_list&amp;gt; maps each query to its associated ground truth. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it at all.) &lt;br /&gt;
&lt;br /&gt;
==== Per-task input specification ====&lt;br /&gt;
If the input query files are onset files (for subtask 1), then the format of &amp;lt;query_file_list&amp;gt; is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of &amp;lt;query_file_list&amp;gt; is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
If the input query files are 2-dimensional onset files (for subtask 3), then the format of &amp;lt;query_file_list&amp;gt; is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
=== Step 2: Testing ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram &amp;lt;query_file_list&amp;gt; &amp;lt;resultFile&amp;gt; [dir_workspace_root]&lt;br /&gt;
&lt;br /&gt;
Here, &amp;lt;query_file_list&amp;gt; is the list of input queries, and &amp;lt;resultFile&amp;gt; is the filename where your script should store results. You can use [dir_workspace_root] to store any temporary indexing/database structures. (You can omit [dir_workspace_root] if you do not need it.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;resultFile&amp;gt; gives the ranked top-10 candidates for each query (note that ranking of the candidates is new for 2014). For instance, &amp;lt;resultFile&amp;gt; should have the following format for subtasks 1 and 3:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Here, 00025 is the top-ranked MIDI file for query_00001, followed by 01003, 02200, etc. Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
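Emitting &amp;lt;resultFile&amp;gt; lines in this format can be sketched as follows (the ranked candidate list would come from your own matching algorithm; the function name is illustrative, not part of the task specification):&lt;br /&gt;

```python
# Hypothetical helper for writing one resultFile line:
# '<query_path>: cand1 cand2 ...' with at most k candidates in rank order.
# Candidates are MIDI file names without the '.mid' extension.

def format_result_line(query_path, ranked_candidates, k=10):
    """Join the top-k candidate keys after the query path, best first."""
    return query_path + ": " + " ".join(ranked_candidates[:k])
```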
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, jorgeh at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009). Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884.&lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10272</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10272"/>
		<updated>2014-07-14T23:22:54Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Test the query files */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries in which users tap the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from users tapping on a keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Any questions or suggestions can be added directly here, or you can send us an email, qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note that onset files derived from .wav files are not guaranteed to reflect perfect onset detection of the original wav queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Indexing the MIDIs collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the database, your code should do so using the following command-line format. (This step is required only if you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing &amp;lt;dbMidi.list&amp;gt; &amp;lt;dir_workspace_root&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;dbMidi.list&amp;gt; is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into &amp;lt;dir_workspace_root&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Test the query files ===&lt;br /&gt;
&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For instance, the result file should have the following format for subtask 1 (onset queries):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, jorgeh at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009). Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884.&lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10271</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10271"/>
		<updated>2014-07-14T23:22:33Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Step 0: Indexing the MIDIs collection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries in which users tap the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from users tapping on a keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Any questions or suggestions can be added directly here, or you can send us an email, qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note that onset files derived from .wav files are not guaranteed to reflect perfect onset detection of the original wav queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Indexing the MIDIs collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the database, your code should do so using the following command-line format. (This step is required only if you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing &amp;lt;dbMidi.list&amp;gt; &amp;lt;dir_workspace_root&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;dbMidi.list&amp;gt; is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into &amp;lt;dir_workspace_root&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For instance, the result file should have the following format for subtask 1 (onset queries):&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, jorgeh at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009). Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884.&lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10270</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10270"/>
		<updated>2014-07-14T23:20:34Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Step 0: Indexing the MIDIs collection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries made by tapping the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset times).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from users tapping on a keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected, from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Any questions or suggestions can be added directly here, or you can send us an email, qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note that onset files derived from .wav files are not guaranteed to contain perfect onset detection results for the original wav queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Indexing the MIDIs collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the database, your code should do so using the following command-line format. (This step is optional: skip it if you do not need to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named as uniq_key.mid. For example:&lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%.&lt;br /&gt;
&lt;br /&gt;
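For concreteness, the indexing step might be sketched as follows. This is a hypothetical illustration, not the official MIREX harness; the .idx file naming and contents are invented for the example, and a real submission would extract note-onset features from each MIDI file here.&lt;br /&gt;

```python
import os
import sys

def index_collection(db_list_path, workspace_root):
    """Read %dbMidi.list% and write one placeholder index file per MIDI
    into %dir_workspace_root%, keyed by the MIDI's uniq_key."""
    os.makedirs(workspace_root, exist_ok=True)
    with open(db_list_path) as f:
        midi_paths = [line.strip() for line in f if line.strip()]
    for midi_path in midi_paths:
        # "QBT/database/00001.mid" -> key "00001"
        key = os.path.splitext(os.path.basename(midi_path))[0]
        # A real submission would parse the MIDI and store note onsets here.
        with open(os.path.join(workspace_root, key + ".idx"), "w") as out:
            out.write(midi_path + "\n")

if __name__ == "__main__" and len(sys.argv) == 3:
    # invoked as: indexing %dbMidi.list% %dir_workspace_root%
    index_collection(sys.argv[1], sys.argv[2])
```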
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For onset query files (subtask 1), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
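Under the scoring rules above, the Top-N hit rate could be computed from %query_file_list% and the result file along the following lines. This is a hypothetical sketch, not the official evaluator; the helper names are invented for the example.&lt;br /&gt;

```python
def parse_query_list(lines):
    """Map each query file to its ground-truth key (MIDI file name minus .mid)."""
    truth = {}
    for line in lines:
        if line.strip():
            query, midi = line.split()
            truth[query] = midi[:-4] if midi.endswith(".mid") else midi
    return truth

def parse_results(lines):
    """Map each query file to its ranked list of candidate keys."""
    results = {}
    for line in lines:
        if line.strip():
            query, _, candidates = line.partition(":")
            results[query] = candidates.split()
    return results

def hit_rate(truth, results, top_n=10):
    """Fraction of queries whose ground truth appears in the top-N candidates."""
    hits = sum(1 for q, t in truth.items() if t in results.get(q, [])[:top_n])
    return hits / len(truth)
```

Top-5 and Top-1 scoring then follow by changing top_n.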
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, jorgeh at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009). Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884.&lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10269</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10269"/>
		<updated>2014-07-14T23:19:08Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Step 0: Indexing the MIDIs collection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries made by tapping the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset times).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from users tapping on a keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected, from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Any questions or suggestions can be added directly here, or you can send us an email, qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note that onset files derived from .wav files are not guaranteed to contain perfect onset detection results for the original wav queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Indexing the MIDIs collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the database, your code should do so using the following command-line format. (This step is optional: skip it if you do not need to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named as uniq_key.mid. For example:&lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%.&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For onset query files (subtask 1), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, jorgeh at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009). Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884.&lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10268</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10268"/>
		<updated>2014-07-14T23:18:36Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Indexing the MIDIs collection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries made by tapping the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset times).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from users tapping on a keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected, from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Any questions or suggestions can be added directly here, or you can send us an email, qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note that onset files derived from .wav files are not guaranteed to contain perfect onset detection results for the original wav queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Indexing the MIDIs collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the database, your code should do so using the formats below. This step is optional: skip it if you do not need to index or preprocess the MIDI database.&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named as uniq_key.mid. For example:&lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%.&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For onset query files (subtask 1), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, jorgeh at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009). Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884.&lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10267</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10267"/>
		<updated>2014-07-14T23:15:53Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Indexing the MIDIs collection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries made by tapping the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset times).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from users tapping on a keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected, from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Any questions or suggestions can be added directly here, or you can send us an email, qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve target MIDIs, drawn from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note, however, that onset files derived from .wav files are not guaranteed to contain perfect onset detections.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
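&lt;br /&gt;
As a concrete illustration, the Top-10 hit rate described above can be computed as in the following sketch (a minimal example; the query names and candidate lists shown are hypothetical):&lt;br /&gt;

```python
# Minimal sketch of the Top-10 hit rate (all names below are hypothetical).
# `ground_truth` maps each query to its target MIDI key; `results` holds
# the ranked candidate lists returned by a submission.

def hit_rate(results, ground_truth, n=10):
    """Fraction of queries whose target appears among the top-n candidates."""
    hits = sum(1 for query, target in ground_truth.items()
               if target in results.get(query, [])[:n])
    return hits / len(ground_truth)

ground_truth = {"query_00001": "00001", "query_00002": "00002"}
results = {
    "query_00001": ["00025", "00001", "02200"],  # target at rank 2: hit
    "query_00002": ["01547", "02313", "07653"],  # target absent: miss
}
print(hit_rate(results, ground_truth))       # 0.5
print(hit_rate(results, ground_truth, n=1))  # 0.0
```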
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
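&lt;br /&gt;
To make the subtask 3 query representation more concrete, here is one hypothetical way (not the task's prescribed method) to assemble a single vector from tap onset times and vertical tap coordinates: tempo-normalized inter-onset intervals for the rhythm, concatenated with the up/same/down contour of the vertical positions.&lt;br /&gt;

```python
# Hypothetical sketch only: one way to concatenate tap times and vertical
# tap positions into a single query vector for subtask 3. Submissions may
# (and likely will) represent queries differently.

def query_vector(onsets, y_coords):
    # Inter-onset intervals, normalized by their mean for tempo invariance.
    iois = [b - a for a, b in zip(onsets, onsets[1:])]
    mean_ioi = sum(iois) / len(iois)
    rhythm = [ioi / mean_ioi for ioi in iois]
    # Contour of vertical tap positions: +1 (up), 0 (same), -1 (down).
    contour = [(y2 > y1) - (y2 < y1) for y1, y2 in zip(y_coords, y_coords[1:])]
    return rhythm + contour

onsets = [0.0, 0.5, 1.0, 2.0]    # tap times in seconds
y_coords = [120, 180, 180, 90]   # touchscreen pixels
print(query_vector(onsets, y_coords))  # rhythm part, then contour part
```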
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Indexing the MIDIs collection ===&lt;br /&gt;
If your algorithm needs to pre-process (e.g., index) the database, your code should do so using the format below. This step is optional: skip it if you do not need to index or preprocess the MIDI database.&lt;br /&gt;
&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named by its unique key (uniq_key.mid). For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%.&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For subtask 1 (onset queries), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
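&lt;br /&gt;
Putting the formats above together, the top-level I/O of a submission could look like the following skeleton (the matcher itself, match_fn, is a placeholder; only the list and result-file formats are taken from this page):&lt;br /&gt;

```python
import os

# Skeleton of the qbtProgram I/O loop, following the list and result
# formats described on this page. `match_fn` is a placeholder: it should
# return database MIDI paths ranked by similarity to the given query.

def run_qbt(db_list_path, query_list_path, result_path, match_fn):
    with open(db_list_path) as f:
        db_files = [line.strip() for line in f if line.strip()]
    with open(query_list_path) as f:
        # Each line: "<query_file>   <ground_truth>.mid"; only the first
        # column (the query file) is passed to the matcher.
        queries = [line.split()[0] for line in f if line.strip()]
    with open(result_path, "w") as out:
        for query in queries:
            top10 = match_fn(query, db_files)[:10]
            # Report MIDI file names without directory or extension,
            # e.g. "QBT/database/00025.mid" -> "00025".
            keys = [os.path.splitext(os.path.basename(m))[0] for m in top10]
            out.write("%s: %s\n" % (query, " ".join(keys)))
```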
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, jorgeh at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009). Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884. &lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10253</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10253"/>
		<updated>2014-07-01T22:42:10Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Discussions for 2014 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems that retrieve ground-truth MIDI files from queries in which a user taps the onsets of a melody's notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected, from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
CCRMA is very excited to be hosting the QBT task this year!&lt;br /&gt;
&lt;br /&gt;
Any questions or suggestions can be added directly here, or you can send us an email: qbt | at | ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve target MIDIs, drawn from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note, however, that onset files derived from .wav files are not guaranteed to contain perfect onset detections.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Indexing the MIDIs collection ===&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named by its unique key (uniq_key.mid). For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (Note that this step is not required unless you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For subtask 1 (onset queries), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, jorgeh at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009). Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884. &lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10225</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10225"/>
		<updated>2014-07-01T20:57:47Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems that retrieve ground-truth MIDI files from queries in which a user taps the onsets of a melody's notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected, from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve target MIDIs, drawn from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note, however, that onset files derived from .wav files are not guaranteed to contain perfect onset detections.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Indexing the MIDIs collection ===&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named by its unique key (uniq_key.mid). For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (Note that this step is not required unless you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For subtask 1 (onset queries), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, jorgeh at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009). Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884. &lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10188</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10188"/>
		<updated>2014-07-01T03:47:58Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems that retrieve ground-truth MIDI files from queries in which a user taps the onsets of a melody's notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected, from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve target MIDIs, drawn from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note, however, that onset files derived from .wav files are not guaranteed to contain perfect onset detections.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Indexing the MIDIs collection ===&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named by its unique key (uniq_key.mid). For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (Note that this step is not required unless you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For instance, for onset query files (subtask 1), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
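A result file in the layout shown above can be read back with a short parser (a sketch under the assumed layout of one query per line, with a colon separating the query path from the space-separated candidate keys; the sample names are hypothetical):

```python
# Parse a QBT result file: "query_path: key1 key2 ..." per line,
# returning a dict from query path to its ranked candidate list.
def parse_results(text):
    results = {}
    for line in text.strip().splitlines():
        query, _, candidates = line.partition(":")
        results[query.strip()] = candidates.split()
    return results

sample = "qbtQuery/query_00001.onset: 00025 01003 02200"
print(parse_results(sample))
```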
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, jorgeh at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116, paper 6136.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009). Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884.&lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10187</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10187"/>
		<updated>2014-07-01T03:46:37Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries in which users tap the onsets of a melody's notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected, from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note, however, that onset files derived from .wav files are not guaranteed to be perfect transcriptions of the original wav queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Indexing the MIDIs collection ===&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named as uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (Note that this step is required only if you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For instance, for onset query files (subtask 1), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, jorgeh at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009). Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884.&lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10186</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10186"/>
		<updated>2014-07-01T03:36:53Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries in which users tap the onsets of a melody's notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected, from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note, however, that onset files derived from .wav files are not guaranteed to be perfect transcriptions of the original wav queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Indexing the MIDIs collection ===&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named as uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (Note that this step is required only if you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For instance, for onset query files (subtask 1), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, jorgeh at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Chen JCC, and Chen ALP (1998). Query by rhythm: An approach for song retrieval in music databases. Research Issues in Data Engineering, Proceedings of IEEE Eighth International Workshop on Continuous-Media Databases and Applications, 139-146.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). BeatBank - an MPEG-7 compliant query by tapping system. Audio Engineering Society Convention 116.&lt;br /&gt;
&lt;br /&gt;
Eisenberg G, Batke JM, and Sikora T (2004). Efficiently computable similarity measures for query by tapping systems. Proceedings of the Seventh International Conference on Digital Audio Effects (DAFx'04), Naples, Italy, 189-192.&lt;br /&gt;
&lt;br /&gt;
Hanna P, and Robine M (2009). Query by tapping system based on alignment algorithm. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1881-1884.&lt;br /&gt;
&lt;br /&gt;
Hébert S, and Peretz I (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory &amp;amp; Cognition 25:4, 518-533.&lt;br /&gt;
&lt;br /&gt;
Hsiao SJS, Liant T, and Ke HR (2008). A simple algorithm for rhythm similarity comparison. MIREX.&lt;br /&gt;
&lt;br /&gt;
Jang JSR, Lee HR, and Yeh CH (2001). Query by tapping: A new paradigm for content-based music retrieval from acoustic input. Advances in Multimedia Information Processing PCM, 590-597.&lt;br /&gt;
&lt;br /&gt;
Kaneshiro B, Kim HS, Herrera J, Oh J, Berger J, and Slaney M (2013). QBT-extended: An annotated dataset of melodically contoured tapped queries. Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 329-334.&lt;br /&gt;
&lt;br /&gt;
Peters G, Anthony C, and Schwartz M (2005). Song search and retrieval by tapping. Proceedings of the National Conference on Artificial Intelligence 20, 1696.&lt;br /&gt;
&lt;br /&gt;
Peters G, Cukierman D, Anthony C, and Schwartz M (2006). Online music search by tapping. Ambient Intelligence in Everyday Life, 178-197.&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10185</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10185"/>
		<updated>2014-07-01T03:23:03Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries in which users tap the onsets of a melody's notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected, from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note, however, that onset files derived from .wav files are not guaranteed to be perfect transcriptions of the original wav queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Indexing the MIDIs collection ===&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named as uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (Note that this step is required only if you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For instance, for onset query files (subtask 1), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, jorgeh at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10040</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10040"/>
		<updated>2014-05-08T19:24:27Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Test the query files */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries in which users tap the onsets of a melody's notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected, from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note, however, that onset files derived from .wav files are not guaranteed to be perfect transcriptions of the original wav queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
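The Top-N hit-rate scoring used by all subtasks above can be sketched as follows (a minimal illustration, not the official MIREX evaluator; the data-structure names are hypothetical):

```python
def hit_rate(ground_truth, candidates, n=10):
    """Score 1 point per query whose ground-truth MIDI key appears
    among the top-n returned candidates, 0 otherwise, then average."""
    hits = sum(
        1 for query, target in ground_truth.items()
        if target in candidates.get(query, [])[:n]
    )
    return hits / len(ground_truth)

# Hypothetical example: the first query's target is ranked 2nd,
# the second query's target is not returned at all.
gt = {"query_00001": "00025", "query_00002": "00001"}
cands = {
    "query_00001": ["01003", "00025", "02200"],
    "query_00002": ["01547", "02313", "07653"],
}
top10 = hit_rate(gt, cands, n=10)  # 0.5
top1 = hit_rate(gt, cands, n=1)    # 0.0
```

The same function gives the Top-5 and Top-1 variants mentioned above simply by changing `n`.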
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Indexing the MIDIs collection ===&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (Note that this step is not required unless you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For instance, for onset query files (subtask 1), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
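A reader for this result-file format might look like the following (a sketch under the assumption that each line is "query_path: key1 key2 ..."; it is not part of the task specification):

```python
def parse_results(lines):
    """Map each query file path to its ranked list of candidate MIDI keys."""
    results = {}
    for line in lines:
        line = line.strip()
        if not line or line == "...":
            continue  # skip blanks and placeholder rows
        query, _, keys = line.partition(":")
        results[query.strip()] = keys.split()
    return results

sample = [
    "qbtQuery/query_00001.onset: 00025 01003 02200",
    "qbtQuery/query_00002.onset: 01547 02313 07653",
]
ranked = parse_results(sample)
# ranked["qbtQuery/query_00001.onset"][0] is the top candidate, "00025"
```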
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, jorgeh at ccrma dot stanford dot edu&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10039</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10039"/>
		<updated>2014-05-08T19:24:03Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries in which users tap the note onsets of a melody into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected, from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note that onset files derived from .wav files are not guaranteed to contain perfectly detected onsets from the original wave queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Indexing the MIDIs collection ===&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (Note that this step is not required unless you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For instance, for onset query files (subtask 1), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, jorgeh at ccrma dot stanford dot edu&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10038</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10038"/>
		<updated>2014-05-08T19:22:34Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Potential Participants */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries in which users tap the note onsets of a melody into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected, from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note that onset files derived from .wav files are not guaranteed to contain perfectly detected onsets from the original wave queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Indexing the MIDIs collection ===&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (Note that this step is not required unless you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For instance, for onset query files (subtask 1), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
Jorge Herrera, Hyung-Suk Kim, and Blair Kaneshiro, CCRMA, jorgeh at ccrma dot stanford dot edu&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10037</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10037"/>
		<updated>2014-05-08T19:18:15Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Subtask 3: QBT-Extended with symbolic input (under construction) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries in which users tap the note onsets of a melody into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected, from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note that onset files derived from .wav files are not guaranteed to contain perfectly detected onsets from the original wave queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (new for 2014) ===&lt;br /&gt;
* This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Indexing the MIDIs collection ===&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (Note that this step is not required unless you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For instance, for onset query files (subtask 1), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10036</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10036"/>
		<updated>2014-05-08T19:17:36Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Overview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems on retrieving ground-truth MIDI files from queries in which users tap the note onsets of a melody into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory) from 60 participants; 51 ground-truth MIDI files&lt;br /&gt;
** A hidden dataset is currently being collected, from 20 new participants&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note that onset files derived from .wav files are not guaranteed to contain perfectly detected onsets from the original wave queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (under construction) ===&lt;br /&gt;
* '''New for 2014''': This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Indexing the MIDIs collection ===&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (Note that this step is not required unless you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For instance, for onset query files (subtask 1), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
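Putting the input and output formats above together, a submission's I/O layer might look like this minimal Python sketch (hypothetical helper names; &lt;code&gt;match_query&lt;/code&gt; stands in for a real matcher and is not part of the task specification):&lt;br /&gt;
&lt;br /&gt;
```python
# Hypothetical I/O sketch for the formats above: read %query_file_list%
# and write %resultFile% with one 'query: id1 id2 ...' line per query.

def read_query_list(path):
    """Each non-empty line is '<query_file>   <target_midi>'.
    Only the query column is returned; the target column is for evaluation."""
    with open(path) as f:
        return [line.split()[0] for line in f if line.strip()]

def write_results(queries, match_query, result_path, n=10):
    """match_query(query) returns ranked MIDI names (no '.mid' suffix)."""
    with open(result_path, "w") as out:
        for query in queries:
            candidates = match_query(query)[:n]
            out.write(f"{query}: {' '.join(candidates)}\n")
```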
== Potential Participants ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10035</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10035"/>
		<updated>2014-05-08T19:16:35Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Subtask 3: QBT-Extended with symbolic input (under construction) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate how well MIR systems retrieve ground-truth MIDI files from queries made by tapping the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory); 51 ground-truth MIDI files&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note, however, that onset files derived from .wav files are not guaranteed to be perfect transcriptions of the original wave queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (under construction) ===&lt;br /&gt;
* '''New for 2014''': This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Development dataset''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Development evaluation''': Return top 10 candidates for each query file in the development dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Test evaluation''': Return top 10 candidates for each query file in the hidden dataset. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Indexing the MIDI collection ===&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (This step is optional: it is needed only if you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For instance, for onset query files (subtask 1), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10034</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10034"/>
		<updated>2014-05-08T19:08:50Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Subtask 3: QBT-Extended with symbolic input (under construction) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate how well MIR systems retrieve ground-truth MIDI files from queries made by tapping the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory); 51 ground-truth MIDI files&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note, however, that onset files derived from .wav files are not guaranteed to be perfect transcriptions of the original wave queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (under construction) ===&lt;br /&gt;
* '''New for 2014''': This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
* '''Possible secondary evaluation''': Use the hidden dataset.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Indexing the MIDI collection ===&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (This step is optional: it is needed only if you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For instance, for onset query files (subtask 1), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10033</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10033"/>
		<updated>2014-05-08T19:05:27Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Task description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate how well MIR systems retrieve ground-truth MIDI files from queries made by tapping the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory); 51 ground-truth MIDI files&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note, however, that onset files derived from .wav files are not guaranteed to be perfect transcriptions of the original wave queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (under construction) ===&lt;br /&gt;
* '''New for 2014''': This subtask uses a longer query vector concatenating tap times and (pitch) positions.&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files in the QBT-Extended dataset. Both onset times and MIDI note numbers are used.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries). Both onset times and vertical coordinates of taps are considered.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Indexing the MIDI collection ===&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (This step is optional: it is needed only if you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For instance, for onset query files (subtask 1), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10032</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10032"/>
		<updated>2014-05-08T19:02:55Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Task description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate how well MIR systems retrieve ground-truth MIDI files from queries made by tapping the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory); 51 ground-truth MIDI files&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
* '''Evaluations are performed separately on each dataset'''&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times to retrieve target MIDIs from all datasets listed above. These onset files let participants concentrate on similarity matching instead of onset detection. Note, however, that onset files derived from .wav files are not guaranteed to be perfect transcriptions of the original wave queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 3: QBT-Extended with symbolic input (under construction) ===&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files in the QBT-Extended dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times in the QBT-Extended dataset (long-term and short-term memory queries).&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Indexing the MIDI collection ===&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (This step is optional: it is needed only if you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For instance, for onset query files (subtask 1), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10031</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10031"/>
		<updated>2014-05-08T18:59:20Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Overview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate how well MIR systems retrieve ground-truth MIDI files from queries made by tapping the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Kaneshiro et al.'s [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory); 51 ground-truth MIDI files&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Evaluation is performed separately on each dataset'''&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve the target MIDIs. These onset files let participants concentrate on similarity matching instead of onset detection. Note that onset files derived from .wav files are not guaranteed to reflect perfect onset detection on the original wave queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate).&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Indexing the MIDIs collection ===&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (Note that this step is not required unless you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For instance, for onset query files (subtask 1), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10030</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10030"/>
		<updated>2014-05-08T18:58:57Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Overview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems that retrieve ground-truth MIDI files from queries in which users tap the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* Blair Kaneshiro's [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory); 51 ground-truth MIDI files&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Evaluation is performed separately on each dataset'''&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve the target MIDIs. These onset files let participants concentrate on similarity matching instead of onset detection. Note that onset files derived from .wav files are not guaranteed to reflect perfect onset detection on the original wave queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate).&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Indexing the MIDIs collection ===&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (Note that this step is not required unless you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For instance, for onset query files (subtask 1), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10029</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10029"/>
		<updated>2014-05-08T18:58:37Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Task description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems that retrieve ground-truth MIDI files from queries in which users tap the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* CCRMA's [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory); 51 ground-truth MIDI files&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Evaluation is performed separately on each dataset'''&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve the target MIDIs. These onset files let participants concentrate on similarity matching instead of onset detection. Note that onset files derived from .wav files are not guaranteed to reflect perfect onset detection on the original wave queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate).&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Indexing the MIDIs collection ===&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (Note that this step is not required unless you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For instance, for onset query files (subtask 1), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10028</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10028"/>
		<updated>2014-05-08T18:58:00Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Subtask 1: QBT with symbolic input */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems that retrieve ground-truth MIDI files from queries in which users tap the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* CCRMA's [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory); 51 ground-truth MIDI files&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Evaluation is performed separately on each dataset'''&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve the target MIDIs. These onset files let participants concentrate on similarity matching instead of onset detection. Note that onset files derived from .wav files are not guaranteed to reflect perfect onset detection on the original wave queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider Top-5 and Top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Indexing the MIDIs collection ===&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (Note that this step is not required unless you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For instance, for onset query files (subtask 1), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10027</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10027"/>
		<updated>2014-05-08T18:57:47Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Subtask 1: QBT with symbolic input */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems that retrieve ground-truth MIDI files from queries in which users tap the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* CCRMA's [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory); 51 ground-truth MIDI files&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Evaluation is performed separately on each dataset'''&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset times used to retrieve the target MIDIs. These onset files let participants concentrate on similarity matching instead of onset detection. Note that onset files derived from .wav files are not guaranteed to reflect perfect onset detection on the original wave queries.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate). We may also consider top-5 and top-1 scoring.&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Indexing the MIDIs collection ===&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (Note that this step is not required unless you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For instance, for onset query files (subtask 1), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10026</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10026"/>
		<updated>2014-05-08T18:57:07Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Subtask 1: QBT with symbolic input */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR system in retrieving ground-truth MIDI files by tapping the onset of music notes to the microphone. This task provides query files in wave format as well as the corresponding human-label onset time in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* CCRMA's [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory); 51 ground-truth MIDI files&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Evaluation is performed separately on each dataset'''&lt;br /&gt;
* '''Test database''': The set of ground-truth MIDI files corresponding to each dataset.&lt;br /&gt;
* '''Query files''': Text files of onset time to retrieve target MIDIs. These onset files can help participant concentrate on similarity matching instead of onset detection. Onset files derived from .wav files cannot guarantee to have perfect detection result from original wav query files.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate).&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Indexing the MIDIs collection ===&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database midi files named as uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (Note that this step is not required unless you want to index or preprocess the midi database.)&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For subtask 1 (onset queries), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
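Putting the two formats together, a submission's I/O can be sketched as follows (the helper names and the placeholder ranking function are hypothetical; only the line formats follow the specification above):&lt;br /&gt;

```python
# Hypothetical I/O sketch for a QBT submission: parse a
# %query_file_list% ("query_file   target.mid" per line) and write a
# result file ("query_file: id1 id2 ... id10" per line). The ranking
# function is a stand-in for an actual rhythmic-similarity matcher.

def read_query_list(lines):
    pairs = []
    for line in lines:
        parts = line.split()
        if len(parts) == 2:
            query, target = parts
            pairs.append((query, target[:-4]))  # strip ".mid"
    return pairs

def write_results(pairs, rank, out):
    for query, _target in pairs:
        out.write(query + ": " + " ".join(rank(query)[:10]) + "\n")
```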
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10025</id>
		<title>2014:Query by Tapping</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Query_by_Tapping&amp;diff=10025"/>
		<updated>2014-05-08T18:55:56Z</updated>

		<summary type="html">&lt;p&gt;Blair Kaneshiro: /* Overview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The text of this section is copied from the 2013 page. Please add your comments and discussions for 2014. &lt;br /&gt;
&lt;br /&gt;
The main purpose of QBT (Query by Tapping) is to evaluate MIR systems that retrieve ground-truth MIDI files from queries made by tapping the onsets of music notes into a microphone. This task provides query files in wave format as well as the corresponding human-labeled onset times in symbolic format. For this year's QBT task, we have three corpora for evaluation:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBT.rar MIR-QBT]: This dataset contains both wav files (recorded via microphone) and onset files (human-labeled onset time).&lt;br /&gt;
** 890 onset &amp;amp; .wav queries; 136 ground-truth MIDI files&lt;br /&gt;
* Show Hsiao's [http://mirlab.org/dataSet/public/QBT_symbolic.rar QBT_symbolic]: This dataset contains only onset files (obtained from the user's tapping on keyboard).&lt;br /&gt;
** 410 onset queries; 143 ground-truth MIDI files (128 of which have at least one query)&lt;br /&gt;
* CCRMA's [http://ccrma.stanford.edu/groups/qbtextended/data/qbt-extended-onset.zip QBT-Extended]: This dataset contains only onset files (obtained from users tapping on a touchscreen). Documentation can be found [http://ccrma.stanford.edu/groups/qbtextended/dataset.html here].&lt;br /&gt;
** 3,365 onset queries (1,412 from long-term memory and 1,953 from short-term memory); 51 ground-truth MIDI files&lt;br /&gt;
&lt;br /&gt;
== Task description ==&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: QBT with symbolic input ===&lt;br /&gt;
* '''Evaluation is performed separately on each dataset'''&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT and HSIAO.&lt;br /&gt;
* '''Query files''': About 800 text files of onset times used to retrieve the target MIDIs in MIR-QBT/HSIAO. These onset files let participants concentrate on similarity matching instead of onset detection. Note that the onset files are not guaranteed to contain perfect detection results for the original wav query files.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate).&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: QBT with wave input ===&lt;br /&gt;
* '''Test database''': About 150 ground-truth monophonic MIDI files in MIR-QBT.&lt;br /&gt;
* '''Query files''': About 800 wave files of tapping recordings to retrieve MIDIs in MIR-QBT.&lt;br /&gt;
* '''Evaluation''': Return top 10 candidates for each query file. 1 point is scored for a hit in the top 10 and 0 is scored otherwise (Top-10 hit rate).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Command formats ==&lt;br /&gt;
&lt;br /&gt;
=== Indexing the MIDIs collection ===&lt;br /&gt;
Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named by its unique key as uniq_key.mid. For example:&lt;br /&gt;
&lt;br /&gt;
 QBT/database/00001.mid&lt;br /&gt;
 QBT/database/00002.mid&lt;br /&gt;
 QBT/database/00003.mid&lt;br /&gt;
 QBT/database/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (This step is optional; provide it only if you want to index or preprocess the MIDI database.)&lt;br /&gt;
&lt;br /&gt;
=== Test the query files ===&lt;br /&gt;
The command format should be like this:&lt;br /&gt;
&lt;br /&gt;
 qbtProgram %dbMidi_list% %query_file_list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. (You can omit %dir_workspace_root% if you do not need it at all.) If the input query files are onset files (for subtask 1), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.onset   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.onset   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
(Please refer to the readme.txt of the downloaded MIR-QBT corpus for the format of onset files.)&lt;br /&gt;
&lt;br /&gt;
If the input query files are wave files (for subtask 2), then the format of %query_file_list% is like this:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00002.wav   00001.mid&lt;br /&gt;
 qbtQuery/query_00003.wav   00002.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the top-10 candidates for each query. For subtask 1 (onset queries), the result file should have the following format:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.onset: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.onset: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.onset: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
And for subtask 2:&lt;br /&gt;
&lt;br /&gt;
 qbtQuery/query_00001.wav: 00025 01003 02200 ... &lt;br /&gt;
 qbtQuery/query_00002.wav: 01547 02313 07653 ... &lt;br /&gt;
 qbtQuery/query_00003.wav: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note that the output should be the names of the MIDI files (e.g., &amp;lt;code&amp;gt;00025&amp;lt;/code&amp;gt; means &amp;lt;code&amp;gt;00025.mid&amp;lt;/code&amp;gt;); they are not necessarily 5-digit numbers.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Discussions for 2014 ==&lt;/div&gt;</summary>
		<author><name>Blair Kaneshiro</name></author>
		
	</entry>
</feed>