<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Zizzi+wang</id>
	<title>MIREX Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Zizzi+wang"/>
	<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/wiki/Special:Contributions/Zizzi_wang"/>
	<updated>2026-04-30T22:29:46Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.1</generator>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation_Results&amp;diff=14788</id>
		<title>2025:Symbolic Music Generation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation_Results&amp;diff=14788"/>
		<updated>2025-09-11T12:51:06Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= Submissions =&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|- style=&amp;quot;font-weight:bold;&amp;quot;&lt;br /&gt;
! Team&lt;br /&gt;
! Extended Abstract&lt;br /&gt;
! Methods&lt;br /&gt;
|-&lt;br /&gt;
| RWKV (Zhou-Zheng et al.)&lt;br /&gt;
| [https://www.music-ir.org/mirex/wiki/MIREX_HOME]&lt;br /&gt;
| RWKV&lt;br /&gt;
|-&lt;br /&gt;
| PixelGen&lt;br /&gt;
| [https://www.music-ir.org/mirex/wiki/MIREX_HOME]&lt;br /&gt;
| Hierarchical Transformer&lt;br /&gt;
|-&lt;br /&gt;
| MuseCoco (BL-1)&lt;br /&gt;
| [https://arxiv.org/abs/2306.00110]&lt;br /&gt;
| Transformer&lt;br /&gt;
|-&lt;br /&gt;
| Anticipatory Music Transformer (BL-2)&lt;br /&gt;
| [https://arxiv.org/abs/2306.08620]&lt;br /&gt;
| Transformer&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=Results=&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align:center;&amp;quot;&lt;br /&gt;
|- style=&amp;quot;font-weight:bold; vertical-align:center;&amp;quot;&lt;br /&gt;
! rowspan=&amp;quot;2&amp;quot; | Team&lt;br /&gt;
! colspan=&amp;quot;4&amp;quot; | Subjective Evaluation&lt;br /&gt;
|- style=&amp;quot;font-weight:bold; vertical-align:center;&amp;quot;&lt;br /&gt;
| Coherency ↑&lt;br /&gt;
| Structure ↑&lt;br /&gt;
| Creativity ↑&lt;br /&gt;
| Musicality ↑&lt;br /&gt;
|-&lt;br /&gt;
| RWKV (Zhou-Zheng et al.)&lt;br /&gt;
| 3.57 ± 0.10&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 3.58 ± 0.10&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 3.26 ± 0.10&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
| '''3.50 ± 0.10&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;'''&lt;br /&gt;
|-&lt;br /&gt;
| PixelGen&lt;br /&gt;
| 2.39 ± 0.10&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 2.37 ± 0.09&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 2.85 ± 0.09&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 2.48 ± 0.09&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| MuseCoco (BL-1)&lt;br /&gt;
| 3.11 ± 0.10&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 3.07 ± 0.09&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 3.08 ± 0.09&amp;lt;sup&amp;gt;ab&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 2.95 ± 0.09&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Anticipatory Music Transformer (BL-2)&lt;br /&gt;
| '''3.70 ± 0.10&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;'''&lt;br /&gt;
| '''3.69 ± 0.09&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;'''&lt;br /&gt;
| '''3.30 ± 0.10&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;'''&lt;br /&gt;
| 3.45 ± 0.10&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
'''Note''': Results are reported as mean ± sem&amp;lt;sup&amp;gt;s&amp;lt;/sup&amp;gt;, where sem is the standard error of the mean and s is a letter. Different letters within a column indicate significant differences (p &amp;lt; 0.05) based on a Wilcoxon signed-rank test.&lt;br /&gt;
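&lt;br /&gt;
For reference, below is a minimal Python sketch of how a single table entry and a single pairwise comparison could be computed; the rating arrays are made-up placeholders, the use of SciPy is only one possible implementation, and the assignment of letters to groups of systems is not reproduced here.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
from scipy import stats&lt;br /&gt;
&lt;br /&gt;
# Made-up paired ratings from the same set of participants (illustrative only).&lt;br /&gt;
ratings_a = np.array([4, 3, 4, 5, 3, 4, 4, 3])&lt;br /&gt;
ratings_b = np.array([3, 3, 2, 4, 3, 3, 2, 3])&lt;br /&gt;
&lt;br /&gt;
# Mean ± standard error of the mean, as reported in the table above.&lt;br /&gt;
for name, x in [('System A', ratings_a), ('System B', ratings_b)]:&lt;br /&gt;
    print(f'{name}: {x.mean():.2f} ± {stats.sem(x):.2f}')&lt;br /&gt;
&lt;br /&gt;
# Pairwise Wilcoxon signed-rank test on the paired ratings; systems that&lt;br /&gt;
# do not differ significantly would share a letter in the table.&lt;br /&gt;
statistic, p_value = stats.wilcoxon(ratings_a, ratings_b)&lt;br /&gt;
print(f'Wilcoxon signed-rank p = {p_value:.3f}')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;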
&lt;br /&gt;
'''Subjective Evaluation Details''': One piece was cherry-picked from the 8 samples generated for each test piece, resulting in 6 pages of questions. We collected responses from 22 participants (18 complete and 4 partial submissions). For complete submissions, the average completion time was 16 min 59 s.&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation_Results&amp;diff=14771</id>
		<title>2025:Symbolic Music Generation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation_Results&amp;diff=14771"/>
		<updated>2025-09-11T03:27:09Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= Submissions =&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|- style=&amp;quot;font-weight:bold;&amp;quot;&lt;br /&gt;
! Team&lt;br /&gt;
! Extended Abstract&lt;br /&gt;
! Methods&lt;br /&gt;
|-&lt;br /&gt;
| RWKV (Zhou-Zheng et al.)&lt;br /&gt;
| [https://www.music-ir.org/mirex/wiki/MIREX_HOME]&lt;br /&gt;
| RWKV&lt;br /&gt;
|-&lt;br /&gt;
| PixelGen&lt;br /&gt;
| [https://www.music-ir.org/mirex/wiki/MIREX_HOME]&lt;br /&gt;
| Hierarchical Transformer&lt;br /&gt;
|-&lt;br /&gt;
| MuseCoco (BL-1)&lt;br /&gt;
| [https://arxiv.org/abs/2306.00110]&lt;br /&gt;
| Transformer&lt;br /&gt;
|-&lt;br /&gt;
| Anticipatory Music Transformer (BL-2)&lt;br /&gt;
| [https://arxiv.org/abs/2306.08620]&lt;br /&gt;
| Transformer&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=Results=&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align:center;&amp;quot;&lt;br /&gt;
|- style=&amp;quot;font-weight:bold; vertical-align:center;&amp;quot;&lt;br /&gt;
! rowspan=&amp;quot;2&amp;quot; | Team&lt;br /&gt;
! colspan=&amp;quot;4&amp;quot; | Subjective Evaluation&lt;br /&gt;
|- style=&amp;quot;font-weight:bold; vertical-align:center;&amp;quot;&lt;br /&gt;
| Coherency ↑&lt;br /&gt;
| Structure ↑&lt;br /&gt;
| Creativity ↑&lt;br /&gt;
| Musicality ↑&lt;br /&gt;
|-&lt;br /&gt;
| RWKV (Zhou-Zheng et al.)&lt;br /&gt;
| 3.57 ± 0.10&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 3.58 ± 0.10&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 3.26 ± 0.10&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
| '''3.50 ± 0.10&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;'''&lt;br /&gt;
|-&lt;br /&gt;
| PixelGen&lt;br /&gt;
| 2.39 ± 0.10&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 2.37 ± 0.09&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 2.85 ± 0.09&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 2.48 ± 0.09&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| MuseCoco (BL-1)&lt;br /&gt;
| 3.11 ± 0.10&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 3.07 ± 0.09&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 3.08 ± 0.09&amp;lt;sup&amp;gt;ab&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 2.95 ± 0.09&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Anticipatory Music Transformer (BL-2)&lt;br /&gt;
| '''3.70 ± 0.10&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;'''&lt;br /&gt;
| '''3.69 ± 0.09&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;'''&lt;br /&gt;
| '''3.30 ± 0.10&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;'''&lt;br /&gt;
| 3.45 ± 0.10&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
'''Note''': Results are reported as mean ± sem&amp;lt;sup&amp;gt;s&amp;lt;/sup&amp;gt;, where sem is the standard error of the mean and s is a letter. Different letters within a column indicate significant differences (p &amp;lt; 0.05) based on a Wilcoxon signed-rank test.&lt;br /&gt;
&lt;br /&gt;
'''Subjective Evaluation Details''': One piece was cherry-picked from the 16 samples generated for each test piece, resulting in 6 pages of questions. We collected responses from 22 participants (18 complete and 4 partial submissions). For complete submissions, the average completion time was 16 min 59 s.&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation_Results&amp;diff=14770</id>
		<title>2025:Symbolic Music Generation Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation_Results&amp;diff=14770"/>
		<updated>2025-09-11T03:24:47Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: Created page with &amp;quot; = Submissions =  {| class=&amp;quot;wikitable&amp;quot;  |- style=&amp;quot;font-weight:bold;&amp;quot; ! Team ! Extended Abstract ! Methods |- | RWKV (Zhou-Zheng et al.) | [https://www.music-ir.org/mirex/wiki/...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= Submissions =&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|- style=&amp;quot;font-weight:bold;&amp;quot;&lt;br /&gt;
! Team&lt;br /&gt;
! Extended Abstract&lt;br /&gt;
! Methods&lt;br /&gt;
|-&lt;br /&gt;
| RWKV (Zhou-Zheng et al.)&lt;br /&gt;
| [https://www.music-ir.org/mirex/wiki/MIREX_HOME]&lt;br /&gt;
| RWKV&lt;br /&gt;
|-&lt;br /&gt;
| PixelGen&lt;br /&gt;
| [https://www.music-ir.org/mirex/wiki/MIREX_HOME]&lt;br /&gt;
| Hierarchical Transformer&lt;br /&gt;
|-&lt;br /&gt;
| MuseCoco (BL-1)&lt;br /&gt;
| [https://arxiv.org/abs/2306.00110]&lt;br /&gt;
| Transformer&lt;br /&gt;
|-&lt;br /&gt;
| Anticipatory Music Transformer (BL-2)&lt;br /&gt;
| [https://arxiv.org/abs/2306.08620]&lt;br /&gt;
| Transformer&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=Results=&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align:center;&amp;quot;&lt;br /&gt;
|- style=&amp;quot;font-weight:bold; vertical-align:center;&amp;quot;&lt;br /&gt;
! rowspan=&amp;quot;2&amp;quot; | Team&lt;br /&gt;
! colspan=&amp;quot;4&amp;quot; | Subjective Evaluation&lt;br /&gt;
|- style=&amp;quot;font-weight:bold; vertical-align:center;&amp;quot;&lt;br /&gt;
| Coherency ↑&lt;br /&gt;
| Structure ↑&lt;br /&gt;
| Creativity ↑&lt;br /&gt;
| Musicality ↑&lt;br /&gt;
|-&lt;br /&gt;
| RWKV (Zhou-Zheng et al.)&lt;br /&gt;
| 3.57 ± 0.10&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 3.58 ± 0.10&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 3.26 ± 0.10&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
| '''3.50 ± 0.10&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;'''&lt;br /&gt;
|-&lt;br /&gt;
| PixelGen&lt;br /&gt;
| 2.39 ± 0.10&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 2.37 ± 0.09&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 2.85 ± 0.09&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 2.48 ± 0.09&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| MuseCoco (BL-1)&lt;br /&gt;
| 3.11 ± 0.10&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 3.07 ± 0.09&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 3.08 ± 0.09&amp;lt;sup&amp;gt;ab&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 2.95 ± 0.09&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Anticipatory Music Transformer (BL-2)&lt;br /&gt;
| '''3.70 ± 0.10&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;'''&lt;br /&gt;
| '''3.69 ± 0.09&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;'''&lt;br /&gt;
| '''3.30 ± 0.10&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;'''&lt;br /&gt;
| 3.45 ± 0.10&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
'''Note''': Results are reported as mean ± sem&amp;lt;sup&amp;gt;s&amp;lt;/sup&amp;gt;, where sem is the standard error of the mean and s is a letter. Different letters within a column indicate significant differences (p &amp;lt; 0.05) based on a Wilcoxon signed-rank test.&lt;br /&gt;
&lt;br /&gt;
'''Objective Evaluation Details''': Each model generates 16 samples for each of the 6 test pieces. Negative Log-Likelihood (NLL) is computed by inputting the melody and accompaniment into the MuseCoco 1B model.&lt;br /&gt;
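&lt;br /&gt;
As a point of reference for the NLL above, below is a minimal Python sketch of a per-token negative log-likelihood; the log-probability values are placeholders, and the actual MuseCoco 1B scoring interface is not shown here.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import math&lt;br /&gt;
&lt;br /&gt;
# Hypothetical per-token log-probabilities from the scoring model,&lt;br /&gt;
# i.e. log p(token given preceding tokens); the values are placeholders.&lt;br /&gt;
log_probs = [math.log(0.50), math.log(0.25), math.log(0.40)]&lt;br /&gt;
&lt;br /&gt;
# NLL is the negative mean log-likelihood over all tokens.&lt;br /&gt;
nll = -sum(log_probs) / len(log_probs)&lt;br /&gt;
print(f'NLL = {nll:.3f} nats per token')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;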
&lt;br /&gt;
'''Subjective Evaluation Details''': One piece was cherry-picked from the 16 samples generated for each test piece, resulting in 6 pages of questions. We collected responses from 22 participants (18 complete and 4 partial submissions). For complete submissions, the average completion time was 16 min 59 s.&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14728</id>
		<title>2025:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14728"/>
		<updated>2025-08-18T01:21:33Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Submission */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation covers a wide range of tasks and settings, including varying types of control, generation objectives (e.g., continuation, inpainting), and representations (e.g., score, performance, single- or multi-track). In MIREX, we narrow this scope each year to focus on a specific subtask.&lt;br /&gt;
&lt;br /&gt;
For this year’s challenge, the selected task is '''Piano Music Continuation'''. Given a 4-measure piano prompt (plus an optional pickup measure), the goal is to generate a 12-measure continuation that is musically coherent with the prompt, forming a complete 16-measure piece. All music is assumed to be in 4/4 time and quantized to sixteenth-note resolution. The continuation should match the style of the prompt, which may vary across classical, pop, jazz, or other existing styles. Further details are provided in the following sections. &lt;br /&gt;
&lt;br /&gt;
Please refer to [https://github.com/ZZWaang/mirex2025-musecoco this repository] to access the baseline method and learn more about the submission format.&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
Both the input prompt and the output generation should be stored in JSON format. Specifically, music is represented as a list of notes, each of which contains &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.&lt;br /&gt;
&lt;br /&gt;
The prompt is stored under the key &amp;lt;code&amp;gt;prompt&amp;lt;/code&amp;gt; and lasts 5 measures (the first measure is the pickup measure). Below is an example prompt:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;prompt&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 72,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 6&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 57,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 14&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The generation is stored under the key &amp;lt;code&amp;gt;generation&amp;lt;/code&amp;gt; and lasts 12 measures. Below is an example generation:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Generation&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;generation&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the above examples, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes are counted in sixteenth notes. Since the data is assumed to be in 4/4 meter and quantized to sixteenth-note resolution, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of a prompt note should range from 0 to 79 (0-15 is the pickup measure) and the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of a generation note should range from 80 to 271. The &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; property of a note should be an integer from 0 to 127, corresponding to the MIDI pitch number.&lt;br /&gt;
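&lt;br /&gt;
As an illustration of these constraints, below is a minimal Python sketch (not part of the official tooling) that checks a generation file against the stated ranges; the file name is only an example, and the assumption that every duration is at least one sixteenth note is ours.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import json&lt;br /&gt;
&lt;br /&gt;
def check_generation(path):&lt;br /&gt;
    # Load the note list stored under the &amp;quot;generation&amp;quot; key.&lt;br /&gt;
    with open(path) as f:&lt;br /&gt;
        notes = json.load(f)['generation']&lt;br /&gt;
    for note in notes:&lt;br /&gt;
        assert 80 &amp;lt;= note['start'] &amp;lt;= 271, 'start must lie in 80-271'&lt;br /&gt;
        assert 0 &amp;lt;= note['pitch'] &amp;lt;= 127, 'pitch must be a MIDI pitch number'&lt;br /&gt;
        assert note['duration'] &amp;gt;= 1, 'duration is counted in sixteenth notes'&lt;br /&gt;
    print(f'{len(notes)} notes passed the checks')&lt;br /&gt;
&lt;br /&gt;
check_generation('sample_01.json')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;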
&lt;br /&gt;
=Evaluation and Competition Format=&lt;br /&gt;
We will evaluate the submitted algorithms through an '''online subjective double-blind test'''. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two prompts.''' The organizer team will supplement the prompts if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* '''We welcome both challenge participants and non-participants to submit plans for objective evaluation.''' Evaluation methods may be incorporated as reference benchmarks and could inform the development of future evaluation metrics.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits its algorithm, the organizer team will use it to generate '''8 continuations''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, we will first ask the subjects to listen to the prompt and then to the generated samples, presented in random order.&lt;br /&gt;
* Each subject will be asked to rate each continuation based on the following criteria:&lt;br /&gt;
:* Coherency to the prompt (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Structuredness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
* '''Aug 15, 2025''': Submit two prompts as a part of the test set. &lt;br /&gt;
* '''Aug 21, 2025''': Submit the main algorithm.&lt;br /&gt;
* '''Aug 26, 2025''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Aug 28, 2025''': Submit the cherry-picked sample ids.&lt;br /&gt;
* '''Aug 30 - Sep 5, 2025''': Online subjective evaluation.&lt;br /&gt;
* '''Sep 6, 2025''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As described in the Evaluation and Competition Format, there are four types of submissions. Below is a list of them:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Task !! Submission Method !! Deadline&lt;br /&gt;
|-&lt;br /&gt;
| 2 Prompts for the test set || Email JSON files to organizers. || Aug 15, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Algorithm || Email code, a GitHub link, or a Docker image to the organizers. See Algorithm Submission below. || Aug 21, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Cherry-picked IDs || Email IDs to organizers. || Aug 28, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Evaluation metric (optional) || Email organizers. || Aug 21, 2025&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;generation.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use the script to generate output files using the following format:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./generation.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: path to the input .json file.&lt;br /&gt;
* Output Folder: path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: number of samples to generate.&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
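&lt;br /&gt;
As an illustration of the interface described above, below is a minimal Python sketch of the kind of script that &amp;lt;code&amp;gt;generation.sh&amp;lt;/code&amp;gt; could invoke; the file name &amp;lt;code&amp;gt;generate.py&amp;lt;/code&amp;gt; and the dummy model call are placeholders rather than part of any provided code.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Hypothetical skeleton that generation.sh could call as:&lt;br /&gt;
#   python generate.py /path/to/input.json /path/to/output_folder n_sample&lt;br /&gt;
import json&lt;br /&gt;
import os&lt;br /&gt;
import sys&lt;br /&gt;
&lt;br /&gt;
def dummy_model(prompt):&lt;br /&gt;
    # Placeholder: replace with an actual continuation model.&lt;br /&gt;
    return [{'start': 80, 'pitch': 60, 'duration': 4}]&lt;br /&gt;
&lt;br /&gt;
def main(input_path, output_dir, n_sample):&lt;br /&gt;
    with open(input_path) as f:&lt;br /&gt;
        prompt = json.load(f)['prompt']&lt;br /&gt;
    os.makedirs(output_dir, exist_ok=True)&lt;br /&gt;
    # Write sample_01.json, sample_02.json, and so on into the output folder.&lt;br /&gt;
    for k in range(1, int(n_sample) + 1):&lt;br /&gt;
        out_path = os.path.join(output_dir, f'sample_{k:02d}.json')&lt;br /&gt;
        with open(out_path, 'w') as f:&lt;br /&gt;
            json.dump({'generation': dummy_model(prompt)}, f)&lt;br /&gt;
&lt;br /&gt;
if __name__ == '__main__':&lt;br /&gt;
    main(sys.argv[1], sys.argv[2], sys.argv[3])&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;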
&lt;br /&gt;
=Baseline=&lt;br /&gt;
&lt;br /&gt;
We provide a baseline algorithm in [https://github.com/ZZWaang/mirex2025-musecoco this repository]. It is modified from the MuseCoco model (Lu et al., 2023). Please also refer to this repository for the data format and generation protocol.&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14712</id>
		<title>2025:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14712"/>
		<updated>2025-07-09T07:15:00Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation covers a wide range of tasks and settings, including varying types of control, generation objectives (e.g., continuation, inpainting), and representations (e.g., score, performance, single- or multi-track). In MIREX, we narrow this scope each year to focus on a specific subtask.&lt;br /&gt;
&lt;br /&gt;
For this year’s challenge, the selected task is '''Piano Music Continuation'''. Given a 4-measure piano prompt (plus an optional pickup measure), the goal is to generate a 12-measure continuation that is musically coherent with the prompt, forming a complete 16-measure piece. All music is assumed to be in 4/4 time and quantized to sixteenth-note resolution. The continuation should match the style of the prompt, which may vary across classical, pop, jazz, or other existing styles. Further details are provided in the following sections. &lt;br /&gt;
&lt;br /&gt;
Please refer to [https://github.com/ZZWaang/mirex2025-musecoco this repository] to access the baseline method and learn more about the submission format.&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
Both the input prompt and the output generation should be stored in JSON format. Specifically, music is represented as a list of notes, each of which contains &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.&lt;br /&gt;
&lt;br /&gt;
The prompt is stored under the key &amp;lt;code&amp;gt;prompt&amp;lt;/code&amp;gt; and lasts 5 measures (the first measure is the pickup measure). Below is an example prompt:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;prompt&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 72,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 6&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 57,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 14&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The generation is stored under the key &amp;lt;code&amp;gt;generation&amp;lt;/code&amp;gt; and lasts 12 measures. Below is an example generation:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Generation&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;generation&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the above examples, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes are counted in sixteenth notes. Since the data is assumed to be in 4/4 meter and quantized to sixteenth-note resolution, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of a prompt note should range from 0 to 79 (0-15 is the pickup measure) and the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of a generation note should range from 80 to 271. The &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; property of a note should be an integer from 0 to 127, corresponding to the MIDI pitch number.&lt;br /&gt;
&lt;br /&gt;
=Evaluation and Competition Format=&lt;br /&gt;
We will evaluate the submitted algorithms through an '''online subjective double-blind test'''. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two prompts.''' The organizer team will supplement the prompts if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* '''We welcome both challenge participants and non-participants to submit plans for objective evaluation.''' Evaluation methods may be incorporated as reference benchmarks and could inform the development of future evaluation metrics.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits its algorithm, the organizer team will use it to generate '''8 continuations''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, we will first ask the subjects to listen to the prompt and then to the generated samples, presented in random order.&lt;br /&gt;
* Each subject will be asked to rate each continuation based on the following criteria:&lt;br /&gt;
:* Coherency to the prompt (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Structuredness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
* '''Aug 15, 2025''': Submit two prompts as a part of the test set. &lt;br /&gt;
* '''Aug 21, 2025''': Submit the main algorithm.&lt;br /&gt;
* '''Aug 26, 2025''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Aug 28, 2025''': Submit the cherry-picked sample ids.&lt;br /&gt;
* '''Aug 30 - Sep 5, 2025''': Online subjective evaluation.&lt;br /&gt;
* '''Sep 6, 2025''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As described in the Evaluation and Competition Format, there are four types of submissions. Below is a list of them:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Task !! Submission Method !! Deadline&lt;br /&gt;
|-&lt;br /&gt;
| 2 Prompts for the test set || Email JSON files to organizers. || Aug 15, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Algorithm || Email code, a GitHub link, or a Docker image to the organizers. See Algorithm Submission below. || Aug 21, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Cherry-picked IDs || Email IDs to organizers. || Aug 28, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Evaluation metric (optional) || Email organizers. || Aug 21, 2025&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;generation.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use the script to generate output files using the following format:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./generation.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: path to the input .json file.&lt;br /&gt;
* Output Folder: path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: number of samples to generate.&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
=Baseline=&lt;br /&gt;
&lt;br /&gt;
We provide a baseline algorithm in [https://github.com/ZZWaang/mirex2025-musecoco this repository]. It is modified from the MuseCoco model (Lu et al., 2023). Please also refer to this repository for the data format and generation protocol.&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=MIREX_HOME&amp;diff=14711</id>
		<title>MIREX HOME</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=MIREX_HOME&amp;diff=14711"/>
		<updated>2025-07-09T07:13:33Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Task Descriptions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Welcome to MIREX 2025==&lt;br /&gt;
&lt;br /&gt;
After a three-year break, we are bringing back the MIREX (Music Information Retrieval Evaluation eXchange) competition starting in 2024, with new tasks, benchmarks, and datasets in response to the rapid development of computer music research.&lt;br /&gt;
&lt;br /&gt;
The MIREX community will hold its annual meeting as part of [https://ismir.net/ The International Society for Music Information Retrieval Conference]. This year, the conference will be held in [https://ismir2025.ismir.net/ Daejeon, South Korea] from September 21-25, 2025.&lt;br /&gt;
&lt;br /&gt;
In the long run, we want to make MIREX a platform for researchers to share their latest results, compare their systems with others, and promote the development of the field.&lt;br /&gt;
&lt;br /&gt;
==Task Descriptions==&lt;br /&gt;
&lt;br /&gt;
Traditional MIR tasks&lt;br /&gt;
* [[2025:Audio Chord Estimation]] &amp;lt;TC: [mailto:jj2731@nyu.edu Junyan Jiang]&amp;gt;&lt;br /&gt;
* [[2025:Lyrics Transcription]] &amp;lt;TC: [mailto:ruibiny@alumni.cmu.edu Ruibin Yuan] &amp;amp; [mailto:jj2731@nyu.edu Junyan Jiang]&amp;gt;&lt;br /&gt;
* [[2025:Cover Song Identification]] &amp;lt;TC: [mailto:x.du@rochester.edu Xingjian Du] &amp;amp; [mailto:ruibiny@alumni.cmu.edu Ruibin Yuan]&amp;gt;&lt;br /&gt;
* [[2025:Music Structure Analysis]] &amp;lt;TC: [mailto:yixiao.zhang@qmul.ac.uk Yixiao Zhang] &amp;amp; [mailto:ruibiny@alumni.cmu.edu Ruibin Yuan]&amp;gt;&lt;br /&gt;
* [[2025:Audio Beat Tracking]] &amp;lt;TC: [mailto:mwysjtu@gmail.com Wenye Ma] &amp;amp; [mailto:yinghao.ma@qmul.ac.uk Yinghao Ma]&amp;gt;&lt;br /&gt;
* [[2025:Audio Key Detection]] &amp;lt;TC: [mailto:mwysjtu@gmail.com Wenye Ma] &amp;amp; [mailto:yinghao.ma@qmul.ac.uk Yinghao Ma]&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Modern MIR Tasks&lt;br /&gt;
* [[2025:Symbolic Music Generation]] &amp;lt;TC: [mailto:ziyu.wang@nyu.edu Ziyu Wang] &amp;amp; [mailto:jzhao@u.nus.edu Jingwei Zhao]&amp;gt;&lt;br /&gt;
* [[2025:Music Audio Generation]] &amp;lt;TC: [mailto:ruibiny@alumni.cmu.edu Ruibin Yuan]&amp;gt;&lt;br /&gt;
* [[2025:Music Description &amp;amp; Captioning]] &amp;lt;TC: [mailto:yixiao.zhang@qmul.ac.uk Yixiao Zhang] &amp;amp; [mailto:ruibiny@alumni.cmu.edu Ruibin Yuan]&amp;gt;&lt;br /&gt;
* [[2025:Polyphonic Transcription]] &amp;lt;TC: [mailto:yunglu@purdue.edu Yung-Hsiang Lu], [mailto:yun98@purdue.edu Kristen Yeon-Ji Yun], [mailto:ziyu.wang@nyu.edu Ziyu Wang], [mailto:yujia.yan@rochester.edu Yujia Yan]&amp;gt;&lt;br /&gt;
* [[2025:Song Deepfake Detection]] &amp;lt;TC: [mailto:you.zhang@rochester.edu Neil Zhang]&amp;gt;&lt;br /&gt;
* [[2025:Music Reasoning QA]] &amp;lt;TC: [mailto:yinghao.ma@qmul.ac.uk Yinghao Ma]&amp;gt;&lt;br /&gt;
* [[2025:RenCon]] (Expressive Piano Performance Rendering Competition) &amp;lt;TC: [mailto:huan.zhang@qmul.ac.uk Huan Zhang]&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Call for Challenges==&lt;br /&gt;
&lt;br /&gt;
Starting with MIREX 2024, we invite the ISMIR community to propose new research challenges that address cutting-edge problems in Music Information Retrieval (MIR). These challenges should aim to push the boundaries of current research and foster innovation in the field.&lt;br /&gt;
&lt;br /&gt;
We also welcome challenge sponsors from both industry and research institutions, particularly those willing to contribute datasets and computational resources to support the competition.&lt;br /&gt;
&lt;br /&gt;
For the format and requirements for the challenge proposal, please go to [[2025:Call for Challenges]].&lt;br /&gt;
&lt;br /&gt;
===What's new:===&lt;br /&gt;
&lt;br /&gt;
Starting with MIREX 2025, we invite the ISMIR community to participate in shaping the future of Music Information Retrieval (MIR) by either '''proposing new research challenges''' or '''volunteering as task captains''' for existing ones. &lt;br /&gt;
&lt;br /&gt;
* '''New challenge proposals''' should aim to address cutting-edge problems and push the boundaries of current MIR research. &lt;br /&gt;
* '''Task captains for established tasks''' are encouraged to help revitalize previous tasks—potentially by updating evaluation methodologies, datasets, or other aspects to reflect recent advances in the field.&lt;br /&gt;
&lt;br /&gt;
Task Captain Responsibilities:&lt;br /&gt;
&lt;br /&gt;
* Register on the [https://www.music-ir.org/mirex MIREX Wiki] and maintain a task description page.&lt;br /&gt;
* Collect submissions via the MIREX submission server (or provide customized submission instructions).&lt;br /&gt;
* Execute and evaluate the submissions.&lt;br /&gt;
* Report results to MIREX and create a results page on the MIREX Wiki.&lt;br /&gt;
* (Optional) Present a MIREX task captain poster at the Late-Breaking and Demo (LBD) session at ISMIR 2025.&lt;br /&gt;
&lt;br /&gt;
==How to Participate==&lt;br /&gt;
&lt;br /&gt;
See also the general [[Submission Guidelines]].&lt;br /&gt;
&lt;br /&gt;
* Read the [[Participant Agreement]] and task description carefully.&lt;br /&gt;
* Program your system.&lt;br /&gt;
* Write a 2-4 page extended abstract PDF describing your system.&lt;br /&gt;
* Submit your system and extended abstract to the [http://futuremirex.com/submission MIREX submission site].&lt;br /&gt;
* Top-performing teams will have the opportunity to present their MIREX posters at the LBD session at ISMIR 2025.&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;del&amp;gt;Challenge proposals due: May 9, 2025&amp;lt;/del&amp;gt;&lt;br /&gt;
* &amp;lt;del&amp;gt;Notification of acceptance: May 16, 2025&amp;lt;/del&amp;gt;&lt;br /&gt;
* Submission open: May 31, 2025&lt;br /&gt;
* Submission close: Sept 1, 2025 (Some tasks may have a different deadline; see task descriptions)&lt;br /&gt;
* Results published: Sept 10, 2025 (Some tasks may have a different deadline; see task descriptions)&lt;br /&gt;
&lt;br /&gt;
==Contact Us==&lt;br /&gt;
&lt;br /&gt;
====Email====&lt;br /&gt;
&lt;br /&gt;
For general questions, feedback, and suggestions, please send messages to our mailing list [mailto:future-mirex@googlegroups.com future-mirex@googlegroups.com].&lt;br /&gt;
&lt;br /&gt;
For task-specific questions, we have listed the email for each task captain [[MIREX_HOME#Task_Descriptions|here]].&lt;br /&gt;
&lt;br /&gt;
====Discord Server====&lt;br /&gt;
&lt;br /&gt;
For real-time discussion with the MIREX organizers or task captains, you may join our [https://discord.gg/vC2YWX29sC discord server].&lt;br /&gt;
&lt;br /&gt;
Note: some task captains are not in the Discord server.&lt;br /&gt;
&lt;br /&gt;
====LinkedIn Organization Page====&lt;br /&gt;
&lt;br /&gt;
You may visit our LinkedIn organization page [https://www.linkedin.com/company/future-mirex/ here].&lt;br /&gt;
&lt;br /&gt;
We are looking forward to seeing you at MIREX 2025!&lt;br /&gt;
&lt;br /&gt;
Future MIREX Team, 2025&lt;br /&gt;
&lt;br /&gt;
MIREX 2025 Organizers:&lt;br /&gt;
* Gus Xia, MBZUAI&lt;br /&gt;
* Junyan Jiang, New York University&lt;br /&gt;
* Akira Maezawa, Yamaha &lt;br /&gt;
* Ziyu Wang, New York University&lt;br /&gt;
* Yixiao Zhang, ByteDance Inc.&lt;br /&gt;
* Ruibin Yuan, Hong Kong University of Science and Technology&lt;br /&gt;
* J. Stephen Downie, University of Illinois&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=MIREX_HOME&amp;diff=14710</id>
		<title>MIREX HOME</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=MIREX_HOME&amp;diff=14710"/>
		<updated>2025-07-09T07:13:05Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Task Descriptions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Welcome to MIREX 2025==&lt;br /&gt;
&lt;br /&gt;
After a three-year break, we are bringing back the MIREX (Music Information Retrieval Evaluation eXchange) competition starting in 2024, with new tasks, benchmarks, and datasets in response to the rapid development of computer music research.&lt;br /&gt;
&lt;br /&gt;
The MIREX community will hold its annual meeting as part of [https://ismir.net/ The International Society for Music Information Retrieval Conference]. This year, the conference will be held in [https://ismir2025.ismir.net/ Daejeon, South Korea] from September 21-25, 2025.&lt;br /&gt;
&lt;br /&gt;
In the long run, we want to make MIREX a platform for researchers to share their latest results, compare their systems with others, and promote the development of the field.&lt;br /&gt;
&lt;br /&gt;
==Task Descriptions==&lt;br /&gt;
&lt;br /&gt;
Traditional MIR tasks&lt;br /&gt;
* [[2025:Audio Chord Estimation]] &amp;lt;TC: [mailto:jj2731@nyu.edu Junyan Jiang]&amp;gt;&lt;br /&gt;
* [[2025:Lyrics Transcription]] &amp;lt;TC: [mailto:ruibiny@alumni.cmu.edu Ruibin Yuan] &amp;amp; [mailto:jj2731@nyu.edu Junyan Jiang]&amp;gt;&lt;br /&gt;
* [[2025:Cover Song Identification]] &amp;lt;TC: [mailto:x.du@rochester.edu Xingjian Du] &amp;amp; [mailto:ruibiny@alumni.cmu.edu Ruibin Yuan]&amp;gt;&lt;br /&gt;
* [[2025:Music Structure Analysis]] &amp;lt;TC: [mailto:yixiao.zhang@qmul.ac.uk Yixiao Zhang] &amp;amp; [mailto:ruibiny@alumni.cmu.edu Ruibin Yuan]&amp;gt;&lt;br /&gt;
* [[2025:Audio Beat Tracking]] &amp;lt;TC: [mailto:mwysjtu@gmail.com Wenye Ma] &amp;amp; [mailto:yinghao.ma@qmul.ac.uk Yinghao Ma]&amp;gt;&lt;br /&gt;
* [[2025:Audio Key Detection]] &amp;lt;TC: [mailto:mwysjtu@gmail.com Wenye Ma] &amp;amp; [mailto:yinghao.ma@qmul.ac.uk Yinghao Ma]&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Modern MIR Tasks&lt;br /&gt;
* [[2025:Symbolic Music Generation]] &amp;lt;TC: [mailto:ziyu.wang@nyu.edu Ziyu Wang] &amp;amp; [mailto:jzhao@u.nus.edu Jingwei Zhao]&amp;gt;&lt;br /&gt;
* [[2025:Music Audio Generation]] &amp;lt;TC: [mailto:ruibiny@alumni.cmu.edu Ruibin Yuan]&amp;gt;&lt;br /&gt;
* [[2025:Music Description &amp;amp; Captioning]] &amp;lt;TC: [mailto:yixiao.zhang@qmul.ac.uk Yixiao Zhang] &amp;amp; [mailto:ruibiny@alumni.cmu.edu Ruibin Yuan]&amp;gt;&lt;br /&gt;
* [[2025:Polyphonic Transcription]] &amp;lt;TC: [mailto:yunglu@purdue.edu Yung-Hsiang Lu], [mailto:yun98@purdue.edu Kristen Yeon-Ji Yun], [mailto:ziyu.wang@nyu.edu Ziyu Wang], [mailto:yujia.yan@rochester.edu Yujia Yan]&amp;gt;&lt;br /&gt;
* [[2025:Song Deepfake Detection]] &amp;lt;TC: [mailto:you.zhang@rochester.edu Neil Zhang]&amp;gt;&lt;br /&gt;
* [[2025:Music Reasoning QA]] &amp;lt;TC: [mailto:yinghao.ma@qmul.ac.uk Yinghao Ma]&amp;gt;&lt;br /&gt;
* [[2025:RenCon]] (Expressive Piano Performance Rendering Competition) &amp;lt;TC: [mailto:huan.zhang@qmul.ac.uk Huan Zhang]&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Call for Challenges==&lt;br /&gt;
&lt;br /&gt;
Starting with MIREX 2024, we invite the ISMIR community to propose new research challenges that address cutting-edge problems in Music Information Retrieval (MIR). These challenges should aim to push the boundaries of current research and foster innovation in the field.&lt;br /&gt;
&lt;br /&gt;
We also welcome challenge sponsors from both industry and research institutions, particularly those willing to contribute datasets and computational resources to support the competition.&lt;br /&gt;
&lt;br /&gt;
For the format and requirements for the challenge proposal, please go to [[2025:Call for Challenges]].&lt;br /&gt;
&lt;br /&gt;
===What's new:===&lt;br /&gt;
&lt;br /&gt;
Starting with MIREX 2025, we invite the ISMIR community to participate in shaping the future of Music Information Retrieval (MIR) by either '''proposing new research challenges''' or '''volunteering as task captains''' for existing ones. &lt;br /&gt;
&lt;br /&gt;
* '''New challenge proposals''' should aim to address cutting-edge problems and push the boundaries of current MIR research. &lt;br /&gt;
* '''Task captains for established tasks''' are encouraged to help revitalize previous tasks—potentially by updating evaluation methodologies, datasets, or other aspects to reflect recent advances in the field.&lt;br /&gt;
&lt;br /&gt;
Task Captain Responsibilities:&lt;br /&gt;
&lt;br /&gt;
* Register on the [https://www.music-ir.org/mirex MIREX Wiki] and maintain a task description page.&lt;br /&gt;
* Collect submissions via the MIREX submission server (or provide customized submission instructions).&lt;br /&gt;
* Execute and evaluate the submissions.&lt;br /&gt;
* Report results to MIREX and create a results page on the MIREX Wiki.&lt;br /&gt;
* (Optional) Present a MIREX task captain poster at the Late-Breaking and Demo (LBD) session at ISMIR 2025.&lt;br /&gt;
&lt;br /&gt;
==How to Participate==&lt;br /&gt;
&lt;br /&gt;
See also the general [[Submission Guidelines]].&lt;br /&gt;
&lt;br /&gt;
* Read the [[Participant Agreement]] and task description carefully.&lt;br /&gt;
* Program your system.&lt;br /&gt;
* Write a 2-4 page extended abstract PDF describing your system.&lt;br /&gt;
* Submit your system and extended abstract to the [http://futuremirex.com/submission MIREX submission site].&lt;br /&gt;
* Top-performing teams will have the opportunity to present their MIREX posters at the LBD session at ISMIR 2025.&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;del&amp;gt;Challenge proposals due: May 9, 2025&amp;lt;/del&amp;gt;&lt;br /&gt;
* &amp;lt;del&amp;gt;Notification of acceptance: May 16, 2025&amp;lt;/del&amp;gt;&lt;br /&gt;
* Submission open: May 31, 2025&lt;br /&gt;
* Submission close: Sept 1, 2025 (Some tasks may have a different deadline; see task descriptions)&lt;br /&gt;
* Results published: Sept 10, 2025 (Some tasks may have a different deadline; see task descriptions)&lt;br /&gt;
&lt;br /&gt;
==Contact Us==&lt;br /&gt;
&lt;br /&gt;
====Email====&lt;br /&gt;
&lt;br /&gt;
For general questions, feedback, and suggestions, please send messages to our mailing list [mailto:future-mirex@googlegroups.com future-mirex@googlegroups.com].&lt;br /&gt;
&lt;br /&gt;
For task-specific questions, we have listed the email for each task captain [[MIREX_HOME#Task_Descriptions|here]].&lt;br /&gt;
&lt;br /&gt;
====Discord Server====&lt;br /&gt;
&lt;br /&gt;
For real-time discussion with the MIREX organizers or task captains, you may join our [https://discord.gg/vC2YWX29sC discord server].&lt;br /&gt;
&lt;br /&gt;
Note: some task captains are not in the Discord server.&lt;br /&gt;
&lt;br /&gt;
====LinkedIn Organization Page====&lt;br /&gt;
&lt;br /&gt;
You may visit our LinkedIn organization page [https://www.linkedin.com/company/future-mirex/ here].&lt;br /&gt;
&lt;br /&gt;
We are looking forward to seeing you at MIREX 2025!&lt;br /&gt;
&lt;br /&gt;
Future MIREX Team, 2025&lt;br /&gt;
&lt;br /&gt;
MIREX 2025 Organizers:&lt;br /&gt;
* Gus Xia, MBZUAI&lt;br /&gt;
* Junyan Jiang, New York University&lt;br /&gt;
* Akira Maezawa, Yamaha &lt;br /&gt;
* Ziyu Wang, New York University&lt;br /&gt;
* Yixiao Zhang, ByteDance Inc.&lt;br /&gt;
* Ruibin Yuan, Hong Kong University of Science and Technology&lt;br /&gt;
* J. Stephen Downie, University of Illinois&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14709</id>
		<title>2025:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14709"/>
		<updated>2025-07-09T07:10:07Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation covers a wide range of tasks and settings, including varying types of control, generation objectives (e.g., continuation, inpainting), and representations (e.g., score, performance, single- or multi-track). In MIREX, we narrow this scope each year to focus on a specific subtask.&lt;br /&gt;
&lt;br /&gt;
For this year’s challenge, the selected task is '''Piano Music Continuation'''. Given a 4-measure piano prompt (plus an optional pickup measure), the goal is to generate a 12-measure continuation that is musically coherent with the prompt, forming a complete 16-measure piece. All music is assumed to be in 4/4 time and quantized to sixteenth-note resolution. The continuation should match the style of the prompt, which may vary across classical, pop, jazz, or other existing styles. Further details are provided in the following sections.&lt;br /&gt;
&lt;br /&gt;
Please refer to [https://github.com/ZZWaang/mirex2025-musecoco this repository] to access the baseline method and learn more about the submission format.&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
Both the input prompt and the output generation should be stored in JSON format. Specifically, music is represented as a list of notes, each of which contains &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.&lt;br /&gt;
&lt;br /&gt;
The prompt is stored under the key &amp;lt;code&amp;gt;prompt&amp;lt;/code&amp;gt; and lasts 5 measures (the first measure is the pickup measure). Below is an example prompt:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;prompt&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 72,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 6&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 57,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 14&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The generation is stored under the key &amp;lt;code&amp;gt;generation&amp;lt;/code&amp;gt; and lasts 12 measures. Below is an example generation:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Generation&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;generation&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the above examples, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes are counted in sixteenth notes. Since the data is assumed to be in 4/4 meter and quantized to sixteenth-note resolution, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of a prompt note should range from 0 to 79 (0-15 is the pickup measure) and the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of a generation note should range from 80 to 271. The &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; property of a note should be an integer from 0 to 127, corresponding to the MIDI pitch number.&lt;br /&gt;
&lt;br /&gt;
=Evaluation and Competition Format=&lt;br /&gt;
We will evaluate the submitted algorithms through an '''online subjective double-blind test'''. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two prompts.''' The organizer team will supplement the prompts if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* '''We welcome both challenge participants and non-participants to submit plans for objective evaluation.''' Evaluation methods may be incorporated as reference benchmarks and could inform the development of future evaluation metrics.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits its algorithm, the organizer team will use it to generate '''8 continuations''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, we will first ask the subjects to listen to the prompt and then to the generated samples, presented in random order.&lt;br /&gt;
* Each subject will be asked to rate each continuation based on the following criteria:&lt;br /&gt;
:* Coherency to the prompt (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Structuredness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
* '''Aug 15, 2025''': Submit two prompts as a part of the test set. &lt;br /&gt;
* '''Aug 21, 2025''': Submit the main algorithm.&lt;br /&gt;
* '''Aug 26, 2025''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Aug 28, 2025''': Submit the cherry-picked sample ids.&lt;br /&gt;
* '''Aug 30 - Sep 5, 2025''': Online subjective evaluation.&lt;br /&gt;
* '''Sep 6, 2025''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As described in the Evaluation and Competition Format, there are four types of submissions. Below is a list of them:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Task !! Submission Method !! Deadline&lt;br /&gt;
|-&lt;br /&gt;
| 2 Prompts for the test set || Email JSON files to organizers. || Aug 15, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Algorithm || Email code, a GitHub link, or a Docker image to the organizers. See Algorithm Submission below. || Aug 21, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Cherry-picked IDs || Email IDs to organizers. || Aug 28, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Evaluation metric (optional) || Email organizers. || Aug 21, 2025&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;generation.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use the script to generate output files using the following format:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./generation.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: path to the input .json file.&lt;br /&gt;
* Output Folder: path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: number of samples to generate.&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
=Baseline=&lt;br /&gt;
&lt;br /&gt;
We provide a baseline algorithm in [https://github.com/ZZWaang/mirex2025-musecoco this repository]. It is modified from the MuseCoco model (Lu et al., 2023). Please also refer to this repository for the data format and generation protocol.&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14708</id>
		<title>2025:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14708"/>
		<updated>2025-07-09T07:09:14Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Baseline */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation covers a wide range of tasks and settings, including varying types of control, generation objectives (e.g., continuation, inpainting), and representations (e.g., score, performance, single- or multi-track). In MIREX, we narrow this scope each year to focus on a specific subtask.&lt;br /&gt;
&lt;br /&gt;
For this year’s challenge, the selected task is '''Piano Music Continuation'''. Given a 4-measure piano prompt (plus an optional pickup measure), the goal is to generate a 12-measure continuation that is musically coherent with the prompt, forming a complete 16-measure piece. All music is assumed to be in 4/4 time and quantized to sixteenth-note resolution. The continuation should match the style of the prompt, which may vary across classical, pop, jazz, or other existing styles. Further details are provided in the following sections. Please refer to [https://github.com/ZZWaang/mirex2025-musecoco this repository] to access the baseline method and submission format.&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
Both the input prompt and the output generation should be stored in JSON format. Specifically, music is represented as a list of notes, each of which contains &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.&lt;br /&gt;
&lt;br /&gt;
The prompt is stored under the key &amp;lt;code&amp;gt;prompt&amp;lt;/code&amp;gt; and lasts 5 measures (the first measure is the pickup measure). Below is an example prompt:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;prompt&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 72,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 6&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 57,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 14&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The generation is stored under the key &amp;lt;code&amp;gt;generation&amp;lt;/code&amp;gt; and lasts 12 measures. Below is an example generation:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Generation&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;generation&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the above examples, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes are counted in sixteenth notes. Since the data is assumed to be in 4/4 meter and quantized to sixteenth-note resolution, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of a prompt note should range from 0 to 79 (0-15 being the pickup measure) and the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of a generation note should range from 80 to 271. The &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; property of a note should be an integer from 0 to 127, corresponding to MIDI pitch numbers.&lt;br /&gt;
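&lt;br /&gt;
Below is a minimal, illustrative Python sketch (not part of the official protocol; the function name and file path are placeholders) that checks a generated output file against the ranges stated above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import json&lt;br /&gt;
&lt;br /&gt;
def check_generation(path):&lt;br /&gt;
    # Illustrative check of the constraints stated in this section.&lt;br /&gt;
    notes = json.load(open(path))['generation']&lt;br /&gt;
    for n in notes:&lt;br /&gt;
        assert n['start'] in range(80, 272), 'generation onsets are 80-271'&lt;br /&gt;
        assert n['pitch'] in range(0, 128), 'pitch is a MIDI number 0-127'&lt;br /&gt;
        assert n['duration'] &amp;gt;= 1, 'durations are counted in sixteenth notes'&lt;br /&gt;
    return len(notes)&lt;br /&gt;
&lt;br /&gt;
# Hypothetical usage:&lt;br /&gt;
# check_generation('sample_01.json')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;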
&lt;br /&gt;
=Evaluation and Competition Format=&lt;br /&gt;
We will evaluate the submitted algorithms through an '''online subjective double-blind test'''. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two prompts.''' The organizer team will supplement the prompts if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* '''We welcome both challenge participants and non-participants to submit plans for objective evaluation.''' Evaluation methods may be incorporated as reference benchmarks and could inform the development of future evaluation metrics.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''8 continuations''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, we will first ask the subjects to listen to the prompt and then to the generated samples, presented in randomized order.&lt;br /&gt;
* The subjects will be asked to rate each continuation based on the following criteria:&lt;br /&gt;
:* Coherency to the prompt (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Structuredness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
* '''Aug 15, 2025''': Submit two prompts as a part of the test set. &lt;br /&gt;
* '''Aug 21, 2025''': Submit the main algorithm.&lt;br /&gt;
* '''Aug 26, 2025''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Aug 28, 2025''': Submit the cherry-picked sample ids.&lt;br /&gt;
* '''Aug 30 - Sep 5, 2025''': Online subjective evaluation.&lt;br /&gt;
* '''Sep 6, 2025''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As described in the Evaluation and Competition Format, there are four types of submissions. Below is a list of them:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Task !! Submission Method !! Deadline&lt;br /&gt;
|-&lt;br /&gt;
| 2 Prompts for the test set || Email JSON files to organizers. || Aug 15, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Algorithm || Email code/github link/docker to organizers. Check Algorithm Submission below. || Aug 21, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Cherry-picked IDs || Email IDs to organizers. || Aug 28, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Evaluation metric (optional) || Email organizers. || Aug 21, 2025&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;generation.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use the script to generate output files using the following format:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./generation.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: path to the input .json file.&lt;br /&gt;
* Output Folder: path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: number of samples to generate.&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json (see the illustrative sketch below).&lt;br /&gt;
&lt;br /&gt;
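A typical &amp;lt;code&amp;gt;generation.sh&amp;lt;/code&amp;gt; simply forwards its three arguments to the participant's own program. The following Python sketch only illustrates the expected calling convention and output naming; the file name &amp;lt;code&amp;gt;generate.py&amp;lt;/code&amp;gt; is an assumption, and the empty note list stands in for a real model call:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import json&lt;br /&gt;
import sys&lt;br /&gt;
&lt;br /&gt;
def main():&lt;br /&gt;
    # Mirrors: ./generation.sh input.json output_folder n_sample&lt;br /&gt;
    input_path, out_dir, n_sample = sys.argv[1], sys.argv[2], int(sys.argv[3])&lt;br /&gt;
    prompt = json.load(open(input_path))['prompt']  # a real model would condition on this&lt;br /&gt;
    for i in range(1, n_sample + 1):&lt;br /&gt;
        notes = []  # placeholder for the model's generated notes&lt;br /&gt;
        with open(f'{out_dir}/sample_{i:02d}.json', 'w') as f:&lt;br /&gt;
            json.dump({'generation': notes}, f, indent=2)&lt;br /&gt;
&lt;br /&gt;
if __name__ == '__main__':&lt;br /&gt;
    main()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With such a script, &amp;lt;code&amp;gt;generation.sh&amp;lt;/code&amp;gt; could contain little more than &amp;lt;code&amp;gt;python generate.py &amp;quot;$1&amp;quot; &amp;quot;$2&amp;quot; &amp;quot;$3&amp;quot;&amp;lt;/code&amp;gt;; the internal logic is entirely up to the participants.&lt;br /&gt;
&lt;br /&gt;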
=Baseline=&lt;br /&gt;
&lt;br /&gt;
We provide a baseline algorithm in [https://github.com/ZZWaang/mirex2025-musecoco this repository]. It is adapted from the MuseCoco model (Lu, P., et al. 2023). Please also refer to this repository for the exact data format and generation protocol.&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14707</id>
		<title>2025:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14707"/>
		<updated>2025-07-09T07:07:58Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation covers a wide range of tasks and settings, including varying types of control, generation objectives (e.g., continuation, inpainting), and representations (e.g., score, performance, single- or multi-track). In MIREX, we narrow this scope each year to focus on a specific subtask.&lt;br /&gt;
&lt;br /&gt;
For this year’s challenge, the selected task is '''Piano Music Continuation'''. Given a 4-measure piano prompt (plus an optional pickup measure), the goal is to generate a 12-measure continuation that is musically coherent with the prompt, forming a complete 16-measure piece. All music is assumed to be in 4/4 time and quantized to sixteenth-note resolution. The continuation should match the style of the prompt, which may vary across classical, pop, jazz, or other existing styles. Further details are provided in the following sections. Please refer to [https://github.com/ZZWaang/mirex2025-musecoco this repository] to access the baseline method and submission format.&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
Both the input prompt and the output generation should be stored in JSON format. Specifically, music is represented as a list of notes, each of which has &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.&lt;br /&gt;
&lt;br /&gt;
The prompt is stored under the key &amp;lt;code&amp;gt;prompt&amp;lt;/code&amp;gt; and lasts 5 measures (the first measure is the pickup measure). Below is an example prompt:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;prompt&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 72,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 6&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 57,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 14&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The generation is stored under the key &amp;lt;code&amp;gt;generation&amp;lt;/code&amp;gt; and lasts 12 measures. Below is an example generation:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Generation&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;generation&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the above examples, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes are counted in sixteenth notes. Since the data is assumed to be in 4/4 meter and quantized to sixteenth-note resolution, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of a prompt note should range from 0 to 79 (0-15 being the pickup measure) and the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of a generation note should range from 80 to 271. The &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; property of a note should be an integer from 0 to 127, corresponding to MIDI pitch numbers.&lt;br /&gt;
&lt;br /&gt;
=Evaluation and Competition Format=&lt;br /&gt;
We will evaluate the submitted algorithms through an '''online subjective double-blind test'''. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two prompts.''' The organizer team will supplement the prompts if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* '''We welcome both challenge participants and non-participants to submit plans for objective evaluation.''' Evaluation methods may be incorporated as reference benchmarks and could inform the development of future evaluation metrics.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''8 continuations''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, we will first ask the subjects to listen to the prompt and then to the generated samples, presented in randomized order.&lt;br /&gt;
* The subjects will be asked to rate each continuation based on the following criteria:&lt;br /&gt;
:* Coherency to the prompt (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Structuredness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
* '''Aug 15, 2025''': Submit two prompts as a part of the test set. &lt;br /&gt;
* '''Aug 21, 2025''': Submit the main algorithm.&lt;br /&gt;
* '''Aug 26, 2025''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Aug 28, 2025''': Submit the cherry-picked sample ids.&lt;br /&gt;
* '''Aug 30 - Sep 5, 2025''': Online subjective evaluation.&lt;br /&gt;
* '''Sep 6, 2025''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As described in the Evaluation and Competition Format, there are four types of submissions. Below is a list of them:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Task !! Submission Method !! Deadline&lt;br /&gt;
|-&lt;br /&gt;
| 2 Prompts for the test set || Email JSON files to organizers. || Aug 15, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Algorithm || Email code/github link/docker to organizers. Check Algorithm Submission below. || Aug 21, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Cherry-picked IDs || Email IDs to organizers. || Aug 28, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Evaluation metric (optional) || Email organizers. || Aug 21, 2025&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;generation.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use the script to generate output files using the following format:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./generation.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: path to the input .json file.&lt;br /&gt;
* Output Folder: path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: number of samples to generate.&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
We provide a baseline algorithm in [https://github.com/ZZWaang/mirex2025-musecoco this repository]. It is adapted from the MuseCoco model (Lu, P., et al. 2023). Please also refer to this repository for the exact data format and generation protocol.&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14706</id>
		<title>2025:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14706"/>
		<updated>2025-07-09T07:07:30Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation covers a wide range of tasks and settings, including varying types of control, generation objectives (e.g., continuation, inpainting), and representations (e.g., score, performance, single- or multi-track). In MIREX, we narrow this scope each year to focus on a specific subtask.&lt;br /&gt;
&lt;br /&gt;
For this year’s challenge, the selected task is '''Piano Music Continuation'''. Given a 4-measure piano prompt (plus an optional pickup measure), the goal is to generate a 12-measure continuation that is musically coherent with the prompt, forming a complete 16-measure piece. All music is assumed to be in 4/4 time and quantized to sixteenth-note resolution. The continuation should match the style of the prompt, which may vary across classical, pop, jazz, or other existing styles. Further details are provided in the following sections.&lt;br /&gt;
&lt;br /&gt;
Please refer to [https://github.com/ZZWaang/mirex2025-musecoco this repository] to access the baseline method and submission format.&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
Both the input prompt and the output generation should be stored in JSON format. Specifically, music is represented as a list of notes, each of which has &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.&lt;br /&gt;
&lt;br /&gt;
The prompt is stored under the key &amp;lt;code&amp;gt;prompt&amp;lt;/code&amp;gt; and lasts 5 measures (the first measure is the pickup measure). Below is an example prompt:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;prompt&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 72,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 6&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 57,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 14&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The generation is stored under the key &amp;lt;code&amp;gt;generation&amp;lt;/code&amp;gt; and lasts 12 measures. Below is an example generation:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Generation&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;generation&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the above examples, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes are counted in sixteenth notes. Since the data is assumed to be in 4/4 meter and quantized to sixteenth-note resolution, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of a prompt note should range from 0 to 79 (0-15 being the pickup measure) and the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of a generation note should range from 80 to 271. The &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; property of a note should be an integer from 0 to 127, corresponding to MIDI pitch numbers.&lt;br /&gt;
&lt;br /&gt;
=Evaluation and Competition Format=&lt;br /&gt;
We will evaluate the submitted algorithms through an '''online subjective double-blind test'''. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two prompts.''' The organizer team will supplement the prompts if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* '''We welcome both challenge participants and non-participants to submit plans for objective evaluation.''' Evaluation methods may be incorporated as reference benchmarks and could inform the development of future evaluation metrics.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''8 continuations''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, we will first ask the subjects to listen to the prompt and then to the generated samples, presented in randomized order.&lt;br /&gt;
* The subjects will be asked to rate each continuation based on the following criteria:&lt;br /&gt;
:* Coherency to the prompt (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Structuredness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
* '''Aug 15, 2025''': Submit two prompts as a part of the test set. &lt;br /&gt;
* '''Aug 21, 2025''': Submit the main algorithm.&lt;br /&gt;
* '''Aug 26, 2025''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Aug 28, 2025''': Submit the cherry-picked sample ids.&lt;br /&gt;
* '''Aug 30 - Sep 5, 2025''': Online subjective evaluation.&lt;br /&gt;
* '''Sep 6, 2025''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As described in the Evaluation and Competition Format, there are four types of submissions. Below is a list of them:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Task !! Submission Method !! Deadline&lt;br /&gt;
|-&lt;br /&gt;
| 2 Prompts for the test set || Email JSON files to organizers. || Aug 15, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Algorithm || Email code/github link/docker to organizers. Check Algorithm Submission below. || Aug 21, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Cherry-picked IDs || Email IDs to organizers. || Aug 28, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Evaluation metric (optional) || Email organizers. || Aug 21, 2025&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;generation.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use the script to generate output files using the following format:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./generation.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: path to the input .json file.&lt;br /&gt;
* Output Folder: path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: number of samples to generate.&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
We provide a baseline algorithm in [https://github.com/ZZWaang/mirex2025-musecoco this repository]. It is adapted from the MuseCoco model (Lu, P., et al. 2023). Please also refer to this repository for the exact data format and generation protocol.&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14705</id>
		<title>2025:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14705"/>
		<updated>2025-07-09T07:05:54Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Baselines */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation covers a wide range of tasks and settings, including varying types of control, generation objectives (e.g., continuation, inpainting), and representations (e.g., score, performance, single- or multi-track). In MIREX, we narrow this scope each year to focus on a specific subtask.&lt;br /&gt;
&lt;br /&gt;
For this year’s challenge, the selected task is '''Piano Music Continuation'''. Given a 4-measure piano prompt (plus an optional pickup measure), the goal is to generate a 12-measure continuation that is musically coherent with the prompt, forming a complete 16-measure piece. All music is assumed to be in 4/4 time and quantized to sixteenth-note resolution. The continuation should match the style of the prompt, which may vary across classical, pop, jazz, or other existing styles. Further details are provided in the following sections.&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
Both the input prompt and the output generation should be stored in JSON format. Specifically, music is represented as a list of notes, each of which has &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.&lt;br /&gt;
&lt;br /&gt;
The prompt is stored under the key &amp;lt;code&amp;gt;prompt&amp;lt;/code&amp;gt; and lasts 5 measures (the first measure is the pickup measure). Below is an example prompt:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;prompt&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 72,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 6&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 57,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 14&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The generation is stored under the key &amp;lt;code&amp;gt;generation&amp;lt;/code&amp;gt; and lasts 12 measures. Below is an example generation:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Generation&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;generation&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the above examples, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes are counted in sixteenth notes. Since the data is assumed to be in 4/4 meter and quantized to sixteenth-note resolution, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of a prompt note should range from 0 to 79 (0-15 being the pickup measure) and the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of a generation note should range from 80 to 271. The &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; property of a note should be an integer from 0 to 127, corresponding to MIDI pitch numbers.&lt;br /&gt;
&lt;br /&gt;
=Evaluation and Competition Format=&lt;br /&gt;
We will evaluate the submitted algorithms through an '''online subjective double-blind test'''. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two prompts.''' The organizer team will supplement the prompts if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* '''We welcome both challenge participants and non-participants to submit plans for objective evaluation.''' Evaluation methods may be incorporated as reference benchmarks and could inform the development of future evaluation metrics.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''8 continuations''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, we will first ask the subjects to listen to the prompt and then to the generated samples, presented in randomized order.&lt;br /&gt;
* The subjects will be asked to rate each continuation based on the following criteria:&lt;br /&gt;
:* Coherency to the prompt (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Structuredness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
* '''Aug 15, 2025''': Submit two prompts as a part of the test set. &lt;br /&gt;
* '''Aug 21, 2025''': Submit the main algorithm.&lt;br /&gt;
* '''Aug 26, 2025''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Aug 28, 2025''': Submit the cherry-picked sample ids.&lt;br /&gt;
* '''Aug 30 - Sep 5, 2025''': Online subjective evaluation.&lt;br /&gt;
* '''Sep 6, 2025''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As described in the Evaluation and Competition Format, there are four types of submissions. Below is a list of them:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Task !! Submission Method !! Deadline&lt;br /&gt;
|-&lt;br /&gt;
| 2 Prompts for the test set || Email JSON files to organizers. || Aug 15, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Algorithm || Email code/github link/docker to organizers. Check Algorithm Submission below. || Aug 21, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Cherry-picked IDs || Email IDs to organizers. || Aug 28, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Evaluation metric (optional) || Email organizers. || Aug 21, 2025&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;generation.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use the script to generate output files using the following format:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./generation.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: path to the input .json file.&lt;br /&gt;
* Output Folder: path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: number of samples to generate.&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
We provide a baseline algorithm in [https://github.com/ZZWaang/mirex2025-musecoco this repository]. It is adapted from the MuseCoco model (Lu, P., et al. 2023). Please also refer to this repository for the exact data format and generation protocol.&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14704</id>
		<title>2025:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14704"/>
		<updated>2025-07-09T07:03:44Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Baselines */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation covers a wide range of tasks and settings, including varying types of control, generation objectives (e.g., continuation, inpainting), and representations (e.g., score, performance, single- or multi-track). In MIREX, we narrow this scope each year to focus on a specific subtask.&lt;br /&gt;
&lt;br /&gt;
For this year’s challenge, the selected task is '''Piano Music Continuation'''. Given a 4-measure piano prompt (plus an optional pickup measure), the goal is to generate a 12-measure continuation that is musically coherent with the prompt, forming a complete 16-measure piece. All music is assumed to be in 4/4 time and quantized to sixteenth-note resolution. The continuation should match the style of the prompt, which may vary across classical, pop, jazz, or other existing styles. Further details are provided in the following sections.&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
Both the input prompt and the output generation should be stored in JSON format. Specifically, music is represented as a list of notes, each of which has &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.&lt;br /&gt;
&lt;br /&gt;
The prompt is stored under the key &amp;lt;code&amp;gt;prompt&amp;lt;/code&amp;gt; and lasts 5 measures (the first measure is the pickup measure). Below is an example prompt:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;prompt&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 72,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 6&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 57,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 14&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The generation is stored under the key &amp;lt;code&amp;gt;generation&amp;lt;/code&amp;gt; and lasts 12 measures. Below is an example generation:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Generation&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;generation&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the above examples, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes are counted in sixteenth notes. Since the data is assumed to be in 4/4 meter and quantized to sixteenth-note resolution, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of a prompt note should range from 0 to 79 (0-15 being the pickup measure) and the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of a generation note should range from 80 to 271. The &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; property of a note should be an integer from 0 to 127, corresponding to MIDI pitch numbers.&lt;br /&gt;
&lt;br /&gt;
=Evaluation and Competition Format=&lt;br /&gt;
We will evaluate the submitted algorithms through an '''online subjective double-blind test'''. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two prompts.''' The organizer team will supplement the prompts if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* '''We welcome both challenge participants and non-participants to submit plans for objective evaluation.''' Evaluation methods may be incorporated as reference benchmarks and could inform the development of future evaluation metrics.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''8 continuations''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, we will first ask the subjects to listen to the prompt and then to the generated samples, presented in randomized order.&lt;br /&gt;
* The subjects will be asked to rate each continuation based on the following criteria:&lt;br /&gt;
:* Coherency to the prompt (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Structuredness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
* '''Aug 15, 2025''': Submit two prompts as a part of the test set. &lt;br /&gt;
* '''Aug 21, 2025''': Submit the main algorithm.&lt;br /&gt;
* '''Aug 26, 2025''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Aug 28, 2025''': Submit the cherry-picked sample ids.&lt;br /&gt;
* '''Aug 30 - Sep 5, 2025''': Online subjective evaluation.&lt;br /&gt;
* '''Sep 6, 2025''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As described in the Evaluation and Competition Format, there are four types of submissions. Below is a list of them:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Task !! Submission Method !! Deadline&lt;br /&gt;
|-&lt;br /&gt;
| 2 Prompts for the test set || Email JSON files to organizers. || Aug 15, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Algorithm || Email code/github link/docker to organizers. Check Algorithm Submission below. || Aug 21, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Cherry-picked IDs || Email IDs to organizers. || Aug 28, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Evaluation metric (optional) || Email organizers. || Aug 21, 2025&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;generation.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use the script to generate output files using the following format:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./generation.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: path to the input .json file.&lt;br /&gt;
* Output Folder: path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: number of samples to generate.&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
We provide a baseline algorithm in [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main this repo].&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=14703</id>
		<title>2024:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=14703"/>
		<updated>2025-07-09T07:02:59Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Algorithm Submission */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation is a broad topic. It covers a wide range of tasks, including generation, harmonization, arrangement, instrumentation, and more. We have multiple ways to represent music data, and the evaluation metrics also vary. To define a MIREX challenge within this topic, we need to narrow our focus to specific subtasks that are both relevant to the community and feasible to evaluate effectively.&lt;br /&gt;
&lt;br /&gt;
This year, we select the task to be '''piano accompaniment arrangement from a lead sheet'''. The lead sheet provides information about the melody and the chord progression. The goal is to generate a piano accompaniment that complements the lead melody. The music data consists of 8-measure segments in 4/4 meter, quantized to a sixteenth-note resolution. A more detailed description of the data structure is provided in the data format section. The genre of the lead sheets is broadly within western pop music (refer to the music examples for more detail).&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
The input lead sheet consists of 8 bars for the melody and harmony, with an additional mandatory pickup measure (left blank if not used). The data is prepared in JSON format containing two properties: &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;: a list of chords. Each chord contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The output generation should also follow the JSON format containing one property &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.''' &lt;br /&gt;
&lt;br /&gt;
# The data is assumed to be in 4/4 meter, quantized to a sixteenth-note resolution. For both melody and chords, onsets and durations are counted in sixteenth notes. &lt;br /&gt;
# Both onsets and durations are integers ranging from 0 to 9 * 16 - 1 = 143. Notes that end later than the ninth measure (i.e., the 9 * 16 = 144th time step) will be truncated to the end of the ninth measure (see the illustrative sketch below). &lt;br /&gt;
# Melody notes are not allowed to overlap with one another. &lt;br /&gt;
# There should be no gaps or overlaps between chords. Chords must follow one another directly. If there is a blank space where no chord is played, it must be filled with the &amp;lt;code&amp;gt;N&amp;lt;/code&amp;gt; chord. &lt;br /&gt;
# The accompaniment of the pick-up measure should be blank. &lt;br /&gt;
&lt;br /&gt;
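The truncation rule in point 2 can be expressed compactly. Below is a minimal, illustrative Python sketch (the function name is a placeholder) that clips notes so that they end no later than time step 144:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
def truncate_to_nine_measures(notes):&lt;br /&gt;
    # A note ends at start + duration; clip it to the end of the ninth measure (step 144).&lt;br /&gt;
    for n in notes:&lt;br /&gt;
        n['duration'] = min(n['duration'], 144 - n['start'])&lt;br /&gt;
    return notes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;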
'''Detailed explanation of the &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The pitch property of a note should be integers ranging from 0 to 127, corresponding to the MIDI pitch numbers.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the chord &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The symbol property of a chord should be a string based on the syntax of (Harte, 2010). In other words, each chord string should be able to be passed as a parameter to mir_eval.chord.encode() without causing an error.&lt;br /&gt;
&lt;br /&gt;
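As a quick sanity check, chord symbols can be validated with &amp;lt;code&amp;gt;mir_eval&amp;lt;/code&amp;gt;. Below is a minimal, illustrative Python sketch (the function name is a placeholder) that raises an exception if any symbol cannot be parsed:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import mir_eval.chord&lt;br /&gt;
&lt;br /&gt;
def check_chord_symbols(chords):&lt;br /&gt;
    # Each chord dict carries a 'symbol' in Harte (2010) syntax, e.g. 'F', 'C:maj7', 'N'.&lt;br /&gt;
    for c in chords:&lt;br /&gt;
        mir_eval.chord.encode(c['symbol'])  # raises on invalid syntax&lt;br /&gt;
&lt;br /&gt;
# Hypothetical usage:&lt;br /&gt;
# check_chord_symbols([{'start': 0, 'symbol': 'N', 'duration': 16}])&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;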
&lt;br /&gt;
=Data Example=&lt;br /&gt;
Below is an example of the input lead sheet in the format given above. The lead sheet is the melody of the first phrase of ''Hey Jude'' by The Beatles.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;melody&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 12, &amp;quot;pitch&amp;quot;: 72, &amp;quot;duration&amp;quot;: 4},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 69, &amp;quot;duration&amp;quot;: 8},&lt;br /&gt;
    ...&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;chords&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 0, &amp;quot;symbol&amp;quot;: &amp;quot;N&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;symbol&amp;quot;: &amp;quot;F&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is an example of the generated accompaniment. The accompaniment is generated using the baseline method WholeSongGen introduced below. Note that the generation starts from the second measure (time step 16).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;acc&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 41, &amp;quot;duration&amp;quot;: 12},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 65, &amp;quot;duration&amp;quot;: 5},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Full data examples can be accessed in [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main/generation_samples this code repository]. MIDI conversion code and MIDI demos are also provided there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Evaluation and Competition Format=&lt;br /&gt;
We will evaluate the submitted algorithms through an online subjective double-blind test. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two lead sheets.''' The organizer team will supplement the lead sheet if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* We hope to compute some objective measurements as well, but these will only be reported as a reference.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''16 arrangements''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, we will first ask the subjects to listen to the lead melody with chords and then to the generated samples, presented in randomized order.&lt;br /&gt;
* The subject will be asked to rate each arrangement based on the following criteria:&lt;br /&gt;
:* Harmony correctness (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Naturalness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Objective Measurements==&lt;br /&gt;
* We will use objective measurements only as a reference. The correlation between subjective and objective scores will also be reported.&lt;br /&gt;
* The current plan is to compute the negative log-likelihood under a large music language model (e.g., Lu et al., 2023).&lt;br /&gt;
* We welcome proposals for objective measurements.&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
&lt;br /&gt;
* '''Oct 8, 2024''': Submit two lead sheets as a part of the test set. &lt;br /&gt;
* '''Oct 15, 2024''': Submit the main algorithm.&lt;br /&gt;
* '''Oct 22, 2024''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Oct 25, 2024''': Submit the cherry-picked sample ids.&lt;br /&gt;
* '''Oct 31 - Nov 3, 2024''': Online subjective evaluation.&lt;br /&gt;
* '''Nov 5, 2024''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As a generative task with subjective evaluation, the submission process ''differs greatly'' from other MIREX tasks. There are four important stages:&lt;br /&gt;
# Test set submission (Oct 8, 2024)&lt;br /&gt;
# Algorithm submission (Oct 15, 2024)&lt;br /&gt;
# Cherry-picked sample IDs submission (Oct 25, 2024)&lt;br /&gt;
# Evaluation form submission (Nov 3, 2024)&lt;br /&gt;
Please check the Important Dates section for the detailed schedule. '''Failure to participate in any of the stages will result in disqualification.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;batch_acc_gen.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use the script to generate output files according to the following format:&lt;br /&gt;
&lt;br /&gt;
'''Usage'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
acc_gen.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: Path to the input .json file.&lt;br /&gt;
* Output Folder: Path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: Number of samples to generate.&lt;br /&gt;
&lt;br /&gt;
'''Output'''&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
Participants are free to implement the internal logic of the script, but it must adhere to this format for proper execution during the evaluation process.&lt;br /&gt;
&lt;br /&gt;
'''Packaging Submissions'''&lt;br /&gt;
* Every submission must be packed into a docker image&lt;br /&gt;
* Every submission will be deployed and evaluated automatically with &amp;lt;code&amp;gt;docker run&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Accepted submission form'''&lt;br /&gt;
* Link to public or private Github repository&lt;br /&gt;
* Link to public or private docker hub&lt;br /&gt;
* Shared google drive links&lt;br /&gt;
* If the repository is private, an access token is also required&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
To establish a benchmark for this task, we consider the following three baseline models in their official implementations:&lt;br /&gt;
&lt;br /&gt;
'''WholeSongGen''' (Wang et al., 2024)&lt;br /&gt;
* A denoising diffusion probabilistic model (DDPM) generating piano accompaniments as piano-roll images.&lt;br /&gt;
&lt;br /&gt;
'''Compose &amp;amp; Embellish''' (Wu and Yang, 2023)&lt;br /&gt;
* A Transformer-based architecture generating piano performances in beat-based event sequences.&lt;br /&gt;
&lt;br /&gt;
'''AccoMontage''' (Zhao and Xia, 2021)&lt;br /&gt;
* A hybrid algorithm generating piano accompaniments by rule-based search and music representation learning.&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
* Harte, C. Towards automatic extraction of harmony information from music signals. PhD Diss. 2010.&lt;br /&gt;
* Lu, P., et al. Musecoco: Generating symbolic music from text. arXiv preprint arXiv:2306.00110 (2023).&lt;br /&gt;
* Wang, Z., et al. Whole-song hierarchical generation of symbolic music using cascaded diffusion models, in ICLR 2024.&lt;br /&gt;
* Wu, S.-L., &amp;amp; Yang, Y.-H. Compose &amp;amp; Embellish: Well-structured piano performance generation via a two-stage approach, in ICASSP 2023.&lt;br /&gt;
* Zhao, J., &amp;amp; Xia, G. Accomontage: Accompaniment arrangement via phrase selection and style transfer, in ISMIR 2021.&lt;br /&gt;
&lt;br /&gt;
* Code and data format samples: [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main]&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14702</id>
		<title>2025:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14702"/>
		<updated>2025-07-09T07:01:33Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Algorithm Submission */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation covers a wide range of tasks and settings, including varying types of control, generation objectives (e.g., continuation, inpainting), and representations (e.g., score, performance, single- or multi-track). In MIREX, we narrow this scope each year to focus on a specific subtask.&lt;br /&gt;
&lt;br /&gt;
For this year’s challenge, the selected task is '''Piano Music Continuation'''. Given a 4-measure piano prompt (plus an optional pickup measure), the goal is to generate a 12-measure continuation that is musically coherent with the prompt, forming a complete 16-measure piece. All music is assumed to be in 4/4 time and quantized to sixteenth-note resolution. The continuation should match the style of the prompt, which may vary across classical, pop, jazz, or other existing styles. Further details are provided in the following sections.&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
Both the input prompt and the output generation should be stored in JSON format. Specifically, music is represented as a list of notes, each of which has &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.&lt;br /&gt;
&lt;br /&gt;
The prompt is stored under the key &amp;lt;code&amp;gt;prompt&amp;lt;/code&amp;gt; and lasts 5 measures (the first measure is the pickup measure). Below is an example prompt:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;prompt&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 72,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 6&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 57,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 14&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The generation is stored under the key &amp;lt;code&amp;gt;generation&amp;lt;/code&amp;gt; and lasts 12 measures. Below is an example generation:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Generation&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;generation&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the above examples, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes are counted in sixteenth notes. Since the data is assumed to be in 4/4 meter and quantized to sixteenth-note resolution, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of a prompt note should range from 0 to 79 (0-15 being the pickup measure) and the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of a generation note should range from 80 to 271. The &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; property of a note should be an integer from 0 to 127, corresponding to MIDI pitch numbers.&lt;br /&gt;
&lt;br /&gt;
=Evaluation and Competition Format=&lt;br /&gt;
We will evaluate the submitted algorithms through an '''online subjective double-blind test'''. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two prompts.''' The organizer team will supplement the prompts if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* '''We welcome both challenge participants and non-participants to submit plans for objective evaluation.''' Evaluation methods may be incorporated as reference benchmarks and could inform the development of future evaluation metrics.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''8 continuations''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, subjects will first listen to the prompt and then to the generated samples, presented in randomized order.&lt;br /&gt;
* Subjects will be asked to rate each continuation based on the following criteria:&lt;br /&gt;
:* Coherency to the prompt (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Structuredness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
* '''Aug 15, 2025''': Submit two prompts as a part of the test set. &lt;br /&gt;
* '''Aug 21, 2025''': Submit the main algorithm.&lt;br /&gt;
* '''Aug 26, 2025''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Aug 28, 2025''': Submit the cherry-picked sample IDs.&lt;br /&gt;
* '''Aug 30 - Sep 5, 2025''': Online subjective evaluation.&lt;br /&gt;
* '''Sep 6, 2025''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As described in the Evaluation and Competition Format, there are four types of submissions. Below is a list of them:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Task !! Submission Method !! Deadline&lt;br /&gt;
|-&lt;br /&gt;
| 2 Prompts for the test set || Email JSON files to organizers. || Aug 15, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Algorithm || Email code/github link/docker to organizers. Check Algorithm Submission below. || Aug 21, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Cherry-picked IDs || Email IDs to organizers. || Aug 28, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Evaluation metric (optional) || Email organizers. || Aug 21, 2025&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;generation.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will run the script to generate the output files, invoking it as follows (an illustrative sketch is given after the list below):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./generation.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: path to the input .json file.&lt;br /&gt;
* Output Folder: path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: number of samples to generate.&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
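&lt;br /&gt;
For illustration only, &amp;lt;code&amp;gt;generation.sh&amp;lt;/code&amp;gt; could simply forward its three arguments to a Python entry point along the lines of the sketch below. The script name &amp;lt;code&amp;gt;generate.py&amp;lt;/code&amp;gt; and the &amp;lt;code&amp;gt;continue_piece&amp;lt;/code&amp;gt; function are hypothetical placeholders for a team's own model, not part of the required interface:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# generate.py -- called by generation.sh as: python generate.py input.json output_folder n_sample&lt;br /&gt;
import json, os, sys&lt;br /&gt;
&lt;br /&gt;
def continue_piece(prompt_notes):&lt;br /&gt;
    # Placeholder for the team's model; it must return notes with start in 80-271.&lt;br /&gt;
    return [{&amp;quot;start&amp;quot;: 80, &amp;quot;pitch&amp;quot;: 60, &amp;quot;duration&amp;quot;: 4}]&lt;br /&gt;
&lt;br /&gt;
input_path, output_folder, n_sample = sys.argv[1], sys.argv[2], int(sys.argv[3])&lt;br /&gt;
os.makedirs(output_folder, exist_ok=True)&lt;br /&gt;
with open(input_path) as f:&lt;br /&gt;
    prompt = json.load(f)[&amp;quot;prompt&amp;quot;]&lt;br /&gt;
for i in range(1, n_sample + 1):&lt;br /&gt;
    # Output files are named sample_01.json, sample_02.json, ... as required.&lt;br /&gt;
    with open(os.path.join(output_folder, f&amp;quot;sample_{i:02d}.json&amp;quot;), &amp;quot;w&amp;quot;) as f:&lt;br /&gt;
        json.dump({&amp;quot;generation&amp;quot;: continue_piece(prompt)}, f)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;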
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
To be announced later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14701</id>
		<title>2025:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14701"/>
		<updated>2025-07-09T06:58:56Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Submission */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation covers a wide range of tasks and settings, including varying types of control, generation objectives (e.g., continuation, inpainting), and representations (e.g., score, performance, single- or multi-track). In MIREX, we narrow this scope each year to focus on a specific subtask.&lt;br /&gt;
&lt;br /&gt;
For this year’s challenge, the selected task is '''Piano Music Continuation'''. Given a 4-measure piano prompt (plus an optional pickup measure), the goal is to generate a 12-measure continuation that is musically coherent with the prompt, forming a complete 16-measure piece. All music is assumed to be in 4/4 time and quantized to sixteenth-note resolution. The continuation should match the style of the prompt, which may vary across classical, pop, jazz, or other existing styles. Further details are provided in the following sections.&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
Both the input prompt and output generation should be stored in JSON format. Specifically, music is represented by a list of notes, which contains &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes. &lt;br /&gt;
&lt;br /&gt;
The prompt is stored under the key &amp;lt;code&amp;gt;prompt&amp;lt;/code&amp;gt; and lasts 5 measures (the first measure is the pickup measure). Below is an example prompt:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;prompt&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 72,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 6&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 57,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 14&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The generation is stored under the key &amp;lt;code&amp;gt;generation&amp;lt;/code&amp;gt; and lasts 12 measures. Below is an example generation:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Generation&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;generation&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the above examples, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes are counted in sixteenth notes. Since the data is assumed to be in 4/4 meter and quantized to sixteenth-note resolution, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of the prompt should range from 0 to 79 (0-15 is the pickup measure) and the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of the generation should range from 80 to 271. The &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; property of a note should be an integer from 0 to 127, corresponding to a MIDI pitch number.&lt;br /&gt;
&lt;br /&gt;
=Evaluation and Competition Format=&lt;br /&gt;
We will evaluate the submitted algorithms through an '''online subjective double-blind test'''. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two prompts.''' The organizer team will supplement the prompts if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* '''We welcome both challenge participants and non-participants to submit plans for objective evaluation.''' Evaluation methods may be incorporated as reference benchmarks and could inform the development of future evaluation metrics.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''8 continuations''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, subjects will first listen to the prompt and then to the generated samples, presented in randomized order.&lt;br /&gt;
* Subjects will be asked to rate each continuation based on the following criteria:&lt;br /&gt;
:* Coherency to the prompt (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Structuredness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
* '''Aug 15, 2025''': Submit two prompts as a part of the test set. &lt;br /&gt;
* '''Aug 21, 2025''': Submit the main algorithm.&lt;br /&gt;
* '''Aug 26, 2025''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Aug 28, 2025''': Submit the cherry-picked sample IDs.&lt;br /&gt;
* '''Aug 30 - Sep 5, 2025''': Online subjective evaluation.&lt;br /&gt;
* '''Sep 6, 2025''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As described in the Evaluation and Competition Format, there are four types of submissions. Below is a list of them:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Task !! Submission Method !! Deadline&lt;br /&gt;
|-&lt;br /&gt;
| 2 Prompts for the test set || Email JSON files to organizers. || Aug 15, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Algorithm || Email code/github link/docker to organizers. Check Algorithm Submission below. || Aug 21, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Cherry-picked IDs || Email IDs to organizers. || Aug 28, 2025&lt;br /&gt;
|-&lt;br /&gt;
| Evaluation metric (optional) || Email organizers. || Aug 21, 2025&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
To be announced later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14700</id>
		<title>2025:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14700"/>
		<updated>2025-07-09T06:57:21Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Submission */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation covers a wide range of tasks and settings, including varying types of control, generation objectives (e.g., continuation, inpainting), and representations (e.g., score, performance, single- or multi-track). In MIREX, we narrow this scope each year to focus on a specific subtask.&lt;br /&gt;
&lt;br /&gt;
For this year’s challenge, the selected task is '''Piano Music Continuation'''. Given a 4-measure piano prompt (plus an optional pickup measure), the goal is to generate a 12-measure continuation that is musically coherent with the prompt, forming a complete 16-measure piece. All music is assumed to be in 4/4 time and quantized to sixteenth-note resolution. The continuation should match the style of the prompt, which may vary across classical, pop, jazz, or other existing styles. Further details are provided in the following sections.&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
Both the input prompt and output generation should be stored in JSON format. Specifically, music is represented by a list of notes, which contains &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes. &lt;br /&gt;
&lt;br /&gt;
The prompt is stored under the key &amp;lt;code&amp;gt;prompt&amp;lt;/code&amp;gt; and lasts 5 measures (the first measure is the pickup measure). Below is an example prompt:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;prompt&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 72,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 6&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 57,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 14&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The generation is stored under the key &amp;lt;code&amp;gt;generation&amp;lt;/code&amp;gt; and lasts 12 measures. Below is an example generation:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Generation&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;generation&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the above examples, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes are counted in sixteenth notes. Since the data is assumed to be in 4/4 meter and quantized to sixteenth-note resolution, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of the prompt should range from 0 to 79 (0-15 is the pickup measure) and the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of the generation should range from 80 to 271. The &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; property of a note should be an integer from 0 to 127, corresponding to a MIDI pitch number.&lt;br /&gt;
&lt;br /&gt;
=Evaluation and Competition Format=&lt;br /&gt;
We will evaluate the submitted algorithms through an '''online subjective double-blind test'''. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two prompts.''' The organizer team will supplement the prompts if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* '''We welcome both challenge participants and non-participants to submit plans for objective evaluation.''' Evaluation methods may be incorporated as reference benchmarks and could inform the development of future evaluation metrics.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''8 continuations''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, subjects will first listen to the prompt and then to the generated samples, presented in randomized order.&lt;br /&gt;
* Subjects will be asked to rate each continuation based on the following criteria:&lt;br /&gt;
:* Coherency to the prompt (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Structuredness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
* '''Aug 15, 2025''': Submit two prompts as a part of the test set. &lt;br /&gt;
* '''Aug 21, 2025''': Submit the main algorithm.&lt;br /&gt;
* '''Aug 26, 2025''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Aug 28, 2025''': Submit the cherry-picked sample IDs.&lt;br /&gt;
* '''Aug 30 - Sep 5, 2025''': Online subjective evaluation.&lt;br /&gt;
* '''Sep 6, 2025''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As described in the Evaluation and Competition Format, there are four types of submissions. Below is a list of them:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Task !! Submission Method !! Deadline&lt;br /&gt;
|-&lt;br /&gt;
| 2 Prompts for the test set || Email JSON files to organizers || &lt;br /&gt;
|-&lt;br /&gt;
| Algorithm || Email code/github link/docker to organizers || &lt;br /&gt;
|-&lt;br /&gt;
| Cherry-picked IDs || Email IDs to organizers || &lt;br /&gt;
|-&lt;br /&gt;
| Evaluation metric (optional) || Email organizers || &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
To be announced later.&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
To be announced later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14699</id>
		<title>2025:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14699"/>
		<updated>2025-07-09T06:52:00Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation covers a wide range of tasks and settings, including varying types of control, generation objectives (e.g., continuation, inpainting), and representations (e.g., score, performance, single- or multi-track). In MIREX, we narrow this scope each year to focus on a specific subtask.&lt;br /&gt;
&lt;br /&gt;
For this year’s challenge, the selected task is '''Piano Music Continuation'''. Given a 4-measure piano prompt (plus an optional pickup measure), the goal is to generate a 12-measure continuation that is musically coherent with the prompt, forming a complete 16-measure piece. All music is assumed to be in 4/4 time and quantized to sixteenth-note resolution. The continuation should match the style of the prompt, which may vary across classical, pop, jazz, or other existing styles. Further details are provided in the following sections.&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
Both the input prompt and output generation should be stored in JSON format. Specifically, music is represented by a list of notes, which contains &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes. &lt;br /&gt;
&lt;br /&gt;
The prompt is stored under the key &amp;lt;code&amp;gt;prompt&amp;lt;/code&amp;gt; and lasts 5 measures (the first measure is the pickup measure). Below is an example prompt:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;prompt&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 72,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 6&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 57,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 14&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The generation is stored under the key &amp;lt;code&amp;gt;generation&amp;lt;/code&amp;gt; and lasts 12 measures. Below is an example generation:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Generation&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;generation&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the above examples, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes are counted in sixteenth notes. Since the data is assumed to be in 4/4 meter and quantized to sixteenth-note resolution, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of the prompt should range from 0 to 79 (0-15 is the pickup measure) and the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of the generation should range from 80 to 271. The &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; property of a note should be an integer from 0 to 127, corresponding to a MIDI pitch number.&lt;br /&gt;
&lt;br /&gt;
=Evaluation and Competition Format=&lt;br /&gt;
We will evaluate the submitted algorithms through an '''online subjective double-blind test'''. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two prompts.''' The organizer team will supplement the prompts if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* '''We welcome both challenge participants and non-participants to submit plans for objective evaluation.''' Evaluation methods may be incorporated as reference benchmarks and could inform the development of future evaluation metrics.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''8 continuations''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, subjects will first listen to the prompt and then to the generated samples, presented in randomized order.&lt;br /&gt;
* Subjects will be asked to rate each continuation based on the following criteria:&lt;br /&gt;
:* Coherency to the prompt (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Structuredness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
* '''Aug 15, 2025''': Submit two prompts as a part of the test set. &lt;br /&gt;
* '''Aug 21, 2025''': Submit the main algorithm.&lt;br /&gt;
* '''Aug 26, 2025''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Aug 28, 2025''': Submit the cherry-picked sample IDs.&lt;br /&gt;
* '''Aug 30 - Sep 5, 2025''': Online subjective evaluation.&lt;br /&gt;
* '''Sep 6, 2025''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As described in the Evaluation and Competition Format, there are four types of submissions. Below is a list of them:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Caption text&lt;br /&gt;
|-&lt;br /&gt;
! Header text !! Header text !! Header text&lt;br /&gt;
|-&lt;br /&gt;
| Example || Example || Example&lt;br /&gt;
|-&lt;br /&gt;
| Example || Example || Example&lt;br /&gt;
|-&lt;br /&gt;
| Example || Example || Example&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Test set submission &lt;br /&gt;
# Algorithm submission&lt;br /&gt;
# Cherry-picked sample IDs submission &lt;br /&gt;
# Evaluation form submission &lt;br /&gt;
Please check the Important Dates section for the detailed schedule. '''Failure to participate in any of the stages will result in disqualification.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
To be announced later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
To be announced later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14698</id>
		<title>2025:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14698"/>
		<updated>2025-07-09T06:47:59Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Evaluation and Competition Format */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
&lt;br /&gt;
Symbolic music generation covers a wide range of tasks and settings, including varying types of control, generation objectives (e.g., continuation, inpainting), and representations (e.g., score, performance, single- or multi-track). In MIREX, we narrow this scope each year to focus on a specific subtask.&lt;br /&gt;
&lt;br /&gt;
For this year’s challenge, the selected task is '''Piano Music Continuation'''. Given a 4-measure piano prompt (plus an optional pickup measure), the goal is to generate a 12-measure continuation that is musically coherent with the prompt, forming a complete 16-measure piece. All music is assumed to be in 4/4 time and quantized to sixteenth-note resolution. The continuation should match the style of the prompt, which may vary across classical, pop, jazz, or other existing styles. Further details are provided in the following sections.&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
&lt;br /&gt;
Both the input prompt and output generation should be stored in JSON format. Specifically, music is represented by a list of notes, which contains &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes. &lt;br /&gt;
&lt;br /&gt;
The prompt is stored under the key &amp;lt;code&amp;gt;prompt&amp;lt;/code&amp;gt; and lasts 5 measures (the first measure is the pickup measure). Below is an example prompt:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;prompt&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 72,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 6&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 57,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 14&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The generation is stored under the key &amp;lt;code&amp;gt;generation&amp;lt;/code&amp;gt; and lasts 12 measures. Below is an example generation:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Generation&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;generation&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the above examples, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes are counted in sixteenth notes. Since the data is assumed to be in 4/4 meter and quantized to sixteenth-note resolution, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of the prompt should range from 0 to 79 (0-15 is the pickup measure) and the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of the generation should range from 80 to 271. The &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; property of a note should be an integer from 0 to 127, corresponding to a MIDI pitch number.&lt;br /&gt;
&lt;br /&gt;
=Evaluation and Competition Format=&lt;br /&gt;
&lt;br /&gt;
We will evaluate the submitted algorithms through an '''online subjective double-blind test'''. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two prompts.''' The organizer team will supplement the prompts if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* '''We welcome both challenge participants and non-participants to submit plans for objective evaluation.''' Evaluation methods may be incorporated as reference benchmarks and could inform the development of future evaluation metrics.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''8 continuations''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, subjects will first listen to the prompt and then to the generated samples, presented in randomized order.&lt;br /&gt;
* Subjects will be asked to rate each continuation based on the following criteria:&lt;br /&gt;
:* Coherency to the prompt (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Structuredness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
&lt;br /&gt;
* '''Aug 15, 2025''': Submit two prompts as a part of the test set. &lt;br /&gt;
* '''Aug 21, 2025''': Submit the main algorithm.&lt;br /&gt;
* '''Aug 26, 2025''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Aug 28, 2025''': Submit the cherry-picked sample IDs.&lt;br /&gt;
* '''Aug 30 - Sep 5, 2025''': Online subjective evaluation.&lt;br /&gt;
* '''Sep 6, 2025''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As a generative task with subjective evaluation, the submission process ''differs greatly'' from other MIREX tasks. There are four important stages:&lt;br /&gt;
# Test set submission &lt;br /&gt;
# Algorithm submission&lt;br /&gt;
# Cherry-picked sample IDs submission &lt;br /&gt;
# Evaluation form submission &lt;br /&gt;
Please check the Important Dates section for the detailed schedule. '''Failure to participate in any of the stages will result in disqualification.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
To be announced later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
To be announced later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14697</id>
		<title>2025:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14697"/>
		<updated>2025-07-09T06:47:25Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Evaluation and Competition Format */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
&lt;br /&gt;
Symbolic music generation covers a wide range of tasks and settings, including varying types of control, generation objectives (e.g., continuation, inpainting), and representations (e.g., score, performance, single- or multi-track). In MIREX, we narrow this scope each year to focus on a specific subtask.&lt;br /&gt;
&lt;br /&gt;
For this year’s challenge, the selected task is '''Piano Music Continuation'''. Given a 4-measure piano prompt (plus an optional pickup measure), the goal is to generate a 12-measure continuation that is musically coherent with the prompt, forming a complete 16-measure piece. All music is assumed to be in 4/4 time and quantized to sixteenth-note resolution. The continuation should match the style of the prompt, which may vary across classical, pop, jazz, or other existing styles. Further details are provided in the following sections.&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
&lt;br /&gt;
Both the input prompt and output generation should be stored in JSON format. Specifically, music is represented by a list of notes, which contains &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes. &lt;br /&gt;
&lt;br /&gt;
The prompt is stored under the key &amp;lt;code&amp;gt;prompt&amp;lt;/code&amp;gt; and lasts 5 measures (the first measure is the pickup measure). Below is an example prompt:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;prompt&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 72,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 6&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 57,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 14&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The generation is stored under the key &amp;lt;code&amp;gt;generation&amp;lt;/code&amp;gt; and lasts 12 measures. Below is an example generation:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Generation&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;generation&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the above examples, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes are counted in sixteenth notes. Since the data is assumed to be in 4/4 meter and quantized to sixteenth-note resolution, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of the prompt should range from 0 to 79 (0-15 is the pickup measure) and the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of the generation should range from 80 to 271. The &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; property of a note should be an integer from 0 to 127, corresponding to a MIDI pitch number.&lt;br /&gt;
&lt;br /&gt;
=Evaluation and Competition Format=&lt;br /&gt;
&lt;br /&gt;
We will evaluate the submitted algorithms through an '''online subjective double-blind test'''. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two prompts.''' The organizer team will supplement the prompts if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* '''We welcome both challenge participants and non-participants to submit plans for objective evaluation.''' Evaluation methods may be incorporated as reference benchmarks and could inform the development of future evaluation metrics.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''8 continuations''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, subjects will first listen to the prompt and then to the generated samples, presented in randomized order.&lt;br /&gt;
* Subjects will be asked to rate each continuation based on the following criteria:&lt;br /&gt;
:* Coherency to the prompt (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Structuredness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
&lt;br /&gt;
* '''Aug 15, 2025''': Submit two prompts as a part of the test set. &lt;br /&gt;
* '''Aug 21, 2025''': Submit the main algorithm.&lt;br /&gt;
* '''Aug 26, 2025''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Aug 28, 2025''': Submit the cherry-picked sample IDs.&lt;br /&gt;
* '''Aug 30 - Sep 5, 2025''': Online subjective evaluation.&lt;br /&gt;
* '''Sep 6, 2025''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As a generative task with subjective evaluation, the submission process ''differs greatly'' from other MIREX tasks. There are four important stages:&lt;br /&gt;
# Test set submission &lt;br /&gt;
# Algorithm submission&lt;br /&gt;
# Cherry-picked sample IDs submission &lt;br /&gt;
# Evaluation form submission &lt;br /&gt;
Please check the Important Dates section for the detailed schedule. '''Failure to participate in any of the stages will result in disqualification.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
To be announced later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
To be announced later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14696</id>
		<title>2025:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14696"/>
		<updated>2025-07-09T06:47:01Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Evaluation and Competition Format */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
&lt;br /&gt;
Symbolic music generation covers a wide range of tasks and settings, including varying types of control, generation objectives (e.g., continuation, inpainting), and representations (e.g., score, performance, single- or multi-track). In MIREX, we narrow this scope each year to focus on a specific subtask.&lt;br /&gt;
&lt;br /&gt;
For this year’s challenge, the selected task is '''Piano Music Continuation'''. Given a 4-measure piano prompt (plus an optional pickup measure), the goal is to generate a 12-measure continuation that is musically coherent with the prompt, forming a complete 16-measure piece. All music is assumed to be in 4/4 time and quantized to sixteenth-note resolution. The continuation should match the style of the prompt, which may vary across classical, pop, jazz, or other existing styles. Further details are provided in the following sections.&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
&lt;br /&gt;
Both the input prompt and output generation should be stored in JSON format. Specifically, music is represented by a list of notes, which contains &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes. &lt;br /&gt;
&lt;br /&gt;
The prompt is stored under the key &amp;lt;code&amp;gt;prompt&amp;lt;/code&amp;gt; and lasts 5 measures (the first measure is the pickup measure). Below is an example prompt:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;prompt&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 72,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 6&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 57,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 14&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The generation is stored under the key &amp;lt;code&amp;gt;generation&amp;lt;/code&amp;gt; and lasts 12 measures. Below is an example generation:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Generation&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;generation&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the above examples, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes are counted in sixteenth notes. Since the data is assumed to be in 4/4 meter and quantized to sixteenth-note resolution, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of the prompt should range from 0 to 79 (0-15 is the pickup measure) and the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of the generation should range from 80 to 271. The &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; property of a note should be an integer from 0 to 127, corresponding to a MIDI pitch number.&lt;br /&gt;
&lt;br /&gt;
=Evaluation and Competition Format=&lt;br /&gt;
&lt;br /&gt;
We will evaluate the submitted algorithms through an '''online subjective double-blind test'''. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two prompts.''' The organizer team will supplement the prompts if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* '''We welcome both challenge participants and non-participants to submit plans for objective evaluation.''' Evaluation methods may be incorporated as reference benchmarks and could inform the development of future evaluation metrics.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''8 continuations''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, subjects will first listen to the prompt and then to the generated samples, presented in randomized order.&lt;br /&gt;
* Subjects will be asked to rate each continuation based on the following criteria:&lt;br /&gt;
:* Coherency to the prompt (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Structuredness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
&lt;br /&gt;
* '''Aug 15, 2025''': Submit two prompts as a part of the test set. &lt;br /&gt;
* '''Aug 21, 2025''': Submit the main algorithm.&lt;br /&gt;
* '''Aug 26, 2025''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Aug 28, 2025''': Submit the cherry-picked sample IDs.&lt;br /&gt;
* '''Aug 30 - Sep 5, 2025''': Online subjective evaluation.&lt;br /&gt;
* '''Sep 6, 2025''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As a generative task with subjective evaluation, the submission process ''differs greatly'' from other MIREX tasks. There are four important stages:&lt;br /&gt;
# Test set submission &lt;br /&gt;
# Algorithm submission&lt;br /&gt;
# Cherry-picked sample IDs submission &lt;br /&gt;
# Evaluation form submission &lt;br /&gt;
Please check the Important Dates section for the detailed schedule. '''Failure to participate in any of the stages will result in disqualification.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
To be announced later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
To be announced later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14695</id>
		<title>2025:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14695"/>
		<updated>2025-07-09T06:35:39Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Data Example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
&lt;br /&gt;
Symbolic music generation covers a wide range of tasks and settings, including varying types of control, generation objectives (e.g., continuation, inpainting), and representations (e.g., score, performance, single- or multi-track). In MIREX, we narrow this scope each year to focus on a specific subtask.&lt;br /&gt;
&lt;br /&gt;
For this year’s challenge, the selected task is '''Piano Music Continuation'''. Given a 4-measure piano prompt (plus an optional pickup measure), the goal is to generate a 12-measure continuation that is musically coherent with the prompt, forming a complete 16-measure piece. All music is assumed to be in 4/4 time and quantized to sixteenth-note resolution. The continuation should match the style of the prompt, which may vary across classical, pop, jazz, or other existing styles. Further details are provided in the following sections.&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
&lt;br /&gt;
Both the input prompt and output generation should be stored in JSON format. Specifically, music is represented by a list of notes, which contains &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes. &lt;br /&gt;
&lt;br /&gt;
The prompt is stored under the key &amp;lt;code&amp;gt;prompt&amp;lt;/code&amp;gt; and lasts 5 measures (the first measure is the pickup measure). Below is an example prompt:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;prompt&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 72,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 6&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 57,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 14&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The generation is stored under the key &amp;lt;code&amp;gt;generation&amp;lt;/code&amp;gt; and lasts 12 measures. Below is an example generation:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Generation&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;generation&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the above examples, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes are counted in sixteenth notes. Since the data is assumed to be in 4/4 meter and quantized to sixteenth-note resolution, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of the prompt should range from 0 to 79 (0-15 is the pickup measure) and the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of the generation should range from 80 to 271. The &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; property of a note should be an integer from 0 to 127, corresponding to a MIDI pitch number.&lt;br /&gt;
&lt;br /&gt;
=Evaluation and Competition Format=&lt;br /&gt;
We will evaluate the submitted algorithms through an online subjective double-blind test. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two prompts.''' The organizer team will supplement the prompts if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* We hope to compute some objective measurements as well, but these will only be reported as a reference.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''16 continuations''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, subjects will first listen to the prompt and then to the generated samples, presented in randomized order.&lt;br /&gt;
* Subjects will be asked to rate each continuation based on the following criteria:&lt;br /&gt;
:* Coherency (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Structuredness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Important Dates (Tentative)==&lt;br /&gt;
&lt;br /&gt;
* '''Aug 7, 2025''': Submit two lead sheets as a part of the test set. &lt;br /&gt;
* '''Aug 15, 2025''': Submit the main algorithm.&lt;br /&gt;
* '''Aug 20, 2025''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Aug 24, 2025''': Submit the cherry-picked sample IDs.&lt;br /&gt;
* '''Aug 30 - Sep 5, 2025''': Online subjective evaluation.&lt;br /&gt;
* '''Sep 6, 2025''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As a generative task with subjective evaluation, the submission process ''differs greatly'' from other MIREX tasks. There are four important stages:&lt;br /&gt;
# Test set submission &lt;br /&gt;
# Algorithm submission&lt;br /&gt;
# Cherry-picked sample IDs submission &lt;br /&gt;
# Evaluation form submission &lt;br /&gt;
Please check the Important Dates section for the detailed schedule. '''Failure to participate in any of the stages will result in disqualification.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
To be announced later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
To be announced later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14694</id>
		<title>2025:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14694"/>
		<updated>2025-07-09T06:35:22Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Data Format */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
&lt;br /&gt;
Symbolic music generation covers a wide range of tasks and settings, including varying types of control, generation objectives (e.g., continuation, inpainting), and representations (e.g., score, performance, single- or multi-track). In MIREX, we narrow this scope each year to focus on a specific subtask.&lt;br /&gt;
&lt;br /&gt;
For this year’s challenge, the selected task is '''Piano Music Continuation'''. Given a 4-measure piano prompt (plus an optional pickup measure), the goal is to generate a 12-measure continuation that is musically coherent with the prompt, forming a complete 16-measure piece. All music is assumed to be in 4/4 time and quantized to sixteenth-note resolution. The continuation should match the style of the prompt, which may vary across classical, pop, jazz, or other existing styles. Further details are provided in the following sections.&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
&lt;br /&gt;
Both the input prompt and the output generation should be stored in JSON format. Specifically, music is represented as a list of notes, each of which contains &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes. &lt;br /&gt;
&lt;br /&gt;
The prompt is stored under the key &amp;lt;code&amp;gt;prompt&amp;lt;/code&amp;gt; and lasts 5 measures (the first measure is the pickup measure). Below is an example prompt:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;prompt&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 72,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 6&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 16,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 57,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 14&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The generation is stored under the key &amp;lt;code&amp;gt;generation&amp;lt;/code&amp;gt; and lasts 12 measures. Below is an example generation:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Generation&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;generation&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;start&amp;quot;: 80,&lt;br /&gt;
      &amp;quot;pitch&amp;quot;: 40,&lt;br /&gt;
      &amp;quot;duration&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the above examples, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes are counted in sixteenth notes. Since the data is assumed to be in 4/4 meter and quantized to sixteenth-note resolution, the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of a prompt note should range from 0 to 79 (steps 0-15 form the pickup measure) and the &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; of a generated note should range from 80 to 271. The &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; property of a note should be an integer ranging from 0 to 127, corresponding to the MIDI pitch number.&lt;br /&gt;
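&lt;br /&gt;
For reference, below is a minimal, unofficial Python sketch showing how a generated file could be checked against these ranges. The function name and the file name are illustrative only and are not part of the task specification.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import json&lt;br /&gt;
&lt;br /&gt;
def check_generation(path):&lt;br /&gt;
    # Load the submission and read the required key.&lt;br /&gt;
    with open(path) as f:&lt;br /&gt;
        notes = json.load(f)['generation']&lt;br /&gt;
    for note in notes:&lt;br /&gt;
        start, pitch, dur = note['start'], note['pitch'], note['duration']&lt;br /&gt;
        # Generated notes start after the 5-measure prompt: steps 80-271.&lt;br /&gt;
        assert 80 &amp;lt;= start &amp;lt;= 271, f'start {start} outside 80-271'&lt;br /&gt;
        assert 0 &amp;lt;= pitch &amp;lt;= 127, f'pitch {pitch} outside MIDI range 0-127'&lt;br /&gt;
        assert dur &amp;gt;= 1, f'non-positive duration {dur}'&lt;br /&gt;
&lt;br /&gt;
check_generation('sample_01.json')  # illustrative file name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;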
&lt;br /&gt;
=Data Example=&lt;br /&gt;
Below is an example of the input prompt in the format given above. The prompt is the melody of the first phrase of ''Hey Jude'' by The Beatles.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;prompt&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 12, &amp;quot;pitch&amp;quot;: 72, &amp;quot;duration&amp;quot;: 4},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 69, &amp;quot;duration&amp;quot;: 8},&lt;br /&gt;
    ...&lt;br /&gt;
  ],&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is an example of the generated continuation. Note that the generation starts at time step 80, immediately after the prompt.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;generation&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 80, &amp;quot;pitch&amp;quot;: 41, &amp;quot;duration&amp;quot;: 12},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 80, &amp;quot;pitch&amp;quot;: 65, &amp;quot;duration&amp;quot;: 5},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Evaluation and Competition Format=&lt;br /&gt;
We will evaluate the submitted algorithms through an online subjective double-blind test. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two prompts.''' The organizer team will supplement the prompts if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* We hope to compute some objective measurements as well, but these will only be reported as a reference.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''16 continuations''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, we will first ask the subjects to listen to the prompt and then to the generated samples, presented in random order.&lt;br /&gt;
* The subjects will be asked to rate each continuation based on the following criteria:&lt;br /&gt;
:* Coherency (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Structuredness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Important Dates (Tentative)==&lt;br /&gt;
&lt;br /&gt;
* '''Aug 7, 2025''': Submit two prompts as a part of the test set. &lt;br /&gt;
* '''Aug 15, 2025''': Submit the main algorithm.&lt;br /&gt;
* '''Aug 20, 2025''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Aug 24, 2025''': Submit the cherry-picked sample ids.&lt;br /&gt;
* '''Aug 30 - Sep 5, 2025''': Online subjective evaluation.&lt;br /&gt;
* '''Sep 6, 2025''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As a generative task with subjective evaluation, the submission process ''differs greatly'' from other MIREX tasks. There are four important stages:&lt;br /&gt;
# Test set submission &lt;br /&gt;
# Algorithm submission&lt;br /&gt;
# Cherry-picked sample IDs submission &lt;br /&gt;
# Evaluation form submission &lt;br /&gt;
Please check the Important Dates section for the detailed schedule. '''Failure to participate in any of the stages will result in disqualification.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
To be announced later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
To be announced later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14693</id>
		<title>2025:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14693"/>
		<updated>2025-07-09T06:14:31Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
&lt;br /&gt;
Symbolic music generation covers a wide range of tasks and settings, including varying types of control, generation objectives (e.g., continuation, inpainting), and representations (e.g., score, performance, single- or multi-track). In MIREX, we narrow this scope each year to focus on a specific subtask.&lt;br /&gt;
&lt;br /&gt;
For this year’s challenge, the selected task is '''Piano Music Continuation'''. Given a 4-measure piano prompt (plus an optional pickup measure), the goal is to generate a 12-measure continuation that is musically coherent with the prompt, forming a complete 16-measure piece. All music is assumed to be in 4/4 time and quantized to sixteenth-note resolution. The continuation should match the style of the prompt, which may vary across classical, pop, jazz, or other existing styles. Further details are provided in the following sections.&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
The input prompt consists of 4 bars of piano music, with an additional mandatory pickup measure (left blank if not used). The data is prepared in JSON format containing one property, &amp;lt;code&amp;gt;prompt&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;prompt&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The output generation should also follow the JSON format containing one property &amp;lt;code&amp;gt;output&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;output&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.''' &lt;br /&gt;
&lt;br /&gt;
# The data is assumed to be in 4/4 meter, quantized to a sixteenth-note resolution. For both prompt and output, onsets and durations are counted in sixteenth notes. &lt;br /&gt;
# Both onsets and durations are integers ranging from 0 to 17 * 16 - 1 = 271. Notes that end later than the 17 * 16 = 272nd time step (the end of the piece, counting the pickup measure) will be truncated to the end of the piece. &lt;br /&gt;
# The pickup measure of the prompt should be left blank if not used. &lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The pitch property of a note should be an integer ranging from 0 to 127, corresponding to the MIDI pitch number.&lt;br /&gt;
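&lt;br /&gt;
As an unofficial illustration of the truncation rule above, the short Python sketch below clips notes so that they never extend past time step 272; the helper name is ours and not part of the specification.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
def truncate_notes(notes, end_step=17 * 16):&lt;br /&gt;
    # Clip each note so that start + duration never exceeds step 272&lt;br /&gt;
    # (one pickup measure plus sixteen 4/4 measures at sixteenth-note resolution).&lt;br /&gt;
    clipped = []&lt;br /&gt;
    for note in notes:&lt;br /&gt;
        duration = min(note['duration'], end_step - note['start'])&lt;br /&gt;
        if duration &amp;gt; 0:&lt;br /&gt;
            clipped.append({**note, 'duration': duration})&lt;br /&gt;
    return clipped&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;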
&lt;br /&gt;
&lt;br /&gt;
=Data Example=&lt;br /&gt;
Below is an example of the input prompt in the format given above. The prompt is the melody of the first phrase of ''Hey Jude'' by The Beatles.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;prompt&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 12, &amp;quot;pitch&amp;quot;: 72, &amp;quot;duration&amp;quot;: 4},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 69, &amp;quot;duration&amp;quot;: 8},&lt;br /&gt;
    ...&lt;br /&gt;
  ],&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is an example of the generated continuation. Note that the generation starts at time step 80, immediately after the prompt.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;output&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 80, &amp;quot;pitch&amp;quot;: 41, &amp;quot;duration&amp;quot;: 12},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 80, &amp;quot;pitch&amp;quot;: 65, &amp;quot;duration&amp;quot;: 5},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Evaluation and Competition Format=&lt;br /&gt;
We will evaluate the submitted algorithms through an online subjective double-blind test. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two prompts.''' The organizer team will supplement the prompts if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* We hope to compute some objective measurements as well, but these will only be reported as a reference.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''16 continuations''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, we will first ask the subjects to listen to the prompt and then to the generated samples, presented in random order.&lt;br /&gt;
* The subjects will be asked to rate each continuation based on the following criteria:&lt;br /&gt;
:* Coherency (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Structuredness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Important Dates (Tentative)==&lt;br /&gt;
&lt;br /&gt;
* '''Aug 7, 2025''': Submit two prompts as a part of the test set. &lt;br /&gt;
* '''Aug 15, 2025''': Submit the main algorithm.&lt;br /&gt;
* '''Aug 20, 2025''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Aug 24, 2025''': Submit the cherry-picked sample ids.&lt;br /&gt;
* '''Aug 30 - Sep 5, 2025''': Online subjective evaluation.&lt;br /&gt;
* '''Sep 6, 2025''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As a generative task with subjective evaluation, the submission process ''differs greatly'' from other MIREX tasks. There are four important stages:&lt;br /&gt;
# Test set submission &lt;br /&gt;
# Algorithm submission&lt;br /&gt;
# Cherry-picked sample IDs submission &lt;br /&gt;
# Evaluation form submission &lt;br /&gt;
Please check the Important Dates section for the detailed schedule. '''Failure to participate in any of the stages will result in disqualification.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
To be announced later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
To be announced later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14657</id>
		<title>2025:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14657"/>
		<updated>2025-05-30T15:14:39Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
&lt;br /&gt;
This year, the selected task is '''piano music continuation'''. Given a 4-measure piano prompt, the goal is to generate a 12-measure continuation that is musically coherent with the prompt, resulting in a complete 16-measure piece. All music is assumed to be in 4/4 meter, quantized to sixteenth-note resolution, and may belong to any existing piano style, including classical, pop, jazz, and others. Further details on the data structure are provided in the Data Format section.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
The input prompt consists of 4 bars of piano music, with an additional mandatory pickup measure (left blank if not used). The data is prepared in JSON format containing one property, &amp;lt;code&amp;gt;prompt&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;prompt&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The output generation should also follow the JSON format containing one property &amp;lt;code&amp;gt;output&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;output&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.''' &lt;br /&gt;
&lt;br /&gt;
# The data is assumed to be in 4/4 meter, quantized to a sixteenth-note resolution. For both prompt and output, onsets and durations are counted in sixteenth notes. &lt;br /&gt;
# Both onsets and durations are integers ranging from 0 to 17 * 16 - 1 = 271. Notes that end later than the 17 * 16 = 272nd time step (the end of the piece, counting the pickup measure) will be truncated to the end of the piece. &lt;br /&gt;
# The pickup measure of the prompt should be left blank if not used. &lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The pitch property of a note should be an integer ranging from 0 to 127, corresponding to the MIDI pitch number.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Data Example=&lt;br /&gt;
Below is an example of the input prompt in the format given above. The prompt is the melody of the first phrase of ''Hey Jude'' by The Beatles.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;prompt&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 12, &amp;quot;pitch&amp;quot;: 72, &amp;quot;duration&amp;quot;: 4},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 69, &amp;quot;duration&amp;quot;: 8},&lt;br /&gt;
    ...&lt;br /&gt;
  ],&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is an example of the generated continuation. Note that the generation starts at time step 80, immediately after the prompt.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;output&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 80, &amp;quot;pitch&amp;quot;: 41, &amp;quot;duration&amp;quot;: 12},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 80, &amp;quot;pitch&amp;quot;: 65, &amp;quot;duration&amp;quot;: 5},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Evaluation and Competition Format=&lt;br /&gt;
We will evaluate the submitted algorithms through an online subjective double-blind test. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two prompts.''' The organizer team will supplement the prompts if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* We hope to compute some objective measurements as well, but these will only be reported as a reference.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''16 continuations''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, we will first ask the subjects to listen to the prompt and then to the generated samples, presented in random order.&lt;br /&gt;
* The subjects will be asked to rate each continuation based on the following criteria:&lt;br /&gt;
:* Coherency (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Structuredness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Important Dates (Tentative)==&lt;br /&gt;
&lt;br /&gt;
* '''Aug 7, 2025''': Submit two prompts as a part of the test set. &lt;br /&gt;
* '''Aug 15, 2025''': Submit the main algorithm.&lt;br /&gt;
* '''Aug 20, 2025''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Aug 24, 2025''': Submit the cherry-picked sample ids.&lt;br /&gt;
* '''Aug 30 - Sep 5, 2025''': Online subjective evaluation.&lt;br /&gt;
* '''Sep 6, 2025''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As a generative task with subjective evaluation, the submission process ''differs greatly'' from other MIREX tasks. There are four important stages:&lt;br /&gt;
# Test set submission &lt;br /&gt;
# Algorithm submission&lt;br /&gt;
# Cherry-picked sample IDs submission &lt;br /&gt;
# Evaluation form submission &lt;br /&gt;
Please check the Important Dates section for the detailed schedule. '''Failure to participate in any of the stages will result in disqualification.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
To be announced later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
To be announced later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14656</id>
		<title>2025:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:Symbolic_Music_Generation&amp;diff=14656"/>
		<updated>2025-05-30T14:08:15Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: Created page with &amp;quot;=Description= Symbolic music generation is a broad topic. It covers a wide range of tasks, including generation, harmonization, arrangement, instrumentation, and more. We have...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation is a broad topic. It covers a wide range of tasks, including generation, harmonization, arrangement, instrumentation, and more. We have multiple ways to represent music data, and the evaluation metrics also vary. To define a MIREX challenge within this topic, we need to narrow our focus to specific subtasks that are both relevant to the community and feasible to evaluate effectively.&lt;br /&gt;
&lt;br /&gt;
This year, we select the task to be '''piano accompaniment arrangement from a lead sheet'''. The lead sheet provides information about the melody and the chord progression. The goal is to generate a piano accompaniment that complements the lead melody. The music data consists of 8-measure segments in 4/4 meter, quantized to a sixteenth-note resolution. A more detailed description of the data structure is provided in the data format section. The genre of the lead sheets is broadly within western pop music (refer to the music examples for more detail).&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
The input lead sheet consists of 8 bars for the melody and harmony, with an additional mandatory pickup measure (left blank if not used). The data is prepared in JSON format containing two properties: &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;: a list of chords. Each chord contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The output generation should also follow the JSON format containing one property &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.''' &lt;br /&gt;
&lt;br /&gt;
# The data is assumed to be in 4/4 meter, quantized to a sixteenth-note resolution. For both melody and chords, onsets and durations are counted in sixteenth notes. &lt;br /&gt;
# Both onsets and durations are integers ranging from 0 to 9 * 16 - 1 = 143. Notes that end later than the ninth measure (i.e., 9 * 16 = 144th time step) will be truncated to the end of the ninth measure. &lt;br /&gt;
# Melody notes are not allowed to overlap with one another. &lt;br /&gt;
# There should be no gaps or overlaps between chords. Chords must follow one another directly. If there is a blank space where no chord is played, it must be filled with the &amp;lt;code&amp;gt;N&amp;lt;/code&amp;gt; chord. &lt;br /&gt;
# The accompaniment of the pick-up measure should be blank. &lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The pitch property of a note should be an integer ranging from 0 to 127, corresponding to the MIDI pitch number.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the chord &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The symbol property of a chord should be a string following the syntax of (Harte, 2010). In other words, each chord string should be accepted by mir_eval.chord.encode() without raising an error.&lt;br /&gt;
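&lt;br /&gt;
As a rough, unofficial illustration, the Python sketch below validates the chord list: each symbol is parsed with mir_eval, and consecutive chords are checked for gaps or overlaps. It is not the evaluation code.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import mir_eval&lt;br /&gt;
&lt;br /&gt;
def check_chords(chords):&lt;br /&gt;
    # Every symbol must follow Harte (2010) syntax; encode() raises otherwise.&lt;br /&gt;
    for chord in chords:&lt;br /&gt;
        mir_eval.chord.encode(chord['symbol'])&lt;br /&gt;
    # Chords must tile the timeline with no gaps and no overlaps.&lt;br /&gt;
    chords = sorted(chords, key=lambda c: c['start'])&lt;br /&gt;
    for prev, cur in zip(chords, chords[1:]):&lt;br /&gt;
        assert prev['start'] + prev['duration'] == cur['start'], 'gap or overlap'&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;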
&lt;br /&gt;
&lt;br /&gt;
=Data Example=&lt;br /&gt;
Below is an example of the input lead sheet in the format given above. The lead sheet is the melody of the first phrase of ''Hey Jude'' by The Beatles.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;melody&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 12, &amp;quot;pitch&amp;quot;: 72, &amp;quot;duration&amp;quot;: 4},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 69, &amp;quot;duration&amp;quot;: 8},&lt;br /&gt;
    ...&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;chords&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 0, &amp;quot;symbol&amp;quot;: &amp;quot;N&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;symbol&amp;quot;: &amp;quot;F&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is an example of the generated accompaniment. The accompaniment is generated using the baseline method WholeSongGen introduced below. Note that the generation starts from the second measure (time step 16).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;acc&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 41, &amp;quot;duration&amp;quot;: 12},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 65, &amp;quot;duration&amp;quot;: 5},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Full data examples can be accessed in [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main/generation_samples this code repository]. MIDI conversion code and MIDI demos are also provided there.&lt;br /&gt;
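&lt;br /&gt;
For convenience, here is a minimal, unofficial sketch of converting the JSON note lists to MIDI with the pretty_midi package, assuming a fixed tempo of 120 BPM so that one sixteenth note lasts 0.125 seconds; the official conversion code is the one provided in the repository above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import json&lt;br /&gt;
import pretty_midi&lt;br /&gt;
&lt;br /&gt;
STEP = 0.125  # seconds per sixteenth note at the assumed tempo of 120 BPM&lt;br /&gt;
&lt;br /&gt;
def json_to_midi(path, out_path='example.mid'):&lt;br /&gt;
    with open(path) as f:&lt;br /&gt;
        data = json.load(f)&lt;br /&gt;
    pm = pretty_midi.PrettyMIDI()&lt;br /&gt;
    piano = pretty_midi.Instrument(program=0)  # acoustic grand piano&lt;br /&gt;
    # Merge the melody and accompaniment note lists into one piano track.&lt;br /&gt;
    for note in data.get('melody', []) + data.get('acc', []):&lt;br /&gt;
        piano.notes.append(pretty_midi.Note(&lt;br /&gt;
            velocity=80,&lt;br /&gt;
            pitch=note['pitch'],&lt;br /&gt;
            start=note['start'] * STEP,&lt;br /&gt;
            end=(note['start'] + note['duration']) * STEP))&lt;br /&gt;
    pm.instruments.append(piano)&lt;br /&gt;
    pm.write(out_path)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;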
&lt;br /&gt;
&lt;br /&gt;
=Evaluation and Competition Format=&lt;br /&gt;
We will evaluate the submitted algorithms through an online subjective double-blind test. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two lead sheets.''' The organizer team will supplement the lead sheet if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* We hope to compute some objective measurements as well, but these will only be reported as a reference.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''16 arrangements''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, we will first ask the subjects to listen to the lead melody with chords and then to the generated samples, presented in random order.&lt;br /&gt;
* The subject will be asked to rate each arrangement based on the following criteria:&lt;br /&gt;
:* Harmony correctness (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Naturalness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Objective Measurements==&lt;br /&gt;
* We will use objective measurements only as a reference; the correlation between subjective and objective scores will also be reported. &lt;br /&gt;
* The current plan is to compute the negative log-likelihood of the generated samples under a large music language model (e.g., Lu et al., 2023).&lt;br /&gt;
* We welcome proposals for additional objective measurements.&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
&lt;br /&gt;
* '''Oct 8, 2024''': Submit two lead sheets as a part of the test set. &lt;br /&gt;
* '''Oct 15, 2024''': Submit the main algorithm.&lt;br /&gt;
* '''Oct 22, 2024''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Oct 25, 2024''': Submit the cherry-picked sample ids.&lt;br /&gt;
* '''Oct 31 - Nov 3, 2024''': Online subjective evaluation.&lt;br /&gt;
* '''Nov 5, 2024''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As a generative task with subjective evaluation, the submission process ''differs greatly'' from other MIREX tasks. There are four important stages:&lt;br /&gt;
# Test set submission (Oct 8, 2024)&lt;br /&gt;
# Algorithm submission (Oct 15, 2024)&lt;br /&gt;
# Cherry-picked sample IDs submission (Oct 25, 2024)&lt;br /&gt;
# Evaluation form submission (Nov 3, 2024)&lt;br /&gt;
Please check the Important Dates section for the detailed schedule. '''Failure to participate in any of the stages will result in disqualification.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;batch_acc_gen.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use the script to generate output files according to the following format:&lt;br /&gt;
&lt;br /&gt;
'''Usage'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
batch_acc_gen.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: Path to the input .json file.&lt;br /&gt;
* Output Folder: Path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: Number of samples to generate.&lt;br /&gt;
&lt;br /&gt;
'''Output'''&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
Participants are free to implement the internal logic of the script, but it must adhere to this format for proper execution during the evaluation process.&lt;br /&gt;
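&lt;br /&gt;
For example, &amp;lt;code&amp;gt;batch_acc_gen.sh&amp;lt;/code&amp;gt; could simply dispatch to a Python entry point. The unofficial sketch below only illustrates the argument handling and the required output naming; &amp;lt;code&amp;gt;generate_accompaniment&amp;lt;/code&amp;gt; is a placeholder for the participant's own model.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import json&lt;br /&gt;
import os&lt;br /&gt;
import sys&lt;br /&gt;
&lt;br /&gt;
def main(input_path, output_folder, n_sample):&lt;br /&gt;
    os.makedirs(output_folder, exist_ok=True)&lt;br /&gt;
    with open(input_path) as f:&lt;br /&gt;
        lead_sheet = json.load(f)&lt;br /&gt;
    for i in range(1, n_sample + 1):&lt;br /&gt;
        acc = generate_accompaniment(lead_sheet)  # placeholder for your model&lt;br /&gt;
        out_path = os.path.join(output_folder, f'sample_{i:02d}.json')&lt;br /&gt;
        with open(out_path, 'w') as f:&lt;br /&gt;
            json.dump({'acc': acc}, f)&lt;br /&gt;
&lt;br /&gt;
if __name__ == '__main__':&lt;br /&gt;
    main(sys.argv[1], sys.argv[2], int(sys.argv[3]))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;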
&lt;br /&gt;
'''Packaging Submissions'''&lt;br /&gt;
* Every submission must be packed into a docker image&lt;br /&gt;
* Every submission will be deployed and evaluated automatically with &amp;lt;code&amp;gt;docker run&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Accepted submission form'''&lt;br /&gt;
* Link to public or private Github repository&lt;br /&gt;
* Link to public or private docker hub&lt;br /&gt;
* Shared google drive links&lt;br /&gt;
* If the repository is private, an access token is also required&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
To establish a benchmark for this task, we consider the three baseline models in their official implementations:&lt;br /&gt;
&lt;br /&gt;
'''WholeSongGen''' (Wang et al., 2024)&lt;br /&gt;
* A denoising diffusion probabilistic model (DDPM) generating piano accompaniments as piano-roll images.&lt;br /&gt;
&lt;br /&gt;
'''Compose &amp;amp; Embellish''' (Wu and Yang, 2023)&lt;br /&gt;
* A Transformer-based architecture generating piano performances in beat-based event sequences.&lt;br /&gt;
&lt;br /&gt;
'''AccoMontage''' (Zhao and Xia, 2021)&lt;br /&gt;
* A hybrid algorithm generating piano accompaniments by rule-based search and music representation learning.&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
* Harte, C. Towards automatic extraction of harmony information from music signals. PhD Diss. 2010.&lt;br /&gt;
* Lu, P., et al. Musecoco: Generating symbolic music from text. arXiv preprint arXiv:2306.00110 (2023).&lt;br /&gt;
* Wang, Z., et al. Whole-song hierarchical generation of symbolic music using cascaded diffusion models, in ICLR 2024.&lt;br /&gt;
* Wu, S.-L., &amp;amp; Yang, Y.-H. Compose &amp;amp; Embellish: Well-structured piano performance generation via a two-stage approach, in ICASSP 2023.&lt;br /&gt;
* Zhao, J., &amp;amp; Xia, G. Accomontage: Accompaniment arrangement via phrase selection and style transfer, in ISMIR 2021.&lt;br /&gt;
&lt;br /&gt;
* Code and data format samples: [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main]&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13860</id>
		<title>2024:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13860"/>
		<updated>2024-09-15T14:21:11Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation is a broad topic. It covers a wide range of tasks, including generation, harmonization, arrangement, instrumentation, and more. We have multiple ways to represent music data, and the evaluation metrics also vary. To define a MIREX challenge within this topic, we need to narrow our focus to specific subtasks that are both relevant to the community and feasible to evaluate effectively.&lt;br /&gt;
&lt;br /&gt;
This year, we select the task to be '''piano accompaniment arrangement from a lead sheet'''. The lead sheet provides information about the melody and the chord progression. The goal is to generate a piano accompaniment that complements the lead melody. The music data consists of 8-measure segments in 4/4 meter, quantized to a sixteenth-note resolution. A more detailed description of the data structure is provided in the data format section. The genre of the lead sheets is broadly within western pop music (refer to the music examples for more detail).&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
The input lead sheet consists of 8 bars for the melody and harmony, with an additional mandatory pickup measure (left blank if not used). The data is prepared in JSON format containing two properties: &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;: a list of chords. Each chord contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The output generation should also follow the JSON format containing one property &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.''' &lt;br /&gt;
&lt;br /&gt;
# The data is assumed to be in 4/4 meter, quantized to a sixteenth-note resolution. For both melody and chords, onsets and durations are counted in sixteenth notes. &lt;br /&gt;
# Both onsets and durations are integers ranging from 0 to 9 * 16 - 1 = 143. Notes that end later than the ninth measure (i.e., 9 * 16 = 144th time step) will be truncated to the end of the ninth measure. &lt;br /&gt;
# Melody notes are not allowed to overlap with one another. &lt;br /&gt;
# There should be no gaps or overlaps between chords. Chords must follow one another directly. If there is a blank space where no chord is played, it must be filled with the &amp;lt;code&amp;gt;N&amp;lt;/code&amp;gt; chord. &lt;br /&gt;
# The accompaniment of the pick-up measure should be blank. &lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The pitch property of a note should be an integer ranging from 0 to 127, corresponding to the MIDI pitch number.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the chord &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The symbol property of a chord should be a string following the syntax of (Harte, 2010). In other words, each chord string should be accepted by mir_eval.chord.encode() without raising an error.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Data Example=&lt;br /&gt;
Below is an example of the input lead sheet in the format given above. The lead sheet is the melody of the first phrase of ''Hey Jude'' by The Beatles.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;melody&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 12, &amp;quot;pitch&amp;quot;: 72, &amp;quot;duration&amp;quot;: 4},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 69, &amp;quot;duration&amp;quot;: 8},&lt;br /&gt;
    ...&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;chords&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 0, &amp;quot;symbol&amp;quot;: &amp;quot;N&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;symbol&amp;quot;: &amp;quot;F&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is an example of the generated accompaniment. The accompaniment is generated using the baseline method WholeSongGen introduced below. Note that the generation starts from the second measure (time step 16).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;acc&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 41, &amp;quot;duration&amp;quot;: 12},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 65, &amp;quot;duration&amp;quot;: 5},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Full data examples can be accessed in [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main/generation_samples this code repository]. MIDI conversion code and MIDI demos are also provided there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Evaluation and Competition Format=&lt;br /&gt;
We will evaluate the submitted algorithms through an online subjective double-blind test. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two lead sheets.''' The organizer team will supplement the lead sheet if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* We hope to compute some objective measurements as well, but these will only be reported as a reference.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''16 arrangements''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, we will first ask the subjects to listen to the lead melody with chords and then to the generated samples, presented in random order.&lt;br /&gt;
* The subject will be asked to rate each arrangement based on the following criteria:&lt;br /&gt;
:* Harmony correctness (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Naturalness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Objective Measurements==&lt;br /&gt;
* We will use objective measurements only as a reference; the correlation between subjective and objective scores will also be reported. &lt;br /&gt;
* The current plan is to compute the negative log-likelihood of the generated samples under a large music language model (e.g., Lu et al., 2023).&lt;br /&gt;
* We welcome proposals for additional objective measurements.&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
&lt;br /&gt;
* '''Oct 8, 2024''': Submit two lead sheets as a part of the test set. &lt;br /&gt;
* '''Oct 15, 2024''': Submit the main algorithm.&lt;br /&gt;
* '''Oct 22, 2024''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Oct 25, 2024''': Submit the cherry-picked sample ids.&lt;br /&gt;
* '''Oct 31 - Nov 3, 2024''': Online subjective evaluation.&lt;br /&gt;
* '''Nov 5, 2024''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As a generative task with subjective evaluation, the submission process ''differs greatly'' from other MIREX tasks. There are four important stages:&lt;br /&gt;
# Test set submission (Oct 8, 2024)&lt;br /&gt;
# Algorithm submission (Oct 15, 2024)&lt;br /&gt;
# Cherry-picked sample IDs submission (Oct 25, 2024)&lt;br /&gt;
# Evaluation form submission (Nov 3, 2024)&lt;br /&gt;
Please check the Important Dates section for the detailed schedule. '''Failure to participate in any of the stages will result in disqualification.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;batch_acc_gen.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use the script to generate output files according to the following format:&lt;br /&gt;
&lt;br /&gt;
'''Usage'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
batch_acc_gen.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: Path to the input .json file.&lt;br /&gt;
* Output Folder: Path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: Number of samples to generate.&lt;br /&gt;
&lt;br /&gt;
'''Output'''&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
Participants are free to implement the internal logic of the script, but it must adhere to this format for proper execution during the evaluation process.&lt;br /&gt;
&lt;br /&gt;
'''Packaging Submissions'''&lt;br /&gt;
* Every submission must be packed into a docker image&lt;br /&gt;
* Every submission will be deployed and evaluated automatically with &amp;lt;code&amp;gt;docker run&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Accepted submission form'''&lt;br /&gt;
* Link to public or private Github repository&lt;br /&gt;
* Link to public or private docker hub&lt;br /&gt;
* Shared google drive links&lt;br /&gt;
* If the repository is private, an access token is also required&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
'''WholeSongGen''' (Wang et al., 2023)&lt;br /&gt;
&lt;br /&gt;
This model generates accompaniments using a diffusion model.&lt;br /&gt;
&lt;br /&gt;
'''AccoMontage''' (Zhao and Xia, 2021)&lt;br /&gt;
&lt;br /&gt;
This algorithm generates accompaniment using a combination of rule-based search and deep representation learning.&lt;br /&gt;
&lt;br /&gt;
'''Anticipatory Music Transformer''' (Thickstun et al., 2023)&lt;br /&gt;
&lt;br /&gt;
This model generates accompaniments using a Transformer-based architecture.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
* Harte, C. (2010). Towards automatic extraction of harmony information from music signals (Doctoral dissertation).&lt;br /&gt;
* Lu, P., Xu, X., Kang, C., Yu, B., Xing, C., Tan, X., &amp;amp; Bian, J. (2023). Musecoco: Generating symbolic music from text. arXiv preprint arXiv:2306.00110.&lt;br /&gt;
* Wang, Z., Min, L., &amp;amp; Xia, G. Whole-Song Hierarchical Generation of Symbolic Music Using Cascaded Diffusion Models. In The Twelfth International Conference on Learning Representations.&lt;br /&gt;
* Jingwei Zhao, &amp;amp; Gus Xia (2021). AccoMontage: Accompaniment Arrangement via Phrase Selection and Style Transfer. In Proceedings of the 22nd International Society for Music Information Retrieval Conference, ISMIR 2021, Online, November 7-12, 2021 (pp. 833–840).&lt;br /&gt;
* Thickstun, J., Hall, D., Donahue, C., &amp;amp; Liang, P. (2023). Anticipatory music transformer. arXiv preprint arXiv:2306.08620.&lt;br /&gt;
* Code and data format samples: [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main]&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13859</id>
		<title>2024:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13859"/>
		<updated>2024-09-15T10:18:30Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation is a broad topic. It covers a wide range of tasks, including generation, harmonization, arrangement, instrumentation, and more. We have multiple ways to represent music data, and the evaluation metrics also vary. To define a MIREX challenge within this topic, we need to narrow our focus to specific subtasks that are both relevant to the community and feasible to evaluate effectively.&lt;br /&gt;
&lt;br /&gt;
This year, we select the task to be '''piano accompaniment arrangement from a lead sheet'''. The lead sheet provides information about the melody, chord progression, and optional phrase labels. The goal is to generate a piano accompaniment that complements the lead melody. The music data consists of 8-measure segments in 4/4 meter, quantized to a sixteenth-note resolution. A more detailed description of the data structure is provided in the data format section. The genre of the lead sheets is broadly within western pop music (refer to the music examples for more detail).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
The input lead sheet consists of 8 bars for the melody and harmony, with an additional mandatory pickup measure (left blank if not used). The data is prepared in JSON format containing two properties: &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;: a list of chords. Each chord contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The output generation should also follow the JSON format containing one property &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.''' &lt;br /&gt;
&lt;br /&gt;
# The data is assumed to be in 4/4 meter, quantized to a sixteenth-note resolution. For both melody and chords, onsets and durations are counted in sixteenth notes. &lt;br /&gt;
# Both onsets and durations are integers ranging from 0 to 9 * 16 - 1 = 143. Notes that end later than the ninth measure (i.e., 9 * 16 = 144th time step) will be truncated to the end of the ninth measure. &lt;br /&gt;
# Melody notes are not allowed to overlap with one another. &lt;br /&gt;
# There should be no gaps or overlaps between chords. Chords must follow one another directly. If there is a blank space where no chord is played, it must be filled with the &amp;lt;code&amp;gt;N&amp;lt;/code&amp;gt; chord. &lt;br /&gt;
# The accompaniment of the pick-up measure should be blank. &lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The pitch property of a note should be an integer ranging from 0 to 127, corresponding to the MIDI pitch number.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the chord &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The symbol property of a chord should be a string following the syntax of (Harte, 2010). In other words, each chord string should be accepted by mir_eval.chord.encode() without raising an error.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Data Example=&lt;br /&gt;
Below is an example of the input lead sheet in the format given above. The lead sheet is the melody of the first phrase of ''Hey Jude'' by The Beatles.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;melody&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 12, &amp;quot;pitch&amp;quot;: 72, &amp;quot;duration&amp;quot;: 4},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 69, &amp;quot;duration&amp;quot;: 8},&lt;br /&gt;
    ...&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;chords&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 0, &amp;quot;symbol&amp;quot;: &amp;quot;N&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;symbol&amp;quot;: &amp;quot;F&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is an example of the generated accompaniment. The accompaniment is generated using the baseline method WholeSongGen introduced below. Note that the generation starts from the second measure (time step 16).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;acc&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 41, &amp;quot;duration&amp;quot;: 12},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 65, &amp;quot;duration&amp;quot;: 5},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Full data examples can be accessed in [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main/generation_samples this code repository]. MIDI conversion code and MIDI demos are also provided there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Evaluation and Competition Format=&lt;br /&gt;
We will evaluate the submitted algorithms through an online subjective double-blind test. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two lead sheets.''' The organizer team will supplement the lead sheet if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* We hope to compute some objective measurements as well, but these will only be reported as a reference.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''16 arrangements''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, we will first ask the subjects to listen to the lead melody with chords and then to the generated samples, presented in random order.&lt;br /&gt;
* The subject will be asked to rate each arrangement based on the following criteria:&lt;br /&gt;
:* Harmony correctness (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Naturalness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Objective Measurements==&lt;br /&gt;
* We will use objective measurements only as a reference; the correlation between subjective and objective scores will also be reported. &lt;br /&gt;
* The current plan is to compute the negative log-likelihood of the generated samples under a large music language model (e.g., Lu et al., 2023).&lt;br /&gt;
* We welcome proposals for additional objective measurements.&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
&lt;br /&gt;
* '''Oct 8, 2024''': Submit two lead sheets as a part of the test set. &lt;br /&gt;
* '''Oct 15, 2024''': Submit the main algorithm.&lt;br /&gt;
* '''Oct 22, 2024''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Oct 25, 2024''': Submit the cherry-picked sample ids.&lt;br /&gt;
* '''Oct 31 - Nov 3, 2024''': Online subjective evaluation.&lt;br /&gt;
* '''Nov 5, 2024''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
Because this is a generative task with subjective evaluation, the submission process ''differs greatly'' from other MIREX tasks. There are four important stages:&lt;br /&gt;
# Test set submission (Oct 8, 2024)&lt;br /&gt;
# Algorithm submission (Oct 15, 2024)&lt;br /&gt;
# Cherry-picked sample IDs submission (Oct 25, 2024)&lt;br /&gt;
# Evaluation form submission (Nov 3, 2024)&lt;br /&gt;
Please check the Important Dates section for the detailed schedule. '''Failure to participate in any of the stages will result in disqualification.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;batch_acc_gen.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use the script to generate output files according to the following format:&lt;br /&gt;
&lt;br /&gt;
'''Usage'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
batch_acc_gen.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: Path to the input .json file.&lt;br /&gt;
* Output Folder: Path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: Number of samples to generate.&lt;br /&gt;
&lt;br /&gt;
'''Output'''&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
Participants are free to implement the internal logic of the script, but it must adhere to this format for proper execution during the evaluation process.&lt;br /&gt;
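&lt;br /&gt;
As a reference for the I/O contract only, below is a minimal Python sketch of an entry point that &amp;lt;code&amp;gt;batch_acc_gen.sh&amp;lt;/code&amp;gt; might wrap; the function generate_accompaniment is a hypothetical placeholder, not part of the task specification.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Minimal sketch of the expected I/O contract. The generation logic itself&lt;br /&gt;
# (generate_accompaniment) is a hypothetical placeholder.&lt;br /&gt;
import json&lt;br /&gt;
import os&lt;br /&gt;
import sys&lt;br /&gt;
&lt;br /&gt;
def generate_accompaniment(lead_sheet):&lt;br /&gt;
    # A real submission would run its model here and return a list of&lt;br /&gt;
    # {'start': ..., 'pitch': ..., 'duration': ...} dicts.&lt;br /&gt;
    return []&lt;br /&gt;
&lt;br /&gt;
def main():&lt;br /&gt;
    input_path, output_folder, n_sample = sys.argv[1], sys.argv[2], int(sys.argv[3])&lt;br /&gt;
    with open(input_path) as f:&lt;br /&gt;
        lead_sheet = json.load(f)&lt;br /&gt;
    os.makedirs(output_folder, exist_ok=True)&lt;br /&gt;
    for i in range(1, n_sample + 1):&lt;br /&gt;
        out_path = os.path.join(output_folder, f'sample_{i:02d}.json')  # sample_01.json, ...&lt;br /&gt;
        with open(out_path, 'w') as f:&lt;br /&gt;
            json.dump({'acc': generate_accompaniment(lead_sheet)}, f)&lt;br /&gt;
&lt;br /&gt;
if __name__ == '__main__':&lt;br /&gt;
    main()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;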
&lt;br /&gt;
'''Packaging Submissions'''&lt;br /&gt;
* Every submission must be packed into a Docker image.&lt;br /&gt;
* Every submission will be deployed and evaluated automatically with &amp;lt;code&amp;gt;docker run&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Accepted submission form'''&lt;br /&gt;
* Link to a public or private GitHub repository&lt;br /&gt;
* Link to a public or private Docker Hub repository&lt;br /&gt;
* Shared Google Drive links&lt;br /&gt;
* If the repository is private, an access token is also required&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
'''WholeSongGen''' (Wang et al., 2023)&lt;br /&gt;
&lt;br /&gt;
This model generates accompaniment using a cascaded diffusion model.&lt;br /&gt;
&lt;br /&gt;
'''AccoMontage''' (Zhao &amp;amp; Xia, 2021)&lt;br /&gt;
&lt;br /&gt;
This algorithm generates accompaniment using a combination of rule-based search and deep representation learning.&lt;br /&gt;
&lt;br /&gt;
'''Anticipatory Music Transformer''' (Thickstun et al., 2023)&lt;br /&gt;
&lt;br /&gt;
This model generates accompaniment using a Transformer-based model.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
* Harte, C. (2010). Towards automatic extraction of harmony information from music signals (Doctoral dissertation).&lt;br /&gt;
* Lu, P., Xu, X., Kang, C., Yu, B., Xing, C., Tan, X., &amp;amp; Bian, J. (2023). MuseCoco: Generating symbolic music from text. arXiv preprint arXiv:2306.00110.&lt;br /&gt;
* Wang, Z., Min, L., &amp;amp; Xia, G. Whole-Song Hierarchical Generation of Symbolic Music Using Cascaded Diffusion Models. In The Twelfth International Conference on Learning Representations.&lt;br /&gt;
* Jingwei Zhao, &amp;amp; Gus Xia (2021). AccoMontage: Accompaniment Arrangement via Phrase Selection and Style Transfer. In Proceedings of the 22nd International Society for Music Information Retrieval Conference, ISMIR 2021, Online, November 7-12, 2021 (pp. 833–840).&lt;br /&gt;
* Thickstun, J., Hall, D., Donahue, C., &amp;amp; Liang, P. (2023). Anticipatory music transformer. arXiv preprint arXiv:2306.08620.&lt;br /&gt;
* Code and data format samples: [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main]&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13858</id>
		<title>2024:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13858"/>
		<updated>2024-09-15T10:18:15Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation is a broad topic. It covers a wide range of tasks, including generation, harmonization, arrangement, instrumentation, and more. We have multiple ways to represent music data, and the evaluation metrics also vary. To define a MIREX challenge within this topic, we need to narrow our focus to specific subtasks that are both relevant to the community and feasible to evaluate effectively.&lt;br /&gt;
&lt;br /&gt;
This year, we select the task to be '''piano accompaniment arrangement from a lead sheet'''. The lead sheet provides information about the melody, chord progression, and optional phrase labels. The goal is to generate a piano accompaniment that complements the lead melody. The music data consists of 8-measure segments in 4/4 meter, quantized to a sixteenth-note resolution. A more detailed description of the data structure is provided in the data format section. The genre of the lead sheets is broadly within western pop music (refer to the music examples for more detail).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
The input lead sheet consists of 8 bars for the melody and harmony, with an additional mandatory pickup measure (left blank if not used). The data is prepared in JSON format containing two properties: &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;: a list of chords. Each chord contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The output generation should also follow the JSON format containing one property &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.''' &lt;br /&gt;
&lt;br /&gt;
# The data is assumed to be in 4/4 meter, quantized to a sixteenth-note resolution. For both melody and chords, onsets and durations are counted in sixteenth notes. &lt;br /&gt;
# Both onsets and durations are integers ranging from 0 to 9 * 16 - 1 = 143. Notes that end later than the ninth measure (i.e., 9 * 16 = 144th time step) will be truncated to the end of the ninth measure. &lt;br /&gt;
# Melody notes are not allowed to overlap with one another. &lt;br /&gt;
# There should be no gaps or overlaps between chords. Chords must follow one another directly. If there is a blank space where no chord is played, it must be filled with the &amp;lt;code&amp;gt;N&amp;lt;/code&amp;gt; chord. &lt;br /&gt;
# The accompaniment of the pick-up measure should be blank. &lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The pitch property of a note should be an integer ranging from 0 to 127, corresponding to MIDI pitch numbers.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the chord &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The symbol property of a chord should be a string following the syntax of Harte (2010). In other words, each chord string should be accepted by mir_eval.chord.encode() without raising an error (a minimal validation sketch follows below).&lt;br /&gt;
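&lt;br /&gt;
The sketch below is a minimal, unofficial validation script for the format rules above; it assumes the Python package mir_eval is installed, and the input file name is hypothetical.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Minimal sketch (not an official checker) for the lead-sheet format above.&lt;br /&gt;
# Assumes melody notes are listed in chronological order.&lt;br /&gt;
import json&lt;br /&gt;
import mir_eval.chord&lt;br /&gt;
&lt;br /&gt;
MAX_STEP = 9 * 16  # pickup measure + 8 bars, in sixteenth-note steps&lt;br /&gt;
&lt;br /&gt;
def validate(path):&lt;br /&gt;
    with open(path) as f:&lt;br /&gt;
        data = json.load(f)&lt;br /&gt;
    errors = []&lt;br /&gt;
    prev_end = 0&lt;br /&gt;
    for n in data['melody']:&lt;br /&gt;
        if not (0 &amp;lt;= n['start'] &amp;lt; MAX_STEP and 0 &amp;lt;= n['pitch'] &amp;lt;= 127):&lt;br /&gt;
            errors.append(f'melody note out of range: {n}')&lt;br /&gt;
        if n['start'] &amp;lt; prev_end:&lt;br /&gt;
            errors.append(f'overlapping melody note: {n}')&lt;br /&gt;
        prev_end = n['start'] + n['duration']&lt;br /&gt;
    expected_start = 0&lt;br /&gt;
    for c in data['chords']:&lt;br /&gt;
        if c['start'] != expected_start:&lt;br /&gt;
            errors.append(f'gap or overlap before chord: {c}')&lt;br /&gt;
        expected_start = c['start'] + c['duration']&lt;br /&gt;
        try:&lt;br /&gt;
            mir_eval.chord.encode(c['symbol'])&lt;br /&gt;
        except mir_eval.chord.InvalidChordException:&lt;br /&gt;
            errors.append('unparseable chord symbol: ' + c['symbol'])&lt;br /&gt;
    return errors&lt;br /&gt;
&lt;br /&gt;
print(validate('lead_sheet.json'))  # hypothetical file name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;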
&lt;br /&gt;
&lt;br /&gt;
=Data Example=&lt;br /&gt;
Below is an example of the input lead sheet in the format given above. The lead sheet is the melody of the first phrase of ''Hey Jude'' by The Beatles.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;melody&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 12, &amp;quot;pitch&amp;quot;: 72, &amp;quot;duration&amp;quot;: 4},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 69, &amp;quot;duration&amp;quot;: 8},&lt;br /&gt;
    ...&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;chords&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 0, &amp;quot;symbol&amp;quot;: &amp;quot;N&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;symbol&amp;quot;: &amp;quot;F&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Below is an example of the generated accompaniment, produced with the baseline method WholeSongGen introduced below. Note that the generation starts from the second measure (time step 16).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;acc&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 41, &amp;quot;duration&amp;quot;: 12},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 65, &amp;quot;duration&amp;quot;: 5},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Full data examples can be accessed in [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main/generation_samples this code repository]. MIDI conversion code and MIDI demos are also provided there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Evaluation and Competition Format=&lt;br /&gt;
We will evaluate the submitted algorithms through an online subjective double-blind test. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two lead sheets.''' The organizer team will supplement the test set with additional lead sheets if necessary.&lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* We hope to compute some objective measurements as well, but these will only be reported as a reference.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''16 arrangements''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, subjects will first listen to the lead melody with chords and then to the generated samples, presented in random order.&lt;br /&gt;
* The subject will be asked to rate each arrangement based on the following criteria:&lt;br /&gt;
:* Harmony correctness (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Naturalness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Objective Measurements==&lt;br /&gt;
* We will use objective measurements only as a reference. The correlation between subjective and objective scores will also be reported.&lt;br /&gt;
* The current plan is to compute the negative log-likelihood under a large music language model (e.g., Lu et al., 2023).&lt;br /&gt;
* We welcome proposals for objective measurements.&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
&lt;br /&gt;
* '''Oct 8, 2024''': Submit two lead sheets as a part of the test set. &lt;br /&gt;
* '''Oct 15, 2024''': Submit the main algorithm.&lt;br /&gt;
* '''Oct 22, 2024''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Oct 25, 2024''': Submit the cherry-picked sample ids.&lt;br /&gt;
* '''Oct 31 - Nov 3, 2024''': Online subjective evaluation.&lt;br /&gt;
* '''Nov 5, 2024''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
Because this is a generative task with subjective evaluation, the submission process ''differs greatly'' from other MIREX tasks. There are four important stages:&lt;br /&gt;
# Test set submission (Oct 8, 2024)&lt;br /&gt;
# Algorithm submission (Oct 15, 2024)&lt;br /&gt;
# Cherry-picked sample IDs submission (Oct 25, 2024)&lt;br /&gt;
# Evaluation form submission (Nov 3, 2024)&lt;br /&gt;
Please check the Important Dates section for the detailed schedule. '''Failure to participate in any of the stages will result in disqualification.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;batch_acc_gen.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use the script to generate output files according to the following format:&lt;br /&gt;
&lt;br /&gt;
'''Usage'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
batch_acc_gen.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: Path to the input .json file.&lt;br /&gt;
* Output Folder: Path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: Number of samples to generate.&lt;br /&gt;
&lt;br /&gt;
'''Output'''&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
Participants are free to implement the internal logic of the script, but it must adhere to this format for proper execution during the evaluation process.&lt;br /&gt;
&lt;br /&gt;
'''Packaging Submissions'''&lt;br /&gt;
* Every submission must be packed into a Docker image.&lt;br /&gt;
* Every submission will be deployed and evaluated automatically with &amp;lt;code&amp;gt;docker run&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Accepted submission form'''&lt;br /&gt;
* Link to a public or private GitHub repository&lt;br /&gt;
* Link to a public or private Docker Hub repository&lt;br /&gt;
* Shared Google Drive links&lt;br /&gt;
* If the repository is private, an access token is also required&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
'''WholeSongGen''' (Wang et al., 2023)&lt;br /&gt;
&lt;br /&gt;
This model generates accompaniment using a cascaded diffusion model.&lt;br /&gt;
&lt;br /&gt;
'''AccoMontage''' (Zhao &amp;amp; Xia, 2021)&lt;br /&gt;
&lt;br /&gt;
This algorithm generates accompaniment using a combination of rule-based search and deep representation learning.&lt;br /&gt;
&lt;br /&gt;
'''Anticipatory Music Transformer''' (Thickstun et al., 2023)&lt;br /&gt;
&lt;br /&gt;
This model generates accompaniment using a Transformer-based model.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
* Harte, C. (2010). Towards automatic extraction of harmony information from music signals (Doctoral dissertation).&lt;br /&gt;
* Lu, P., Xu, X., Kang, C., Yu, B., Xing, C., Tan, X., &amp;amp; Bian, J. (2023). MuseCoco: Generating symbolic music from text. arXiv preprint arXiv:2306.00110.&lt;br /&gt;
* Wang, Z., Min, L., &amp;amp; Xia, G. Whole-Song Hierarchical Generation of Symbolic Music Using Cascaded Diffusion Models. In The Twelfth International Conference on Learning Representations.&lt;br /&gt;
* Jingwei Zhao, &amp;amp; Gus Xia (2021). AccoMontage: Accompaniment Arrangement via Phrase Selection and Style Transfer. In Proceedings of the 22nd International Society for Music Information Retrieval Conference, ISMIR 2021, Online, November 7-12, 2021 (pp. 833–840).&lt;br /&gt;
* Thickstun, J., Hall, D., Donahue, C., &amp;amp; Liang, P. (2023). Anticipatory music transformer. arXiv preprint arXiv:2306.08620.&lt;br /&gt;
* Code and data format samples: [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main link]&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13857</id>
		<title>2024:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13857"/>
		<updated>2024-09-15T10:16:26Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation is a broad topic. It covers a wide range of tasks, including generation, harmonization, arrangement, instrumentation, and more. We have multiple ways to represent music data, and the evaluation metrics also vary. To define a MIREX challenge within this topic, we need to narrow our focus to specific subtasks that are both relevant to the community and feasible to evaluate effectively.&lt;br /&gt;
&lt;br /&gt;
This year, we select the task to be '''piano accompaniment arrangement from a lead sheet'''. The lead sheet provides information about the melody, chord progression, and optional phrase labels. The goal is to generate a piano accompaniment that complements the lead melody. The music data consists of 8-measure segments in 4/4 meter, quantized to a sixteenth-note resolution. A more detailed description of the data structure is provided in the data format section. The genre of the lead sheets is broadly within western pop music (refer to the music examples for more detail).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
The input lead sheet consists of 8 bars for the melody and harmony, with an additional mandatory pickup measure (left blank if not used). The data is prepared in JSON format containing two properties: &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;: a list of chords. Each chord contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The output generation should also follow the JSON format containing one property &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.''' &lt;br /&gt;
&lt;br /&gt;
# The data is assumed to be in 4/4 meter, quantized to a sixteenth-note resolution. For both melody and chords, onsets and durations are counted in sixteenth notes. &lt;br /&gt;
# Both onsets and durations are integers ranging from 0 to 9 * 16 - 1 = 143. Notes that end later than the ninth measure (i.e., 9 * 16 = 144th time step) will be truncated to the end of the ninth measure. &lt;br /&gt;
# Melody notes are not allowed to overlap with one another. &lt;br /&gt;
# There should be no gaps or overlaps between chords. Chords must follow one another directly. If there is a blank space where no chord is played, it must be filled with the &amp;lt;code&amp;gt;N&amp;lt;/code&amp;gt; chord. &lt;br /&gt;
# The accompaniment of the pick-up measure should be blank. &lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The pitch property of a note should be an integer ranging from 0 to 127, corresponding to MIDI pitch numbers.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the chord &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The symbol property of a chord should be a string following the syntax of Harte (2010). In other words, each chord string should be accepted by mir_eval.chord.encode() without raising an error.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Data Example=&lt;br /&gt;
Below is an example of the input lead sheet in the format given above. The lead sheet is the melody of the first phrase of ''Hey Jude'' by The Beatles.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;melody&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 12, &amp;quot;pitch&amp;quot;: 72, &amp;quot;duration&amp;quot;: 4},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 69, &amp;quot;duration&amp;quot;: 8},&lt;br /&gt;
    ...&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;chords&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 0, &amp;quot;symbol&amp;quot;: &amp;quot;N&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;symbol&amp;quot;: &amp;quot;F&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Below is an example of the generated accompaniment, produced with the baseline method WholeSongGen introduced below. Note that the generation starts from the second measure (time step 16).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;acc&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 41, &amp;quot;duration&amp;quot;: 12},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 65, &amp;quot;duration&amp;quot;: 5},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Full data examples can be accessed in [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main/generation_samples this code repository]. MIDI conversion code and MIDI demos are also provided there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Evaluation and Competition Format=&lt;br /&gt;
We will evaluate the submitted algorithms through an online subjective double-blind test. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two lead sheets.''' The organizer team will supplement the test set with additional lead sheets if necessary.&lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* We hope to compute some objective measurements as well, but these will only be reported as a reference.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''16 arrangements''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, subjects will first listen to the lead melody with chords and then to the generated samples, presented in random order.&lt;br /&gt;
* The subject will be asked to rate each arrangement based on the following criteria:&lt;br /&gt;
:* Harmony correctness (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Naturalness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Objective Measurements==&lt;br /&gt;
* We will use objective measurements only as a reference. The correlation between subjective and objective scores will also be reported.&lt;br /&gt;
* The current plan is to compute the negative log-likelihood under a large music language model (e.g., Lu et al., 2023).&lt;br /&gt;
* We welcome proposals for objective measurements.&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
&lt;br /&gt;
* '''Oct 8, 2024''': Submit two lead sheets as a part of the test set. &lt;br /&gt;
* '''Oct 15, 2024''': Submit the main algorithm.&lt;br /&gt;
* '''Oct 22, 2024''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Oct 25, 2024''': Submit the cherry-picked sample ids.&lt;br /&gt;
* '''Oct 31 - Nov 3, 2024''': Online subjective evaluation.&lt;br /&gt;
* '''Nov 5, 2024''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
Because this is a generative task with subjective evaluation, the submission process ''differs greatly'' from other MIREX tasks. There are four important stages:&lt;br /&gt;
# Test set submission (Oct 8, 2024)&lt;br /&gt;
# Algorithm submission (Oct 15, 2024)&lt;br /&gt;
# Cherry-picked sample IDs submission (Oct 25, 2024)&lt;br /&gt;
# Evaluation form submission (Nov 3, 2024)&lt;br /&gt;
Please check the Important Dates section for the detailed schedule. '''Failure to participate in any of the stages will result in disqualification.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;batch_acc_gen.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use the script to generate output files according to the following format:&lt;br /&gt;
&lt;br /&gt;
'''Usage'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
batch_acc_gen.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: Path to the input .json file.&lt;br /&gt;
* Output Folder: Path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: Number of samples to generate.&lt;br /&gt;
&lt;br /&gt;
'''Output'''&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
Participants are free to implement the internal logic of the script, but it must adhere to this format for proper execution during the evaluation process.&lt;br /&gt;
&lt;br /&gt;
'''Packaging Submissions'''&lt;br /&gt;
* Every submission must be packed into a Docker image.&lt;br /&gt;
* Every submission will be deployed and evaluated automatically with &amp;lt;code&amp;gt;docker run&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Accepted submission form'''&lt;br /&gt;
* Link to a public or private GitHub repository&lt;br /&gt;
* Link to a public or private Docker Hub repository&lt;br /&gt;
* Shared Google Drive links&lt;br /&gt;
* If the repository is private, an access token is also required&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
'''WholeSongGen''' (Wang et al., 2023)&lt;br /&gt;
&lt;br /&gt;
This model generates accompaniment using a cascaded diffusion model.&lt;br /&gt;
&lt;br /&gt;
'''AccoMontage''' (Zhao &amp;amp; Xia, 2021)&lt;br /&gt;
&lt;br /&gt;
This algorithm generates accompaniment using a combination of rule-based search and deep representation learning.&lt;br /&gt;
&lt;br /&gt;
'''Anticipatory Music Transformer''' (Thickstun et al., 2023)&lt;br /&gt;
&lt;br /&gt;
This model generates accompaniment using a Transformer-based model.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
* Harte, C. (2010). Towards automatic extraction of harmony information from music signals (Doctoral dissertation).&lt;br /&gt;
* Lu, P., Xu, X., Kang, C., Yu, B., Xing, C., Tan, X., &amp;amp; Bian, J. (2023). MuseCoco: Generating symbolic music from text. arXiv preprint arXiv:2306.00110.&lt;br /&gt;
* Wang, Z., Min, L., &amp;amp; Xia, G. Whole-Song Hierarchical Generation of Symbolic Music Using Cascaded Diffusion Models. In The Twelfth International Conference on Learning Representations.&lt;br /&gt;
* Jingwei Zhao, &amp;amp; Gus Xia (2021). AccoMontage: Accompaniment Arrangement via Phrase Selection and Style Transfer. In Proceedings of the 22nd International Society for Music Information Retrieval Conference, ISMIR 2021, Online, November 7-12, 2021 (pp. 833–840).&lt;br /&gt;
* Thickstun, J., Hall, D., Donahue, C., &amp;amp; Liang, P. (2023). Anticipatory music transformer. arXiv preprint arXiv:2306.08620.&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13855</id>
		<title>2024:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13855"/>
		<updated>2024-09-15T10:15:26Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Evaluation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation is a broad topic. It covers a wide range of tasks, including generation, harmonization, arrangement, instrumentation, and more. We have multiple ways to represent music data, and the evaluation metrics also vary. To define a MIREX challenge within this topic, we need to narrow our focus to specific subtasks that are both relevant to the community and feasible to evaluate effectively.&lt;br /&gt;
&lt;br /&gt;
This year, we select the task to be '''piano accompaniment arrangement from a lead sheet'''. The lead sheet provides information about the melody, chord progression, and optional phrase labels. The goal is to generate a piano accompaniment that complements the lead melody. The music data consists of 8-measure segments in 4/4 meter, quantized to a sixteenth-note resolution. A more detailed description of the data structure is provided in the data format section. The genre of the lead sheets is broadly within western pop music (refer to the music examples for more detail).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
The input lead sheet consists of 8 bars for the melody and harmony, with an additional mandatory pickup measure (left blank if not used). The data is prepared in JSON format containing two properties: &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;: a list of chords. Each chord contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The output generation should also follow the JSON format containing one property &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.''' &lt;br /&gt;
&lt;br /&gt;
# The data is assumed to be in 4/4 meter, quantized to a sixteenth-note resolution. For both melody and chords, onsets and durations are counted in sixteenth notes. &lt;br /&gt;
# Both onsets and durations are integers ranging from 0 to 9 * 16 - 1 = 143. Notes that end later than the ninth measure (i.e., 9 * 16 = 144th time step) will be truncated to the end of the ninth measure. &lt;br /&gt;
# Melody notes are not allowed to overlap with one another. &lt;br /&gt;
# There should be no gaps or overlaps between chords. Chords must follow one another directly. If there is a blank space where no chord is played, it must be filled with the &amp;lt;code&amp;gt;N&amp;lt;/code&amp;gt; chord. &lt;br /&gt;
# The accompaniment of the pick-up measure should be blank. &lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The pitch property of a note should be an integer ranging from 0 to 127, corresponding to MIDI pitch numbers.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the chord &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The symbol property of a chord should be a string following the syntax of Harte (2010). In other words, each chord string should be accepted by mir_eval.chord.encode() without raising an error.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Data Example=&lt;br /&gt;
Below is an example of the input lead sheet in the format given above. The lead sheet is the melody of the first phrase of ''Hey Jude'' by The Beatles.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;melody&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 12, &amp;quot;pitch&amp;quot;: 72, &amp;quot;duration&amp;quot;: 4},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 69, &amp;quot;duration&amp;quot;: 8},&lt;br /&gt;
    ...&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;chords&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 0, &amp;quot;symbol&amp;quot;: &amp;quot;N&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;symbol&amp;quot;: &amp;quot;F&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Below is an example of the generated accompaniment, produced with the baseline method WholeSongGen introduced below. Note that the generation starts from the second measure (time step 16).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;acc&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 41, &amp;quot;duration&amp;quot;: 12},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 65, &amp;quot;duration&amp;quot;: 5},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Full data examples can be accessed in [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main/generation_samples this code repository]. MIDI conversion code and MIDI demos are also provided there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Evaluation and Competition Format=&lt;br /&gt;
We will evaluate the submitted algorithms through an online subjective double-blind test. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two lead sheets.''' The organizer team will supplement the test set with additional lead sheets if necessary.&lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* We hope to compute some objective measurements as well, but these will only be reported as a reference.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''16 arrangements''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, subjects will first listen to the lead melody with chords and then to the generated samples, presented in random order.&lt;br /&gt;
* The subject will be asked to rate each arrangement based on the following criteria:&lt;br /&gt;
:* Harmony correctness (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Naturalness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Objective Measurements==&lt;br /&gt;
* We will use objective measurements only as a reference. The correlation between subjective and objective scores will also be reported.&lt;br /&gt;
* The current plan is to compute the negative log-likelihood under a large music language model (e.g., Lu et al., 2023).&lt;br /&gt;
* We welcome proposals for objective measurements.&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
Because this is a generative task with subjective evaluation, the submission process ''differs greatly'' from other MIREX tasks. There are four important stages:&lt;br /&gt;
# Test set submission (Oct 8, 2024)&lt;br /&gt;
# Algorithm submission (Oct 15, 2024)&lt;br /&gt;
# Cherry-picked sample IDs submission (Oct 25, 2024)&lt;br /&gt;
# Evaluation form submission (Nov 3, 2024)&lt;br /&gt;
Please check the Important Dates section for the detailed schedule. '''Failure to participate in any of the stages will result in disqualification.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;batch_acc_gen.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use the script to generate output files according to the following format:&lt;br /&gt;
&lt;br /&gt;
'''Usage'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
batch_acc_gen.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: Path to the input .json file.&lt;br /&gt;
* Output Folder: Path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: Number of samples to generate.&lt;br /&gt;
&lt;br /&gt;
'''Output'''&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
Participants are free to implement the internal logic of the script, but it must adhere to this format for proper execution during the evaluation process.&lt;br /&gt;
&lt;br /&gt;
'''Packaging Submissions'''&lt;br /&gt;
* Every submission must be packed into a Docker image.&lt;br /&gt;
* Every submission will be deployed and evaluated automatically with &amp;lt;code&amp;gt;docker run&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Accepted submission form'''&lt;br /&gt;
* Link to a public or private GitHub repository&lt;br /&gt;
* Link to a public or private Docker Hub repository&lt;br /&gt;
* Shared Google Drive links&lt;br /&gt;
* If the repository is private, an access token is also required&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Important Dates=&lt;br /&gt;
&lt;br /&gt;
* '''Oct 8, 2024''': Submit two lead sheets as a part of the test set. &lt;br /&gt;
* '''Oct 15, 2024''': Submit the main algorithm.&lt;br /&gt;
* '''Oct 22, 2024''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Oct 25, 2024''': Submit the cherry-picked sample ids.&lt;br /&gt;
* '''Oct 31 - Nov 3, 2024''': Online subjective evaluation.&lt;br /&gt;
* '''Nov 5, 2024''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
'''WholeSongGen''' (Wang et al., 2023)&lt;br /&gt;
&lt;br /&gt;
This model generates accompaniment using a cascaded diffusion model.&lt;br /&gt;
&lt;br /&gt;
'''AccoMontage''' (Zhao &amp;amp; Xia, 2021)&lt;br /&gt;
&lt;br /&gt;
This algorithm generates accompaniment using a combination of rule-based search and deep representation learning.&lt;br /&gt;
&lt;br /&gt;
'''Anticipatory Music Transformer''' (Thickstun et al., 2023)&lt;br /&gt;
&lt;br /&gt;
This model generates accompaniment using a Transformer-based model.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
* Harte, C. (2010). Towards automatic extraction of harmony information from music signals (Doctoral dissertation).&lt;br /&gt;
* Lu, P., Xu, X., Kang, C., Yu, B., Xing, C., Tan, X., &amp;amp; Bian, J. (2023). MuseCoco: Generating symbolic music from text. arXiv preprint arXiv:2306.00110.&lt;br /&gt;
* Wang, Z., Min, L., &amp;amp; Xia, G. Whole-Song Hierarchical Generation of Symbolic Music Using Cascaded Diffusion Models. In The Twelfth International Conference on Learning Representations.&lt;br /&gt;
* Jingwei Zhao, &amp;amp; Gus Xia (2021). AccoMontage: Accompaniment Arrangement via Phrase Selection and Style Transfer. In Proceedings of the 22nd International Society for Music Information Retrieval Conference, ISMIR 2021, Online, November 7-12, 2021 (pp. 833–840).&lt;br /&gt;
* Thickstun, J., Hall, D., Donahue, C., &amp;amp; Liang, P. (2023). Anticipatory music transformer. arXiv preprint arXiv:2306.08620.&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13854</id>
		<title>2024:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13854"/>
		<updated>2024-09-15T10:14:26Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation is a broad topic. It covers a wide range of tasks, including generation, harmonization, arrangement, instrumentation, and more. We have multiple ways to represent music data, and the evaluation metrics also vary. To define a MIREX challenge within this topic, we need to narrow our focus to specific subtasks that are both relevant to the community and feasible to evaluate effectively.&lt;br /&gt;
&lt;br /&gt;
This year, we select the task to be '''piano accompaniment arrangement from a lead sheet'''. The lead sheet provides information about the melody, chord progression, and optional phrase labels. The goal is to generate a piano accompaniment that complements the lead melody. The music data consists of 8-measure segments in 4/4 meter, quantized to a sixteenth-note resolution. A more detailed description of the data structure is provided in the data format section. The genre of the lead sheets is broadly within western pop music (refer to the music examples for more detail).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
The input lead sheet consists of 8 bars for the melody and harmony, with an additional mandatory pickup measure (left blank if not used). The data is prepared in JSON format containing two properties: &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;: a list of chords. Each chord contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The output generation should also follow the JSON format containing one property &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.''' &lt;br /&gt;
&lt;br /&gt;
# The data is assumed to be in 4/4 meter, quantized to a sixteenth-note resolution. For both melody and chords, onsets and durations are counted in sixteenth notes. &lt;br /&gt;
# Both onsets and durations are integers ranging from 0 to 9 * 16 - 1 = 143. Notes that end later than the ninth measure (i.e., 9 * 16 = 144th time step) will be truncated to the end of the ninth measure. &lt;br /&gt;
# Melody notes are not allowed to overlap with one another. &lt;br /&gt;
# There should be no gaps or overlaps between chords. Chords must follow one another directly. If there is a blank space where no chord is played, it must be filled with the &amp;lt;code&amp;gt;N&amp;lt;/code&amp;gt; chord. &lt;br /&gt;
# The accompaniment of the pick-up measure should be blank. &lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The pitch property of a note should be an integer ranging from 0 to 127, corresponding to MIDI pitch numbers.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the chord &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The symbol property of a chord should be a string following the syntax of Harte (2010). In other words, each chord string should be accepted by mir_eval.chord.encode() without raising an error.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Data Example=&lt;br /&gt;
Below is an example of the input lead sheet in the format given above. The lead sheet is the melody of the first phrase of ''Hey Jude'' by The Beatles.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;melody&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 12, &amp;quot;pitch&amp;quot;: 72, &amp;quot;duration&amp;quot;: 4},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 69, &amp;quot;duration&amp;quot;: 8},&lt;br /&gt;
    ...&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;chords&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 0, &amp;quot;symbol&amp;quot;: &amp;quot;N&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;symbol&amp;quot;: &amp;quot;F&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Below is an example of the generated accompaniment, produced with the baseline method WholeSongGen introduced below. Note that the generation starts from the second measure (time step 16).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;acc&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 41, &amp;quot;duration&amp;quot;: 12},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 65, &amp;quot;duration&amp;quot;: 5},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Full data examples can be accessed in [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main/generation_samples this code repository]. MIDI conversion code and MIDI demos are also provided there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Evaluation=&lt;br /&gt;
We will evaluate the submitted algorithms through an online subjective double-blind test. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two lead sheets.''' The organizer team will supplement the test set with additional lead sheets if necessary.&lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* We hope to compute some objective measurements as well, but these will only be reported as a reference.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''16 arrangements''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, subjects will first listen to the lead melody with chords and then to the generated samples, presented in random order.&lt;br /&gt;
* The subject will be asked to rate each arrangement based on the following criteria:&lt;br /&gt;
:* Harmony correctness (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Naturalness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Objective Measurements==&lt;br /&gt;
* We will use objective measurements only as a reference. The correlation between subjective and objective scores will also be reported.&lt;br /&gt;
* The current plan is to compute the negative log-likelihood under a large music language model (e.g., Lu et al., 2023).&lt;br /&gt;
* We welcome proposals for objective measurements.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
Because this is a generative task with subjective evaluation, the submission process ''differs greatly'' from other MIREX tasks. There are four important stages:&lt;br /&gt;
# Test set submission (Oct 8, 2024)&lt;br /&gt;
# Algorithm submission (Oct 15, 2024)&lt;br /&gt;
# Cherry-picked sample IDs submission (Oct 25, 2024)&lt;br /&gt;
# Evaluation form submission (Nov 3, 2024)&lt;br /&gt;
Please check the Important Dates section for the detailed schedule. '''Failure to participate in any of the stages will result in disqualification.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;batch_acc_gen.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use the script to generate output files according to the following format:&lt;br /&gt;
&lt;br /&gt;
'''Usage'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
batch_acc_gen.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: Path to the input .json file.&lt;br /&gt;
* Output Folder: Path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: Number of samples to generate.&lt;br /&gt;
&lt;br /&gt;
'''Output'''&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
Participants are free to implement the internal logic of the script, but it must adhere to this format for proper execution during the evaluation process.&lt;br /&gt;
&lt;br /&gt;
'''Packaging Submissions'''&lt;br /&gt;
* Every submission must be packed into a Docker image.&lt;br /&gt;
* Every submission will be deployed and evaluated automatically with &amp;lt;code&amp;gt;docker run&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Accepted submission form'''&lt;br /&gt;
* Link to a public or private GitHub repository&lt;br /&gt;
* Link to a public or private Docker Hub repository&lt;br /&gt;
* Shared Google Drive links&lt;br /&gt;
* If the repository is private, an access token is also required&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Important Dates=&lt;br /&gt;
&lt;br /&gt;
* '''Oct 8, 2024''': Submit two lead sheets as a part of the test set. &lt;br /&gt;
* '''Oct 15, 2024''': Submit the main algorithm.&lt;br /&gt;
* '''Oct 22, 2024''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Oct 25, 2024''': Submit the cherry-picked sample ids.&lt;br /&gt;
* '''Oct 31 - Nov 3, 2024''': Online subjective evaluation.&lt;br /&gt;
* '''Nov 5, 2024''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
'''WholeSongGen''' (Wang et al., 2023)&lt;br /&gt;
&lt;br /&gt;
This model generates accompaniment using a cascaded diffusion model.&lt;br /&gt;
&lt;br /&gt;
'''AccoMontage''' (Zhao &amp;amp; Xia, 2021)&lt;br /&gt;
&lt;br /&gt;
This algorithm generates accompaniment using a combination of rule-based search and deep representation learning.&lt;br /&gt;
&lt;br /&gt;
'''Anticipatory Music Transformer''' (Thickstun et al., 2023)&lt;br /&gt;
&lt;br /&gt;
This model generates accompaniment using a Transformer-based model.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
* Harte, C. (2010). Towards automatic extraction of harmony information from music signals (Doctoral dissertation).&lt;br /&gt;
* Lu, P., Xu, X., Kang, C., Yu, B., Xing, C., Tan, X., &amp;amp; Bian, J. (2023). MuseCoco: Generating symbolic music from text. arXiv preprint arXiv:2306.00110.&lt;br /&gt;
* Wang, Z., Min, L., &amp;amp; Xia, G. Whole-Song Hierarchical Generation of Symbolic Music Using Cascaded Diffusion Models. In The Twelfth International Conference on Learning Representations.&lt;br /&gt;
* Jingwei Zhao, &amp;amp; Gus Xia (2021). AccoMontage: Accompaniment Arrangement via Phrase Selection and Style Transfer. In Proceedings of the 22nd International Society for Music Information Retrieval Conference, ISMIR 2021, Online, November 7-12, 2021 (pp. 833–840).&lt;br /&gt;
* Thickstun, J., Hall, D., Donahue, C., &amp;amp; Liang, P. (2023). Anticipatory music transformer. arXiv preprint arXiv:2306.08620.&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13853</id>
		<title>2024:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13853"/>
		<updated>2024-09-15T10:14:07Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation is a broad topic. It covers a wide range of tasks, including generation, harmonization, arrangement, instrumentation, and more. We have multiple ways to represent music data, and the evaluation metrics also vary. To define a MIREX challenge within this topic, we need to narrow our focus to specific subtasks that are both relevant to the community and feasible to evaluate effectively.&lt;br /&gt;
&lt;br /&gt;
This year, we select the task to be '''piano accompaniment arrangement from a lead sheet'''. The lead sheet provides information about the melody, chord progression, and optional phrase labels. The goal is to generate a piano accompaniment that complements the lead melody. The music data consists of 8-measure segments in 4/4 meter, quantized to a sixteenth-note resolution. A more detailed description of the data structure is provided in the data format section. The genre of the lead sheets is broadly within western pop music (refer to the music examples for more detail).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
The input lead sheet consists of 8 bars for the melody and harmony, with an additional mandatory pickup measure (left blank if not used). The data is prepared in JSON format containing two properties: &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;: a list of chords. Each chord contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The output generation should also follow the JSON format containing one property &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.''' &lt;br /&gt;
&lt;br /&gt;
# The data is assumed to be in 4/4 meter, quantized to a sixteenth-note resolution. For both melody and chords, onsets and durations are counted in sixteenth notes. &lt;br /&gt;
# Both onsets and durations are integers ranging from 0 to 9 * 16 - 1 = 143. Notes that end later than the ninth measure (i.e., 9 * 16 = 144th time step) will be truncated to the end of the ninth measure. &lt;br /&gt;
# Melody notes are not allowed to overlap with one another. &lt;br /&gt;
# There should be no gaps or overlaps between chords. Chords must follow one another directly. If there is a blank space where no chord is played, it must be filled with the &amp;lt;code&amp;gt;N&amp;lt;/code&amp;gt; chord. &lt;br /&gt;
# The accompaniment of the pick-up measure should be blank. &lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The pitch property of a note should be integers ranging from 0 to 127, corresponding to the MIDI pitch numbers.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the chord &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The symbol property of a chord should be a string based on the syntax of (Harte, 2010). In other words, each chord string should be able to be passed as a parameter to mir_eval.chord.encode() without causing an error.&lt;br /&gt;
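&lt;br /&gt;
As a quick, non-normative sanity check, the following Python sketch verifies an input file against the constraints above; the function name &amp;lt;code&amp;gt;validate_lead_sheet&amp;lt;/code&amp;gt; is ours for illustration and is not part of the required submission.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import json&lt;br /&gt;
import mir_eval&lt;br /&gt;
&lt;br /&gt;
def validate_lead_sheet(path):&lt;br /&gt;
    with open(path) as f:&lt;br /&gt;
        data = json.load(f)&lt;br /&gt;
    for note in data['melody']:&lt;br /&gt;
        assert note['pitch'] in range(128)      # MIDI pitch numbers 0 to 127&lt;br /&gt;
        assert note['start'] in range(144)      # sixteenth-note grid, 0 to 9 * 16 - 1&lt;br /&gt;
    t = 0&lt;br /&gt;
    for chord in data['chords']:&lt;br /&gt;
        assert chord['start'] == t              # no gaps or overlaps between chords&lt;br /&gt;
        mir_eval.chord.encode(chord['symbol'])  # raises on invalid Harte syntax&lt;br /&gt;
        t += chord['duration']&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;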
&lt;br /&gt;
&lt;br /&gt;
=Data Example=&lt;br /&gt;
Below is an example of the input lead sheet in the format given above. The lead sheet is the melody of the first phrase of ''Hey Jude'' by The Beatles.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;melody&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 12, &amp;quot;pitch&amp;quot;: 72, &amp;quot;duration&amp;quot;: 4},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 69, &amp;quot;duration&amp;quot;: 8},&lt;br /&gt;
    ...&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;chords&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 0, &amp;quot;symbol&amp;quot;: &amp;quot;N&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;symbol&amp;quot;: &amp;quot;F&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is an example of the generated accompaniment. The accompaniment is generated using the baseline method WholeSongGen introduced below. Note that the generation starts from the second measure (time step 16).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;acc&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 41, &amp;quot;duration&amp;quot;: 12},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 65, &amp;quot;duration&amp;quot;: 5},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Full data examples can be accessed in [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main/generation_samples this code repository]. MIDI conversion code and MIDI demos are also provided there.&lt;br /&gt;
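&lt;br /&gt;
For a rough idea of what such a conversion involves (this is a simplified sketch, not the conversion code from the repository), the snippet below renders a note list to MIDI with the &amp;lt;code&amp;gt;pretty_midi&amp;lt;/code&amp;gt; library, assuming a fixed tempo of 120 BPM so that one sixteenth-note step lasts 0.125 seconds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import json&lt;br /&gt;
import pretty_midi&lt;br /&gt;
&lt;br /&gt;
STEP = 0.125  # seconds per sixteenth note at the assumed 120 BPM&lt;br /&gt;
&lt;br /&gt;
def notes_to_midi(json_path, midi_path, key='acc'):&lt;br /&gt;
    with open(json_path) as f:&lt;br /&gt;
        notes = json.load(f)[key]&lt;br /&gt;
    pm = pretty_midi.PrettyMIDI()&lt;br /&gt;
    piano = pretty_midi.Instrument(program=0)  # acoustic grand piano&lt;br /&gt;
    for n in notes:&lt;br /&gt;
        piano.notes.append(pretty_midi.Note(&lt;br /&gt;
            velocity=80,&lt;br /&gt;
            pitch=n['pitch'],&lt;br /&gt;
            start=n['start'] * STEP,&lt;br /&gt;
            end=(n['start'] + n['duration']) * STEP))&lt;br /&gt;
    pm.instruments.append(piano)&lt;br /&gt;
    pm.write(midi_path)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;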
&lt;br /&gt;
&lt;br /&gt;
=Evaluation=&lt;br /&gt;
We will evaluate the submitted algorithms through an online subjective double-blind test. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two lead sheets.''' The organizer team will supplement the lead sheets if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* We hope to compute some objective measurements as well, but these will only be reported as a reference.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''16 arrangements''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, we will first ask the subjects to listen to the lead melody with chords and then to the generated samples. The order of the samples will be randomized.&lt;br /&gt;
* The subject will be asked to rate each arrangement based on the following criteria:&lt;br /&gt;
:* Harmony correctness (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Naturalness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Objective Measurements==&lt;br /&gt;
* We will use objective measurements only as a reference. The correlation between subjective and objective scores will also be reported. &lt;br /&gt;
* The current plan is to compute the negative log-likelihood under a large music language model (e.g., Lu et al., 2023).&lt;br /&gt;
* We welcome proposals for objective measurements.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As a generative task with subjective evaluation, the submission process ''differs greatly'' from other MIREX tasks. There are four important stages:&lt;br /&gt;
# Test set submission (Oct 8, 2024)&lt;br /&gt;
# Algorithm submission (Oct 15, 2024)&lt;br /&gt;
# Cherry-picked sample IDs submission (Oct 25, 2024)&lt;br /&gt;
# Evaluation form submission (Nov 3, 2024)&lt;br /&gt;
Please check the Important Dates section for the detailed schedule. '''Failure to participate in any of the stages will result in disqualification.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;batch_acc_gen.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use the script to generate output files according to the following format:&lt;br /&gt;
&lt;br /&gt;
'''Usage'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
acc_gen.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: Path to the input .json file.&lt;br /&gt;
* Output Folder: Path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: Number of samples to generate.&lt;br /&gt;
&lt;br /&gt;
'''Output'''&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
Participants are free to implement the internal logic of the script, but it must adhere to this format for proper execution during the evaluation process.&lt;br /&gt;
&lt;br /&gt;
'''Packaging Submissions'''&lt;br /&gt;
* Every submission must be packaged into a Docker image&lt;br /&gt;
* Every submission will be deployed and evaluated automatically with &amp;lt;code&amp;gt;docker run&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Accepted submission forms'''&lt;br /&gt;
* Link to a public or private GitHub repository&lt;br /&gt;
* Link to a public or private Docker Hub repository&lt;br /&gt;
* Shared Google Drive links&lt;br /&gt;
* If the repository is private, an access token is also required&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Important Dates=&lt;br /&gt;
&lt;br /&gt;
* '''Oct 8, 2024''': Submit two lead sheets as a part of the test set. &lt;br /&gt;
* '''Oct 15, 2024''': Submit the main algorithm.&lt;br /&gt;
* '''Oct 22, 2024''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Oct 25, 2024''': Submit the cherry-picked sample ids.&lt;br /&gt;
* '''Oct 31 - Nov 3, 2024''': Online subjective evaluation.&lt;br /&gt;
* '''Nov 5, 2024''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
'''WholeSongGen''' (Wang et al., 2023)&lt;br /&gt;
&lt;br /&gt;
This model generates accompaniment using cascaded diffusion models.&lt;br /&gt;
&lt;br /&gt;
'''AccoMontage''' (Zhao &amp;amp; Xia, 2021)&lt;br /&gt;
&lt;br /&gt;
This algorithm generates accompaniment using a combination of rule-based search and deep representation learning.&lt;br /&gt;
&lt;br /&gt;
'''Anticipatory Music Transformer''' (Thickstun et al., 2023)&lt;br /&gt;
&lt;br /&gt;
This model generates accompaniment using a Transformer-based model.&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
* Harte, C. (2010). Towards automatic extraction of harmony information from music signals (Doctoral dissertation, Queen Mary University of London).&lt;br /&gt;
* Lu, P., Xu, X., Kang, C., Yu, B., Xing, C., Tan, X., &amp;amp; Bian, J. (2023). MuseCoco: Generating symbolic music from text. arXiv preprint arXiv:2306.00110.&lt;br /&gt;
* Wang, Z., Min, L., &amp;amp; Xia, G. (2024). Whole-Song Hierarchical Generation of Symbolic Music Using Cascaded Diffusion Models. In The Twelfth International Conference on Learning Representations.&lt;br /&gt;
* Zhao, J., &amp;amp; Xia, G. (2021). AccoMontage: Accompaniment Arrangement via Phrase Selection and Style Transfer. In Proceedings of the 22nd International Society for Music Information Retrieval Conference, ISMIR 2021, Online, November 7-12, 2021 (pp. 833–840).&lt;br /&gt;
* Thickstun, J., Hall, D., Donahue, C., &amp;amp; Liang, P. (2023). Anticipatory music transformer. arXiv preprint arXiv:2306.08620.&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13852</id>
		<title>2024:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13852"/>
		<updated>2024-09-15T10:13:29Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation is a broad topic. It covers a wide range of tasks, including generation, harmonization, arrangement, instrumentation, and more. We have multiple ways to represent music data, and the evaluation metrics also vary. To define a MIREX challenge within this topic, we need to narrow our focus to specific subtasks that are both relevant to the community and feasible to evaluate effectively.&lt;br /&gt;
&lt;br /&gt;
This year, we select the task to be '''piano accompaniment arrangement from a lead sheet'''. The lead sheet provides information about the melody, chord progression, and optional phrase labels. The goal is to generate a piano accompaniment that complements the lead melody. The music data consists of 8-measure segments in 4/4 meter, quantized to a sixteenth-note resolution. A more detailed description of the data structure is provided in the data format section. The genre of the lead sheets is broadly within western pop music (refer to the music examples for more detail).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
The input lead sheet consists of 8 bars for the melody and harmony, with an additional mandatory pickup measure (left blank if not used). The data is prepared in JSON format containing two properties: &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;: a list of chords. Each chord contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The output generation should also follow the JSON format containing one property &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.''' &lt;br /&gt;
&lt;br /&gt;
# The data is assumed to be in 4/4 meter, quantized to a sixteenth-note resolution. For both melody and chords, onsets and durations are counted in sixteenth notes. &lt;br /&gt;
# Both onsets and durations are integers ranging from 0 to 9 * 16 - 1 = 143. Notes that end later than the ninth measure (i.e., 9 * 16 = 144th time step) will be truncated to the end of the ninth measure. &lt;br /&gt;
# Melody notes are not allowed to overlap with one another. &lt;br /&gt;
# There should be no gaps or overlaps between chords. Chords must follow one another directly. If there is a blank space where no chord is played, it must be filled with the &amp;lt;code&amp;gt;N&amp;lt;/code&amp;gt; chord. &lt;br /&gt;
# The accompaniment of the pick-up measure should be blank. &lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The pitch property of a note should be integers ranging from 0 to 127, corresponding to the MIDI pitch numbers.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the chord &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The symbol property of a chord should be a string based on the syntax of (Harte, 2010). In other words, each chord string should be able to be passed as a parameter to mir_eval.chord.encode() without causing an error.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Data Example=&lt;br /&gt;
Below is an example of the input lead sheet in the format given above. The lead sheet is the melody of the first phrase of ''Hey Jude'' by The Beatles.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;melody&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 12, &amp;quot;pitch&amp;quot;: 72, &amp;quot;duration&amp;quot;: 4},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 69, &amp;quot;duration&amp;quot;: 8},&lt;br /&gt;
    ...&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;chords&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 0, &amp;quot;symbol&amp;quot;: &amp;quot;N&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;symbol&amp;quot;: &amp;quot;F&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is an example of the generated accompaniment. The accompaniment is generated using the baseline method WholeSongGen introduced below. Note that the generation starts from the second measure (time step 16).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;acc&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 41, &amp;quot;duration&amp;quot;: 12},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 65, &amp;quot;duration&amp;quot;: 5},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Full data examples can be accessed in [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main/generation_samples this code repository]. MIDI conversion code and MIDI demos are also provided there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Evaluation=&lt;br /&gt;
We will evaluate the submitted algorithms through an online subjective double-blind test. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two lead sheets.''' The organizer team will supplement the lead sheets if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* We hope to compute some objective measurements as well, but these will only be reported as a reference.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''16 arrangements''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, we will first ask the subjects to listen to the lead melody with chords and then to the generated samples. The order of the samples will be randomized.&lt;br /&gt;
* The subject will be asked to rate each arrangement based on the following criteria:&lt;br /&gt;
:* Harmony correctness (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Naturalness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Objective Measurements==&lt;br /&gt;
* We will use objective measurements only as a reference. The correlation between subjective and objective scores will also be reported. &lt;br /&gt;
* The current plan is to compute the negative log-likelihood under a large music language model (e.g., Lu et al., 2023).&lt;br /&gt;
* We welcome proposals for objective measurements.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As a generative task with subjective evaluation, the submission process ''differs greatly'' from other MIREX tasks. There are four important stages:&lt;br /&gt;
# Test set submission (Oct 8, 2024)&lt;br /&gt;
# Algorithm submission (Oct 15, 2024)&lt;br /&gt;
# Cherry-picked sample IDs submission (Oct 25, 2024)&lt;br /&gt;
# Evaluation form submission (Nov 3, 2024)&lt;br /&gt;
Please check the Important Dates section for the detailed schedule. '''Failure to participate in any of the stages will result in disqualification.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;batch_acc_gen.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use the script to generate output files according to the following format:&lt;br /&gt;
&lt;br /&gt;
'''Usage'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
acc_gen.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: Path to the input .json file.&lt;br /&gt;
* Output Folder: Path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: Number of samples to generate.&lt;br /&gt;
&lt;br /&gt;
'''Output'''&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
Participants are free to implement the internal logic of the script, but it must adhere to this format for proper execution during the evaluation process.&lt;br /&gt;
&lt;br /&gt;
'''Packaging Submissions'''&lt;br /&gt;
* Every submission must be packaged into a Docker image&lt;br /&gt;
* Every submission will be deployed and evaluated automatically with &amp;lt;code&amp;gt;docker run&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Accepted submission forms'''&lt;br /&gt;
* Link to a public or private GitHub repository&lt;br /&gt;
* Link to a public or private Docker Hub repository&lt;br /&gt;
* Shared Google Drive links&lt;br /&gt;
* If the repository is private, an access token is also required&lt;br /&gt;
&lt;br /&gt;
=Important Dates=&lt;br /&gt;
&lt;br /&gt;
* '''Oct 8, 2024''': Submit two lead sheets as a part of the test set. &lt;br /&gt;
* '''Oct 15, 2024''': Submit the main algorithm.&lt;br /&gt;
* '''Oct 22, 2024''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Oct 25, 2024''': Submit the cherry-picked sample ids.&lt;br /&gt;
* '''Oct 31 - Nov 3, 2024''': Online subjective evaluation.&lt;br /&gt;
* '''Nov 5, 2024''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
'''WholeSongGen''' (Wang et al., 2023)&lt;br /&gt;
&lt;br /&gt;
This model generates accompaniment using cascaded diffusion models.&lt;br /&gt;
&lt;br /&gt;
'''AccoMontage''' (Zhao &amp;amp; Xia, 2021)&lt;br /&gt;
&lt;br /&gt;
This algorithm generates accompaniment using a combination of rule-based search and deep representation learning.&lt;br /&gt;
&lt;br /&gt;
'''Anticipatory Music Transformer''' (Thickstun et al., 2023)&lt;br /&gt;
&lt;br /&gt;
This model generates accompaniment using a Transformer-based model.&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
* Harte, C. (2010). Towards automatic extraction of harmony information from music signals (Doctoral dissertation, Queen Mary University of London).&lt;br /&gt;
* Lu, P., Xu, X., Kang, C., Yu, B., Xing, C., Tan, X., &amp;amp; Bian, J. (2023). MuseCoco: Generating symbolic music from text. arXiv preprint arXiv:2306.00110.&lt;br /&gt;
* Wang, Z., Min, L., &amp;amp; Xia, G. (2024). Whole-Song Hierarchical Generation of Symbolic Music Using Cascaded Diffusion Models. In The Twelfth International Conference on Learning Representations.&lt;br /&gt;
* Zhao, J., &amp;amp; Xia, G. (2021). AccoMontage: Accompaniment Arrangement via Phrase Selection and Style Transfer. In Proceedings of the 22nd International Society for Music Information Retrieval Conference, ISMIR 2021, Online, November 7-12, 2021 (pp. 833–840).&lt;br /&gt;
* Thickstun, J., Hall, D., Donahue, C., &amp;amp; Liang, P. (2023). Anticipatory music transformer. arXiv preprint arXiv:2306.08620.&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13851</id>
		<title>2024:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13851"/>
		<updated>2024-09-15T10:09:46Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Baselines */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation is a broad topic. It covers a wide range of tasks, including generation, harmonization, arrangement, instrumentation, and more. We have multiple ways to represent music data, and the evaluation metrics also vary. To define a MIREX challenge within this topic, we need to narrow our focus to specific subtasks that are both relevant to the community and feasible to evaluate effectively.&lt;br /&gt;
&lt;br /&gt;
This year, we select the task to be '''piano accompaniment arrangement from a lead sheet'''. The lead sheet provides information about the melody, chord progression, and optional phrase labels. The goal is to generate a piano accompaniment that complements the lead melody. The music data consists of 8-measure segments in 4/4 meter, quantized to a sixteenth-note resolution. A more detailed description of the data structure is provided in the data format section. The genre of the lead sheets is broadly within western pop music (refer to the music examples for more detail).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
The input lead sheet consists of 8 bars for the melody and harmony, with an additional mandatory pickup measure (left blank if not used). The data is prepared in JSON format containing two properties: &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;: a list of chords. Each chord contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The output generation should also follow the JSON format containing one property &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.''' &lt;br /&gt;
&lt;br /&gt;
# The data is assumed to be in 4/4 meter, quantized to a sixteenth-note resolution. For both melody and chords, onsets and durations are counted in sixteenth notes. &lt;br /&gt;
# Both onsets and durations are integers ranging from 0 to 9 * 16 - 1 = 143. Notes that end later than the ninth measure (i.e., 9 * 16 = 144th time step) will be truncated to the end of the ninth measure. &lt;br /&gt;
# Melody notes are not allowed to overlap with one another. &lt;br /&gt;
# There should be no gaps or overlaps between chords. Chords must follow one another directly. If there is a blank space where no chord is played, it must be filled with the &amp;lt;code&amp;gt;N&amp;lt;/code&amp;gt; chord. &lt;br /&gt;
# The accompaniment of the pick-up measure should be blank. &lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The pitch property of a note should be integers ranging from 0 to 127, corresponding to the MIDI pitch numbers.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the chord &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The symbol property of a chord should be a string based on the syntax of (Harte, 2010). In other words, each chord string should be able to be passed as a parameter to mir_eval.chord.encode() without causing an error.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Data Example=&lt;br /&gt;
Below is an example of the input lead sheet in the format given above. The lead sheet is the melody of the first phrase of ''Hey Jude'' by The Beatles.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;melody&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 12, &amp;quot;pitch&amp;quot;: 72, &amp;quot;duration&amp;quot;: 4},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 69, &amp;quot;duration&amp;quot;: 8},&lt;br /&gt;
    ...&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;chords&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 0, &amp;quot;symbol&amp;quot;: &amp;quot;N&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;symbol&amp;quot;: &amp;quot;F&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is an example of the generated accompaniment. The accompaniment is generated using the baseline method WholeSongGen introduced below. Note that the generation starts from the second measure (time step 16).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;acc&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 41, &amp;quot;duration&amp;quot;: 12},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 65, &amp;quot;duration&amp;quot;: 5},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Full data examples can be accessed in [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main/generation_samples this code repository]. MIDI conversion code and MIDI demos are also provided there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Evaluation=&lt;br /&gt;
We will evaluate the submitted algorithms through an online subjective double-blind test. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two lead sheets.''' The organizer team will supplement the lead sheets if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* We hope to compute some objective measurements as well, but these will only be reported as a reference.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''16 arrangements''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, we will first ask the subjects to listen to the lead melody with chords and then to the generated samples. The order of the samples will be randomized.&lt;br /&gt;
* The subject will be asked to rate each arrangement based on the following criteria:&lt;br /&gt;
:* Harmony correctness (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Naturalness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Objective Measurements==&lt;br /&gt;
* We will use objective measurements only as a reference. The correlation between subjective and objective scores will also be reported. &lt;br /&gt;
* The current plan is to compute the negative log-likelihood under a large music language model (e.g., Lu et al., 2023).&lt;br /&gt;
* We welcome proposals for objective measurements.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As a generative task with subjective evaluation, the submission process ''differs greatly'' from other MIREX tasks. There are four important stages:&lt;br /&gt;
# Test set submission (Oct 8, 2024)&lt;br /&gt;
# Algorithm submission (Oct 15, 2024)&lt;br /&gt;
# Cherry-picked sample IDs submission (Oct 25, 2024)&lt;br /&gt;
# Evaluation form submission (Nov 3, 2024)&lt;br /&gt;
Please check the Important Dates section for the detailed schedule. '''Failure to participate in any of the stages will result in disqualification.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;batch_acc_gen.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use the script to generate output files according to the following format:&lt;br /&gt;
&lt;br /&gt;
'''Usage'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
acc_gen.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: Path to the input .json file.&lt;br /&gt;
* Output Folder: Path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: Number of samples to generate.&lt;br /&gt;
&lt;br /&gt;
'''Output'''&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
Participants are free to implement the internal logic of the script, but it must adhere to this format for proper execution during the evaluation process.&lt;br /&gt;
&lt;br /&gt;
'''Packaging Submissions'''&lt;br /&gt;
* Every submission must be packaged into a Docker image&lt;br /&gt;
* Every submission will be deployed and evaluated automatically with &amp;lt;code&amp;gt;docker run&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Accepted submission forms'''&lt;br /&gt;
* Link to a public or private GitHub repository&lt;br /&gt;
* Link to a public or private Docker Hub repository&lt;br /&gt;
* Shared Google Drive links&lt;br /&gt;
* If the repository is private, an access token is also required&lt;br /&gt;
&lt;br /&gt;
=Important Dates=&lt;br /&gt;
&lt;br /&gt;
* '''Oct 8, 2024''': Submit two lead sheets as a part of the test set. &lt;br /&gt;
* '''Oct 15, 2024''': Submit the main algorithm.&lt;br /&gt;
* '''Oct 22, 2024''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Oct 25, 2024''': Submit the cherry-picked sample ids.&lt;br /&gt;
* '''Oct 31 - Nov 3, 2024''': Online subjective evaluation.&lt;br /&gt;
* '''Nov 5, 2024''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
'''WholeSongGen''' (Wang et al., 2023)&lt;br /&gt;
&lt;br /&gt;
This model generates accompaniment using cascaded diffusion models.&lt;br /&gt;
&lt;br /&gt;
'''AccoMontage''' (Zhao &amp;amp; Xia, 2021)&lt;br /&gt;
&lt;br /&gt;
This algorithm generates accompaniment using a combination of rule-based search and deep representation learning.&lt;br /&gt;
&lt;br /&gt;
'''Anticipatory Music Transformer''' (Thickstun et al., 2023)&lt;br /&gt;
&lt;br /&gt;
This model generates accompaniment using a Transformer-based model.&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
* Harte. Towards Automatic Extraction of Harmony Information from Music Signals. PhD thesis, Queen Mary University of London, August 2010.&lt;br /&gt;
* Lu, P., Xu, X., Kang, C., Yu, B., Xing, C., Tan, X., &amp;amp; Bian, J. (2023). MuseCoco: Generating symbolic music from text. arXiv preprint arXiv:2306.00110.&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13850</id>
		<title>2024:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13850"/>
		<updated>2024-09-15T10:09:32Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Baselines */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation is a broad topic. It covers a wide range of tasks, including generation, harmonization, arrangement, instrumentation, and more. We have multiple ways to represent music data, and the evaluation metrics also vary. To define a MIREX challenge within this topic, we need to narrow our focus to specific subtasks that are both relevant to the community and feasible to evaluate effectively.&lt;br /&gt;
&lt;br /&gt;
This year, we select the task to be '''piano accompaniment arrangement from a lead sheet'''. The lead sheet provides information about the melody, chord progression, and optional phrase labels. The goal is to generate a piano accompaniment that complements the lead melody. The music data consists of 8-measure segments in 4/4 meter, quantized to a sixteenth-note resolution. A more detailed description of the data structure is provided in the data format section. The genre of the lead sheets is broadly within western pop music (refer to the music examples for more detail).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
The input lead sheet consists of 8 bars for the melody and harmony, with an additional mandatory pickup measure (left blank if not used). The data is prepared in JSON format containing two properties: &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;: a list of chords. Each chord contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The output generation should also follow the JSON format containing one property &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.''' &lt;br /&gt;
&lt;br /&gt;
# The data is assumed to be in 4/4 meter, quantized to a sixteenth-note resolution. For both melody and chords, onsets and durations are counted in sixteenth notes. &lt;br /&gt;
# Both onsets and durations are integers ranging from 0 to 9 * 16 - 1 = 143. Notes that end later than the ninth measure (i.e., 9 * 16 = 144th time step) will be truncated to the end of the ninth measure. &lt;br /&gt;
# Melody notes are not allowed to overlap with one another. &lt;br /&gt;
# There should be no gaps or overlaps between chords. Chords must follow one another directly. If there is a blank space where no chord is played, it must be filled with the &amp;lt;code&amp;gt;N&amp;lt;/code&amp;gt; chord. &lt;br /&gt;
# The accompaniment of the pick-up measure should be blank. &lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The pitch property of a note should be integers ranging from 0 to 127, corresponding to the MIDI pitch numbers.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the chord &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The symbol property of a chord should be a string based on the syntax of (Harte, 2010). In other words, each chord string should be able to be passed as a parameter to mir_eval.chord.encode() without causing an error.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Data Example=&lt;br /&gt;
Below is an example of the input lead sheet in the format given above. The lead sheet is the melody of the first phrase of ''Hey Jude'' by The Beatles.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;melody&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 12, &amp;quot;pitch&amp;quot;: 72, &amp;quot;duration&amp;quot;: 4},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 69, &amp;quot;duration&amp;quot;: 8},&lt;br /&gt;
    ...&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;chords&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 0, &amp;quot;symbol&amp;quot;: &amp;quot;N&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;symbol&amp;quot;: &amp;quot;F&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is an example of the generated accompaniment. The accompaniment is generated using the baseline method WholeSongGen introduced below. Note that the generation starts from the second measure (time step 16).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;acc&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 41, &amp;quot;duration&amp;quot;: 12},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 65, &amp;quot;duration&amp;quot;: 5},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Full data examples can be accessed in [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main/generation_samples this code repository]. MIDI conversion code and MIDI demos are also provided there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Evaluation=&lt;br /&gt;
We will evaluate the submitted algorithms through an online subjective double-blind test. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two lead sheets.''' The organizer team will supplement the lead sheets if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* We hope to compute some objective measurements as well, but these will only be reported as a reference.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''16 arrangements''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, we will first ask the subjects to listen to the lead melody with chords and then to the generated samples. The order of the samples will be randomized.&lt;br /&gt;
* The subject will be asked to rate each arrangement based on the following criteria:&lt;br /&gt;
:* Harmony correctness (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Naturalness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Objective Measurements==&lt;br /&gt;
* We will use objective measurements only as a reference. The correlation between subjective and objective scores will also be reported. &lt;br /&gt;
* The current plan is to compute the negative log-likelihood under a large music language model (e.g., Lu et al., 2023).&lt;br /&gt;
* We welcome proposals for objective measurements.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As a generative task with subjective evaluation, the submission process ''differs greatly'' from other MIREX tasks. There are four important stages:&lt;br /&gt;
# Test set submission (Oct 8, 2024)&lt;br /&gt;
# Algorithm submission (Oct 15, 2024)&lt;br /&gt;
# Cherry-picked sample IDs submission (Oct 25, 2024)&lt;br /&gt;
# Evaluation form submission (Nov 3, 2024)&lt;br /&gt;
Please check the Important Dates section for the detailed schedule. '''Failure to participate in any of the stages will result in disqualification.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;batch_acc_gen.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use the script to generate output files according to the following format:&lt;br /&gt;
&lt;br /&gt;
'''Usage'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
acc_gen.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: Path to the input .json file.&lt;br /&gt;
* Output Folder: Path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: Number of samples to generate.&lt;br /&gt;
&lt;br /&gt;
'''Output'''&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
Participants are free to implement the internal logic of the script, but it must adhere to this format for proper execution during the evaluation process.&lt;br /&gt;
&lt;br /&gt;
'''Packaging Submissions'''&lt;br /&gt;
* Every submission must be packaged into a Docker image&lt;br /&gt;
* Every submission will be deployed and evaluated automatically with &amp;lt;code&amp;gt;docker run&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Accepted submission forms'''&lt;br /&gt;
* Link to a public or private GitHub repository&lt;br /&gt;
* Link to a public or private Docker Hub repository&lt;br /&gt;
* Shared Google Drive links&lt;br /&gt;
* If the repository is private, an access token is also required&lt;br /&gt;
&lt;br /&gt;
=Important Dates=&lt;br /&gt;
&lt;br /&gt;
* '''Oct 8, 2024''': Submit two lead sheets as a part of the test set. &lt;br /&gt;
* '''Oct 15, 2024''': Submit the main algorithm.&lt;br /&gt;
* '''Oct 22, 2024''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Oct 25, 2024''': Submit the cherry-picked sample ids.&lt;br /&gt;
* '''Oct 31 - Nov 3, 2024''': Online subjective evaluation.&lt;br /&gt;
* '''Nov 5, 2024''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
&lt;br /&gt;
'''WholeSongGen''' (Wang et al., 2023)&lt;br /&gt;
This model generates accompaniment using cascaded diffusion models.&lt;br /&gt;
&lt;br /&gt;
'''AccoMontage''' (Zhao &amp;amp; Xia, 2021)&lt;br /&gt;
This algorithm generates accompaniment using a combination of rule-based search and deep representation learning.&lt;br /&gt;
&lt;br /&gt;
'''Anticipatory Music Transformer''' (Thickstun et al., 2023)&lt;br /&gt;
This model generates accompaniment using a Transformer-based model.&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
* Harte. Towards Automatic Extraction of Harmony Information from Music Signals. PhD thesis, Queen Mary University of London, August 2010.&lt;br /&gt;
* Lu, P., Xu, X., Kang, C., Yu, B., Xing, C., Tan, X., &amp;amp; Bian, J. (2023). MuseCoco: Generating symbolic music from text. arXiv preprint arXiv:2306.00110.&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13846</id>
		<title>2024:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13846"/>
		<updated>2024-09-15T10:05:07Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Important Dates */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation is a broad topic. It covers a wide range of tasks, including generation, harmonization, arrangement, instrumentation, and more. We have multiple ways to represent music data, and the evaluation metrics also vary. To define a MIREX challenge within this topic, we need to narrow our focus to specific subtasks that are both relevant to the community and feasible to evaluate effectively.&lt;br /&gt;
&lt;br /&gt;
This year, we select the task to be '''piano accompaniment arrangement from a lead sheet'''. The lead sheet provides information about the melody, chord progression, and optional phrase labels. The goal is to generate a piano accompaniment that complements the lead melody. The music data consists of 8-measure segments in 4/4 meter, quantized to a sixteenth-note resolution. A more detailed description of the data structure is provided in the data format section. The genre of the lead sheets is broadly within western pop music (refer to the music examples for more detail).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
The input lead sheet consists of 8 bars for the melody and harmony, with an additional mandatory pickup measure (left blank if not used). The data is prepared in JSON format containing two properties: &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;: a list of chords. Each chord contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The output generation should also follow the JSON format containing one property &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.''' &lt;br /&gt;
&lt;br /&gt;
# The data is assumed to be in 4/4 meter, quantized to a sixteenth-note resolution. For both melody and chords, onsets and durations are counted in sixteenth notes. &lt;br /&gt;
# Both onsets and durations are integers ranging from 0 to 9 * 16 - 1 = 143. Notes that end later than the ninth measure (i.e., 9 * 16 = 144th time step) will be truncated to the end of the ninth measure. &lt;br /&gt;
# Melody notes are not allowed to overlap with one another. &lt;br /&gt;
# There should be no gaps or overlaps between chords. Chords must follow one another directly. If there is a blank space where no chord is played, it must be filled with the &amp;lt;code&amp;gt;N&amp;lt;/code&amp;gt; chord. &lt;br /&gt;
# The accompaniment of the pick-up measure should be blank. &lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The pitch property of a note should be integers ranging from 0 to 127, corresponding to the MIDI pitch numbers.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the chord &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The symbol property of a chord should be a string based on the syntax of (Harte, 2010). In other words, each chord string should be able to be passed as a parameter to mir_eval.chord.encode() without causing an error.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Data Example=&lt;br /&gt;
Below is an example of the input lead sheet in the format given above. The lead sheet is the melody of the first phrase of ''Hey Jude'' by The Beatles.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;melody&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 12, &amp;quot;pitch&amp;quot;: 72, &amp;quot;duration&amp;quot;: 4},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 69, &amp;quot;duration&amp;quot;: 8},&lt;br /&gt;
    ...&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;chords&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 0, &amp;quot;symbol&amp;quot;: &amp;quot;N&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;symbol&amp;quot;: &amp;quot;F&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is an example of the generated accompaniment. The accompaniment is generated using the baseline method WholeSongGen introduced below. Note that the generation starts from the second measure (time step 16).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;acc&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 41, &amp;quot;duration&amp;quot;: 12},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 65, &amp;quot;duration&amp;quot;: 5},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Full data examples can be accessed in [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main/generation_samples this code repository]. MIDI conversion code and MIDI demos are also provided there.&lt;br /&gt;
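&lt;br /&gt;
For reference, the following is a minimal Python sketch of such a conversion (our illustration, not the repository's code), using the &amp;lt;code&amp;gt;pretty_midi&amp;lt;/code&amp;gt; package and assuming a fixed tempo of 120 BPM, so that one sixteenth-note time step lasts 0.125 seconds:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import json&lt;br /&gt;
&lt;br /&gt;
import pretty_midi&lt;br /&gt;
&lt;br /&gt;
STEP = 0.125  # seconds per sixteenth note at the assumed 120 BPM&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def json_to_midi(lead_sheet_path, acc_path, out_path):&lt;br /&gt;
    '''Render the melody and generated accompaniment to a piano MIDI file.'''&lt;br /&gt;
    with open(lead_sheet_path) as f:&lt;br /&gt;
        lead = json.load(f)&lt;br /&gt;
    with open(acc_path) as f:&lt;br /&gt;
        acc = json.load(f)&lt;br /&gt;
    midi = pretty_midi.PrettyMIDI()  # default initial tempo is 120 BPM&lt;br /&gt;
    for name, notes in [('melody', lead['melody']), ('acc', acc['acc'])]:&lt;br /&gt;
        inst = pretty_midi.Instrument(program=0, name=name)  # acoustic grand piano&lt;br /&gt;
        for n in notes:&lt;br /&gt;
            inst.notes.append(pretty_midi.Note(&lt;br /&gt;
                velocity=80,&lt;br /&gt;
                pitch=n['pitch'],&lt;br /&gt;
                start=n['start'] * STEP,&lt;br /&gt;
                end=(n['start'] + n['duration']) * STEP))&lt;br /&gt;
        midi.instruments.append(inst)&lt;br /&gt;
    midi.write(out_path)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;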
&lt;br /&gt;
&lt;br /&gt;
=Evaluation=&lt;br /&gt;
We will evaluate the submitted algorithms through an online subjective double-blind test. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two lead sheets.''' The organizer team will supplement the lead sheets if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* We hope to compute some objective measurements as well, but these will only be reported as a reference.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''16 arrangements''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, we will first ask the subjects to listen to the lead melody with chords and then listen to the generated samples, presented in randomized order.&lt;br /&gt;
* The subject will be asked to rate each arrangement based on the following criteria:&lt;br /&gt;
:* Harmony correctness (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Naturalness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Objective Measurements==&lt;br /&gt;
* Objective measurements will be used only as a reference; the correlation between subjective and objective scores will also be reported. &lt;br /&gt;
* The current plan is to compute the negative log-likelihood under a large music language model (e.g., Lu et al., 2023); a minimal sketch follows this list.&lt;br /&gt;
* We welcome proposals for objective measurements.&lt;br /&gt;
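&lt;br /&gt;
For illustration only (the scoring model is to be determined), the measurement could be as simple as the sketch below, which averages per-token negative log-likelihood; &amp;lt;code&amp;gt;music_lm.score_tokens&amp;lt;/code&amp;gt; is a hypothetical placeholder for whatever interface the chosen music language model provides:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
def average_nll(token_log_probs):&lt;br /&gt;
    '''Average negative log-likelihood (in nats) over a token sequence.'''&lt;br /&gt;
    return -sum(token_log_probs) / len(token_log_probs)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Hypothetical usage: a music language model returns one log-probability per&lt;br /&gt;
# token of the tokenized lead sheet + accompaniment sequence.&lt;br /&gt;
# log_probs = music_lm.score_tokens(tokens)  # placeholder, not a real API&lt;br /&gt;
# print(average_nll(log_probs))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;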
&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As a generative task with subjective evaluation, the submission process ''differs greatly'' from other MIREX tasks. There are four important stages:&lt;br /&gt;
# Test set submission (Oct 8, 2024)&lt;br /&gt;
# Algorithm submission (Oct 15, 2024)&lt;br /&gt;
# Cherry-picked sample IDs submission (Oct 25, 2024)&lt;br /&gt;
# Evaluation form submission (Nov 3, 2024)&lt;br /&gt;
Please check the Important Dates section for the detailed schedule. '''Failure to participate in any of the stages will result in disqualification.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Algorithm Submission==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;batch_acc_gen.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use the script to generate output files according to the following format:&lt;br /&gt;
&lt;br /&gt;
'''Usage'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
batch_acc_gen.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: Path to the input .json file.&lt;br /&gt;
* Output Folder: Path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: Number of samples to generate.&lt;br /&gt;
&lt;br /&gt;
'''Output'''&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., with the final index equal to n_sample.&lt;br /&gt;
&lt;br /&gt;
Participants are free to implement the internal logic of the script, but it must adhere to this format for proper execution during the evaluation process.&lt;br /&gt;
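&lt;br /&gt;
For illustration, a minimal Python sketch of such a batch generator is shown below; the file name &amp;lt;code&amp;gt;batch_acc_gen.py&amp;lt;/code&amp;gt; and the &amp;lt;code&amp;gt;generate_accompaniment&amp;lt;/code&amp;gt; stub are placeholders for each team's own code, and a one-line &amp;lt;code&amp;gt;batch_acc_gen.sh&amp;lt;/code&amp;gt; wrapper could simply forward its three arguments to this script:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import json&lt;br /&gt;
import os&lt;br /&gt;
import sys&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def generate_accompaniment(lead_sheet):&lt;br /&gt;
    '''Placeholder for the team's model; must return a dict {'acc': [...]}.'''&lt;br /&gt;
    raise NotImplementedError&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def main(input_path, output_folder, n_sample):&lt;br /&gt;
    with open(input_path) as f:&lt;br /&gt;
        lead_sheet = json.load(f)&lt;br /&gt;
    os.makedirs(output_folder, exist_ok=True)&lt;br /&gt;
    for i in range(1, int(n_sample) + 1):&lt;br /&gt;
        result = generate_accompaniment(lead_sheet)&lt;br /&gt;
        # Output files are numbered sample_01.json, sample_02.json, ...&lt;br /&gt;
        out_path = os.path.join(output_folder, 'sample_%02d.json' % i)&lt;br /&gt;
        with open(out_path, 'w') as f:&lt;br /&gt;
            json.dump(result, f)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
if __name__ == '__main__':&lt;br /&gt;
    main(sys.argv[1], sys.argv[2], sys.argv[3])&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;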
&lt;br /&gt;
'''Packaging Submissions'''&lt;br /&gt;
* Every submission must be packaged as a Docker image.&lt;br /&gt;
* Every submission will be deployed and evaluated automatically with &amp;lt;code&amp;gt;docker run&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Accepted submission forms'''&lt;br /&gt;
* Link to a public or private GitHub repository&lt;br /&gt;
* Link to a public or private Docker Hub image&lt;br /&gt;
* Shared Google Drive links&lt;br /&gt;
* If the repository is private, an access token is also required&lt;br /&gt;
&lt;br /&gt;
=Important Dates=&lt;br /&gt;
&lt;br /&gt;
* '''Oct 8, 2024''': Submit two lead sheets as a part of the test set. &lt;br /&gt;
* '''Oct 15, 2024''': Submit the main algorithm.&lt;br /&gt;
* '''Oct 22, 2024''': Return the generated samples. The cherry-picking phase begins.&lt;br /&gt;
* '''Oct 25, 2024''': Submit the cherry-picked sample ids.&lt;br /&gt;
* '''Oct 31 - Nov 3, 2024''': Online subjective evaluation.&lt;br /&gt;
* '''Nov 5, 2024''': Announce the final result.&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
=Contacts=&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
* Harte. Towards Automatic Extraction of Harmony Information from Music Signals. PhD thesis, Queen Mary University of London, August 2010.&lt;br /&gt;
* Lu, P., Xu, X., Kang, C., Yu, B., Xing, C., Tan, X., &amp;amp; Bian, J. (2023). MuseCoco: Generating symbolic music from text. arXiv preprint arXiv:2306.00110.&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13841</id>
		<title>2024:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13841"/>
		<updated>2024-09-15T09:59:12Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Evaluation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation is a broad topic. It covers a wide range of tasks, including generation, harmonization, arrangement, instrumentation, and more. We have multiple ways to represent music data, and the evaluation metrics also vary. To define a MIREX challenge within this topic, we need to narrow our focus to specific subtasks that are both relevant to the community and feasible to evaluate effectively.&lt;br /&gt;
&lt;br /&gt;
This year, we select the task to be '''piano accompaniment arrangement from a lead sheet'''. The lead sheet provides information about the melody, chord progression, and optional phrase labels. The goal is to generate a piano accompaniment that complements the lead melody. The music data consists of 8-measure segments in 4/4 meter, quantized to a sixteenth-note resolution. A more detailed description of the data structure is provided in the data format section. The genre of the lead sheets is broadly within western pop music (refer to the music examples for more detail).&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
The input lead sheet consists of 8 bars for the melody and harmony, with an additional mandatory pickup measure (left blank if not used). The data is prepared in JSON format containing two properties: &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;: a list of chords. Each chord contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The output generation should also follow the JSON format containing one property &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.''' &lt;br /&gt;
&lt;br /&gt;
# The data is assumed to be in 4/4 meter, quantized to a sixteenth-note resolution. For both melody and chords, onsets and durations are counted in sixteenth notes. &lt;br /&gt;
# Both onsets and durations are integers ranging from 0 to 9 * 16 - 1 = 143. Notes that end later than the ninth measure (i.e., 9 * 16 = 144th time step) will be truncated to the end of the ninth measure. &lt;br /&gt;
# Melody notes are not allowed to overlap with one another. &lt;br /&gt;
# There should be no gaps or overlaps between chords. Chords must follow one another directly. If there is a blank space where no chord is played, it must be filled with the &amp;lt;code&amp;gt;N&amp;lt;/code&amp;gt; chord. &lt;br /&gt;
# The accompaniment of the pick-up measure should be blank. &lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The pitch property of a note should be an integer ranging from 0 to 127, corresponding to the MIDI pitch number.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the chord &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The symbol property of a chord should be a string following the syntax of (Harte, 2010). In other words, each chord string should be accepted by mir_eval.chord.encode() without raising an error (see the illustrative check below).&lt;br /&gt;
&lt;br /&gt;
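As a quick sanity check, chord symbols can be validated programmatically. The sketch below is illustrative only and is not part of the submission requirements; it assumes a hypothetical input file name and simply passes every chord symbol to &amp;lt;code&amp;gt;mir_eval.chord.encode()&amp;lt;/code&amp;gt;, which raises an exception for symbols that do not follow the (Harte, 2010) syntax:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustrative check of the chord symbols in a lead sheet (the file name is hypothetical).&lt;br /&gt;
import json&lt;br /&gt;
import mir_eval&lt;br /&gt;
&lt;br /&gt;
with open('input.json') as f:&lt;br /&gt;
    lead_sheet = json.load(f)&lt;br /&gt;
&lt;br /&gt;
for chord in lead_sheet['chords']:&lt;br /&gt;
    # Raises an exception for malformed symbols; 'N' (no chord) is accepted.&lt;br /&gt;
    mir_eval.chord.encode(chord['symbol'])&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;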
=Data Example=&lt;br /&gt;
Below is an example of the input lead sheet in the format given above. The lead sheet is the melody of the first phrase of ''Hey Jude'' by The Beatles.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;melody&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 12, &amp;quot;pitch&amp;quot;: 72, &amp;quot;duration&amp;quot;: 4},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 69, &amp;quot;duration&amp;quot;: 8},&lt;br /&gt;
    ...&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;chords&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 0, &amp;quot;symbol&amp;quot;: &amp;quot;N&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;symbol&amp;quot;: &amp;quot;F&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is an example of the generated accompaniment. The accompaniment is generated using the baseline method WholeSongGen introduced below. Note that the generation starts from the second measure (time step 16).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;acc&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 41, &amp;quot;duration&amp;quot;: 12},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 65, &amp;quot;duration&amp;quot;: 5},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Full data examples can be accessed in [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main/generation_samples this code repository]. MIDI conversion code and MIDI demos are also provided there.&lt;br /&gt;
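&lt;br /&gt;
For readers who want to quickly audition a generated file without cloning the repository, the sketch below is one possible JSON-to-MIDI conversion. It is a minimal illustration rather than the official conversion script, and it assumes a fixed tempo of 120 BPM, the &amp;lt;code&amp;gt;pretty_midi&amp;lt;/code&amp;gt; package, and a hypothetical file name:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import json&lt;br /&gt;
import pretty_midi&lt;br /&gt;
&lt;br /&gt;
STEP = 60.0 / 120 / 4   # seconds per sixteenth-note step at an assumed 120 BPM&lt;br /&gt;
&lt;br /&gt;
with open('sample_01.json') as f:   # hypothetical generated file&lt;br /&gt;
    acc_notes = json.load(f)['acc']&lt;br /&gt;
&lt;br /&gt;
pm = pretty_midi.PrettyMIDI()&lt;br /&gt;
piano = pretty_midi.Instrument(program=0)   # acoustic grand piano&lt;br /&gt;
for note in acc_notes:&lt;br /&gt;
    onset = note['start'] * STEP&lt;br /&gt;
    piano.notes.append(pretty_midi.Note(velocity=80, pitch=note['pitch'],&lt;br /&gt;
                                        start=onset, end=onset + note['duration'] * STEP))&lt;br /&gt;
pm.instruments.append(piano)&lt;br /&gt;
pm.write('sample_01.mid')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;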
&lt;br /&gt;
=Evaluation=&lt;br /&gt;
We will evaluate the submitted algorithms through an online subjective double-blind test. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two lead sheets.''' The organizer team will supplement the lead sheet if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* We hope to compute some objective measurements as well, but these will only be reported as a reference.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''16 arrangements''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, subjects will first listen to the lead melody with chords and then listen to the generated samples, presented in randomized order.&lt;br /&gt;
* The subject will be asked to rate each arrangement based on the following criteria:&lt;br /&gt;
:* Harmony correctness (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Naturalness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Objective Measurements==&lt;br /&gt;
* Objective measurements will be reported only as a reference; the correlation between subjective and objective scores will also be reported. &lt;br /&gt;
* The current plan is to compute the negative log-likelihood under a large music language model (e.g., Lu et al., 2023).&lt;br /&gt;
* We welcome proposals for additional objective measurements.&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As a generative task with subjective evaluation, the submission process ''differs greatly'' from other MIREX tasks. There are four important stages:&lt;br /&gt;
# Test set submission (Oct 8, 2024)&lt;br /&gt;
# Algorithm submission (Oct 15, 2024)&lt;br /&gt;
# Cherry-picked sample IDs submission (Oct 25, 2024)&lt;br /&gt;
# Evaluation form submission (Nov 3, 2024)&lt;br /&gt;
Please check the Important Dates section for the detailed schedule. '''Failure to participate in any of the stages will result in disqualification.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==I/O Format==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;batch_acc_gen.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use this script to generate output files according to the following format:&lt;br /&gt;
&lt;br /&gt;
'''Usage'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
batch_acc_gen.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: Path to the input .json file.&lt;br /&gt;
* Output Folder: Path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: Number of samples to generate.&lt;br /&gt;
&lt;br /&gt;
'''Output'''&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
Participants are free to implement the internal logic of the script, but it must adhere to this format for proper execution during the evaluation process.&lt;br /&gt;
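&lt;br /&gt;
To make the expected call signature and output naming concrete, here is a minimal placeholder in Python. It is purely illustrative: &amp;lt;code&amp;gt;generate_accompaniment&amp;lt;/code&amp;gt; is a stand-in for a real model, and whether &amp;lt;code&amp;gt;batch_acc_gen.sh&amp;lt;/code&amp;gt; dispatches to a Python entry point at all is entirely up to each team:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustrative stub: writes n_sample placeholder outputs with the required names.&lt;br /&gt;
import json&lt;br /&gt;
import sys&lt;br /&gt;
&lt;br /&gt;
def generate_accompaniment(lead_sheet):&lt;br /&gt;
    # A real submission would arrange the piano accompaniment here.&lt;br /&gt;
    return {'acc': []}&lt;br /&gt;
&lt;br /&gt;
if __name__ == '__main__':&lt;br /&gt;
    input_path, output_folder, n_sample = sys.argv[1], sys.argv[2], int(sys.argv[3])&lt;br /&gt;
    with open(input_path) as f:&lt;br /&gt;
        lead_sheet = json.load(f)&lt;br /&gt;
    for i in range(1, n_sample + 1):&lt;br /&gt;
        with open(f'{output_folder}/sample_{i:02d}.json', 'w') as out:&lt;br /&gt;
            json.dump(generate_accompaniment(lead_sheet), out)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;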
&lt;br /&gt;
==Packaging Submissions==&lt;br /&gt;
&lt;br /&gt;
* Every submission must be packed into a Docker image&lt;br /&gt;
* Every submission will be deployed and evaluated automatically with &amp;lt;code&amp;gt;docker run&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Accepted submission forms:&lt;br /&gt;
* Link to a public or private GitHub repository&lt;br /&gt;
* Link to a public or private Docker Hub repository&lt;br /&gt;
* Shared Google Drive links&lt;br /&gt;
* If the repository is private, an access token is also required&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
The following schedule is '''tentative'''.&lt;br /&gt;
* Oct 8, 2024: submission of two lead sheets as a part of the test set. This is also a confirmation of participation. &lt;br /&gt;
* Oct. 15, 2024: submission of the algorithm in docker.&lt;br /&gt;
* Oct. 22, 2024: return of the generated samples. Start of the cherry-picking phase.&lt;br /&gt;
* Oct. 25, 2024: submission of the cherry-picked sample ids.&lt;br /&gt;
* Oct. 31 - Nov. 3, 2024: subjective test.&lt;br /&gt;
* Nov. 5, 2024: announcement of the final result.&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
==Contacts==&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* Harte, C. (2010). Towards Automatic Extraction of Harmony Information from Music Signals. PhD thesis, Queen Mary University of London.&lt;br /&gt;
* Lu, P., Xu, X., Kang, C., Yu, B., Xing, C., Tan, X., &amp;amp; Bian, J. (2023). Musecoco: Generating symbolic music from text. arXiv preprint arXiv:2306.00110.&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13840</id>
		<title>2024:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13840"/>
		<updated>2024-09-15T09:58:49Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Submission */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation is a broad topic. It covers a wide range of tasks, including generation, harmonization, arrangement, instrumentation, and more. We have multiple ways to represent music data, and the evaluation metrics also vary. To define a MIREX challenge within this topic, we need to narrow our focus to specific subtasks that are both relevant to the community and feasible to evaluate effectively.&lt;br /&gt;
&lt;br /&gt;
This year, we select the task to be '''piano accompaniment arrangement from a lead sheet'''. The lead sheet provides information about the melody, chord progression, and optional phrase labels. The goal is to generate a piano accompaniment that complements the lead melody. The music data consists of 8-measure segments in 4/4 meter, quantized to a sixteenth-note resolution. A more detailed description of the data structure is provided in the data format section. The genre of the lead sheets is broadly within western pop music (refer to the music examples for more detail).&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
The input lead sheet consists of 8 bars for the melody and harmony, with an additional mandatory pickup measure (left blank if not used). The data is prepared in JSON format containing two properties: &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;: a list of chords. Each chord contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The output generation should also follow the JSON format containing one property &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.''' &lt;br /&gt;
&lt;br /&gt;
# The data is assumed to be in 4/4 meter, quantized to a sixteenth-note resolution. For both melody and chords, onsets and durations are counted in sixteenth notes. &lt;br /&gt;
# Both onsets and durations are integers ranging from 0 to 9 * 16 - 1 = 143. Notes that end later than the ninth measure (i.e., 9 * 16 = 144th time step) will be truncated to the end of the ninth measure. &lt;br /&gt;
# Melody notes are not allowed to overlap with one another. &lt;br /&gt;
# There should be no gaps or overlaps between chords. Chords must follow one another directly. If there is a blank space where no chord is played, it must be filled with the &amp;lt;code&amp;gt;N&amp;lt;/code&amp;gt; chord. &lt;br /&gt;
# The accompaniment of the pick-up measure should be blank. &lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The pitch property of a note should be an integer ranging from 0 to 127, corresponding to the MIDI pitch number.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the chord &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The symbol property of a chord should be a string following the syntax of (Harte, 2010). In other words, each chord string should be accepted by mir_eval.chord.encode() without raising an error.&lt;br /&gt;
&lt;br /&gt;
=Data Example=&lt;br /&gt;
Below is an example of the input lead sheet in the format given above. The lead sheet is the melody of the first phrase of ''Hey Jude'' by The Beatles.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;melody&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 12, &amp;quot;pitch&amp;quot;: 72, &amp;quot;duration&amp;quot;: 4},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 69, &amp;quot;duration&amp;quot;: 8},&lt;br /&gt;
    ...&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;chords&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 0, &amp;quot;symbol&amp;quot;: &amp;quot;N&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;symbol&amp;quot;: &amp;quot;F&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is an example of the generated accompaniment. The accompaniment is generated using the baseline method WholeSongGen introduced below. Note that the generation starts from the second measure (time step 16).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;acc&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 41, &amp;quot;duration&amp;quot;: 12},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 65, &amp;quot;duration&amp;quot;: 5},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Full data examples can be accessed in [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main/generation_samples this code repository]. MIDI conversion code and MIDI demos are also provided there.&lt;br /&gt;
&lt;br /&gt;
=Evaluation=&lt;br /&gt;
We will evaluate the submitted algorithms through an online subjective double-blind test. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two lead sheets.''' The organizer team will supplement the lead sheet if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* We hope to compute some objective measurements as well, but these will only be reported as a reference.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''16 arrangements''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, subjects will first listen to the lead melody with chords and then listen to the generated samples, presented in randomized order.&lt;br /&gt;
* The subject will be asked to rate each arrangement based on the following criteria:&lt;br /&gt;
:* Harmony correctness (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Naturalness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Objective Measurements==&lt;br /&gt;
* Objective measurements will be reported only as a reference; the correlation between subjective and objective scores will also be reported. &lt;br /&gt;
* The current plan is to compute the negative log-likelihood under a large music language model (e.g., Lu et al., 2023).&lt;br /&gt;
* We welcome proposals for additional objective measurements.&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
&lt;br /&gt;
As a generative task with subjective evaluation, the submission process ''differs greatly'' from other MIREX tasks. There are four important stages:&lt;br /&gt;
# Test set submission (Oct 8, 2024)&lt;br /&gt;
# Algorithm submission (Oct 15, 2024)&lt;br /&gt;
# Cherry-picked sample IDs submission (Oct 25, 2024)&lt;br /&gt;
# Evaluation form submission (Nov 3, 2024)&lt;br /&gt;
Please check the Important Dates section for the detailed schedule. '''Failure to participate in any of the stages will result in disqualification.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==I/O Format==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;batch_acc_gen.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use this script to generate output files according to the following format:&lt;br /&gt;
&lt;br /&gt;
'''Usage'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
batch_acc_gen.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: Path to the input .json file.&lt;br /&gt;
* Output Folder: Path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: Number of samples to generate.&lt;br /&gt;
&lt;br /&gt;
'''Output'''&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
Participants are free to implement the internal logic of the script, but it must adhere to this format for proper execution during the evaluation process.&lt;br /&gt;
&lt;br /&gt;
==Packaging Submissions==&lt;br /&gt;
&lt;br /&gt;
* Every submission must be packed into a Docker image&lt;br /&gt;
* Every submission will be deployed and evaluated automatically with &amp;lt;code&amp;gt;docker run&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Accepted submission forms:&lt;br /&gt;
* Link to a public or private GitHub repository&lt;br /&gt;
* Link to a public or private Docker Hub repository&lt;br /&gt;
* Shared Google Drive links&lt;br /&gt;
* If the repository is private, an access token is also required&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Important Dates==&lt;br /&gt;
The following schedule is '''tentative'''.&lt;br /&gt;
* Oct 8, 2024: submission of two lead sheets as a part of the test set. This is also a confirmation of participation. &lt;br /&gt;
* Oct. 15, 2024: submission of the algorithm in docker.&lt;br /&gt;
* Oct. 22, 2024: return of the generated samples. Start of the cherry-picking phase.&lt;br /&gt;
* Oct. 25, 2024: submission of the cherry-picked sample ids.&lt;br /&gt;
* Oct. 31 - Nov. 3, 2024: subjective test.&lt;br /&gt;
* Nov. 5, 2024: announcement of the final result.&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
==Contacts==&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* Harte, C. (2010). Towards Automatic Extraction of Harmony Information from Music Signals. PhD thesis, Queen Mary University of London.&lt;br /&gt;
* Lu, P., Xu, X., Kang, C., Yu, B., Xing, C., Tan, X., &amp;amp; Bian, J. (2023). Musecoco: Generating symbolic music from text. arXiv preprint arXiv:2306.00110.&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13839</id>
		<title>2024:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13839"/>
		<updated>2024-09-15T09:50:36Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Contacts */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation is a broad topic. It covers a wide range of tasks, including generation, harmonization, arrangement, instrumentation, and more. We have multiple ways to represent music data, and the evaluation metrics also vary. To define a MIREX challenge within this topic, we need to narrow our focus to specific subtasks that are both relevant to the community and feasible to evaluate effectively.&lt;br /&gt;
&lt;br /&gt;
This year, we select the task to be '''piano accompaniment arrangement from a lead sheet'''. The lead sheet provides information about the melody, chord progression, and optional phrase labels. The goal is to generate a piano accompaniment that complements the lead melody. The music data consists of 8-measure segments in 4/4 meter, quantized to a sixteenth-note resolution. A more detailed description of the data structure is provided in the data format section. The genre of the lead sheets is broadly within western pop music (refer to the music examples for more detail).&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
The input lead sheet consists of 8 bars for the melody and harmony, with an additional mandatory pickup measure (left blank if not used). The data is prepared in JSON format containing two properties: &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;: a list of chords. Each chord contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The output generation should also follow the JSON format containing one property &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.''' &lt;br /&gt;
&lt;br /&gt;
# The data is assumed to be in 4/4 meter, quantized to a sixteenth-note resolution. For both melody and chords, onsets and durations are counted in sixteenth notes. &lt;br /&gt;
# Both onsets and durations are integers ranging from 0 to 9 * 16 - 1 = 143. Notes that end later than the ninth measure (i.e., 9 * 16 = 144th time step) will be truncated to the end of the ninth measure. &lt;br /&gt;
# Melody notes are not allowed to overlap with one another. &lt;br /&gt;
# There should be no gaps or overlaps between chords. Chords must follow one another directly. If there is a blank space where no chord is played, it must be filled with the &amp;lt;code&amp;gt;N&amp;lt;/code&amp;gt; chord. &lt;br /&gt;
# The accompaniment of the pick-up measure should be blank. &lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The pitch property of a note should be an integer ranging from 0 to 127, corresponding to the MIDI pitch number.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the chord &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The symbol property of a chord should be a string following the syntax of (Harte, 2010). In other words, each chord string should be accepted by mir_eval.chord.encode() without raising an error.&lt;br /&gt;
&lt;br /&gt;
=Data Example=&lt;br /&gt;
Below is an example of the input lead sheet in the format given above. The lead sheet is the melody of the first phrase of ''Hey Jude'' by The Beatles.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;melody&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 12, &amp;quot;pitch&amp;quot;: 72, &amp;quot;duration&amp;quot;: 4},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 69, &amp;quot;duration&amp;quot;: 8},&lt;br /&gt;
    ...&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;chords&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 0, &amp;quot;symbol&amp;quot;: &amp;quot;N&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;symbol&amp;quot;: &amp;quot;F&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is an example of the generated accompaniment. The accompaniment is generated using the baseline method WholeSongGen introduced below. Note that the generation starts from the second measure (time step 16).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;acc&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 41, &amp;quot;duration&amp;quot;: 12},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 65, &amp;quot;duration&amp;quot;: 5},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Full data examples can be accessed in [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main/generation_samples this code repository]. MIDI conversion code and MIDI demos are also provided there.&lt;br /&gt;
&lt;br /&gt;
=Evaluation=&lt;br /&gt;
We will evaluate the submitted algorithms through an online subjective double-blind test. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two lead sheets.''' The organizer team will supplement the lead sheet if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* We hope to compute some objective measurements as well, but these will only be reported as a reference.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''16 arrangements''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, subjects will first listen to the lead melody with chords and then listen to the generated samples, presented in randomized order.&lt;br /&gt;
* The subject will be asked to rate each arrangement based on the following criteria:&lt;br /&gt;
:* Harmony correctness (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Naturalness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Objective Measurements==&lt;br /&gt;
* Objective measurements will be reported only as a reference; the correlation between subjective and objective scores will also be reported. &lt;br /&gt;
* The current plan is to compute the negative log-likelihood under a large music language model (e.g., Lu et al., 2023).&lt;br /&gt;
* We welcome proposals for additional objective measurements.&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
==Important Dates==&lt;br /&gt;
The following schedule is '''tentative'''.&lt;br /&gt;
* Oct 8, 2024: submission of two lead sheets as a part of the test set. This is also a confirmation of participation. &lt;br /&gt;
* Oct. 15, 2024: submission of the algorithm in docker.&lt;br /&gt;
* Oct. 22, 2024: return of the generated samples. Start of the cherry-picking phase.&lt;br /&gt;
* Oct. 25, 2024: submission of the cherry-picked sample ids.&lt;br /&gt;
* Oct. 31 - Nov. 3, 2024: subjective test.&lt;br /&gt;
* Nov. 5, 2024: announcement of the final result.&lt;br /&gt;
&lt;br /&gt;
==I/O Format==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;batch_acc_gen.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use this script to generate output files according to the following format:&lt;br /&gt;
&lt;br /&gt;
'''Usage'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
batch_acc_gen.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: Path to the input .json file.&lt;br /&gt;
* Output Folder: Path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: Number of samples to generate.&lt;br /&gt;
&lt;br /&gt;
'''Output'''&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
Participants are free to implement the internal logic of the script, but it must adhere to this format for proper execution during the evaluation process.&lt;br /&gt;
&lt;br /&gt;
==Packaging Submissions==&lt;br /&gt;
&lt;br /&gt;
* Every submission must be packed into a Docker image&lt;br /&gt;
* Every submission will be deployed and evaluated automatically with &amp;lt;code&amp;gt;docker run&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Accepted submission forms:&lt;br /&gt;
* Link to a public or private GitHub repository&lt;br /&gt;
* Link to a public or private Docker Hub repository&lt;br /&gt;
* Shared Google Drive links&lt;br /&gt;
* If the repository is private, an access token is also required&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
==Contacts==&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: jzhao&amp;lt;at&amp;gt;u.nus.edu&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* Harte, C. (2010). Towards Automatic Extraction of Harmony Information from Music Signals. PhD thesis, Queen Mary University of London.&lt;br /&gt;
* Lu, P., Xu, X., Kang, C., Yu, B., Xing, C., Tan, X., &amp;amp; Bian, J. (2023). Musecoco: Generating symbolic music from text. arXiv preprint arXiv:2306.00110.&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13838</id>
		<title>2024:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13838"/>
		<updated>2024-09-15T09:49:34Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation is a broad topic. It covers a wide range of tasks, including generation, harmonization, arrangement, instrumentation, and more. We have multiple ways to represent music data, and the evaluation metrics also vary. To define a MIREX challenge within this topic, we need to narrow our focus to specific subtasks that are both relevant to the community and feasible to evaluate effectively.&lt;br /&gt;
&lt;br /&gt;
This year, we select the task to be '''piano accompaniment arrangement from a lead sheet'''. The lead sheet provides information about the melody, chord progression, and optional phrase labels. The goal is to generate a piano accompaniment that complements the lead melody. The music data consists of 8-measure segments in 4/4 meter, quantized to a sixteenth-note resolution. A more detailed description of the data structure is provided in the data format section. The genre of the lead sheets is broadly within western pop music (refer to the music examples for more detail).&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
The input lead sheet consists of 8 bars for the melody and harmony, with an additional mandatory pickup measure (left blank if not used). The data is prepared in JSON format containing two properties: &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;: a list of chords. Each chord contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The output generation should also follow the JSON format containing one property &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.''' &lt;br /&gt;
&lt;br /&gt;
# The data is assumed to be in 4/4 meter, quantized to a sixteenth-note resolution. For both melody and chords, onsets and durations are counted in sixteenth notes. &lt;br /&gt;
# Both onsets and durations are integers ranging from 0 to 9 * 16 - 1 = 143. Notes that end later than the ninth measure (i.e., 9 * 16 = 144th time step) will be truncated to the end of the ninth measure. &lt;br /&gt;
# Melody notes are not allowed to overlap with one another. &lt;br /&gt;
# There should be no gaps or overlaps between chords. Chords must follow one another directly. If there is a blank space where no chord is played, it must be filled with the &amp;lt;code&amp;gt;N&amp;lt;/code&amp;gt; chord. &lt;br /&gt;
# The accompaniment of the pick-up measure should be blank. &lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The pitch property of a note should be an integer ranging from 0 to 127, corresponding to the MIDI pitch number.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the chord &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The symbol property of a chord should be a string following the syntax of (Harte, 2010). In other words, each chord string should be accepted by mir_eval.chord.encode() without raising an error.&lt;br /&gt;
&lt;br /&gt;
=Data Example=&lt;br /&gt;
Below is an example of the input lead sheet in the format given above. The lead sheet is the melody of the first phrase of ''Hey Jude'' by The Beatles.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;melody&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 12, &amp;quot;pitch&amp;quot;: 72, &amp;quot;duration&amp;quot;: 4},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 69, &amp;quot;duration&amp;quot;: 8},&lt;br /&gt;
    ...&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;chords&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 0, &amp;quot;symbol&amp;quot;: &amp;quot;N&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;symbol&amp;quot;: &amp;quot;F&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is an example of the generated accompaniment. The accompaniment is generated using the baseline method WholeSongGen introduced below. Note that the generation starts from the second measure (time step 16).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;acc&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 41, &amp;quot;duration&amp;quot;: 12},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 65, &amp;quot;duration&amp;quot;: 5},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Full data examples can be accessed in [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main/generation_samples this code repository]. MIDI conversion code and MIDI demos are also provided there.&lt;br /&gt;
&lt;br /&gt;
=Evaluation=&lt;br /&gt;
We will evaluate the submitted algorithms through an online subjective double-blind test. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two lead sheets.''' The organizer team will supplement the lead sheet if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* We hope to compute some objective measurements as well, but these will only be reported as a reference.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''16 arrangements''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, subjects will first listen to the lead melody with chords and then listen to the generated samples, presented in randomized order.&lt;br /&gt;
* The subject will be asked to rate each arrangement based on the following criteria:&lt;br /&gt;
:* Harmony correctness (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Naturalness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Objective Measurements==&lt;br /&gt;
* Objective measurements will be reported only as a reference; the correlation between subjective and objective scores will also be reported. &lt;br /&gt;
* The current plan is to compute the negative log-likelihood under a large music language model (e.g., Lu et al., 2023).&lt;br /&gt;
* We welcome proposals for additional objective measurements.&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
==Important Dates==&lt;br /&gt;
The following schedule is '''tentative'''.&lt;br /&gt;
* Oct 8, 2024: submission of two lead sheets as a part of the test set. This is also a confirmation of participation. &lt;br /&gt;
* Oct. 15, 2024: submission of the algorithm in docker.&lt;br /&gt;
* Oct. 22, 2024: return of the generated samples. Start of the cherry-picking phase.&lt;br /&gt;
* Oct. 25, 2024: submission of the cherry-picked sample ids.&lt;br /&gt;
* Oct. 31 - Nov. 3, 2024: subjective test.&lt;br /&gt;
* Nov. 5, 2024: announcement of the final result.&lt;br /&gt;
&lt;br /&gt;
==I/O Format==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;batch_acc_gen.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use this script to generate output files according to the following format:&lt;br /&gt;
&lt;br /&gt;
'''Usage'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
batch_acc_gen.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: Path to the input .json file.&lt;br /&gt;
* Output Folder: Path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: Number of samples to generate.&lt;br /&gt;
&lt;br /&gt;
'''Output'''&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
Participants are free to implement the internal logic of the script, but it must adhere to this format for proper execution during the evaluation process.&lt;br /&gt;
&lt;br /&gt;
==Packaging Submissions==&lt;br /&gt;
&lt;br /&gt;
* Every submission must be packed into a Docker image&lt;br /&gt;
* Every submission will be deployed and evaluated automatically with &amp;lt;code&amp;gt;docker run&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Accepted submission forms:&lt;br /&gt;
* Link to a public or private GitHub repository&lt;br /&gt;
* Link to a public or private Docker Hub repository&lt;br /&gt;
* Shared Google Drive links&lt;br /&gt;
* If the repository is private, an access token is also required&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
==Contacts==&lt;br /&gt;
If you have any questions or suggestions about the task, please contact:&lt;br /&gt;
* Ziyu Wang: ziyu.wang&amp;lt;at&amp;gt;nyu.edu&lt;br /&gt;
* Jingwei Zhao: &lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* Harte, C. (2010). Towards Automatic Extraction of Harmony Information from Music Signals. PhD thesis, Queen Mary University of London.&lt;br /&gt;
* Lu, P., Xu, X., Kang, C., Yu, B., Xing, C., Tan, X., &amp;amp; Bian, J. (2023). Musecoco: Generating symbolic music from text. arXiv preprint arXiv:2306.00110.&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13837</id>
		<title>2024:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13837"/>
		<updated>2024-09-15T09:48:00Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Objective Measurements */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation is a broad topic. It covers a wide range of tasks, including generation, harmonization, arrangement, instrumentation, and more. We have multiple ways to represent music data, and the evaluation metrics also vary. To define a MIREX challenge within this topic, we need to narrow our focus to specific subtasks that are both relevant to the community and feasible to evaluate effectively.&lt;br /&gt;
&lt;br /&gt;
This year, we select the task to be '''piano accompaniment arrangement from a lead sheet'''. The lead sheet provides information about the melody, chord progression, and optional phrase labels. The goal is to generate a piano accompaniment that complements the lead melody. The music data consists of 8-measure segments in 4/4 meter, quantized to a sixteenth-note resolution. A more detailed description of the data structure is provided in the data format section. The genre of the lead sheets is broadly within western pop music (refer to the music examples for more detail).&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
The input lead sheet consists of 8 bars for the melody and harmony, with an additional mandatory pickup measure (left blank if not used). The data is prepared in JSON format containing two properties: &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;: a list of chords. Each chord contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The output generation should also follow the JSON format containing one property &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.''' &lt;br /&gt;
&lt;br /&gt;
# The data is assumed to be in 4/4 meter, quantized to a sixteenth-note resolution. For both melody and chords, onsets and durations are counted in sixteenth notes. &lt;br /&gt;
# Both onsets and durations are integers ranging from 0 to 9 * 16 - 1 = 143. Notes that end later than the ninth measure (i.e., 9 * 16 = 144th time step) will be truncated to the end of the ninth measure. &lt;br /&gt;
# Melody notes are not allowed to overlap with one another. &lt;br /&gt;
# There should be no gaps or overlaps between chords. Chords must follow one another directly. If there is a blank space where no chord is played, it must be filled with the &amp;lt;code&amp;gt;N&amp;lt;/code&amp;gt; chord. &lt;br /&gt;
# The accompaniment of the pick-up measure should be blank. &lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The pitch property of a note should be an integer ranging from 0 to 127, corresponding to the MIDI pitch number.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the chord &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The symbol property of a chord should be a string following the syntax of (Harte, 2010). In other words, each chord string should be accepted by mir_eval.chord.encode() without raising an error.&lt;br /&gt;
&lt;br /&gt;
=Data Example=&lt;br /&gt;
Below is an example of the input lead sheet in the format given above. The lead sheet is the melody of the first phrase of ''Hey Jude'' by The Beatles.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;melody&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 12, &amp;quot;pitch&amp;quot;: 72, &amp;quot;duration&amp;quot;: 4},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 69, &amp;quot;duration&amp;quot;: 8},&lt;br /&gt;
    ...&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;chords&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 0, &amp;quot;symbol&amp;quot;: &amp;quot;N&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;symbol&amp;quot;: &amp;quot;F&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is an example of the generated accompaniment. The accompaniment is generated using the baseline method WholeSongGen introduced below. Note that the generation starts from the second measure (time step 16).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;acc&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 41, &amp;quot;duration&amp;quot;: 12},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 65, &amp;quot;duration&amp;quot;: 5},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Full data examples can be accessed in [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main/generation_samples this code repository]. MIDI conversion code and MIDI demos are also provided there.&lt;br /&gt;
&lt;br /&gt;
=Evaluation=&lt;br /&gt;
We will evaluate the submitted algorithms through an online subjective double-blind test. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two lead sheets.''' The organizer team will supplement the lead sheet if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* We hope to compute some objective measurements as well, but these will only be reported as a reference.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''16 arrangements''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, subjects will first listen to the lead melody with chords and then listen to the generated samples, presented in randomized order.&lt;br /&gt;
* The subject will be asked to rate each arrangement based on the following criteria:&lt;br /&gt;
:* Harmony correctness (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Naturalness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Objective Measurements==&lt;br /&gt;
* Objective measurements will be reported only as a reference; the correlation between subjective and objective scores will also be reported. &lt;br /&gt;
* The current plan is to compute the negative log-likelihood under a large music language model (e.g., Lu et al., 2023).&lt;br /&gt;
* We welcome proposals for additional objective measurements.&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
==Important Dates==&lt;br /&gt;
The following schedule is '''tentative'''.&lt;br /&gt;
* Oct 8, 2024: submission of two lead sheets as a part of the test set. This is also a confirmation of participation. &lt;br /&gt;
* Oct. 15, 2024: submission of the algorithm in docker.&lt;br /&gt;
* Oct. 22, 2024: return of the generated samples. Start of the cherry-picking phase.&lt;br /&gt;
* Oct. 25, 2024: submission of the cherry-picked sample ids.&lt;br /&gt;
* Oct. 31 - Nov. 3, 2024: subjective test.&lt;br /&gt;
* Nov. 5, 2024: announcement of the final result.&lt;br /&gt;
&lt;br /&gt;
==I/O Format==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;batch_acc_gen.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use this script to generate output files according to the following format:&lt;br /&gt;
&lt;br /&gt;
'''Usage'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
batch_acc_gen.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: Path to the input .json file.&lt;br /&gt;
* Output Folder: Path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: Number of samples to generate.&lt;br /&gt;
&lt;br /&gt;
'''Output'''&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
Participants are free to implement the internal logic of the script, but it must adhere to this format for proper execution during the evaluation process.&lt;br /&gt;
&lt;br /&gt;
==Packaging Submissions==&lt;br /&gt;
&lt;br /&gt;
* Every submission must be packed into a Docker image&lt;br /&gt;
* Every submission will be deployed and evaluated automatically with &amp;lt;code&amp;gt;docker run&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Accepted submission forms:&lt;br /&gt;
* Link to a public or private GitHub repository&lt;br /&gt;
* Link to a public or private Docker Hub repository&lt;br /&gt;
* Shared Google Drive links&lt;br /&gt;
* If the repository is private, an access token is also required&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* Harte, C. (2010). Towards Automatic Extraction of Harmony Information from Music Signals. PhD thesis, Queen Mary University of London.&lt;br /&gt;
* Lu, P., Xu, X., Kang, C., Yu, B., Xing, C., Tan, X., &amp;amp; Bian, J. (2023). Musecoco: Generating symbolic music from text. arXiv preprint arXiv:2306.00110.&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13836</id>
		<title>2024:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13836"/>
		<updated>2024-09-15T09:47:11Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation is a broad topic. It covers a wide range of tasks, including generation, harmonization, arrangement, instrumentation, and more. We have multiple ways to represent music data, and the evaluation metrics also vary. To define a MIREX challenge within this topic, we need to narrow our focus to specific subtasks that are both relevant to the community and feasible to evaluate effectively.&lt;br /&gt;
&lt;br /&gt;
This year, we select the task to be '''piano accompaniment arrangement from a lead sheet'''. The lead sheet provides information about the melody, chord progression, and optional phrase labels. The goal is to generate a piano accompaniment that complements the lead melody. The music data consists of 8-measure segments in 4/4 meter, quantized to a sixteenth-note resolution. A more detailed description of the data structure is provided in the data format section. The genre of the lead sheets is broadly within western pop music (refer to the music examples for more detail).&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
The input lead sheet consists of 8 bars for the melody and harmony, with an additional mandatory pickup measure (left blank if not used). The data is prepared in JSON format containing two properties: &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;: a list of chords. Each chord contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The output generation should also follow the JSON format containing one property &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.''' &lt;br /&gt;
&lt;br /&gt;
# The data is assumed to be in 4/4 meter, quantized to a sixteenth-note resolution. For both melody and chords, onsets and durations are counted in sixteenth notes. &lt;br /&gt;
# Both onsets and durations are integers ranging from 0 to 9 * 16 - 1 = 143. Notes that end later than the ninth measure (i.e., 9 * 16 = 144th time step) will be truncated to the end of the ninth measure. &lt;br /&gt;
# Melody notes are not allowed to overlap with one another. &lt;br /&gt;
# There should be no gaps or overlaps between chords. Chords must follow one another directly. If there is a blank space where no chord is played, it must be filled with the &amp;lt;code&amp;gt;N&amp;lt;/code&amp;gt; chord. &lt;br /&gt;
# The accompaniment of the pick-up measure should be blank. &lt;br /&gt;
&lt;br /&gt;
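To make the time grid concrete, here is a small illustrative sketch (an assumption for indexing: the pickup is counted as measure 0) of the onset arithmetic implied by the rules above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Sixteenth-note grid arithmetic for the pickup-plus-8-bar segments.&lt;br /&gt;
# Assumption for illustration: the pickup is counted as measure 0.&lt;br /&gt;
STEPS_PER_MEASURE = 16  # 4/4 meter at sixteenth-note resolution&lt;br /&gt;
&lt;br /&gt;
def step(measure, sixteenth):&lt;br /&gt;
    # measure 0 is the pickup; measure 1 is the first full measure&lt;br /&gt;
    return measure * STEPS_PER_MEASURE + sixteenth&lt;br /&gt;
&lt;br /&gt;
assert step(1, 0) == 16    # downbeat of the first full measure&lt;br /&gt;
assert step(8, 15) == 143  # last valid time step (end of the ninth measure)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;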
'''Detailed explanation of the &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The pitch property of a note should be integers ranging from 0 to 127, corresponding to the MIDI pitch numbers.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the chord &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The symbol property of a chord should be a string based on the syntax of (Harte, 2010). In other words, each chord string should be able to be passed as a parameter to mir_eval.chord.encode() without causing an error.&lt;br /&gt;
&lt;br /&gt;
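As a quick sanity check (a sketch only, assuming the &amp;lt;code&amp;gt;mir_eval&amp;lt;/code&amp;gt; package is installed), chord strings can be validated before submission by passing them to the parser mentioned above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Validate chord symbols against the Harte-style syntax expected by mir_eval.&lt;br /&gt;
# Sketch only; assumes the mir_eval package is installed.&lt;br /&gt;
import mir_eval.chord&lt;br /&gt;
&lt;br /&gt;
for symbol in ['N', 'F', 'C:maj7', 'G:7/3']:&lt;br /&gt;
    # encode() raises an exception if the symbol is not valid syntax&lt;br /&gt;
    root, bitmap, bass = mir_eval.chord.encode(symbol)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;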
=Data Example=&lt;br /&gt;
Below is an example of the input lead sheet in the format given above. The lead sheet is the melody of the first phrase of ''Hey Jude'' by The Beatles.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;melody&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 12, &amp;quot;pitch&amp;quot;: 72, &amp;quot;duration&amp;quot;: 4},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 69, &amp;quot;duration&amp;quot;: 8},&lt;br /&gt;
    ...&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;chords&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 0, &amp;quot;symbol&amp;quot;: &amp;quot;N&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;symbol&amp;quot;: &amp;quot;F&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is an example of the generated accompaniment. The accompaniment is generated using the baseline method WholeSongGen introduced below. Note that the generation starts from the second measure (time step 16).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;acc&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 41, &amp;quot;duration&amp;quot;: 12},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 65, &amp;quot;duration&amp;quot;: 5},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Full data examples can be accessed in [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main/generation_samples this code repository]. MIDI conversion code and MIDI demos are also provided there.&lt;br /&gt;
&lt;br /&gt;
=Evaluation=&lt;br /&gt;
We will evaluate the submitted algorithms through an online subjective double-blind test. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two lead sheets.''' The organizer team will supplement the test set with additional lead sheets if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* We hope to compute some objective measurements as well, but these will only be reported as a reference.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''16 arrangements''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, we will first ask the subjects to listen to the lead melody with chords and then listen to the generated samples; the order of the samples will be randomized.&lt;br /&gt;
* The subject will be asked to rate each arrangement based on the following criteria:&lt;br /&gt;
:* Harmony correctness (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Naturalness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Objective Measurements==&lt;br /&gt;
* We will use objective measurements only as a reference; the correlation between subjective and objective scores will also be reported. &lt;br /&gt;
* The current plan is to compute the Negative Log Likelihood of a large music language model (e.g., Lu et al., 2023). We will currently use only the likelihood.&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
==Important Dates==&lt;br /&gt;
The submission process is '''tentative'''.&lt;br /&gt;
* Oct 8, 2024: submission of two lead sheets as a part of the test set. This is also a confirmation of participation. &lt;br /&gt;
* Oct. 15, 2024: submission of the algorithm in docker.&lt;br /&gt;
* Oct. 22, 2024: return of the generated samples. Start of the cherry-picking phase.&lt;br /&gt;
* Oct. 25, 2024: submission of the cherry-picked sample ids.&lt;br /&gt;
* Oct. 31 - Nov. 3, 2024: subjective test.&lt;br /&gt;
* Nov. 5, 2024: announcement of the final result.&lt;br /&gt;
&lt;br /&gt;
==I/O Format==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;batch_acc_gen.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use the script to generate output files according to the following format:&lt;br /&gt;
&lt;br /&gt;
'''Usage'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
acc_gen.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: Path to the input .json file.&lt;br /&gt;
* Output Folder: Path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: Number of samples to generate.&lt;br /&gt;
&lt;br /&gt;
'''Output'''&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
Participants are free to implement the internal logic of the script, but it must adhere to this format for proper execution during the evaluation process.&lt;br /&gt;
&lt;br /&gt;
==Packaging Submissions==&lt;br /&gt;
&lt;br /&gt;
* Every submission must be packed into a docker image&lt;br /&gt;
* Every submission will be deployed and evaluated automatically with &amp;lt;code&amp;gt;docker run&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Accepted submission forms:&lt;br /&gt;
* Link to a public or private GitHub repository&lt;br /&gt;
* Link to a public or private Docker Hub repository&lt;br /&gt;
* Shared Google Drive links&lt;br /&gt;
* If the repository is private, an access token is also required&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* Harte, C. (2010). Towards Automatic Extraction of Harmony Information from Music Signals. PhD thesis, Queen Mary University of London.&lt;br /&gt;
* Lu, P., Xu, X., Kang, C., Yu, B., Xing, C., Tan, X., &amp;amp; Bian, J. (2023). Musecoco: Generating symbolic music from text. arXiv preprint arXiv:2306.00110.&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13835</id>
		<title>2024:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13835"/>
		<updated>2024-09-15T09:46:58Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Subjective Evaluation Format */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation is a broad topic. It covers a wide range of tasks, including generation, harmonization, arrangement, instrumentation, and more. We have multiple ways to represent music data, and the evaluation metrics also vary. To define a MIREX challenge within this topic, we need to narrow our focus to specific subtasks that are both relevant to the community and feasible to evaluate effectively.&lt;br /&gt;
&lt;br /&gt;
This year, we select the task to be '''piano accompaniment arrangement from a lead sheet'''. The lead sheet provides information about the melody, chord progression, and optional phrase labels. The goal is to generate a piano accompaniment that complements the lead melody. The music data consists of 8-measure segments in 4/4 meter, quantized to a sixteenth-note resolution. A more detailed description of the data structure is provided in the data format section. The genre of the lead sheets is broadly within western pop music (refer to the music examples for more detail).&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
The input lead sheet consists of 8 bars for the melody and harmony, with an additional mandatory pickup measure (left blank if not used). The data is prepared in JSON format containing two properties: &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;: a list of chords. Each chord contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The output generation should also follow the JSON format containing one property &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.''' &lt;br /&gt;
&lt;br /&gt;
# The data is assumed to be in 4/4 meter, quantized to a sixteenth-note resolution. For both melody and chords, onsets and durations are counted in sixteenth notes. &lt;br /&gt;
# Both onsets and durations are integers ranging from 0 to 9 * 16 - 1 = 143. Notes that end later than the ninth measure (i.e., 9 * 16 = 144th time step) will be truncated to the end of the ninth measure. &lt;br /&gt;
# Melody notes are not allowed to overlap with one another. &lt;br /&gt;
# There should be no gaps or overlaps between chords. Chords must follow one another directly. If there is a blank space where no chord is played, it must be filled with the &amp;lt;code&amp;gt;N&amp;lt;/code&amp;gt; chord. &lt;br /&gt;
# The accompaniment of the pick-up measure should be blank. &lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The pitch property of a note should be integers ranging from 0 to 127, corresponding to the MIDI pitch numbers.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the chord &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The symbol property of a chord should be a string based on the syntax of (Harte, 2010). In other words, each chord string should be able to be passed as a parameter to mir_eval.chord.encode() without causing an error.&lt;br /&gt;
&lt;br /&gt;
=Data Example=&lt;br /&gt;
Below is an example of the input lead sheet in the format given above. The lead sheet is the melody of the first phrase of ''Hey Jude'' by The Beatles.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;melody&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 12, &amp;quot;pitch&amp;quot;: 72, &amp;quot;duration&amp;quot;: 4},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 69, &amp;quot;duration&amp;quot;: 8},&lt;br /&gt;
    ...&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;chords&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 0, &amp;quot;symbol&amp;quot;: &amp;quot;N&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;symbol&amp;quot;: &amp;quot;F&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is an example of the generated accompaniment. The accompaniment is generated using the baseline method WholeSongGen introduced below. Note that the generation starts from the second measure (time step 16).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;acc&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 41, &amp;quot;duration&amp;quot;: 12},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 65, &amp;quot;duration&amp;quot;: 5},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Full data examples can be accessed in [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main/generation_samples this code repository]. MIDI conversion code and MIDI demos are also provided there.&lt;br /&gt;
&lt;br /&gt;
=Evaluation=&lt;br /&gt;
We will evaluate the submitted algorithms through an online subjective double-blind test. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two lead sheets.''' The organizer team will supplement the test set with additional lead sheets if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* We hope to compute some objective measurements as well, but these will only be reported as a reference.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate '''16 arrangements''' for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, we will first ask the subjects to listen to the lead melody with chords and then listen to the generated samples; the order of the samples will be randomized.&lt;br /&gt;
* The subject will be asked to rate each arrangement based on the following criteria:&lt;br /&gt;
:* Harmony correctness (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Naturalness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
&lt;br /&gt;
==Objective Measurements==&lt;br /&gt;
* We will use objective measurements only as a reference; the correlation between subjective and objective scores will also be reported. &lt;br /&gt;
* The current plan is to compute the Negative Log Likelihood of a large music language model (e.g., Lu et al., 2023). We will currently use only the likelihood.&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
==Important Dates==&lt;br /&gt;
The submission process is '''tentative'''.&lt;br /&gt;
* Oct 8, 2024: submission of two lead sheets as a part of the test set. This is also a confirmation of participation. &lt;br /&gt;
* Oct. 15, 2024: submission of the algorithm in docker.&lt;br /&gt;
* Oct. 22, 2024: return of the generated samples. Start of the cherry-picking phase.&lt;br /&gt;
* Oct. 25, 2024: submission of the cherry-picked sample ids.&lt;br /&gt;
* Oct. 31 - Nov. 3, 2024: subjective test.&lt;br /&gt;
* Nov. 5, 2024: announcement of the final result.&lt;br /&gt;
&lt;br /&gt;
==I/O Format==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;batch_acc_gen.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use the script to generate output files according to the following format:&lt;br /&gt;
&lt;br /&gt;
'''Usage'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
acc_gen.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: Path to the input .json file.&lt;br /&gt;
* Output Folder: Path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: Number of samples to generate.&lt;br /&gt;
&lt;br /&gt;
'''Output'''&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
Participants are free to implement the internal logic of the script, but it must adhere to this format for proper execution during the evaluation process.&lt;br /&gt;
&lt;br /&gt;
==Packaging Submissions==&lt;br /&gt;
&lt;br /&gt;
* Every submission must be packed into a docker image&lt;br /&gt;
* Every submission will be deployed and evaluated automatically with &amp;lt;code&amp;gt;docker run&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Accepted submission forms:&lt;br /&gt;
* Link to a public or private GitHub repository&lt;br /&gt;
* Link to a public or private Docker Hub repository&lt;br /&gt;
* Shared Google Drive links&lt;br /&gt;
* If the repository is private, an access token is also required&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* Harte. Towards Automatic Extraction of Harmony Information from Music Signals. PhD thesis, Queen Mary University of London, August 2010.&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13834</id>
		<title>2024:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13834"/>
		<updated>2024-09-15T09:43:03Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Evaluation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation is a broad topic. It covers a wide range of tasks, including generation, harmonization, arrangement, instrumentation, and more. We have multiple ways to represent music data, and the evaluation metrics also vary. To define a MIREX challenge within this topic, we need to narrow our focus to specific subtasks that are both relevant to the community and feasible to evaluate effectively.&lt;br /&gt;
&lt;br /&gt;
This year, we select the task to be '''piano accompaniment arrangement from a lead sheet'''. The lead sheet provides information about the melody, chord progression, and optional phrase labels. The goal is to generate a piano accompaniment that complements the lead melody. The music data consists of 8-measure segments in 4/4 meter, quantized to a sixteenth-note resolution. A more detailed description of the data structure is provided in the data format section. The genre of the lead sheets is broadly within western pop music (refer to the music examples for more detail).&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
The input lead sheet consists of 8 bars for the melody and harmony, with an additional mandatory pickup measure (left blank if not used). The data is prepared in JSON format containing two properties: &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;: a list of chords. Each chord contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The output generation should also follow the JSON format containing one property &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.''' &lt;br /&gt;
&lt;br /&gt;
# The data is assumed to be in 4/4 meter, quantized to a sixteenth-note resolution. For both melody and chords, onsets and durations are counted in sixteenth notes. &lt;br /&gt;
# Both onsets and durations are integers ranging from 0 to 9 * 16 - 1 = 143. Notes that end later than the ninth measure (i.e., 9 * 16 = 144th time step) will be truncated to the end of the ninth measure. &lt;br /&gt;
# Melody notes are not allowed to overlap with one another. &lt;br /&gt;
# There should be no gaps or overlaps between chords. Chords must follow one another directly. If there is a blank space where no chord is played, it must be filled with the &amp;lt;code&amp;gt;N&amp;lt;/code&amp;gt; chord. &lt;br /&gt;
# The accompaniment of the pick-up measure should be blank. &lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The pitch property of a note should be integers ranging from 0 to 127, corresponding to the MIDI pitch numbers.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the chord &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The symbol property of a chord should be a string based on the syntax of (Harte, 2010). In other words, each chord string should be able to be passed as a parameter to mir_eval.chord.encode() without causing an error.&lt;br /&gt;
&lt;br /&gt;
=Data Example=&lt;br /&gt;
Below is an example of the input lead sheet in the format given above. The lead sheet is the melody of the first phrase of ''Hey Jude'' by The Beatles.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;melody&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 12, &amp;quot;pitch&amp;quot;: 72, &amp;quot;duration&amp;quot;: 4},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 69, &amp;quot;duration&amp;quot;: 8},&lt;br /&gt;
    ...&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;chords&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 0, &amp;quot;symbol&amp;quot;: &amp;quot;N&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;symbol&amp;quot;: &amp;quot;F&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is an example of the generated accompaniment. The accompaniment is generated using the baseline method WholeSongGen introduced below. Note that the generation starts from the second measure (time step 16).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;acc&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 41, &amp;quot;duration&amp;quot;: 12},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 65, &amp;quot;duration&amp;quot;: 5},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Full data examples can be accessed in [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main/generation_samples this code repository]. MIDI conversion code and MIDI demos are also provided there.&lt;br /&gt;
&lt;br /&gt;
=Evaluation=&lt;br /&gt;
We will evaluate the submitted algorithms through an online subjective double-blind test. The evaluation format differs from conventional tasks in the following aspects:&lt;br /&gt;
* '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two lead sheets.''' The organizer team will supplement the test set with additional lead sheets if necessary. &lt;br /&gt;
* There will be '''no live ranking''' because the subjective test will be done after the algorithm submission deadline.&lt;br /&gt;
* To better handle randomness in the generation algorithm, we '''allow cherry-picking from a fixed number of generated samples'''.   &lt;br /&gt;
* We hope to compute some objective measurements as well, but these will only be reported as a reference.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate 16 arrangements for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, we will first ask the subjects to listen to the lead melody with chords and then listen to the generated samples; the order of the samples will be randomized.&lt;br /&gt;
* The subject will be asked to rate each arrangement based on the following criteria:&lt;br /&gt;
:* Harmony correctness (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Naturalness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
* We will use objective measurements only as a reference; the correlation between subjective and objective scores will also be reported. We will currently use only the likelihood.&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
==Important Dates==&lt;br /&gt;
The submission process is '''tentative'''.&lt;br /&gt;
* Oct 8, 2024: submission of two lead sheets as a part of the test set. This is also a confirmation of participation. &lt;br /&gt;
* Oct. 15, 2024: submission of the algorithm in docker.&lt;br /&gt;
* Oct. 22, 2024: return of the generated samples. Start of the cherry-picking phase.&lt;br /&gt;
* Oct. 25, 2024: submission of the cherry-picked sample ids.&lt;br /&gt;
* Oct. 31 - Nov. 3, 2024: subjective test.&lt;br /&gt;
* Nov. 5, 2024: announcement of the final result.&lt;br /&gt;
&lt;br /&gt;
==I/O Format==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;batch_acc_gen.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use the script to generate output files according to the following format:&lt;br /&gt;
&lt;br /&gt;
'''Usage'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
acc_gen.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: Path to the input .json file.&lt;br /&gt;
* Output Folder: Path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: Number of samples to generate.&lt;br /&gt;
&lt;br /&gt;
'''Output'''&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
Participants are free to implement the internal logic of the script, but it must adhere to this format for proper execution during the evaluation process.&lt;br /&gt;
&lt;br /&gt;
==Packaging Submissions==&lt;br /&gt;
&lt;br /&gt;
* Every submission must be packed into a docker image&lt;br /&gt;
* Every submission will be deployed and evaluated automatically with &amp;lt;code&amp;gt;docker run&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Accepted submission forms:&lt;br /&gt;
* Link to a public or private GitHub repository&lt;br /&gt;
* Link to a public or private Docker Hub repository&lt;br /&gt;
* Shared Google Drive links&lt;br /&gt;
* If the repository is private, an access token is also required&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* Harte. Towards Automatic Extraction of Harmony Information from Music Signals. PhD thesis, Queen Mary University of London, August 2010.&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13832</id>
		<title>2024:Symbolic Music Generation</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2024:Symbolic_Music_Generation&amp;diff=13832"/>
		<updated>2024-09-15T09:35:58Z</updated>

		<summary type="html">&lt;p&gt;Zizzi wang: /* Data Format */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Description=&lt;br /&gt;
Symbolic music generation is a broad topic. It covers a wide range of tasks, including generation, harmonization, arrangement, instrumentation, and more. We have multiple ways to represent music data, and the evaluation metrics also vary. To define a MIREX challenge within this topic, we need to narrow our focus to specific subtasks that are both relevant to the community and feasible to evaluate effectively.&lt;br /&gt;
&lt;br /&gt;
This year, we select the task to be '''piano accompaniment arrangement from a lead sheet'''. The lead sheet provides information about the melody, chord progression, and optional phrase labels. The goal is to generate a piano accompaniment that complements the lead melody. The music data consists of 8-measure segments in 4/4 meter, quantized to a sixteenth-note resolution. A more detailed description of the data structure is provided in the data format section. The genre of the lead sheets is broadly within western pop music (refer to the music examples for more detail).&lt;br /&gt;
&lt;br /&gt;
=Data Format=&lt;br /&gt;
The input lead sheet consists of 8 bars for the melody and harmony, with an additional mandatory pickup measure (left blank if not used). The data is prepared in JSON format containing two properties: &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;melody&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;chords&amp;lt;/code&amp;gt;: a list of chords. Each chord contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The output generation should also follow the JSON format containing one property &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;acc&amp;lt;/code&amp;gt;: a list of notes. Each note contains properties of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of &amp;lt;code&amp;gt;start&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;duration&amp;lt;/code&amp;gt; attributes.''' &lt;br /&gt;
&lt;br /&gt;
# The data is assumed to be in 4/4 meter, quantized to a sixteenth-note resolution. For both melody and chords, onsets and durations are counted in sixteenth notes. &lt;br /&gt;
# Both onsets and durations are integers ranging from 0 to 9 * 16 - 1 = 143. Notes that end later than the ninth measure (i.e., 9 * 16 = 144th time step) will be truncated to the end of the ninth measure. &lt;br /&gt;
# Melody notes are not allowed to overlap with one another. &lt;br /&gt;
# There should be no gaps or overlaps between chords. Chords must follow one another directly. If there is a blank space where no chord is played, it must be filled with the &amp;lt;code&amp;gt;N&amp;lt;/code&amp;gt; chord. &lt;br /&gt;
# The accompaniment of the pick-up measure should be blank. &lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the &amp;lt;code&amp;gt;pitch&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The pitch property of a note should be integers ranging from 0 to 127, corresponding to the MIDI pitch numbers.&lt;br /&gt;
&lt;br /&gt;
'''Detailed explanation of the chord &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt; attribute.''' &lt;br /&gt;
&lt;br /&gt;
# The symbol property of a chord should be a string based on the syntax of (Harte, 2010). In other words, each chord string should be able to be passed as a parameter to mir_eval.chord.encode() without causing an error.&lt;br /&gt;
&lt;br /&gt;
=Data Example=&lt;br /&gt;
Below is an example of the input lead sheet in the format given above. The lead sheet is the melody of the first phrase of ''Hey Jude'' by The Beatles.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;melody&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 12, &amp;quot;pitch&amp;quot;: 72, &amp;quot;duration&amp;quot;: 4},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 69, &amp;quot;duration&amp;quot;: 8},&lt;br /&gt;
    ...&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;chords&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 0, &amp;quot;symbol&amp;quot;: &amp;quot;N&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;symbol&amp;quot;: &amp;quot;F&amp;quot;, &amp;quot;duration&amp;quot;: 16},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is an example of the generated accompaniment. The accompaniment is generated using the baseline method WholeSongGen introduced below. Note that the generation starts from the second measure (time step 16).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;acc&amp;quot;: [&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 41, &amp;quot;duration&amp;quot;: 12},&lt;br /&gt;
    {&amp;quot;start&amp;quot;: 16, &amp;quot;pitch&amp;quot;: 65, &amp;quot;duration&amp;quot;: 5},&lt;br /&gt;
    ...&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Full data examples can be accessed in [https://github.com/ZZWaang/acc-gen-8bar-wholesong/tree/main/generation_samples this code repository]. MIDI conversion code and MIDI demos are also provided there.&lt;br /&gt;
&lt;br /&gt;
=Evaluation=&lt;br /&gt;
We will evaluate the submitted algorithms through an online subjective double-blind test. '''We use a &amp;quot;''potluck''&amp;quot; test set. Before submitting the algorithm, each team is required to submit two lead sheets.''' The organizer team will supplement the test set with additional lead sheets if necessary.&lt;br /&gt;
&lt;br /&gt;
==Subjective Evaluation Format==&lt;br /&gt;
* After each team submits the algorithm, the organizer team will use the algorithm to generate 10 arrangements for each test sample. The generated results will be returned to each team for cherry-picking.&lt;br /&gt;
* Only a subset of the test set will be used for subjective evaluation.&lt;br /&gt;
* In the subjective evaluation, we will first ask the subjects to listen to the lead melody with chords and then listen to the generated samples; the order of the samples will be randomized.&lt;br /&gt;
* The subject will be asked to rate each arrangement based on the following criteria:&lt;br /&gt;
:* Harmony correctness (5-point scale)&lt;br /&gt;
:* Creativity (5-point scale)&lt;br /&gt;
:* Naturalness (5-point scale)&lt;br /&gt;
:* Overall musicality (5-point scale)&lt;br /&gt;
* We will use objective measurements only as a reference; the correlation between subjective and objective scores will also be reported. We will currently use only the likelihood.&lt;br /&gt;
&lt;br /&gt;
=Submission=&lt;br /&gt;
==Important Dates==&lt;br /&gt;
The submission process is '''tentative'''.&lt;br /&gt;
* Oct 8, 2024: submission of two lead sheets as a part of the test set. This is also a confirmation of participation. &lt;br /&gt;
* Oct. 15, 2024: submission of the algorithm in docker.&lt;br /&gt;
* Oct. 22, 2024: return of the generated samples. Start of the cherry-picking phase.&lt;br /&gt;
* Oct. 25, 2024: submission of the cherry-picked sample ids.&lt;br /&gt;
* Oct. 31 - Nov. 3, 2024: subjective test.&lt;br /&gt;
* Nov. 5, 2024: announcement of the final result.&lt;br /&gt;
&lt;br /&gt;
==I/O Format==&lt;br /&gt;
Participants must include a &amp;lt;code&amp;gt;batch_acc_gen.sh&amp;lt;/code&amp;gt; script in their submission. The task captain will use the script to generate output files according to the following format:&lt;br /&gt;
&lt;br /&gt;
'''Usage'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
acc_gen.sh &amp;quot;/path/to/input.json&amp;quot; &amp;quot;/path/to/output_folder&amp;quot; n_sample&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Input File: Path to the input .json file.&lt;br /&gt;
* Output Folder: Path to the folder where the generated output files will be saved.&lt;br /&gt;
* n_sample: Number of samples to generate.&lt;br /&gt;
&lt;br /&gt;
'''Output'''&lt;br /&gt;
* The script should generate n_sample output files in the specified output folder.&lt;br /&gt;
* Output files should be named sequentially as sample_01.json, sample_02.json, ..., up to sample_n_sample.json.&lt;br /&gt;
&lt;br /&gt;
Participants are free to implement the internal logic of the script, but it must adhere to this format for proper execution during the evaluation process.&lt;br /&gt;
&lt;br /&gt;
==Packaging Submissions==&lt;br /&gt;
&lt;br /&gt;
* Every submission must be packed into a docker image&lt;br /&gt;
* Every submission will be deployed and evaluated automatically with &amp;lt;code&amp;gt;docker run&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Accepted submission forms:&lt;br /&gt;
* Link to a public or private GitHub repository&lt;br /&gt;
* Link to a public or private Docker Hub repository&lt;br /&gt;
* Shared Google Drive links&lt;br /&gt;
* If the repository is private, an access token is also required&lt;br /&gt;
&lt;br /&gt;
=Baselines=&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* Harte. Towards Automatic Extraction of Harmony Information from Music Signals. PhD thesis, Queen Mary University of London, August 2010.&lt;/div&gt;</summary>
		<author><name>Zizzi wang</name></author>
		
	</entry>
</feed>