<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Huanz</id>
	<title>MIREX Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Huanz"/>
	<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/wiki/Special:Contributions/Huanz"/>
	<updated>2026-04-13T20:19:18Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.1</generator>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:RenCon_Results&amp;diff=14841</id>
		<title>2025:RenCon Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:RenCon_Results&amp;diff=14841"/>
		<updated>2025-09-27T15:22:12Z</updated>

		<summary type="html">&lt;p&gt;Huanz: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= 2025:RenCon Results =&lt;br /&gt;
&lt;br /&gt;
== Preliminary (Audition) Round Results ==&lt;br /&gt;
&lt;br /&gt;
=== Evaluation Methodology ===&lt;br /&gt;
The preliminary round was evaluated through an online listening test with '''25 expert evaluators'''. The evaluation used a weighted voting system where participants self-rated their expertise level from 1-5 stars, with responses weighted accordingly.&lt;br /&gt;
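The weighted voting described above can be sketched as follows. This is a minimal illustration only; the function name and sample numbers are hypothetical, not the actual RenCon scoring code.&lt;br /&gt;

```python
# Hypothetical sketch of expertise-weighted scoring: each judge
# self-rates their expertise from 1 to 5 stars, and that star value
# is used as the weight on their rating when averaging.

def weighted_score(ratings, expertise_weights):
    """Expertise-weighted mean of one system's ratings."""
    total = sum(r * w for r, w in zip(ratings, expertise_weights))
    return total / sum(expertise_weights)

# Three judges rate one system 4, 3 and 5; their self-rated
# expertise levels are 5, 2 and 4 stars respectively.
print(round(weighted_score([4, 3, 5], [5, 2, 4]), 3))  # prints 4.182
```

A rating from a 5-star judge thus counts five times as much as one from a 1-star judge.&lt;br /&gt;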
&lt;br /&gt;
=== Participant Demographics ===&lt;br /&gt;
Our evaluation panel consisted of highly qualified judges:&lt;br /&gt;
&lt;br /&gt;
'''Expertise Distribution:'''&lt;br /&gt;
* Expert evaluators (5 stars): 7 participants (29.2%)&lt;br /&gt;
* High confidence (4 stars): 5 participants (20.8%)&lt;br /&gt;
* Moderate confidence (3 stars): 10 participants (41.7%)&lt;br /&gt;
* Lower confidence (1-2 stars): 2 participants (8.4%)&lt;br /&gt;
* '''Average expertise weight:''' 3.67/5.0&lt;br /&gt;
&lt;br /&gt;
'''Professional Background:'''&lt;br /&gt;
* Music researchers: 12 (54.5%)&lt;br /&gt;
* Music technologists: 10 (45.5%)&lt;br /&gt;
* Active performers: 8 (36.4%)&lt;br /&gt;
* Conservatory students: 6 (27.3%)&lt;br /&gt;
* Music lovers: 15 (68.2%)&lt;br /&gt;
* Concert-goers: 8 (36.4%)&lt;br /&gt;
&lt;br /&gt;
'''Musical Experience:'''&lt;br /&gt;
* Strong representation of classical music expertise&lt;br /&gt;
* Diverse musical preferences spanning classical, jazz, pop, and rock&lt;br /&gt;
* Substantial piano experience among evaluators&lt;br /&gt;
* Mix of academic researchers and practicing musicians&lt;br /&gt;
&lt;br /&gt;
=== System Rankings ===&lt;br /&gt;
&lt;br /&gt;
The following table shows the final rankings based on weighted average scores from the preliminary round evaluation:&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Rank&lt;br /&gt;
! Anonymous Name&lt;br /&gt;
! Real System Name&lt;br /&gt;
! Authors/Institution&lt;br /&gt;
! Weighted Score&lt;br /&gt;
! PDF&lt;br /&gt;
|-&lt;br /&gt;
| 1&lt;br /&gt;
| MidnightOpal&lt;br /&gt;
| DirectorMusices&lt;br /&gt;
| Anders Friberg, Gabriel Jones&lt;br /&gt;
| 4.33/5.0&lt;br /&gt;
| [https://futuremirex.com/portal/wp-content/uploads/2025/rencon/DirectorMusices.pdf PDF]&lt;br /&gt;
|-&lt;br /&gt;
| 2&lt;br /&gt;
| CrystalEcho&lt;br /&gt;
| VirtuosoNet&lt;br /&gt;
| Dasaem Jeong, Taegyun Kwon, Yoojin Kim, Kyogu Lee, Juhan Nam&lt;br /&gt;
| 3.54/5.0&lt;br /&gt;
| [https://futuremirex.com/portal/wp-content/uploads/2025/rencon/VirtuosoNet.pdf PDF]&lt;br /&gt;
|-&lt;br /&gt;
| 3&lt;br /&gt;
| FrozenRiver&lt;br /&gt;
| Midihum&lt;br /&gt;
| Erich Grunewald&lt;br /&gt;
| 3.32/5.0&lt;br /&gt;
| [https://futuremirex.com/portal/wp-content/uploads/2025/rencon/Midihum.pdf PDF]&lt;br /&gt;
|-&lt;br /&gt;
| 4&lt;br /&gt;
| VelvetStorm&lt;br /&gt;
| ElegantAIPianist&lt;br /&gt;
| Leduo Chen, Xinrui Su, Yuqiang Li, Honyu Andy Shing, Junchuan Zhao, Zihan Chai, Kunyang Zhang, Shengchen Li&lt;br /&gt;
| 3.19/5.0&lt;br /&gt;
| [https://futuremirex.com/portal/wp-content/uploads/2025/rencon/ElegantAIPianist.pdf PDF]&lt;br /&gt;
|-&lt;br /&gt;
| 5&lt;br /&gt;
| SilverWave&lt;br /&gt;
| Contin-U&lt;br /&gt;
| Jongmin Jung, Dongmin Kim, Sihun Lee, Seola Cho, Hyungjoon Soh, Irmak Bukey, Chris Donahue, Dasaem Jeong&lt;br /&gt;
| 3.00/5.0&lt;br /&gt;
| [https://futuremirex.com/portal/wp-content/uploads/2025/rencon/Contin-U.pdf PDF]&lt;br /&gt;
|-&lt;br /&gt;
| 6&lt;br /&gt;
| EmberSky&lt;br /&gt;
| YQX+&lt;br /&gt;
| Jinwen Zhou, Yuncong Xie, Haochen Wang, Huan Zhang, Aidan Hogg, Simon Dixon&lt;br /&gt;
| 2.83/5.0&lt;br /&gt;
| [https://futuremirex.com/portal/wp-content/uploads/2025/rencon/YQX+.pdf PDF]&lt;br /&gt;
|-&lt;br /&gt;
| 7&lt;br /&gt;
| CrimsonDawn&lt;br /&gt;
| ScorePerLockNAR&lt;br /&gt;
| Weixi Zhai&lt;br /&gt;
| 2.53/5.0&lt;br /&gt;
| [https://futuremirex.com/portal/wp-content/uploads/2025/rencon/ScorePerLockNAR.pdf PDF]&lt;br /&gt;
|-&lt;br /&gt;
| 8&lt;br /&gt;
| AzureThunder&lt;br /&gt;
| RenConnoisseur&lt;br /&gt;
| Silvan Peter&lt;br /&gt;
| 2.53/5.0&lt;br /&gt;
| [https://futuremirex.com/portal/wp-content/uploads/2025/rencon/RenConnoisseur.pdf PDF]&lt;br /&gt;
|-&lt;br /&gt;
| 9&lt;br /&gt;
| GoldenMist&lt;br /&gt;
| CueFreeExpressPedal&lt;br /&gt;
| Kyle Worrall, Tom Collins&lt;br /&gt;
| 2.31/5.0&lt;br /&gt;
| [https://futuremirex.com/portal/wp-content/uploads/2025/rencon/CueFreeExpressPedal.pdf PDF]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
''Note: Complete rankings and system details will be updated following the live contest and final results announcement.''&lt;br /&gt;
&lt;br /&gt;
=== Qualitative Feedback ===&lt;br /&gt;
&lt;br /&gt;
Evaluators provided extensive qualitative feedback on the systems' performances:&lt;br /&gt;
&lt;br /&gt;
'''Common Positive Attributes:'''&lt;br /&gt;
* Natural expressiveness and human-like phrasing&lt;br /&gt;
* Appropriate tempo variations and rubato&lt;br /&gt;
* Musical sensitivity to harmonic structure&lt;br /&gt;
* Dynamic expression and articulation&lt;br /&gt;
&lt;br /&gt;
'''Areas for Improvement:'''&lt;br /&gt;
* Consistency across different musical styles&lt;br /&gt;
* Handling of complex rhythmic patterns&lt;br /&gt;
* Balance between technical accuracy and musical expression&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Live Contest Results ==&lt;br /&gt;
&lt;br /&gt;
The live contest was held on September 25, 2025, at the Daejeon Convention Center during ISMIR 2025. All systems rendered a surprise piece in real-time, with audience voting determining the final rankings. A total of '''50 evaluators''' participated in the live evaluation.&lt;br /&gt;
&lt;br /&gt;
=== Surprise Piece ===&lt;br /&gt;
* '''Title:''' Variation on &amp;quot;Mother and Sister&amp;quot;&lt;br /&gt;
* '''Composer:''' Hayeon Bang&lt;br /&gt;
* '''Duration:''' ~3 minutes&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Live Contest Rankings ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Rank&lt;br /&gt;
! System Index&lt;br /&gt;
! Model Name&lt;br /&gt;
! Authors&lt;br /&gt;
! Live Score&lt;br /&gt;
! Preliminary Rank&lt;br /&gt;
! Score Change&lt;br /&gt;
|-&lt;br /&gt;
| 1st&lt;br /&gt;
| #8&lt;br /&gt;
| VirtuosoNet&lt;br /&gt;
| Dasaem Jeong et al.&lt;br /&gt;
| 3.62/5.0&lt;br /&gt;
| 2nd&lt;br /&gt;
| ↑1&lt;br /&gt;
|-&lt;br /&gt;
| 2nd&lt;br /&gt;
| #1&lt;br /&gt;
| DirectorMusices&lt;br /&gt;
| Anders Friberg et al.&lt;br /&gt;
| 3.06/5.0&lt;br /&gt;
| 1st&lt;br /&gt;
| ↓1&lt;br /&gt;
|-&lt;br /&gt;
| 3rd&lt;br /&gt;
| #2&lt;br /&gt;
| Midihum&lt;br /&gt;
| Erich Grunewald&lt;br /&gt;
| 2.90/5.0&lt;br /&gt;
| 3rd&lt;br /&gt;
| —&lt;br /&gt;
|-&lt;br /&gt;
| 4th&lt;br /&gt;
| #4&lt;br /&gt;
| Contin-U&lt;br /&gt;
| Jongmin Jung et al.&lt;br /&gt;
| 2.90/5.0&lt;br /&gt;
| 5th&lt;br /&gt;
| ↑1&lt;br /&gt;
|-&lt;br /&gt;
| 5th&lt;br /&gt;
| #6&lt;br /&gt;
| ScorePerLockNAR&lt;br /&gt;
| Weixi Zhai&lt;br /&gt;
| 2.52/5.0&lt;br /&gt;
| 7th&lt;br /&gt;
| ↑2&lt;br /&gt;
|-&lt;br /&gt;
| 6th&lt;br /&gt;
| #3&lt;br /&gt;
| RenConnoisseur&lt;br /&gt;
| Silvan Peter&lt;br /&gt;
| 2.40/5.0&lt;br /&gt;
| 8th&lt;br /&gt;
| ↑2&lt;br /&gt;
|-&lt;br /&gt;
| 7th&lt;br /&gt;
| #9&lt;br /&gt;
| ElegantAIPianist&lt;br /&gt;
| Leduo Chen et al.&lt;br /&gt;
| 2.08/5.0&lt;br /&gt;
| 4th&lt;br /&gt;
| ↓3&lt;br /&gt;
|-&lt;br /&gt;
| 8th&lt;br /&gt;
| #5&lt;br /&gt;
| YQX+&lt;br /&gt;
| Jinwen Zhou et al.&lt;br /&gt;
| 1.79/5.0&lt;br /&gt;
| 6th&lt;br /&gt;
| ↓2&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Human Baseline Performance ===&lt;br /&gt;
A human performance of the same surprise piece was also evaluated by the audience as a reference point:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Performance Type&lt;br /&gt;
! Score&lt;br /&gt;
! Rank Among All Performances&lt;br /&gt;
|-&lt;br /&gt;
| Human Performance&lt;br /&gt;
| 4.40/5.0&lt;br /&gt;
| 1st (Overall)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Performance Analysis ===&lt;br /&gt;
&lt;br /&gt;
'''Top Performers:'''&lt;br /&gt;
* '''VirtuosoNet''' achieved the highest AI system score (3.62/5.0), improving from 2nd place in the preliminary round&lt;br /&gt;
* '''DirectorMusices''' maintained strong performance (3.06/5.0) but dropped from 1st to 2nd place&lt;br /&gt;
* '''Midihum''' showed consistency, maintaining 3rd place across both rounds&lt;br /&gt;
&lt;br /&gt;
'''Notable Changes:'''&lt;br /&gt;
* '''ScorePerLockNAR''' and '''RenConnoisseur''' both improved by 2 positions in the live contest&lt;br /&gt;
* '''ElegantAIPianist''' experienced the largest drop, falling from 4th to 7th place&lt;br /&gt;
* '''Contin-U''' improved from 5th to 4th place in live evaluation&lt;br /&gt;
&lt;br /&gt;
'''Human vs. AI Performance:'''&lt;br /&gt;
* The human performance scored 4.40/5.0, significantly higher than the top AI system&lt;br /&gt;
* This represents a performance gap of 0.78 points between human and best AI performance&lt;br /&gt;
* All AI systems scored below the human baseline, indicating continued room for improvement&lt;br /&gt;
&lt;br /&gt;
=== Final Winner ===&lt;br /&gt;
&lt;br /&gt;
'''🏆 RenCon 2025 Winner: VirtuosoNet'''&lt;br /&gt;
* '''Team:''' Dasaem Jeong, Taegyun Kwon, Yoojin Kim, Kyogu Lee, Juhan Nam&lt;br /&gt;
* '''Final Score:''' 3.62/5.0&lt;br /&gt;
* '''Achievement:''' Highest-scoring AI system in live contest evaluation&lt;br /&gt;
&lt;br /&gt;
The winner was announced at the closing ceremony of ISMIR 2025 on September 25, 2025.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
&lt;br /&gt;
* [https://ren-con2025.vercel.app/ Official RenCon 2025 Website]&lt;br /&gt;
* [https://ismir2025.ismir.net/ ISMIR 2025 Conference Website]&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2025:RenCon RenCon 2025 MIREX Task Page]&lt;br /&gt;
&lt;br /&gt;
[[Category:MIREX]]&lt;br /&gt;
[[Category:ISMIR 2025]]&lt;br /&gt;
[[Category:Performance Rendering]]&lt;br /&gt;
[[Category:Competition Results]]&lt;/div&gt;</summary>
		<author><name>Huanz</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:RenCon_Results&amp;diff=14829</id>
		<title>2025:RenCon Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:RenCon_Results&amp;diff=14829"/>
		<updated>2025-09-18T23:29:30Z</updated>

		<summary type="html">&lt;p&gt;Huanz: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= 2025:RenCon Results =&lt;br /&gt;
&lt;br /&gt;
== Preliminary (Audition) Round Results ==&lt;br /&gt;
&lt;br /&gt;
=== Evaluation Methodology ===&lt;br /&gt;
The preliminary round was evaluated through an online listening test with '''25 expert evaluators'''. The evaluation used a weighted voting system where participants self-rated their expertise level from 1-5 stars, with responses weighted accordingly.&lt;br /&gt;
&lt;br /&gt;
=== Participant Demographics ===&lt;br /&gt;
Our evaluation panel consisted of highly qualified judges:&lt;br /&gt;
&lt;br /&gt;
'''Expertise Distribution:'''&lt;br /&gt;
* Expert evaluators (5 stars): 7 participants (29.2%)&lt;br /&gt;
* High confidence (4 stars): 5 participants (20.8%)&lt;br /&gt;
* Moderate confidence (3 stars): 10 participants (41.7%)&lt;br /&gt;
* Lower confidence (1-2 stars): 2 participants (8.4%)&lt;br /&gt;
* '''Average expertise weight:''' 3.67/5.0&lt;br /&gt;
&lt;br /&gt;
'''Professional Background:'''&lt;br /&gt;
* Music researchers: 12 (54.5%)&lt;br /&gt;
* Music technologists: 10 (45.5%)&lt;br /&gt;
* Active performers: 8 (36.4%)&lt;br /&gt;
* Conservatory students: 6 (27.3%)&lt;br /&gt;
* Music lovers: 15 (68.2%)&lt;br /&gt;
* Concert-goers: 8 (36.4%)&lt;br /&gt;
&lt;br /&gt;
'''Musical Experience:'''&lt;br /&gt;
* Strong representation of classical music expertise&lt;br /&gt;
* Diverse musical preferences spanning classical, jazz, pop, and rock&lt;br /&gt;
* Substantial piano experience among evaluators&lt;br /&gt;
* Mix of academic researchers and practicing musicians&lt;br /&gt;
&lt;br /&gt;
=== System Rankings ===&lt;br /&gt;
&lt;br /&gt;
The following table shows the final rankings based on weighted average scores from the preliminary round evaluation:&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Rank&lt;br /&gt;
! Anonymous Name&lt;br /&gt;
! Real System Name&lt;br /&gt;
! Authors/Institution&lt;br /&gt;
! Weighted Score&lt;br /&gt;
|-&lt;br /&gt;
| 1&lt;br /&gt;
| MidnightOpal&lt;br /&gt;
| DirectorMusices&lt;br /&gt;
| Anders Friberg, Gabriel Jones&lt;br /&gt;
| 4.33/5.0&lt;br /&gt;
|-&lt;br /&gt;
| 2&lt;br /&gt;
| CrystalEcho&lt;br /&gt;
| VirtuosoNet&lt;br /&gt;
| Dasaem Jeong, Taegyun Kwon, Yoojin Kim, Kyogu Lee, Juhan Nam&lt;br /&gt;
| 3.54/5.0&lt;br /&gt;
|-&lt;br /&gt;
| 3&lt;br /&gt;
| FrozenRiver&lt;br /&gt;
| Midihum&lt;br /&gt;
| Erich Grunewald&lt;br /&gt;
| 3.32/5.0&lt;br /&gt;
|-&lt;br /&gt;
| 4&lt;br /&gt;
| VelvetStorm&lt;br /&gt;
| ElegantAIPianist&lt;br /&gt;
| Leduo Chen, Xinrui Su, Yuqiang Li, Honyu Andy Shing, Junchuan Zhao, Zihan Chai, Kunyang Zhang, Shengchen Li&lt;br /&gt;
| 3.19/5.0&lt;br /&gt;
|-&lt;br /&gt;
| 5&lt;br /&gt;
| SilverWave&lt;br /&gt;
| Contin-U&lt;br /&gt;
| Jongmin Jung, Dongmin Kim, Sihun Lee, Seola Cho, Hyungjoon Soh, Irmak Bukey, Chris Donahue, Dasaem Jeong&lt;br /&gt;
| 3.00/5.0&lt;br /&gt;
|-&lt;br /&gt;
| 6&lt;br /&gt;
| EmberSky&lt;br /&gt;
| YQX+&lt;br /&gt;
| Jinwen Zhou, Yuncong Xie, Haochen Wang, Huan Zhang, Aidan Hogg, Simon Dixon&lt;br /&gt;
| 2.83/5.0&lt;br /&gt;
|-&lt;br /&gt;
| 7&lt;br /&gt;
| CrimsonDawn&lt;br /&gt;
| ScorePerLockNAR&lt;br /&gt;
| Weixi Zhai&lt;br /&gt;
| 2.53/5.0&lt;br /&gt;
|-&lt;br /&gt;
| 8&lt;br /&gt;
| AzureThunder&lt;br /&gt;
| RenConnoisseur&lt;br /&gt;
| Silvan Peter&lt;br /&gt;
| 2.53/5.0&lt;br /&gt;
|-&lt;br /&gt;
| 9&lt;br /&gt;
| GoldenMist&lt;br /&gt;
| CueFreeExpressPedal&lt;br /&gt;
| Kyle Worrall, Tom Collins&lt;br /&gt;
| 2.31/5.0&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
''Note: Complete rankings and system details will be updated following the live contest and final results announcement.''&lt;br /&gt;
&lt;br /&gt;
=== Qualitative Feedback ===&lt;br /&gt;
&lt;br /&gt;
Evaluators provided extensive qualitative feedback on the systems' performances:&lt;br /&gt;
&lt;br /&gt;
'''Common Positive Attributes:'''&lt;br /&gt;
* Natural expressiveness and human-like phrasing&lt;br /&gt;
* Appropriate tempo variations and rubato&lt;br /&gt;
* Musical sensitivity to harmonic structure&lt;br /&gt;
* Dynamic expression and articulation&lt;br /&gt;
&lt;br /&gt;
'''Areas for Improvement:'''&lt;br /&gt;
* Consistency across different musical styles&lt;br /&gt;
* Handling of complex rhythmic patterns&lt;br /&gt;
* Balance between technical accuracy and musical expression&lt;br /&gt;
&lt;br /&gt;
== Live Contest Results ==&lt;br /&gt;
&lt;br /&gt;
''[To be updated following the live contest on September 25, 2025]''&lt;br /&gt;
&lt;br /&gt;
=== Surprise Piece ===&lt;br /&gt;
* '''Title:''' [To be announced]&lt;br /&gt;
* '''Composer:''' [To be announced]&lt;br /&gt;
* '''Duration:''' [X minutes]&lt;br /&gt;
* '''Style:''' [Musical characteristics]&lt;br /&gt;
&lt;br /&gt;
=== Live Performance Rankings ===&lt;br /&gt;
''[Results pending live audience voting]''&lt;br /&gt;
&lt;br /&gt;
=== Winner Announcement ===&lt;br /&gt;
''[To be announced at the conclusion of ISMIR 2025]''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
&lt;br /&gt;
* [https://ren-con2025.vercel.app/ Official RenCon 2025 Website]&lt;br /&gt;
* [https://ismir2025.ismir.net/ ISMIR 2025 Conference Website]&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2025:RenCon RenCon 2025 MIREX Task Page]&lt;br /&gt;
&lt;br /&gt;
[[Category:MIREX]]&lt;br /&gt;
[[Category:ISMIR 2025]]&lt;br /&gt;
[[Category:Performance Rendering]]&lt;br /&gt;
[[Category:Competition Results]]&lt;/div&gt;</summary>
		<author><name>Huanz</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:RenCon_Results&amp;diff=14828</id>
		<title>2025:RenCon Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:RenCon_Results&amp;diff=14828"/>
		<updated>2025-09-18T23:21:18Z</updated>

		<summary type="html">&lt;p&gt;Huanz: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= 2025:RenCon Results =&lt;br /&gt;
&lt;br /&gt;
== Preliminary (Audition) Round Results ==&lt;br /&gt;
&lt;br /&gt;
=== Evaluation Methodology ===&lt;br /&gt;
The preliminary round was evaluated through an online listening test with '''25 expert evaluators'''. The evaluation used a weighted voting system where participants self-rated their expertise level from 1-5 stars, with responses weighted accordingly.&lt;br /&gt;
&lt;br /&gt;
=== Participant Demographics ===&lt;br /&gt;
Our evaluation panel consisted of highly qualified judges:&lt;br /&gt;
&lt;br /&gt;
'''Expertise Distribution:'''&lt;br /&gt;
* Expert evaluators (5 stars): 7 participants (29.2%)&lt;br /&gt;
* High confidence (4 stars): 5 participants (20.8%)&lt;br /&gt;
* Moderate confidence (3 stars): 10 participants (41.7%)&lt;br /&gt;
* Lower confidence (1-2 stars): 2 participants (8.4%)&lt;br /&gt;
* '''Average expertise weight:''' 3.67/5.0&lt;br /&gt;
&lt;br /&gt;
'''Professional Background:'''&lt;br /&gt;
* Music researchers: 12 (54.5%)&lt;br /&gt;
* Music technologists: 10 (45.5%)&lt;br /&gt;
* Active performers: 8 (36.4%)&lt;br /&gt;
* Conservatory students: 6 (27.3%)&lt;br /&gt;
* Music lovers: 15 (68.2%)&lt;br /&gt;
* Concert-goers: 8 (36.4%)&lt;br /&gt;
&lt;br /&gt;
'''Musical Experience:'''&lt;br /&gt;
* Strong representation of classical music expertise&lt;br /&gt;
* Diverse musical preferences spanning classical, jazz, pop, and rock&lt;br /&gt;
* Substantial piano experience among evaluators&lt;br /&gt;
* Mix of academic researchers and practicing musicians&lt;br /&gt;
&lt;br /&gt;
=== System Rankings ===&lt;br /&gt;
&lt;br /&gt;
The following table shows the final rankings based on weighted average scores from the preliminary round evaluation:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Rank&lt;br /&gt;
! Anonymous Name&lt;br /&gt;
! Real System Name&lt;br /&gt;
! Authors/Institution&lt;br /&gt;
! Weighted Score&lt;br /&gt;
! Simple Average&lt;br /&gt;
! Responses&lt;br /&gt;
|-&lt;br /&gt;
| 1&lt;br /&gt;
| EmberSky&lt;br /&gt;
| [System Name]&lt;br /&gt;
| [Author Names]&lt;br /&gt;
| [X.XXX]/5.0&lt;br /&gt;
| [X.XX]/5.0&lt;br /&gt;
| 24&lt;br /&gt;
|-&lt;br /&gt;
| 2&lt;br /&gt;
| AzureThunder&lt;br /&gt;
| [System Name]&lt;br /&gt;
| [Author Names]&lt;br /&gt;
| [X.XXX]/5.0&lt;br /&gt;
| [X.XX]/5.0&lt;br /&gt;
| 24&lt;br /&gt;
|-&lt;br /&gt;
| 3&lt;br /&gt;
| CrimsonDawn&lt;br /&gt;
| [System Name]&lt;br /&gt;
| [Author Names]&lt;br /&gt;
| [X.XXX]/5.0&lt;br /&gt;
| [X.XX]/5.0&lt;br /&gt;
| 24&lt;br /&gt;
|-&lt;br /&gt;
| 4&lt;br /&gt;
| SilverWave&lt;br /&gt;
| [System Name]&lt;br /&gt;
| [Author Names]&lt;br /&gt;
| [X.XXX]/5.0&lt;br /&gt;
| [X.XX]/5.0&lt;br /&gt;
| 24&lt;br /&gt;
|-&lt;br /&gt;
| 5&lt;br /&gt;
| VelvetStorm&lt;br /&gt;
| [System Name]&lt;br /&gt;
| [Author Names]&lt;br /&gt;
| [X.XXX]/5.0&lt;br /&gt;
| [X.XX]/5.0&lt;br /&gt;
| 24&lt;br /&gt;
|-&lt;br /&gt;
| 6&lt;br /&gt;
| GoldenMist&lt;br /&gt;
| [System Name]&lt;br /&gt;
| [Author Names]&lt;br /&gt;
| [X.XXX]/5.0&lt;br /&gt;
| [X.XX]/5.0&lt;br /&gt;
| 24&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
''Note: Complete rankings and system details will be updated following the live contest and final results announcement.''&lt;br /&gt;
&lt;br /&gt;
=== Qualitative Feedback ===&lt;br /&gt;
&lt;br /&gt;
Evaluators provided extensive qualitative feedback on the systems' performances:&lt;br /&gt;
&lt;br /&gt;
'''Common Positive Attributes:'''&lt;br /&gt;
* Natural expressiveness and human-like phrasing&lt;br /&gt;
* Appropriate tempo variations and rubato&lt;br /&gt;
* Musical sensitivity to harmonic structure&lt;br /&gt;
* Dynamic expression and articulation&lt;br /&gt;
&lt;br /&gt;
'''Areas for Improvement:'''&lt;br /&gt;
* Consistency across different musical styles&lt;br /&gt;
* Handling of complex rhythmic patterns&lt;br /&gt;
* Balance between technical accuracy and musical expression&lt;br /&gt;
&lt;br /&gt;
== Live Contest Results ==&lt;br /&gt;
&lt;br /&gt;
''[To be updated following the live contest on September 25, 2025]''&lt;br /&gt;
&lt;br /&gt;
=== Surprise Piece ===&lt;br /&gt;
* '''Title:''' [To be announced]&lt;br /&gt;
* '''Composer:''' [To be announced]&lt;br /&gt;
* '''Duration:''' [X minutes]&lt;br /&gt;
* '''Style:''' [Musical characteristics]&lt;br /&gt;
&lt;br /&gt;
=== Live Performance Rankings ===&lt;br /&gt;
''[Results pending live audience voting]''&lt;br /&gt;
&lt;br /&gt;
=== Winner Announcement ===&lt;br /&gt;
''[To be announced at the conclusion of ISMIR 2025]''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
&lt;br /&gt;
* [https://ren-con2025.vercel.app/ Official RenCon 2025 Website]&lt;br /&gt;
* [https://ismir2025.ismir.net/ ISMIR 2025 Conference Website]&lt;br /&gt;
* [https://www.music-ir.org/mirex/wiki/2025:RenCon RenCon 2025 MIREX Task Page]&lt;br /&gt;
&lt;br /&gt;
[[Category:MIREX]]&lt;br /&gt;
[[Category:ISMIR 2025]]&lt;br /&gt;
[[Category:Performance Rendering]]&lt;br /&gt;
[[Category:Competition Results]]&lt;/div&gt;</summary>
		<author><name>Huanz</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:RenCon_Results&amp;diff=14785</id>
		<title>2025:RenCon Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:RenCon_Results&amp;diff=14785"/>
		<updated>2025-09-11T08:41:09Z</updated>

		<summary type="html">&lt;p&gt;Huanz: Blanked the page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Huanz</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:RenCon_Results&amp;diff=14784</id>
		<title>2025:RenCon Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:RenCon_Results&amp;diff=14784"/>
		<updated>2025-09-11T08:41:01Z</updated>

		<summary type="html">&lt;p&gt;Huanz: /* Next Steps */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;# RenCon 2025 Online Audition Results - Preliminary&lt;br /&gt;
&lt;br /&gt;
'''Note: These are preliminary results as of September 10, 2025. The deadline has not yet passed, and rankings may change as additional responses are received.'''&lt;/div&gt;</summary>
		<author><name>Huanz</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:RenCon_Results&amp;diff=14783</id>
		<title>2025:RenCon Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:RenCon_Results&amp;diff=14783"/>
		<updated>2025-09-11T08:40:55Z</updated>

		<summary type="html">&lt;p&gt;Huanz: /* About the Scoring */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;# RenCon 2025 Online Audition Results - Preliminary&lt;br /&gt;
&lt;br /&gt;
'''Note: These are preliminary results as of September 10, 2025. The deadline has not yet passed, and rankings may change as additional responses are received.'''&lt;br /&gt;
&lt;br /&gt;
== Next Steps ==&lt;br /&gt;
&lt;br /&gt;
* '''Submission deadline''': Still open for additional audience feedback&lt;br /&gt;
* '''Final results''': Will be announced after the deadline closes&lt;br /&gt;
* '''Advancement''': Top performers will proceed to the next round&lt;br /&gt;
&lt;br /&gt;
''Last updated: September 10, 2025''&lt;/div&gt;</summary>
		<author><name>Huanz</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:RenCon_Results&amp;diff=14782</id>
		<title>2025:RenCon Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:RenCon_Results&amp;diff=14782"/>
		<updated>2025-09-11T08:19:41Z</updated>

		<summary type="html">&lt;p&gt;Huanz: /* Current Rankings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;# RenCon 2025 Online Audition Results - Preliminary&lt;br /&gt;
&lt;br /&gt;
'''Note: These are preliminary results as of September 10, 2025. The deadline has not yet passed, and rankings may change as additional responses are received.'''&lt;br /&gt;
&lt;br /&gt;
== About the Scoring ==&lt;br /&gt;
&lt;br /&gt;
* Scores are calculated using a weighted average system where judges self-rate their expertise level (1-5 stars)&lt;br /&gt;
* Higher expertise ratings receive greater weight in the final calculations&lt;br /&gt;
* All 9 participants received ratings from all 13 current respondents&lt;br /&gt;
&lt;br /&gt;
== Next Steps ==&lt;br /&gt;
&lt;br /&gt;
* '''Submission deadline''': Still open for additional audience feedback&lt;br /&gt;
* '''Final results''': Will be announced after the deadline closes&lt;br /&gt;
* '''Advancement''': Top performers will proceed to the next round&lt;br /&gt;
&lt;br /&gt;
''Last updated: September 10, 2025''&lt;/div&gt;</summary>
		<author><name>Huanz</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:RenCon_Results&amp;diff=14764</id>
		<title>2025:RenCon Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:RenCon_Results&amp;diff=14764"/>
		<updated>2025-09-10T23:44:48Z</updated>

		<summary type="html">&lt;p&gt;Huanz: Created page with &amp;quot;# RenCon 2025 Online Audition Results - Preliminary  '''Note: These are preliminary results as of September 10, 2025. The deadline has not yet passed, and rankings may change...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;# RenCon 2025 Online Audition Results - Preliminary&lt;br /&gt;
&lt;br /&gt;
'''Note: These are preliminary results as of September 10, 2025. The deadline has not yet passed, and rankings may change as additional responses are received.'''&lt;br /&gt;
&lt;br /&gt;
== Current Rankings ==&lt;br /&gt;
&lt;br /&gt;
Based on weighted audience ratings from '''13 survey responses''':&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Audition Results &lt;br /&gt;
! Rank !! Performer !! Weighted Score !! Status&lt;br /&gt;
|-&lt;br /&gt;
| 1st || '''MidnightOpal''' || 4.21/5.0 || Leading&lt;br /&gt;
|-&lt;br /&gt;
| 2nd || '''CrystalEcho''' || 3.56/5.0 || Strong showing&lt;br /&gt;
|-&lt;br /&gt;
| 3rd || '''VelvetStorm''' || 3.29/5.0 || Competitive&lt;br /&gt;
|-&lt;br /&gt;
| 4th || '''SilverWave''' || 3.19/5.0 || Solid performance&lt;br /&gt;
|-&lt;br /&gt;
| 5th || '''FrozenRiver''' || 3.17/5.0 || Close contest&lt;br /&gt;
|-&lt;br /&gt;
| 6th || '''EmberSky''' || 2.67/5.0 || &lt;br /&gt;
|-&lt;br /&gt;
| 7th || '''AzureThunder''' || 2.50/5.0 || &lt;br /&gt;
|-&lt;br /&gt;
| 8th || '''CrimsonDawn''' || 2.40/5.0 || &lt;br /&gt;
|-&lt;br /&gt;
| 9th || '''GoldenMist''' || 2.06/5.0 || &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== About the Scoring ==&lt;br /&gt;
&lt;br /&gt;
* Scores are calculated using a weighted average system where judges self-rate their expertise level (1-5 stars)&lt;br /&gt;
* Higher expertise ratings receive greater weight in the final calculations&lt;br /&gt;
* All 9 participants received ratings from all 13 current respondents&lt;br /&gt;
&lt;br /&gt;
== Next Steps ==&lt;br /&gt;
&lt;br /&gt;
* '''Submission deadline''': Still open for additional audience feedback&lt;br /&gt;
* '''Final results''': Will be announced after the deadline closes&lt;br /&gt;
* '''Advancement''': Top performers will proceed to the next round&lt;br /&gt;
&lt;br /&gt;
''Last updated: September 10, 2025''&lt;/div&gt;</summary>
		<author><name>Huanz</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:RenCon&amp;diff=14735</id>
		<title>2025:RenCon</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:RenCon&amp;diff=14735"/>
		<updated>2025-08-19T22:18:58Z</updated>

		<summary type="html">&lt;p&gt;Huanz: /* Timeline */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= RenCon 2025: Expressive Performance Rendering Competition =&lt;br /&gt;
&lt;br /&gt;
Welcome to the official MIREX wiki page for RenCon 2025, a new task on expressive musical rendering from symbolic scores. This task is part of the MIREX 2025 evaluation campaign and will culminate in a live contest at ISMIR 2025 in Daejeon, Korea. For updates, templates, and examples, please visit our official site: https://ren-con2025.vercel.app&lt;br /&gt;
&lt;br /&gt;
Here is a short summary of the important information:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
Welcome to the official site for '''RenCon 2025''', an international challenge where researchers and developers submit systems capable of rendering expressive musical performances from symbolic scores. This year, we are delighted to host it alongside '''ISMIR 2025''' under the '''MIREX Tasks protocol'''.&lt;br /&gt;
&lt;br /&gt;
The RenCon competition has a rich history, with recorded contests in 2002, 2003, 2004, 2005, 2008 and 2011 ([https://www.researchgate.net/publication/228715822_The_Second_Rencon_Performance_Contest_Panel_Discussion_and_the_Future 1], [https://www.nime.org/proceedings/2004/nime2004_120.pdf 2], [https://citeseerx.ist.psu.edu/document?repid=rep1&amp;amp;type=pdf&amp;amp;doi=e611d7c4f9df63d99d5760ca88a7107dca945e05 3], [https://www.researchgate.net/publication/228715816_Rencon_Performance_Rendering_Contest_for_Automated_Music_Systems 4], [http://smc.afim-asso.org/smc11/papers/smc2011_120.pdf 5]), held jointly with conferences such as SMC, NIME, ICMPC and IJCAI. Even before the term &amp;quot;AI&amp;quot; was widely used, RenCon served as a platform for researchers to showcase their work in expressive performance rendering.&lt;br /&gt;
However, little of its history can be traced today beyond an old [http://renconmusic.org site]. We hope to revive this tradition with RenCon 2025, coinciding with the renewed global focus on performance during this year's Chopin Piano Competition.&lt;br /&gt;
&lt;br /&gt;
== Task Description ==&lt;br /&gt;
&lt;br /&gt;
Expressive Performance Rendering is a task that challenges participants to develop systems capable of rendering expressive musical performances from symbolic scores in MusicXML format.&lt;br /&gt;
&lt;br /&gt;
This year, the task is limited to '''solo piano'''. We accept systems that generate symbolic (MIDI) or audio (WAV) renderings; the output should contain human-like expressive deviations from the MusicXML score.&lt;br /&gt;
&lt;br /&gt;
As with the AI Song Contest, evaluating expressive rendering is subjective and requires human judges to assess the quality of the generated performances. We therefore propose the two-phase competition structure described in the next section, with audience voting determining the winner.&lt;br /&gt;
&lt;br /&gt;
== Competition Structure ==&lt;br /&gt;
&lt;br /&gt;
RenCon 2025 is structured in two phases:&lt;br /&gt;
&lt;br /&gt;
* '''Phase 1 – Preliminary Round (Online)'''  &lt;br /&gt;
Participants submit performances of assigned and free-choice pieces, as symbolic or audio renderings accompanied by a technical report. The absence of real-time constraints allows for broader participation and greater diversity in evaluation. &lt;br /&gt;
The submission period is open from '''May 30, 2025''' to '''Aug 20, 2025''', following the MIREX submission guidelines.&lt;br /&gt;
After the submission deadline, the preliminary round page will be finalized with the list of participants and their submissions, and the online evaluation will take place.&lt;br /&gt;
&lt;br /&gt;
* '''Phase 2 – Live Contest at ISMIR (Daejeon, Korea)'''  &lt;br /&gt;
Top systems from the preliminary round will be invited to render a surprise piece live at ISMIR, using their systems in real time. &lt;br /&gt;
The live contest is open to all ISMIR attendees, as well as the general public if the venue allows. The audience will listen to the live performances and vote for their favorite system. The winner will be announced at the end of the conference.&lt;br /&gt;
&lt;br /&gt;
== Timeline ==&lt;br /&gt;
&lt;br /&gt;
* '''May 30, 2025''': Preliminary round submission opens, online judge sign-up opens&lt;br /&gt;
* '''Aug 25, 2025''': Preliminary round submission closes (extended deadline)&lt;br /&gt;
* '''Aug 25, 2025''': Preliminary round page finalized and online evaluation begins&lt;br /&gt;
* '''Sept 10, 2025''': Online evaluation ends, results announced, and top systems invited to live contest&lt;br /&gt;
* '''Sept 25, 2025''': Live contest at ISMIR 2025 in Daejeon, Korea&lt;br /&gt;
&lt;br /&gt;
== Submission Requirements ==&lt;br /&gt;
&lt;br /&gt;
The following items are required for submission:&lt;br /&gt;
&lt;br /&gt;
* Code and checkpoint of the system.  &lt;br /&gt;
* Symbolic (MIDI) or audio (wav) renderings of designated pieces:  &lt;br /&gt;
** Required pieces (choose 2 out of 4), click to download the MusicXML files:&lt;br /&gt;
*** [https://ren-con2025.vercel.app/static/pieces/CAPRICCIO_en_sol_mineur_HWV_483_-_Handel.mxl Handel: Capriccio in G minor, HWV 483]  &lt;br /&gt;
*** [https://ren-con2025.vercel.app/static/pieces/32_Variations_in_C_minor_WoO_80_First_5.mxl Beethoven: 32 Variations in C minor, WoO 80 - Theme and the first 5 variations]  &lt;br /&gt;
*** [https://ren-con2025.vercel.app/static/pieces/12_Romances_Op.21__Sergei_Rachmaninoff_Zdes_khorosho_-_Arrangement_for_solo_piano.mxl Rachmaninoff: Здесь хорошо (How Fair this Spot), Op. 21, No. 7 (transcribed for solo piano)]  &lt;br /&gt;
*** [https://ren-con2025.vercel.app/static/pieces/With_Dog-teams.mxl Amy Beach: Eskimos, Op.64, No.4 - With Dog-Teams]  &lt;br /&gt;
** One free-choice piece (rendering must be less than 1 minute 30 seconds)  &lt;br /&gt;
** Three pieces in total, with at most 5 minutes of rendered material overall.&lt;br /&gt;
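As a sanity check, the length limits above can be verified before packaging a submission. The helper below is an illustrative sketch, not part of the official tooling, and its name and interface are our own:

```python
def check_durations(required_s, free_choice_s):
    """Check the RenCon 2025 length limits on renderings.

    required_s: list of durations (seconds) of the two required pieces.
    free_choice_s: duration (seconds) of the free-choice piece.
    Returns a list of rule violations; an empty list means the limits are met.
    """
    problems = []
    if len(required_s) != 2:
        problems.append("choose exactly 2 of the 4 required pieces")
    if free_choice_s >= 90:
        problems.append("free-choice rendering must be under 1 min 30 s")
    if sum(required_s) + free_choice_s > 5 * 60:
        problems.append("total rendering must not exceed 5 minutes")
    return problems
```

For example, two required pieces of 2:00 and 1:40 plus an 80-second free-choice piece pass all three checks, while a 95-second free-choice piece is flagged.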
&lt;br /&gt;
* Technical report: Please use the template available at  [https://ren-con2025.vercel.app/static/RenCon%20Submission%20Report%20Template.zip Submission Report Template (ZIP)]&lt;br /&gt;
&lt;br /&gt;
Final submission must be made through the [[2025_Submission_Guidelines|MIREX submission system]], with a maximum zip file size of 5 GB.&lt;br /&gt;
&lt;br /&gt;
== Data Format of Submission ==&lt;br /&gt;
&lt;br /&gt;
'''Symbolic submissions''': MIDI format with all sound events in program number 1 (solo piano). Track and channel are unrestricted.  &lt;br /&gt;
&lt;br /&gt;
'''Audio Submissions''': wav format, 44.1kHz, 16-bit PCM&lt;br /&gt;
&lt;br /&gt;
== Training Datasets ==&lt;br /&gt;
&lt;br /&gt;
Participants are welcome to train their systems on any dataset, including publicly available corpora, proprietary collections, or internally curated material. There are no restrictions on dataset origin, but we ask for full transparency.&lt;br /&gt;
&lt;br /&gt;
Some suggested datasets for training and validation include:&lt;br /&gt;
* '''[https://github.com/fosfrancesco/asap-dataset ASAP]''': A large dataset of classical piano performances sourced from MAESTRO, with corresponding MIDI and audio. [https://github.com/CPJKU/asap-dataset (n)ASAP] provides score-performance alignments. &lt;br /&gt;
* '''[https://github.com/tangjjbetsy/ATEPP ATEPP]''': A large dataset of transcribed MIDI expressive piano performances, organized by virtuosic performer. However, only around half of the dataset contains score MusicXML files. &lt;br /&gt;
* '''[https://github.com/CPJKU/vienna4x22 VIENNA 4x22]''': A small-scale dataset of 4 pieces, each with 22 different interpretations, including audio, MIDI, and fine alignment.  &lt;br /&gt;
* '''[https://github.com/huispaty/batik_plays_mozart Batik-plays-Mozart]''': Fine-aligned performance MIDI dataset of Mozart played by Roland Batik.   &lt;br /&gt;
&lt;br /&gt;
Please clearly describe the datasets used for training and validation in your technical report. Important details to include are:&lt;br /&gt;
&lt;br /&gt;
* Dataset name or source  &lt;br /&gt;
* Size and number of pieces  &lt;br /&gt;
* Instrumentation and expressive characteristics  &lt;br /&gt;
* Data format (MIDI, audio, etc.)  &lt;br /&gt;
* Any preprocessing, cleaning, or augmentation steps applied  &lt;br /&gt;
&lt;br /&gt;
This helps the jury and the research community understand the representational capacity and limitations of each submission.&lt;br /&gt;
&lt;br /&gt;
== Post-Processing ==&lt;br /&gt;
&lt;br /&gt;
To ensure fair evaluation, all post-processing applied to the preliminary round output must be documented in the submission report. Depending on your system type, please include the following:&lt;br /&gt;
&lt;br /&gt;
* '''Symbolic Output Systems''': If your model generates symbolic MIDI output and you submit the sonified audio track, describe how the audio was derived. Include soundfont names, software synthesizers used (e.g., FluidSynth, Logic Pro), or player piano models.&lt;br /&gt;
** If you would like to submit the MIDI output directly and have us (the organizer team) handle sonification, please contact [mailto:huan.zhang@qmul.ac.uk huan.zhang@qmul.ac.uk] when submitting. We would likely arrange a Disklavier recording in the Vienna office of the Institute of Computational Perception (CPJKU) lab.&lt;br /&gt;
* '''Audio Output Systems''': If your model outputs audio directly, describe any enhancement steps applied to its output, such as EQ, reverb, compression, or noise reduction.&lt;br /&gt;
* '''Controllability or Interventions''': Clarify whether the output is influenced by human-involved choices, such as selected tempo, dynamics range, segmentation, or annotated phrasing.&lt;br /&gt;
* '''MIDI Cleanup''': If symbolic outputs were manually edited (quantization, pedals, etc.) before submission, this must be documented.&lt;br /&gt;
&lt;br /&gt;
Submissions should aim for minimal human intervention. Manual correction is allowed only if it is well-documented and justified in the report.&lt;br /&gt;
&lt;br /&gt;
== Organizers ==&lt;br /&gt;
* Huan Zhang (Task Captain, Queen Mary University of London)&lt;br /&gt;
* Taegyun Kwon (Venue Coordinator, Korea Advanced Institute of Science and Technology (KAIST))&lt;br /&gt;
* Junyan Jiang (New York University)&lt;br /&gt;
* Simon Dixon (Queen Mary University of London)&lt;br /&gt;
* Gus Xia (MBZUAI)&lt;br /&gt;
* Akira Maezawa (Yamaha)&lt;br /&gt;
&lt;br /&gt;
Contact: [mailto:huan.zhang@qmul.ac.uk huan.zhang@qmul.ac.uk]&lt;/div&gt;</summary>
		<author><name>Huanz</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:RenCon&amp;diff=14734</id>
		<title>2025:RenCon</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:RenCon&amp;diff=14734"/>
		<updated>2025-08-19T22:18:48Z</updated>

		<summary type="html">&lt;p&gt;Huanz: /* Timeline */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= RenCon 2025: Expressive Performance Rendering Competition =&lt;br /&gt;
&lt;br /&gt;
Welcome to the official MIREX wiki page for RenCon 2025, a new task on expressive musical rendering from symbolic scores. This task is part of the MIREX 2025 evaluation campaign and will culminate with a live contest at ISMIR 2025 in Daejeon, Korea. For updates, templates, and examples, please visit our official site: https://ren-con2025.vercel.app&lt;br /&gt;
&lt;br /&gt;
Here is a short summary of the important information:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
Welcome to the official site for '''RenCon 2025''', an international challenge where researchers and developers submit systems capable of rendering expressive musical performances from symbolic scores. This year, we are delighted to host it alongside '''ISMIR 2025''' and under the '''MIREX Tasks protocol'''.&lt;br /&gt;
&lt;br /&gt;
The RenCon competition has a rich history, with recorded contests in 2002, 2003, 2004, 2005, 2008, and 2011 ([https://www.researchgate.net/publication/228715822_The_Second_Rencon_Performance_Contest_Panel_Discussion_and_the_Future 1], [https://www.nime.org/proceedings/2004/nime2004_120.pdf 2], [https://citeseerx.ist.psu.edu/document?repid=rep1&amp;amp;type=pdf&amp;amp;doi=e611d7c4f9df63d99d5760ca88a7107dca945e05 3], [https://www.researchgate.net/publication/228715816_Rencon_Performance_Rendering_Contest_for_Automated_Music_Systems 4], [http://smc.afim-asso.org/smc11/papers/smc2011_120.pdf 5]), held jointly with conferences such as SMC, NIME, ICMPC, and IJCAI. Even before the term &amp;quot;AI&amp;quot; was widely used, RenCon served as a platform for researchers to showcase their work in expressive performance rendering. &lt;br /&gt;
However, little of its past can be traced today beyond an old [http://renconmusic.org site]. We hope to revive this tradition with RenCon 2025, coinciding with the renewed global focus on performance during this year's Chopin Piano Competition.&lt;br /&gt;
&lt;br /&gt;
== Task Description ==&lt;br /&gt;
&lt;br /&gt;
Expressive Performance Rendering is a task that challenges participants to develop systems capable of rendering expressive musical performances from symbolic scores in MusicXML format.&lt;br /&gt;
&lt;br /&gt;
This year, the task is limited to '''solo piano'''. We accept systems that generate symbolic (MIDI) or audio (wav) renderings; the output should contain human-like expressive deviations from the MusicXML score. &lt;br /&gt;
&lt;br /&gt;
As in the AI Song Contest, the evaluation of expressive rendering is subjective and requires human judges to assess the quality of the generated performances. We therefore propose the two-phase competition structure described in the next section, relying on audience voting to determine the winner.&lt;br /&gt;
&lt;br /&gt;
== Competition Structure ==&lt;br /&gt;
&lt;br /&gt;
RenCon 2025 is structured in two phases:&lt;br /&gt;
&lt;br /&gt;
* '''Phase 1 – Preliminary Round (Online)'''  &lt;br /&gt;
Participants submit performances of assigned and free-choice pieces, as symbolic or audio renderings accompanied by a technical report. The absence of real-time constraints allows for broader participation and greater diversity in evaluation. &lt;br /&gt;
The submission period is open from '''May 30, 2025''' to '''Aug 20, 2025''', following the MIREX submission guidelines.&lt;br /&gt;
After the submission deadline, the preliminary round page will be finalized with the list of participants and their submissions, and the online evaluation will take place.&lt;br /&gt;
&lt;br /&gt;
* '''Phase 2 – Live Contest at ISMIR (Daejeon, Korea)'''  &lt;br /&gt;
Top systems from the preliminary round will be invited to render a surprise piece live at ISMIR, using their systems in real time. &lt;br /&gt;
The live contest is open to all ISMIR attendees, as well as the general public if the venue allows. The audience will listen to the live performances and vote for their favorite system. The winner will be announced at the end of the conference.&lt;br /&gt;
&lt;br /&gt;
== Timeline ==&lt;br /&gt;
&lt;br /&gt;
* '''May 30, 2025''': Preliminary round submission opens, online judge sign-up opens&lt;br /&gt;
* '''Aug 25, 2025''': Preliminary round submission closes (extended deadline)&lt;br /&gt;
* '''Aug 25, 2025''': Preliminary round page finalized and online evaluation begins&lt;br /&gt;
* '''Sept 10, 2025''': Online evaluation ends, results announced, and top systems invited to live contest&lt;br /&gt;
* '''Sept 25, 2025''': Live contest at ISMIR 2025 in Daejeon, Korea&lt;br /&gt;
&lt;br /&gt;
== Submission Requirements ==&lt;br /&gt;
&lt;br /&gt;
The following items are required for submission:&lt;br /&gt;
&lt;br /&gt;
* Code and checkpoint of the system.  &lt;br /&gt;
* Symbolic (MIDI) or audio (wav) renderings of designated pieces:  &lt;br /&gt;
** Required pieces (choose 2 out of 4), click to download the MusicXML files:&lt;br /&gt;
*** [https://ren-con2025.vercel.app/static/pieces/CAPRICCIO_en_sol_mineur_HWV_483_-_Handel.mxl Handel: Capriccio in G minor, HWV 483]  &lt;br /&gt;
*** [https://ren-con2025.vercel.app/static/pieces/32_Variations_in_C_minor_WoO_80_First_5.mxl Beethoven: 32 Variations in C minor, WoO 80 - Theme and the first 5 variations]  &lt;br /&gt;
*** [https://ren-con2025.vercel.app/static/pieces/12_Romances_Op.21__Sergei_Rachmaninoff_Zdes_khorosho_-_Arrangement_for_solo_piano.mxl Rachmaninoff: Здесь хорошо (How Fair this Spot), Op. 21, No. 7 (transcribed for solo piano)]  &lt;br /&gt;
*** [https://ren-con2025.vercel.app/static/pieces/With_Dog-teams.mxl Amy Beach: Eskimos, Op.64, No.4 - With Dog-Teams]  &lt;br /&gt;
** One free-choice piece (rendering must be less than 1 minute 30 seconds)  &lt;br /&gt;
** Three pieces in total, with at most 5 minutes of rendered material overall.&lt;br /&gt;
&lt;br /&gt;
* Technical report: Please use the template available at  [https://ren-con2025.vercel.app/static/RenCon%20Submission%20Report%20Template.zip Submission Report Template (ZIP)]&lt;br /&gt;
&lt;br /&gt;
Final submission must be made through the [[2025_Submission_Guidelines|MIREX submission system]], with a maximum zip file size of 5 GB.&lt;br /&gt;
&lt;br /&gt;
== Data Format of Submission ==&lt;br /&gt;
&lt;br /&gt;
'''Symbolic submissions''': MIDI format with all sound events in program number 1 (solo piano). Track and channel are unrestricted.  &lt;br /&gt;
&lt;br /&gt;
'''Audio Submissions''': wav format, 44.1kHz, 16-bit PCM&lt;br /&gt;
&lt;br /&gt;
== Training Datasets ==&lt;br /&gt;
&lt;br /&gt;
Participants are welcome to train their systems on any dataset, including publicly available corpora, proprietary collections, or internally curated material. There are no restrictions on dataset origin, but we ask for full transparency.&lt;br /&gt;
&lt;br /&gt;
Some suggested datasets for training and validation include:&lt;br /&gt;
* '''[https://github.com/fosfrancesco/asap-dataset ASAP]''': A large dataset of classical piano performances sourced from MAESTRO, with corresponding MIDI and audio. [https://github.com/CPJKU/asap-dataset (n)ASAP] provides score-performance alignments. &lt;br /&gt;
* '''[https://github.com/tangjjbetsy/ATEPP ATEPP]''': A large dataset of transcribed MIDI expressive piano performances, organized by virtuosic performer. However, only around half of the dataset contains score MusicXML files. &lt;br /&gt;
* '''[https://github.com/CPJKU/vienna4x22 VIENNA 4x22]''': A small-scale dataset of 4 pieces, each with 22 different interpretations, including audio, MIDI, and fine alignment.  &lt;br /&gt;
* '''[https://github.com/huispaty/batik_plays_mozart Batik-plays-Mozart]''': Fine-aligned performance MIDI dataset of Mozart played by Roland Batik.   &lt;br /&gt;
&lt;br /&gt;
Please clearly describe the datasets used for training and validation in your technical report. Important details to include are:&lt;br /&gt;
&lt;br /&gt;
* Dataset name or source  &lt;br /&gt;
* Size and number of pieces  &lt;br /&gt;
* Instrumentation and expressive characteristics  &lt;br /&gt;
* Data format (MIDI, audio, etc.)  &lt;br /&gt;
* Any preprocessing, cleaning, or augmentation steps applied  &lt;br /&gt;
&lt;br /&gt;
This helps the jury and the research community understand the representational capacity and limitations of each submission.&lt;br /&gt;
&lt;br /&gt;
== Post-Processing ==&lt;br /&gt;
&lt;br /&gt;
To ensure fair evaluation, all post-processing applied to the preliminary round output must be documented in the submission report. Depending on your system type, please include the following:&lt;br /&gt;
&lt;br /&gt;
* '''Symbolic Output Systems''': If your model generates symbolic MIDI output and you submit the sonified audio track, describe how the audio was derived. Include soundfont names, software synthesizers used (e.g., FluidSynth, Logic Pro), or player piano models.&lt;br /&gt;
** If you would like to submit the MIDI output directly and have us (the organizer team) handle sonification, please contact [mailto:huan.zhang@qmul.ac.uk huan.zhang@qmul.ac.uk] when submitting. We would likely arrange a Disklavier recording in the Vienna office of the Institute of Computational Perception (CPJKU) lab.&lt;br /&gt;
* '''Audio Output Systems''': If your model outputs audio directly, describe any enhancement steps applied to its output, such as EQ, reverb, compression, or noise reduction.&lt;br /&gt;
* '''Controllability or Interventions''': Clarify whether the output is influenced by human-involved choices, such as selected tempo, dynamics range, segmentation, or annotated phrasing.&lt;br /&gt;
* '''MIDI Cleanup''': If symbolic outputs were manually edited (quantization, pedals, etc.) before submission, this must be documented.&lt;br /&gt;
&lt;br /&gt;
Submissions should aim for minimal human intervention. Manual correction is allowed only if it is well-documented and justified in the report.&lt;br /&gt;
&lt;br /&gt;
== Organizers ==&lt;br /&gt;
* Huan Zhang (Task Captain, Queen Mary University of London)&lt;br /&gt;
* Taegyun Kwon (Venue Coordinator, Korea Advanced Institute of Science and Technology (KAIST))&lt;br /&gt;
* Junyan Jiang (New York University)&lt;br /&gt;
* Simon Dixon (Queen Mary University of London)&lt;br /&gt;
* Gus Xia (MBZUAI)&lt;br /&gt;
* Akira Maezawa (Yamaha)&lt;br /&gt;
&lt;br /&gt;
Contact: [mailto:huan.zhang@qmul.ac.uk huan.zhang@qmul.ac.uk]&lt;/div&gt;</summary>
		<author><name>Huanz</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:RenCon&amp;diff=14733</id>
		<title>2025:RenCon</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:RenCon&amp;diff=14733"/>
		<updated>2025-08-19T22:18:23Z</updated>

		<summary type="html">&lt;p&gt;Huanz: /* Timeline */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= RenCon 2025: Expressive Performance Rendering Competition =&lt;br /&gt;
&lt;br /&gt;
Welcome to the official MIREX wiki page for RenCon 2025, a new task on expressive musical rendering from symbolic scores. This task is part of the MIREX 2025 evaluation campaign and will culminate with a live contest at ISMIR 2025 in Daejeon, Korea. For updates, templates, and examples, please visit our official site: https://ren-con2025.vercel.app&lt;br /&gt;
&lt;br /&gt;
Here is a short summary of the important information:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
Welcome to the official site for '''RenCon 2025''', an international challenge where researchers and developers submit systems capable of rendering expressive musical performances from symbolic scores. This year, we are delighted to host it alongside '''ISMIR 2025''' and under the '''MIREX Tasks protocol'''.&lt;br /&gt;
&lt;br /&gt;
The RenCon competition has a rich history, with recorded contests in 2002, 2003, 2004, 2005, 2008, and 2011 ([https://www.researchgate.net/publication/228715822_The_Second_Rencon_Performance_Contest_Panel_Discussion_and_the_Future 1], [https://www.nime.org/proceedings/2004/nime2004_120.pdf 2], [https://citeseerx.ist.psu.edu/document?repid=rep1&amp;amp;type=pdf&amp;amp;doi=e611d7c4f9df63d99d5760ca88a7107dca945e05 3], [https://www.researchgate.net/publication/228715816_Rencon_Performance_Rendering_Contest_for_Automated_Music_Systems 4], [http://smc.afim-asso.org/smc11/papers/smc2011_120.pdf 5]), held jointly with conferences such as SMC, NIME, ICMPC, and IJCAI. Even before the term &amp;quot;AI&amp;quot; was widely used, RenCon served as a platform for researchers to showcase their work in expressive performance rendering. &lt;br /&gt;
However, little of its past can be traced today beyond an old [http://renconmusic.org site]. We hope to revive this tradition with RenCon 2025, coinciding with the renewed global focus on performance during this year's Chopin Piano Competition.&lt;br /&gt;
&lt;br /&gt;
== Task Description ==&lt;br /&gt;
&lt;br /&gt;
Expressive Performance Rendering is a task that challenges participants to develop systems capable of rendering expressive musical performances from symbolic scores in MusicXML format.&lt;br /&gt;
&lt;br /&gt;
This year, the task is limited to '''solo piano'''. We accept systems that generate symbolic (MIDI) or audio (wav) renderings; the output should contain human-like expressive deviations from the MusicXML score. &lt;br /&gt;
&lt;br /&gt;
As in the AI Song Contest, the evaluation of expressive rendering is subjective and requires human judges to assess the quality of the generated performances. We therefore propose the two-phase competition structure described in the next section, relying on audience voting to determine the winner.&lt;br /&gt;
&lt;br /&gt;
== Competition Structure ==&lt;br /&gt;
&lt;br /&gt;
RenCon 2025 is structured in two phases:&lt;br /&gt;
&lt;br /&gt;
* '''Phase 1 – Preliminary Round (Online)'''  &lt;br /&gt;
Participants submit performances of assigned and free-choice pieces, as symbolic or audio renderings accompanied by a technical report. The absence of real-time constraints allows for broader participation and greater diversity in evaluation. &lt;br /&gt;
The submission period is open from '''May 30, 2025''' to '''Aug 20, 2025''', following the MIREX submission guidelines.&lt;br /&gt;
After the submission deadline, the preliminary round page will be finalized with the list of participants and their submissions, and the online evaluation will take place.&lt;br /&gt;
&lt;br /&gt;
* '''Phase 2 – Live Contest at ISMIR (Daejeon, Korea)'''  &lt;br /&gt;
Top systems from the preliminary round will be invited to render a surprise piece live at ISMIR, using their systems in real time. &lt;br /&gt;
The live contest is open to all ISMIR attendees, as well as the general public if the venue allows. The audience will listen to the live performances and vote for their favorite system. The winner will be announced at the end of the conference.&lt;br /&gt;
&lt;br /&gt;
== Timeline ==&lt;br /&gt;
&lt;br /&gt;
* '''May 30, 2025''': Preliminary round submission opens, online judge sign-up opens&lt;br /&gt;
* '''Aug 20, 2025''': Preliminary round submission closes&lt;br /&gt;
* '''Aug 25, 2025''': Preliminary round page finalized and online evaluation begins&lt;br /&gt;
* '''Sept 10, 2025''': Online evaluation ends, results announced, and top systems invited to live contest&lt;br /&gt;
* '''Sept 25, 2025''': Live contest at ISMIR 2025 in Daejeon, Korea&lt;br /&gt;
&lt;br /&gt;
== Submission Requirements ==&lt;br /&gt;
&lt;br /&gt;
The following items are required for submission:&lt;br /&gt;
&lt;br /&gt;
* Code and checkpoint of the system.  &lt;br /&gt;
* Symbolic (MIDI) or audio (wav) renderings of designated pieces:  &lt;br /&gt;
** Required pieces (choose 2 out of 4), click to download the MusicXML files:&lt;br /&gt;
*** [https://ren-con2025.vercel.app/static/pieces/CAPRICCIO_en_sol_mineur_HWV_483_-_Handel.mxl Handel: Capriccio in G minor, HWV 483]  &lt;br /&gt;
*** [https://ren-con2025.vercel.app/static/pieces/32_Variations_in_C_minor_WoO_80_First_5.mxl Beethoven: 32 Variations in C minor, WoO 80 - Theme and the first 5 variations]  &lt;br /&gt;
*** [https://ren-con2025.vercel.app/static/pieces/12_Romances_Op.21__Sergei_Rachmaninoff_Zdes_khorosho_-_Arrangement_for_solo_piano.mxl Rachmaninoff: Здесь хорошо (How Fair this Spot), Op. 21, No. 7 (transcribed for solo piano)]  &lt;br /&gt;
*** [https://ren-con2025.vercel.app/static/pieces/With_Dog-teams.mxl Amy Beach: Eskimos, Op.64, No.4 - With Dog-Teams]  &lt;br /&gt;
** One free-choice piece (rendering must be less than 1 minute 30 seconds)  &lt;br /&gt;
** A total of 3 pieces, with a maximum of 5 minutes of rendering in total.&lt;br /&gt;
&lt;br /&gt;
* Technical report: Please use the template available at  [https://ren-con2025.vercel.app/static/RenCon%20Submission%20Report%20Template.zip Submission Report Template (ZIP)]&lt;br /&gt;
&lt;br /&gt;
Final submission must be made through the [[2025_Submission_Guidelines|MIREX submission system]], with maximum zip file of size 5GB.&lt;br /&gt;
&lt;br /&gt;
== Data Format of submission ==&lt;br /&gt;
&lt;br /&gt;
'''Symbolic submissions''': MIDI format with all sound events in program number 1 (solo piano). Track and channel are unrestricted.  &lt;br /&gt;
&lt;br /&gt;
'''Audio Submissions''': wav format, 44.1kHz, 16-bit PCM&lt;br /&gt;
&lt;br /&gt;
== Training Datasets ==&lt;br /&gt;
&lt;br /&gt;
Participants are welcome to train their systems on any dataset, including publicly available corpora, proprietary collections, or internally curated material. There are no restrictions on dataset origin, but we ask for full transparency.&lt;br /&gt;
&lt;br /&gt;
Some suggested datasets for training and validation include:&lt;br /&gt;
* '''[https://github.com/fosfrancesco/asap-dataset ASAP]''': A large dataset of classical piano performances sourced from MAESTRO, includes corresponding MIDI and audio. [https://github.com/CPJKU/asap-dataset (n)ASAP] provides score-performance alignment. &lt;br /&gt;
* '''[https://github.com/tangjjbetsy/ATEPP ATEPP]''': A large dataset of transcribed MIDI expressive piano performances, organized by virtuosic performer. However, only around half of the dataset contains score MusicXML files. &lt;br /&gt;
* '''[https://github.com/CPJKU/vienna4x22 VIENNA 4x22]''': A small-scale dataset of 4 pieces with 22 different interpretations, including audio and MIDI and fine alignment. ()  &lt;br /&gt;
* '''[https://github.com/huispaty/batik_plays_mozart Batik-plays-Mozart]''': Fine-aligned performance MIDI dataset of Mozart played by Roland Batik.   &lt;br /&gt;
&lt;br /&gt;
Please clearly describe the datasets used for training and validation in your technical report. Important details to include are:&lt;br /&gt;
&lt;br /&gt;
* Dataset name or source  &lt;br /&gt;
* Size and number of pieces  &lt;br /&gt;
* Instrumentation and expressive characteristics  &lt;br /&gt;
* Data format (MIDI, audio, etc.)  &lt;br /&gt;
* Any preprocessing, cleaning, or augmentation steps applied  &lt;br /&gt;
&lt;br /&gt;
This helps the jury and the research community understand the representational capacity and limitations of each submission.&lt;br /&gt;
&lt;br /&gt;
== Post-Processing ==&lt;br /&gt;
&lt;br /&gt;
To ensure fair evaluation, all post-processing applied to the preliminary round output must be documented in the submission report. Depending on your system type, please include the following:&lt;br /&gt;
&lt;br /&gt;
* '''Symbolic Output System''': If your model generates symbolic MIDI output and you submit the sonified audio track, describe how audio is derived. Include soundfont names, software synths used (e.g., FluidSynth, Logic Pro), or player piano models.&lt;br /&gt;
** If you would like to submit the MIDI output directly and allow us (the organizer team) for sonification, please contact [mailto:huan.zhang@qmul.ac.uk huan.zhang@qmul.ac.uk] during your submission. It's likely that we would arrange a Disklavier recording in the Vienna office of Institute of Computational Perception (CPJKU) lab.&lt;br /&gt;
* '''Audio Output Systems''': If your model outputs audio directly, describe if you have applied any enhancement steps such as EQ, reverb, compression, or noise reduction to the model's output.&lt;br /&gt;
* '''Controllability or Interventions''': Clarify if the output is influenced by human-involved choices — such as selected tempo, dynamics range, segmentation, or annotated phrasing.&lt;br /&gt;
* '''MIDI Cleanup''': If symbolic outputs were manually edited (quantization, pedals, etc) before submission, that should be documented.&lt;br /&gt;
&lt;br /&gt;
Submissions should aim for minimal human intervention. Manual correction is allowed only if it is well-documented and justified in the report.&lt;br /&gt;
&lt;br /&gt;
== Organizers ==&lt;br /&gt;
* Huan Zhang (Task Captain, Queen Mary University of London)&lt;br /&gt;
* Taegyun Kwon, Venue Coordinator, Korea Advanced Institute of Science and Technology (KAIST)&lt;br /&gt;
* Junyan Jiang (New York University)&lt;br /&gt;
* Simon Dixon (Queen Mary University of London)&lt;br /&gt;
* Gus Xia (MBZUAI)&lt;br /&gt;
* Akira Maezawa (Yamaha)&lt;br /&gt;
&lt;br /&gt;
Contact: [mailto:huan.zhang@qmul.ac.uk huan.zhang@qmul.ac.uk]&lt;/div&gt;</summary>
		<author><name>Huanz</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:RenCon&amp;diff=14732</id>
		<title>2025:RenCon</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:RenCon&amp;diff=14732"/>
		<updated>2025-08-19T22:17:45Z</updated>

		<summary type="html">&lt;p&gt;Huanz: /* Organizers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= RenCon 2025: Expressive Performance Rendering Competition =&lt;br /&gt;
&lt;br /&gt;
Welcome to the official MIREX wiki page for RenCon 2025, a new task on expressive musical rendering from symbolic scores. This task is part of the MIREX 2025 evaluation campaign and will culminate with a live contest at ISMIR 2025 in Daejeon, Korea. For updates, templates, and examples, please visit our official site: https://ren-con2025.vercel.app&lt;br /&gt;
&lt;br /&gt;
Here is a short summary of the important information:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
Welcome to the official site for '''RenCon 2025''', an international challenge where researchers and developers submit systems capable of rendering expressive musical performances from symbolic scores. This year, we are delighted to hosted it along with '''ISMIR 2025''' and under the '''MIREX Tasks protocol'''.&lt;br /&gt;
&lt;br /&gt;
The RenCon competition has a rich history, having record for compeition in 2002, 2003, 2004, 2005, 2008 and 2011 ([https://www.researchgate.net/publication/228715822_The_Second_Rencon_Performance_Contest_Panel_Discussion_and_the_Future 1], [https://www.nime.org/proceedings/2004/nime2004_120.pdf 2], [https://citeseerx.ist.psu.edu/document?repid=rep1&amp;amp;type=pdf&amp;amp;doi=e611d7c4f9df63d99d5760ca88a7107dca945e05 3], [https://www.researchgate.net/publication/228715816_Rencon_Performance_Rendering_Contest_for_Automated_Music_Systems 4], [http://smc.afim-asso.org/smc11/papers/smc2011_120.pdf 5]), jointly with conferences like SMC, NIME, ICMPC and IJCAI. Even before the term &amp;quot;AI&amp;quot; was widely used, the RenCon competition has been a platform for researchers to showcase their work in the field of expressive performance rendering. &lt;br /&gt;
However, not too much of its past can be traced today, except an old [http://renconmusic.org site]. We hope to revive this tradition with RenCon 2025, coinciding with the renewed global focus on performance during this year's Chopin Piano Competition.&lt;br /&gt;
&lt;br /&gt;
== Task Description ==&lt;br /&gt;
&lt;br /&gt;
Expressive Performance Rendering is a task that challenges participants to develop systems capable of rendering expressive musical performances from symbolic scores in MusicXML format.&lt;br /&gt;
&lt;br /&gt;
This year, we limit the task to be '''solo-piano''' specific. We accept systems that generate symbolic (MIDI) or audio (wav) renderings; the output should contain human-like expressive deviations from the MusicXML score. &lt;br /&gt;
&lt;br /&gt;
Similar to the AI Song Contest, the evaluation of expressive rendering is subjective and requires human judges to assess the quality of the generated performances. We therefore propose the two-phase competition structure shown in the next section, relying on audience voting to determine the winner.&lt;br /&gt;
&lt;br /&gt;
== Competition Structure ==&lt;br /&gt;
&lt;br /&gt;
RenCon 2025 is structured in two phases:&lt;br /&gt;
&lt;br /&gt;
* '''Phase 1 – Preliminary Round (Online)'''  &lt;br /&gt;
Participants submit performances of assigned and free-choice pieces, including symbolic or audio renderings and a technical report. The absence of real-time constraints allows for broader participation and diversity in evaluation. &lt;br /&gt;
The submission period is open from '''May 30, 2025''' to '''Aug 20, 2025''', following the MIREX submission guidelines.&lt;br /&gt;
After the submission deadline, the preliminary round page will be finalized with the list of participants and their submissions, and the online evaluation will take place.&lt;br /&gt;
&lt;br /&gt;
* '''Phase 2 – Live Contest at ISMIR (Daejeon, Korea)'''  &lt;br /&gt;
Top systems from the preliminary round will be invited to render a surprise piece live at ISMIR, using their system in real time. &lt;br /&gt;
The live contest is open to all ISMIR attendees, as well as the general public if the venue allows. The audience will be able to listen to the live performances and vote for their favorite system. The winner will be announced at the end of the conference.&lt;br /&gt;
&lt;br /&gt;
== Timeline ==&lt;br /&gt;
&lt;br /&gt;
* '''May 30, 2025''': Preliminary round submission opens, online judge sign-up opens&lt;br /&gt;
* '''Aug 20, 2025''': Preliminary round submission closes&lt;br /&gt;
* '''Aug 25, 2025''': Preliminary round page finalized and online evaluation begins&lt;br /&gt;
* '''Sept 10, 2025''': Online evaluation ends, results announced, and top systems invited to live contest&lt;br /&gt;
* '''Sept 2*, 2025 (TBD)''': Live contest at ISMIR 2025 in Daejeon, Korea&lt;br /&gt;
&lt;br /&gt;
== Submission Requirements ==&lt;br /&gt;
&lt;br /&gt;
The following items are required for submission:&lt;br /&gt;
&lt;br /&gt;
* Code and checkpoint of the system.  &lt;br /&gt;
* Symbolic (MIDI) or audio (wav) renderings of designated pieces:  &lt;br /&gt;
** Required pieces (choose 2 out of 4), click to download the MusicXML files:&lt;br /&gt;
*** [https://ren-con2025.vercel.app/static/pieces/CAPRICCIO_en_sol_mineur_HWV_483_-_Handel.mxl Handel: Capriccio in G minor, HWV 483]  &lt;br /&gt;
*** [https://ren-con2025.vercel.app/static/pieces/32_Variations_in_C_minor_WoO_80_First_5.mxl Beethoven: 32 Variations in C minor, WoO 80 - Theme and the first 5 variations]  &lt;br /&gt;
*** [https://ren-con2025.vercel.app/static/pieces/12_Romances_Op.21__Sergei_Rachmaninoff_Zdes_khorosho_-_Arrangement_for_solo_piano.mxl Rachmaninoff: Здесь хорошо (How Fair this Spot), Op. 21, No. 7 (transcribed for solo piano)]  &lt;br /&gt;
*** [https://ren-con2025.vercel.app/static/pieces/With_Dog-teams.mxl Amy Beach: Eskimos, Op.64, No.4 - With Dog-Teams]  &lt;br /&gt;
** One free-choice piece (rendering must be less than 1 minute 30 seconds)  &lt;br /&gt;
** A total of 3 pieces, with a maximum of 5 minutes of rendering in total.&lt;br /&gt;
&lt;br /&gt;
* Technical report: Please use the template available at  [https://ren-con2025.vercel.app/static/RenCon%20Submission%20Report%20Template.zip Submission Report Template (ZIP)]&lt;br /&gt;
&lt;br /&gt;
Final submission must be made through the [[2025_Submission_Guidelines|MIREX submission system]], with a maximum zip file size of 5 GB.&lt;br /&gt;
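The duration limits above (free-choice piece under 1 minute 30 seconds, at most 5 minutes of rendering in total) can be pre-checked for audio submissions with a short script. A minimal sketch using only Python's standard-library wave module; the file paths are placeholders, not part of the official tooling:&lt;br /&gt;

```python
import wave

MAX_PIECE_SECONDS = 90.0   # free-choice piece must stay under 1:30
MAX_TOTAL_SECONDS = 300.0  # at most 5 minutes of rendering in total

def wav_duration(path):
    """Duration of a wav file in seconds."""
    with wave.open(path, "rb") as wf:
        return wf.getnframes() / wf.getframerate()

def check_durations(free_choice_path, all_paths):
    """True if the free-choice piece and the total stay within limits."""
    total = sum(wav_duration(p) for p in all_paths)
    piece_ok = not wav_duration(free_choice_path) > MAX_PIECE_SECONDS
    total_ok = not total > MAX_TOTAL_SECONDS
    return piece_ok and total_ok
```

For symbolic (MIDI) submissions, the rendered duration should be checked after sonification, since it depends on tempo events in the file.&lt;br /&gt;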
&lt;br /&gt;
== Data Format of submission ==&lt;br /&gt;
&lt;br /&gt;
'''Symbolic submissions''': MIDI format with all sound events in program number 1 (solo piano). Track and channel are unrestricted.  &lt;br /&gt;
&lt;br /&gt;
'''Audio Submissions''': wav format, 44.1kHz, 16-bit PCM&lt;br /&gt;
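For audio submissions, conformance to the required format can be verified before uploading. A minimal sketch with Python's standard-library wave module (the file path is a placeholder; note that the wave module only opens PCM files in the first place):&lt;br /&gt;

```python
import wave

def check_wav_format(path):
    """Check a wav file against the required submission format:
    44.1 kHz sample rate and 16-bit PCM (2 bytes per sample)."""
    with wave.open(path, "rb") as wf:
        return wf.getframerate() == 44100 and wf.getsampwidth() == 2
```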
&lt;br /&gt;
== Training Datasets ==&lt;br /&gt;
&lt;br /&gt;
Participants are welcome to train their systems on any dataset, including publicly available corpora, proprietary collections, or internally curated material. There are no restrictions on dataset origin, but we ask for full transparency.&lt;br /&gt;
&lt;br /&gt;
Some suggested datasets for training and validation include:&lt;br /&gt;
* '''[https://github.com/fosfrancesco/asap-dataset ASAP]''': A large dataset of classical piano performances sourced from MAESTRO, including corresponding MIDI and audio. [https://github.com/CPJKU/asap-dataset (n)ASAP] provides score-performance alignment. &lt;br /&gt;
* '''[https://github.com/tangjjbetsy/ATEPP ATEPP]''': A large dataset of transcribed MIDI expressive piano performances, organized by virtuoso performer. However, only around half of the dataset contains score MusicXML files. &lt;br /&gt;
* '''[https://github.com/CPJKU/vienna4x22 VIENNA 4x22]''': A small-scale dataset of 4 pieces with 22 different interpretations each, including audio, MIDI, and fine alignment. &lt;br /&gt;
* '''[https://github.com/huispaty/batik_plays_mozart Batik-plays-Mozart]''': Fine-aligned performance MIDI dataset of Mozart played by Roland Batik.   &lt;br /&gt;
&lt;br /&gt;
Please clearly describe the datasets used for training and validation in your technical report. Important details to include are:&lt;br /&gt;
&lt;br /&gt;
* Dataset name or source  &lt;br /&gt;
* Size and number of pieces  &lt;br /&gt;
* Instrumentation and expressive characteristics  &lt;br /&gt;
* Data format (MIDI, audio, etc.)  &lt;br /&gt;
* Any preprocessing, cleaning, or augmentation steps applied  &lt;br /&gt;
&lt;br /&gt;
This helps the jury and the research community understand the representational capacity and limitations of each submission.&lt;br /&gt;
&lt;br /&gt;
== Post-Processing ==&lt;br /&gt;
&lt;br /&gt;
To ensure fair evaluation, all post-processing applied to the preliminary round output must be documented in the submission report. Depending on your system type, please include the following:&lt;br /&gt;
&lt;br /&gt;
* '''Symbolic Output System''': If your model generates symbolic MIDI output and you submit the sonified audio track, describe how audio is derived. Include soundfont names, software synths used (e.g., FluidSynth, Logic Pro), or player piano models.&lt;br /&gt;
** If you would like to submit the MIDI output directly and have us (the organizer team) handle sonification, please contact [mailto:huan.zhang@qmul.ac.uk huan.zhang@qmul.ac.uk] at submission time. We would likely arrange a Disklavier recording at the Vienna office of the Institute of Computational Perception (CPJKU) lab.&lt;br /&gt;
* '''Audio Output Systems''': If your model outputs audio directly, describe if you have applied any enhancement steps such as EQ, reverb, compression, or noise reduction to the model's output.&lt;br /&gt;
* '''Controllability or Interventions''': Clarify if the output is influenced by human-involved choices — such as selected tempo, dynamics range, segmentation, or annotated phrasing.&lt;br /&gt;
* '''MIDI Cleanup''': If symbolic outputs were manually edited (quantization, pedals, etc.) before submission, this must be documented.&lt;br /&gt;
&lt;br /&gt;
Submissions should aim for minimal human intervention. Manual correction is allowed only if it is well-documented and justified in the report.&lt;br /&gt;
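When documenting symbolic-to-audio sonification, one common route is offline rendering with FluidSynth. The sketch below only builds the command line for such a render; the soundfont and file names are placeholders, and FluidSynth itself must be installed separately:&lt;br /&gt;

```python
def fluidsynth_render_cmd(soundfont, midi_in, wav_out, rate=44100):
    """Command line for offline FluidSynth rendering:
    -ni  no MIDI input, non-interactive
    -F   render to the given audio file instead of the sound card
    -r   output sample rate (44100 to match the submission format)"""
    return ["fluidsynth", "-ni", soundfont, midi_in,
            "-F", wav_out, "-r", str(rate)]
```

It could be invoked with, e.g., subprocess.run(fluidsynth_render_cmd("piano.sf2", "render.mid", "render.wav"), check=True); whatever soundfont is used should be named in the technical report as described above.&lt;br /&gt;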
&lt;br /&gt;
== Organizers ==&lt;br /&gt;
* Huan Zhang (Task Captain, Queen Mary University of London)&lt;br /&gt;
* Taegyun Kwon (Venue Coordinator, Korea Advanced Institute of Science and Technology (KAIST))&lt;br /&gt;
* Junyan Jiang (New York University)&lt;br /&gt;
* Simon Dixon (Queen Mary University of London)&lt;br /&gt;
* Gus Xia (MBZUAI)&lt;br /&gt;
* Akira Maezawa (Yamaha)&lt;br /&gt;
&lt;br /&gt;
Contact: [mailto:huan.zhang@qmul.ac.uk huan.zhang@qmul.ac.uk]&lt;/div&gt;</summary>
		<author><name>Huanz</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:RenCon&amp;diff=14637</id>
		<title>2025:RenCon</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:RenCon&amp;diff=14637"/>
		<updated>2025-05-21T22:34:33Z</updated>

		<summary type="html">&lt;p&gt;Huanz: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= RenCon 2025: Expressive Performance Rendering Competition =&lt;br /&gt;
&lt;br /&gt;
Welcome to the official MIREX wiki page for RenCon 2025, a new task on expressive musical rendering from symbolic scores. This task is part of the MIREX 2025 evaluation campaign and will culminate with a live contest at ISMIR 2025 in Daejeon, Korea. For updates, templates, and examples, please visit our official site: https://ren-con2025.vercel.app&lt;br /&gt;
&lt;br /&gt;
Here is a short summary of the important information:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
Welcome to the official site for '''RenCon 2025''', an international challenge where researchers and developers submit systems capable of rendering expressive musical performances from symbolic scores. This year, we are delighted to host it alongside '''ISMIR 2025''', under the '''MIREX Tasks protocol'''.&lt;br /&gt;
&lt;br /&gt;
The RenCon competition has a rich history, with recorded competitions in 2002, 2003, 2004, 2005, 2008 and 2011 ([https://www.researchgate.net/publication/228715822_The_Second_Rencon_Performance_Contest_Panel_Discussion_and_the_Future 1], [https://www.nime.org/proceedings/2004/nime2004_120.pdf 2], [https://citeseerx.ist.psu.edu/document?repid=rep1&amp;amp;type=pdf&amp;amp;doi=e611d7c4f9df63d99d5760ca88a7107dca945e05 3], [https://www.researchgate.net/publication/228715816_Rencon_Performance_Rendering_Contest_for_Automated_Music_Systems 4], [http://smc.afim-asso.org/smc11/papers/smc2011_120.pdf 5]), held jointly with conferences such as SMC, NIME, ICMPC and IJCAI. Even before the term &amp;quot;AI&amp;quot; was widely used, RenCon served as a platform for researchers to showcase their work in expressive performance rendering. &lt;br /&gt;
However, little of its past can be traced today, apart from an old [http://renconmusic.org site]. We hope to revive this tradition with RenCon 2025, coinciding with the renewed global focus on performance during this year's Chopin Piano Competition.&lt;br /&gt;
&lt;br /&gt;
== Task Description ==&lt;br /&gt;
&lt;br /&gt;
Expressive Performance Rendering is a task that challenges participants to develop systems capable of rendering expressive musical performances from symbolic scores in MusicXML format.&lt;br /&gt;
&lt;br /&gt;
This year, we limit the task to be '''solo-piano''' specific. We accept systems that generate symbolic (MIDI) or audio (wav) renderings; the output should contain human-like expressive deviations from the MusicXML score. &lt;br /&gt;
&lt;br /&gt;
Similar to the AI Song Contest, the evaluation of expressive rendering is subjective and requires human judges to assess the quality of the generated performances. We therefore propose the two-phase competition structure shown in the next section, relying on audience voting to determine the winner.&lt;br /&gt;
&lt;br /&gt;
== Competition Structure ==&lt;br /&gt;
&lt;br /&gt;
RenCon 2025 is structured in two phases:&lt;br /&gt;
&lt;br /&gt;
* '''Phase 1 – Preliminary Round (Online)'''  &lt;br /&gt;
Participants submit performances of assigned and free-choice pieces, including symbolic or audio renderings and a technical report. The absence of real-time constraints allows for broader participation and diversity in evaluation. &lt;br /&gt;
The submission period is open from '''May 30, 2025''' to '''Aug 20, 2025''', following the MIREX submission guidelines.&lt;br /&gt;
After the submission deadline, the preliminary round page will be finalized with the list of participants and their submissions, and the online evaluation will take place.&lt;br /&gt;
&lt;br /&gt;
* '''Phase 2 – Live Contest at ISMIR (Daejeon, Korea)'''  &lt;br /&gt;
Top systems from the preliminary round will be invited to render a surprise piece live at ISMIR, using their system in real time. &lt;br /&gt;
The live contest is open to all ISMIR attendees, as well as the general public if the venue allows. The audience will be able to listen to the live performances and vote for their favorite system. The winner will be announced at the end of the conference.&lt;br /&gt;
&lt;br /&gt;
== Timeline ==&lt;br /&gt;
&lt;br /&gt;
* '''May 30, 2025''': Preliminary round submission opens, online judge sign-up opens&lt;br /&gt;
* '''Aug 20, 2025''': Preliminary round submission closes&lt;br /&gt;
* '''Aug 25, 2025''': Preliminary round page finalized and online evaluation begins&lt;br /&gt;
* '''Sept 10, 2025''': Online evaluation ends, results announced, and top systems invited to live contest&lt;br /&gt;
* '''Sept 2*, 2025 (TBD)''': Live contest at ISMIR 2025 in Daejeon, Korea&lt;br /&gt;
&lt;br /&gt;
== Submission Requirements ==&lt;br /&gt;
&lt;br /&gt;
The following items are required for submission:&lt;br /&gt;
&lt;br /&gt;
* Code and checkpoint of the system.  &lt;br /&gt;
* Symbolic (MIDI) or audio (wav) renderings of designated pieces:  &lt;br /&gt;
** Required pieces (choose 2 out of 4), click to download the MusicXML files:&lt;br /&gt;
*** [https://ren-con2025.vercel.app/static/pieces/CAPRICCIO_en_sol_mineur_HWV_483_-_Handel.mxl Handel: Capriccio in G minor, HWV 483]  &lt;br /&gt;
*** [https://ren-con2025.vercel.app/static/pieces/32_Variations_in_C_minor_WoO_80_First_5.mxl Beethoven: 32 Variations in C minor, WoO 80 - Theme and the first 5 variations]  &lt;br /&gt;
*** [https://ren-con2025.vercel.app/static/pieces/12_Romances_Op.21__Sergei_Rachmaninoff_Zdes_khorosho_-_Arrangement_for_solo_piano.mxl Rachmaninoff: Здесь хорошо (How Fair this Spot), Op. 21, No. 7 (transcribed for solo piano)]  &lt;br /&gt;
*** [https://ren-con2025.vercel.app/static/pieces/With_Dog-teams.mxl Amy Beach: Eskimos, Op.64, No.4 - With Dog-Teams]  &lt;br /&gt;
** One free-choice piece (rendering must be less than 1 minute 30 seconds)  &lt;br /&gt;
** A total of 3 pieces, with a maximum of 5 minutes of rendering in total.&lt;br /&gt;
&lt;br /&gt;
* Technical report: Please use the template available at  [https://ren-con2025.vercel.app/static/RenCon%20Submission%20Report%20Template.zip Submission Report Template (ZIP)]&lt;br /&gt;
&lt;br /&gt;
Final submission must be made through the [[2025_Submission_Guidelines|MIREX submission system]], with a maximum zip file size of 5 GB.&lt;br /&gt;
&lt;br /&gt;
== Data Format of submission ==&lt;br /&gt;
&lt;br /&gt;
'''Symbolic submissions''': MIDI format with all sound events in program number 1 (solo piano). Track and channel are unrestricted.  &lt;br /&gt;
&lt;br /&gt;
'''Audio Submissions''': wav format, 44.1kHz, 16-bit PCM&lt;br /&gt;
&lt;br /&gt;
== Training Datasets ==&lt;br /&gt;
&lt;br /&gt;
Participants are welcome to train their systems on any dataset, including publicly available corpora, proprietary collections, or internally curated material. There are no restrictions on dataset origin, but we ask for full transparency.&lt;br /&gt;
&lt;br /&gt;
Some suggested datasets for training and validation include:&lt;br /&gt;
* '''[https://github.com/fosfrancesco/asap-dataset ASAP]''': A large dataset of classical piano performances sourced from MAESTRO, including corresponding MIDI and audio. [https://github.com/CPJKU/asap-dataset (n)ASAP] provides score-performance alignment. &lt;br /&gt;
* '''[https://github.com/tangjjbetsy/ATEPP ATEPP]''': A large dataset of transcribed MIDI expressive piano performances, organized by virtuoso performer. However, only around half of the dataset contains score MusicXML files. &lt;br /&gt;
* '''[https://github.com/CPJKU/vienna4x22 VIENNA 4x22]''': A small-scale dataset of 4 pieces with 22 different interpretations each, including audio, MIDI, and fine alignment. &lt;br /&gt;
* '''[https://github.com/huispaty/batik_plays_mozart Batik-plays-Mozart]''': Fine-aligned performance MIDI dataset of Mozart played by Roland Batik.   &lt;br /&gt;
&lt;br /&gt;
Please clearly describe the datasets used for training and validation in your technical report. Important details to include are:&lt;br /&gt;
&lt;br /&gt;
* Dataset name or source  &lt;br /&gt;
* Size and number of pieces  &lt;br /&gt;
* Instrumentation and expressive characteristics  &lt;br /&gt;
* Data format (MIDI, audio, etc.)  &lt;br /&gt;
* Any preprocessing, cleaning, or augmentation steps applied  &lt;br /&gt;
&lt;br /&gt;
This helps the jury and the research community understand the representational capacity and limitations of each submission.&lt;br /&gt;
&lt;br /&gt;
== Post-Processing ==&lt;br /&gt;
&lt;br /&gt;
To ensure fair evaluation, all post-processing applied to the preliminary round output must be documented in the submission report. Depending on your system type, please include the following:&lt;br /&gt;
&lt;br /&gt;
* '''Symbolic Output System''': If your model generates symbolic MIDI output and you submit the sonified audio track, describe how audio is derived. Include soundfont names, software synths used (e.g., FluidSynth, Logic Pro), or player piano models.&lt;br /&gt;
** If you would like to submit the MIDI output directly and have us (the organizer team) handle sonification, please contact [mailto:huan.zhang@qmul.ac.uk huan.zhang@qmul.ac.uk] at submission time. We would likely arrange a Disklavier recording at the Vienna office of the Institute of Computational Perception (CPJKU) lab.&lt;br /&gt;
* '''Audio Output Systems''': If your model outputs audio directly, describe if you have applied any enhancement steps such as EQ, reverb, compression, or noise reduction to the model's output.&lt;br /&gt;
* '''Controllability or Interventions''': Clarify if the output is influenced by human-involved choices — such as selected tempo, dynamics range, segmentation, or annotated phrasing.&lt;br /&gt;
* '''MIDI Cleanup''': If symbolic outputs were manually edited (quantization, pedals, etc.) before submission, this must be documented.&lt;br /&gt;
&lt;br /&gt;
Submissions should aim for minimal human intervention. Manual correction is allowed only if it is well-documented and justified in the report.&lt;br /&gt;
&lt;br /&gt;
== Organizers ==&lt;br /&gt;
* Huan Zhang (Task Captain, Queen Mary University of London)&lt;br /&gt;
* Junyan Jiang (New York University)&lt;br /&gt;
* Simon Dixon (Queen Mary University of London)&lt;br /&gt;
* Gus Xia (MBZUAI)&lt;br /&gt;
* Akira Maezawa (Yamaha)&lt;br /&gt;
&lt;br /&gt;
Contact: [mailto:huan.zhang@qmul.ac.uk huan.zhang@qmul.ac.uk]&lt;/div&gt;</summary>
		<author><name>Huanz</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2025:RenCon&amp;diff=14618</id>
		<title>2025:RenCon</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2025:RenCon&amp;diff=14618"/>
		<updated>2025-05-18T20:47:43Z</updated>

		<summary type="html">&lt;p&gt;Huanz: overview&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= RenCon 2025: Expressive Performance Rendering Competition =&lt;br /&gt;
&lt;br /&gt;
Welcome to the official MIREX wiki page for RenCon 2025, a new task on expressive musical rendering from symbolic scores. This task is part of the MIREX 2025 evaluation campaign and will culminate with a live contest at ISMIR 2025 in Daejeon, Korea. For updates, templates, and examples, please visit our official site: https://ren-con2025.vercel.app&lt;br /&gt;
&lt;br /&gt;
Here is a short summary of the important information:&lt;br /&gt;
&lt;br /&gt;
== Task Overview ==&lt;br /&gt;
RenCon challenges participants to develop systems that generate expressive musical audio renderings based on symbolic input (e.g., MIDI, MusicXML). Submissions are evaluated in two phases:&lt;br /&gt;
* Phase 1 – Preliminary Round (Online):&lt;br /&gt;
Systems submit rendered audio (and optionally symbolic outputs) for both required and free-choice pieces. No real-time constraint is imposed.&lt;br /&gt;
* Phase 2 – Live Contest (Onsite at ISMIR):&lt;br /&gt;
Top systems will be invited to render a surprise piece using their system in real time at ISMIR. The audience will vote to determine the winner.&lt;br /&gt;
&lt;br /&gt;
== Timeline ==&lt;br /&gt;
* '''May 30 – Aug 20, 2025''': Submission period for preliminary round&lt;br /&gt;
* '''Aug 26, 2025''': Preliminary round results released&lt;br /&gt;
* '''Sept 10, 2025''': Online evaluation results + invitation to live contest&lt;br /&gt;
* '''Sept 2025 (TBD)''': Live contest at ISMIR 2025&lt;br /&gt;
&lt;br /&gt;
== Submission Requirements ==&lt;br /&gt;
Each submission should include:&lt;br /&gt;
* Code and checkpoint of your system&lt;br /&gt;
* Symbolic inputs and audio outputs for:&lt;br /&gt;
* At least 2 out of 3 required pieces &lt;br /&gt;
* One free-choice piece (≤1 min 30 sec; max 4 pieces, 5 min total)&lt;br /&gt;
* A technical report, including dataset details and post-processing notes&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Evaluation ==&lt;br /&gt;
* Online round: Human evaluation and technical inspection&lt;br /&gt;
* Live round: Real-time rendering, judged by ISMIR audience&lt;br /&gt;
* Objective metrics (e.g., alignment, timing, velocity) are used for analysis but not ranking&lt;br /&gt;
&lt;br /&gt;
== Datasets ==&lt;br /&gt;
No restriction on datasets (public, private, internal allowed), but transparency is required.&lt;br /&gt;
&lt;br /&gt;
== Post-Processing ==&lt;br /&gt;
Systems must report any post-processing:&lt;br /&gt;
* Symbolic-to-audio systems: Describe synthesizers, soundfonts, or piano models used&lt;br /&gt;
* Direct audio systems: Note EQ, compression, or other effects&lt;br /&gt;
* Interventions: If prompts or editing were used, clarify in your report&lt;br /&gt;
* MIDI cleanup: Describe whether quantization, editing, or timing correction was applied&lt;br /&gt;
&lt;br /&gt;
Minimal human intervention is encouraged unless well-justified.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Organizers ==&lt;br /&gt;
* Huan Zhang (Task Captain, QMUL)&lt;br /&gt;
* Junyan Jiang (NYU)&lt;br /&gt;
* Simon Dixon (QMUL)&lt;br /&gt;
* Gus Xia (MBZUAI)&lt;br /&gt;
* Akira Maezawa (Yamaha)&lt;br /&gt;
&lt;br /&gt;
Contact: huan.zhang@qmul.ac.uk&lt;/div&gt;</summary>
		<author><name>Huanz</name></author>
		
	</entry>
</feed>