
RenCon 2025: Expressive Performance Rendering Competition

Welcome to the official MIREX wiki page for RenCon 2025, a new task on expressive musical rendering from symbolic scores. This task is part of the MIREX 2025 evaluation campaign and will culminate with a live contest at ISMIR 2025 in Daejeon, Korea. For updates, templates, and examples, please visit our official site: https://ren-con2025.vercel.app

A short summary of the key information follows:

Task Overview

RenCon challenges participants to develop systems that generate expressive musical audio renderings based on symbolic input (e.g., MIDI, MusicXML). Submissions are evaluated in two phases:

  • Phase 1 – Preliminary Round (Online):

Participants submit rendered audio (and optionally symbolic outputs) for both the required and free-choice pieces. No real-time constraint is imposed.

  • Phase 2 – Live Contest (Onsite at ISMIR):

Top systems from the preliminary round will be invited to render a surprise piece in real time at ISMIR. The audience vote determines the winner.

Timeline

  • May 30 – Aug 20, 2025: Submission period for preliminary round
  • Aug 26, 2025: Preliminary round results released
  • Sept 10, 2025: Online evaluation results + invitation to live contest
  • Sept 2025 (TBD): Live contest at ISMIR 2025

Submission Requirements

Each submission should include:

  • Code and checkpoint of your system
  • Symbolic inputs and audio outputs for:
      ◦ At least 2 of the 3 required pieces
      ◦ At least one free-choice piece (each ≤ 1 min 30 sec; at most 4 pieces, 5 min total)
  • A technical report, including dataset details and post-processing notes


Evaluation

  • Online round: Human evaluation and technical inspection
  • Live round: Real-time rendering, judged by the ISMIR audience
  • Objective metrics (e.g., alignment, timing, velocity) are used for analysis but not for ranking
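As an illustration of objective analysis, a minimal sketch of two such metrics is shown below: mean absolute onset-timing deviation and mean absolute velocity deviation between aligned note pairs. The exact metrics and alignment procedure used by RenCon are not specified here; the data layout (lists of `(onset_seconds, velocity)` tuples, already aligned one-to-one) is an assumption for the example.

```python
# Hypothetical sketch of objective rendering metrics (NOT the official
# RenCon implementation). Assumes notes are already aligned pairwise.

def timing_velocity_metrics(reference, rendered):
    """Return (mean absolute onset deviation in seconds,
    mean absolute MIDI-velocity deviation) over aligned note pairs,
    where each note is an (onset_seconds, velocity) tuple."""
    if len(reference) != len(rendered):
        raise ValueError("note lists must be aligned pairwise")
    n = len(reference)
    onset_dev = sum(abs(r[0] - p[0]) for r, p in zip(reference, rendered)) / n
    vel_dev = sum(abs(r[1] - p[1]) for r, p in zip(reference, rendered)) / n
    return onset_dev, vel_dev

# Toy data: a deadpan reference vs. an expressive rendering of three notes.
ref = [(0.00, 64), (0.50, 70), (1.00, 60)]
out = [(0.02, 60), (0.48, 75), (1.05, 58)]
print(timing_velocity_metrics(ref, out))
```

Metrics like these can flag gross timing or dynamics errors, but, as noted above, they serve analysis only; ranking in both rounds rests on human judgment.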

Datasets

There are no restrictions on training data (public, private, and internal datasets are all allowed), but transparency about what was used is required.

Post-Processing

Systems must report any post-processing:

  • Symbolic-to-audio systems: Describe synthesizers, soundfonts, or piano models used
  • Direct audio systems: Note EQ, compression, or other effects
  • Interventions: If prompts or editing were used, clarify in your report
  • MIDI cleanup: Describe whether quantization, editing, or timing correction was applied

Minimal human intervention is encouraged; any substantial intervention should be clearly justified in the report.
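To make the "MIDI cleanup" item concrete, here is a minimal sketch of one reportable operation: snapping note onsets to a fixed grid. The 125 ms grid and the plain list-of-floats representation are assumptions for the example, not RenCon requirements.

```python
# Illustrative example of a MIDI cleanup step that would need to be
# reported: quantizing note onsets to a fixed grid (assumed 125 ms here).

def quantize_onsets(onsets, grid=0.125):
    """Snap each onset time (in seconds) to the nearest multiple of `grid`."""
    return [round(t / grid) * grid for t in onsets]

print(quantize_onsets([0.01, 0.13, 0.49, 0.74]))
```

Even a simple step like this changes the expressive timing being evaluated, which is why the rules ask for it to be disclosed.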


Organizers

  • Huan Zhang (Task Captain, QMUL)
  • Junyan Jiang (NYU)
  • Simon Dixon (QMUL)
  • Gus Xia (MBZUAI)
  • Akira Maezawa (Yamaha)

Contact: huan.zhang@qmul.ac.uk