Dataset Card for GigaMIDI

GigaMIDI Dataset Version Update

Currently, we provide two versions of the GigaMIDI dataset: the first, in Parquet format, is optimized for efficiency and integrates seamlessly with Python MIDI parsing libraries such as Symusic and with the MIDI tokenization library MidiTok. The second version consists of the raw MIDI files along with a CSV file. We are excited to announce the release of version 1.0 of the GigaMIDI dataset (released on February 7th, 2025). We are actively refining the dataset to improve its usability, adding new musical features, and expanding it with additional subsets. Stay tuned for more exciting updates coming throughout 2025!

For the raw MIDI file version (instead of Parquet), please see the PDF file in our repository named "Description of GigaMIDI Raw MIDI File Version-1.0.pdf".

Dataset Description

Dataset Curators

Curators: Keon Ju Maverick Lee, Jeff Ens, Sara Adkins, Nathan Fradet, Pedro Sarmento, Philippe Pasquier, Mathieu Barthet.

Note: The GigaMIDI dataset is designed for continuous growth, with new subsets added and updated over time to ensure its ongoing expansion and relevance.

Licensing Information

The dataset is distributed under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. This license permits users to share, adapt, and utilize the dataset exclusively for non-commercial purposes, including research and educational applications, provided that proper attribution is given to the original creators. By adhering to the terms of CC BY-NC 4.0, users ensure the dataset's responsible use while fostering its accessibility for academic and non-commercial endeavors.

Citation/Reference

You agree to use the GigaMIDI dataset only for non-commercial research or education without infringing copyright laws or causing harm to the creative rights of artists, creators, or musicians.

If you use the GigaMIDI dataset or any part of this project, please cite the following paper: https://transactions.ismir.net/articles/10.5334/tismir.203

@article{lee2024gigamidi,
  title={The GigaMIDI Dataset with Features for Expressive Music Performance Detection},
  author={Lee, KJM and Ens, J and Adkins, S and Sarmento, P and Barthet, M and Pasquier, P},
  journal={Transactions of the International Society for Music Information Retrieval},
  year={2025},
  publisher={Ubiquity Press},
  doi={10.5334/tismir.203}
}

Dataset Summary

Research in artificial intelligence applications in music computing has gained significant traction with the progress of deep learning. Musical Instrument Digital Interface (MIDI) data and its associated metadata are fundamental for the advancement of models that perform tasks such as music generation and transcription with optimal efficiency and high-performance quality. The majority of public music datasets contain audio data, and symbolic music datasets are comparatively small. However, MIDI data presents advantages over audio, such as providing an editable version of musical content independent of its sonic rendering. MIDI data can be quantized or interpreted with variations in micro-timing and velocity, but only limited metadata and few algorithms exist to differentiate expressive symbolic music data, performed by a musician, from non-expressive data that can be assimilated into music scores. To address these challenges, we present the GigaMIDI dataset, a comprehensive corpus comprising over 1.43M MIDI files, 5.3M tracks, and 1.8B notes, along with annotations for loops and metadata for expressive performance detection. To detect expressiveness, i.e., which tracks reflect human interpretation, we introduced a new heuristic called note onset median metric level (NOMML), which allowed us to identify with 100% accuracy that 31% of GigaMIDI tracks are expressive. Detecting loops, or repetitions of musical patterns, presents a challenge when tracks exhibit expressive timing variations, as repeated patterns may not be strictly identical. To address this issue, we mark MIDI loops for non-expressive music tracks, which allows us to identify 7M loops. The GigaMIDI dataset is accessible for research purposes on the Hugging Face Hub [https://huggingface.co/datasets/Metacreation/GigaMIDI] in a user-friendly way for convenience and reproducibility, and our companion GitHub page is available at the following link: [https://github.com/Metacreation-Lab/GigaMIDI-Dataset].

Dataset Category

We provide three categories: drums-only, which contains MIDI files exclusively containing drum tracks; no-drums, for MIDI files containing any MIDI program except drums (channel 10); and all-instruments-with-drums, for MIDI files containing multiple MIDI programs, including drums. The all subset combines the three categories, yielding the full dataset.
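
Assuming the category names above double as configuration names (only all-instruments-with-drums is used verbatim in the examples further down this card; the other two strings are assumptions), each category can be loaded separately, e.g.:

from datasets import load_dataset

# Load each category as its own configuration (names assumed from the list above).
drums_only = load_dataset("Metacreation/GigaMIDI", "drums-only", split="train")
no_drums = load_dataset("Metacreation/GigaMIDI", "no-drums", split="train")
with_drums = load_dataset("Metacreation/GigaMIDI", "all-instruments-with-drums", split="train")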

The metadata field median_metric_depth corresponds to the Note Onset Median Metric Level (NOMML). Based on our experiments, tracks at NOMML levels 0-11 can be classified as non-expressive, while level 12 indicates an expressive MIDI track. Note that this classification applies at the track level, not the file level.
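
Since the classification is per track, one can, for example, keep only the files that contain at least one expressive track. A minimal sketch, assuming the median_metric_depth field lists one NOMML level per track (as in the data instance shown further below):

from datasets import load_dataset

dataset = load_dataset("Metacreation/GigaMIDI", "all-instruments-with-drums", split="train")

# Keep files with at least one expressive track (NOMML level 12);
# levels 0-11 are treated as non-expressive.
expressive = dataset.filter(
    lambda ex: any(level == 12 for level in ex["median_metric_depth"])
)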

How to use

The datasets library allows you to load and pre-process the dataset in pure Python, at scale. The dataset can be downloaded and prepared on your local drive in a single call to the load_dataset function.

from datasets import load_dataset

dataset = load_dataset("Metacreation/GigaMIDI", "all-instruments-with-drums")

Using the datasets library, you can also stream the dataset on-the-fly by adding a streaming=True argument to the load_dataset function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.

from datasets import load_dataset

dataset = load_dataset(
    "Metacreation/GigaMIDI", "all-instruments-with-drums", split="train", streaming=True
)

print(next(iter(dataset)))

Bonus: create a PyTorch DataLoader directly from the dataset (local or streamed).

Local

from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler

dataset = load_dataset("Metacreation/GigaMIDI", "all-instruments-with-drums", split="train")
batch_sampler = BatchSampler(RandomSampler(dataset), batch_size=32, drop_last=False)
dataloader = DataLoader(dataset, batch_sampler=batch_sampler)

Streaming

from datasets import load_dataset
from torch.utils.data import DataLoader

dataset = load_dataset("Metacreation/GigaMIDI", "all-instruments-with-drums", split="train", streaming=True)
dataloader = DataLoader(dataset, batch_size=32)

Example scripts

MIDI files can be easily loaded with Symusic and tokenized with MidiTok.

from datasets import load_dataset
from miditok import REMI
from symusic import Score

dataset = load_dataset("Metacreation/GigaMIDI", "all-instruments-with-drums", split="train")
tokenizer = REMI()
for sample in dataset:
    score = Score.from_midi(sample["music"])
    tokens = tokenizer(score)

The dataset can be processed using the dataset.map and dataset.filter methods, as in the filter example below and the map sketch that follows it.

from pathlib import Path
from datasets import load_dataset
from miditok.constants import SCORE_LOADING_EXCEPTION
from miditok.utils import get_bars_ticks
from symusic import Score

def is_score_valid(
    score: Score | Path | bytes, min_num_bars: int, min_num_notes: int
) -> bool:
    """
    Check if a ``symusic.Score`` is valid, contains the minimum required number of bars.

    :param score: ``symusic.Score`` to inspect or path to a MIDI file.
    :param min_num_bars: minimum number of bars the score should contain.
    :param min_num_notes: minimum number of notes that score should contain.
    :return: boolean indicating if ``score`` is valid.
    """
    if isinstance(score, Path):
        try:
            score = Score(score)
        except SCORE_LOADING_EXCEPTION:
            return False
    elif isinstance(score, bytes):
        try:
            score = Score.from_midi(score)
        except SCORE_LOADING_EXCEPTION:
            return False

    return (
        len(get_bars_ticks(score)) >= min_num_bars and score.note_num() > min_num_notes
    )

dataset = load_dataset("Metacreation/GigaMIDI", "all-instruments-with-drums", split="train")
dataset = dataset.filter(
    lambda ex: is_score_valid(ex["music"], min_num_bars=8, min_num_notes=50)
)
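
Similarly, dataset.map can attach derived columns. As a minimal sketch (the num_notes column name is arbitrary), the number of notes of each file can be computed with Symusic and stored alongside the existing fields:

from symusic import Score

def add_note_count(example: dict) -> dict:
    # Parse the MIDI bytes and store the total note count as a new column.
    score = Score.from_midi(example["music"])
    example["num_notes"] = score.note_num()
    return example

dataset = dataset.map(add_note_count)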

Export MIDI files

The GigaMIDI dataset is provided in Parquet format for ease of use with the Hugging Face datasets library. If you wish to use the "raw" MIDI files, you can simply iterate over the dataset as shown in the examples above and write the "music" entry of each sample to your local filesystem as a MIDI file, as in the sketch below.
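
A minimal sketch (the output directory name is arbitrary) that writes every file of the train split to disk, using the md5 field as the file name:

from pathlib import Path
from datasets import load_dataset

dataset = load_dataset("Metacreation/GigaMIDI", "all-instruments-with-drums", split="train")

out_dir = Path("gigamidi_midi_files")  # arbitrary output directory
out_dir.mkdir(parents=True, exist_ok=True)
for sample in dataset:
    # The "music" entry holds the raw bytes of the MIDI file and the "md5"
    # entry corresponds to its original file name.
    (out_dir / f"{sample['md5']}.mid").write_bytes(sample["music"])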

Dataset Structure

Data Instances

A typical data sample comprises the md5 of the file, which corresponds to its file name, and a music entry containing the bytes of the MIDI file, which can be loaded with symusic as score = Score.from_midi(dataset[sample_idx]["music"]). Each file is accompanied by metadata, introduced in the next section.

A data sample indexed from the dataset may look like this (the music entry is intentionally shortened):

{
    'md5': '0211bbf6adf0cf10d42117e5929929a4',
    'music': b"MThd\x00\x00\x00\x06\x00\x01\x00\x05\x01\x00MTrk\x00",
    'is_drums': False,
    'sid_matches': {'sid': ['065TU5v0uWSQmnTlP5Cnsz', '29OG7JWrnT0G19tOXwk664', '2lL9TiCxUt7YpwJwruyNGh'], 'score': [0.711, 0.8076, 0.8315]},
    'mbid_matches': {'sid': ['065TU5v0uWSQmnTlP5Cnsz', '29OG7JWrnT0G19tOXwk664', '2lL9TiCxUt7YpwJwruyNGh'], 'mbids': [['43d521a9-54b0-416a-b15e-08ad54982e63', '70645f54-a13d-4123-bf49-c73d8c961db8', 'f46bba68-588f-49e7-bb4d-e321396b0d8e'], ['43d521a9-54b0-416a-b15e-08ad54982e63', '70645f54-a13d-4123-bf49-c73d8c961db8'], ['3a4678e6-9d8f-4379-aa99-78c19caf1ff5']]},
    'artist_scraped': 'Bach, Johann Sebastian',
    'title_scraped': 'Contrapunctus 1 from Art of Fugue',
    'genres_scraped': ['classical', 'romantic'],
    'genres_discogs': {'genre': ['classical', 'classical---baroque'], 'count': [14, 1]},
    'genres_tagtraum': {'genre': ['classical', 'classical---baroque'], 'count': [1, 1]},
    'genres_lastfm': {'genre': [], 'count': []},
    'median_metric_depth': [0, 0, 0, 0],
    'loops': {'end_tick': [15488, 33920, 33152, 12416, 41600, 32384, 8576], 'start_tick': [13952, 27776, 25472, 10880, 33920, 30848, 6272], 'track_idx': [0, 0, 0, 1, 1, 1, 1]},
}

Data Fields

The GigaMIDI dataset includes the MetaMIDI dataset; consequently, it also contains MetaMIDI's metadata, which we have compiled here in a convenient, easy-to-use dataset format. The fields of each data entry are:

  • md5 (string): hash of the MIDI file, corresponding to its original file name;
  • music (bytes): bytes of the MIDI file to be loaded with an external Python package such as symusic;
  • is_drums (boolean): whether the sample comes from the drums subset; this can be useful when working with the all subset;
  • sid_matches (dict[str, list[str] | list[float16]]): ids of the Spotify entries matched, along with their matching scores;
  • mbid_matches (dict[str, list[str] | list[list[str]]]): ids of the MusicBrainz entries matched with the Spotify entries;
  • artist_scraped (string): scraped artist of the entry;
  • title_scraped (string): scraped song title of the entry;
  • genres_scraped (list[str]): scraped genres of the entry;
  • genres_discogs (dict[str, list[str] | list[int16]]): Discogs genres matched from the AcousticBrainz dataset;
  • genres_tagtraum (dict[str, list[str] | list[int16]]): Tagtraum genres matched from the AcousticBrainz dataset;
  • genres_lastfm (dict[str, list[str] | list[int16]]): Lastfm genres matched from the AcousticBrainz dataset;
  • median_metric_depth (list[int16]): Note Onset Median Metric Level (NOMML) of each track of the file; levels 0-11 indicate non-expressive tracks, while level 12 indicates an expressive track;
  • loops (dict[str, list[int]]): loops detected within the file; each loop corresponds to a (track_idx, start_tick, end_tick) triple, stored as parallel track_idx, start_tick and end_tick lists as in the data instance above (see the sketch after this list).
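
A minimal sketch of how to read the loop annotations back as triples, assuming a sample indexed from the dataset as shown above:

from datasets import load_dataset

dataset = load_dataset("Metacreation/GigaMIDI", "all-instruments-with-drums", split="train")
sample = dataset[0]

# The "loops" entry stores parallel lists; zip them back into
# (track_idx, start_tick, end_tick) triples.
for track_idx, start_tick, end_tick in zip(
    sample["loops"]["track_idx"],
    sample["loops"]["start_tick"],
    sample["loops"]["end_tick"],
):
    print(f"track {track_idx}: loop from tick {start_tick} to tick {end_tick}")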

Data Splits

The dataset is subdivided into training (train), validation (validation), and test (test) splits. The validation and test splits each contain 10% of the dataset, while the training split contains the rest (about 80%).
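
Each split can be loaded directly by name, e.g.:

from datasets import load_dataset

train_set = load_dataset("Metacreation/GigaMIDI", "all-instruments-with-drums", split="train")
valid_set = load_dataset("Metacreation/GigaMIDI", "all-instruments-with-drums", split="validation")
test_set = load_dataset("Metacreation/GigaMIDI", "all-instruments-with-drums", split="test")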

Dataset Creation

Curation Rationale

The GigaMIDI dataset was curated through a meticulous process to ensure a high-quality collection of MIDI files for research, particularly in expressive music performance detection. Freely available MIDI files were aggregated from platforms like Zenodo and GitHub through web scraping, with all subsets documented and deduplicated using MD5 checksums. The dataset was standardized to adhere to the General MIDI specification, including remapping non-GM drum tracks and correcting MIDI channel assignments. Manual curation was performed to define ground-truth categories for expressive and non-expressive performances, enabling robust analysis.

Source Data

Data Source Links

The GigaMIDI dataset incorporates MIDI files aggregated from various publicly available sources. Detailed information and source links for each subset are provided in the accompanying PDF file:

Data Source Links for the GigaMIDI Dataset

This document includes source links for all publicly available MIDI files included in the dataset.

Please refer to the PDF for comprehensive details about the origins and organization of the dataset's contents.

Annotations

Annotation process

To classify tracks based on dynamic and timing variations, novel heuristics were developed, such as the Distinctive Note Velocity Ratio (DNVR), Distinctive Note Onset Deviation Ratio (DNODR), and Note Onset Median Metric Level (NOMML). Musical styles were annotated using the Musicmap style topology, with manual validation to ensure accuracy. The dataset, hosted on the Hugging Face Hub for enhanced accessibility, supports integration with tools like Symusic and MidiTok. With over 1.4 million unique MIDI files and 7.1 million loops, GigaMIDI offers an extensive resource for Music Information Retrieval (MIR) and computational musicology research.

More details are available via our GitHub webpage: https://github.com/Metacreation-Lab/GigaMIDI-Dataset

Limitations

In navigating the use of MIDI datasets for research and creative explorations, it is imperative to consider the ethical implications inherent in dataset bias. Bias in MIDI datasets often mirrors prevailing practices in Western digital music production, where certain instruments, particularly the piano and drums, dominate. This predominance is largely influenced by the widespread availability and use of MIDI-compatible instruments and controllers for these instruments. The piano is a primary compositional tool and a ubiquitous MIDI controller and keyboard, facilitating input for a wide range of virtual instruments and synthesizers. Similarly, drums, whether through drum machines or MIDI drum pads, enjoy widespread use for rhythm programming and beat production. This prevalence arises from their intuitive interface and versatility within digital audio workstations. This may explain why the distribution of MIDI instruments in MIDI datasets is often skewed toward piano and drums, with limited representation of other instruments, particularly those requiring more nuanced interpretation or less commonly played via MIDI controllers or instruments.

Moreover, the MIDI standard, while effective for encoding basic musical information, is limited in representing the complexities of Western music's time signatures and meters. It lacks an inherent framework to encode hierarchical metric structures, such as strong and weak beats, and struggles with the dynamic flexibility of metric changes. Additionally, its reliance on fixed temporal grids often oversimplifies expressive rhythmic nuances like rubato, leading to a loss of critical musical details. These constraints necessitate supplementary metadata or advanced techniques to accurately capture the temporal intricacies of Western music.

Furthermore, a constraint emerges from the inadequate accessibility of ground truth data that clearly demarcates the differentiation between non-expressive and expressive MIDI tracks across all MIDI instruments for expressive performance detection. Presently, such data predominantly originates from piano and drum instruments in the GigaMIDI dataset.

Data Accessibility and Ethical Statements

The GigaMIDI dataset consists of MIDI files acquired via the aggregation of previously available datasets and web scraping from publicly available online sources. Each subset is accompanied by source links, copyright information when available, and acknowledgments. File names are anonymized using MD5 hashes. We acknowledge the work from the previous dataset papers (Goebl, 1999; Müller et al., 2011; Raffel, 2016; Bosch et al., 2016; Miron et al., 2016; Donahue et al., 2018; Crestel et al., 2018; Li et al., 2018; Hawthorne et al., 2019; Gillick et al., 2019; Wang et al., 2020; Foscarin et al., 2020; Callender et al., 2020; Ens and Pasquier, 2021; Hung et al., 2021; Sarmento et al., 2021; Zhang et al., 2022; Szelogowski et al., 2022; Liu et al., 2022; Ma et al., 2022; Kong et al., 2022; Hyun et al., 2022; Choi et al., 2022; Plut et al., 2022; Hu and Widmer, 2023) that we aggregate and analyze as part of the GigaMIDI subsets.

This dataset has been collected, utilized, and distributed under the Fair Dealing provisions for research and private study outlined in the Canadian Copyright Act (Government of Canada, 2024). Fair Dealing permits the limited use of copyright-protected material without the risk of infringement and without having to seek the permission of copyright owners. It is intended to provide a balance between the rights of creators and the rights of users. As per instructions of the Copyright Office of Simon Fraser University, two protective measures have been put in place that are deemed sufficient given the nature of the data (accessible online):

  1. We explicitly state that this dataset has been collected, used, and distributed under the Fair Dealing provisions for research and private study outlined in the Canadian Copyright Act.
  2. On the Hugging Face hub, we advertise that the data is available for research purposes only and collect the user’s legal name and email as proof of agreement before granting access.

We thus decline any responsibility for misuse.

The FAIR (Findable, Accessible, Interoperable, Reusable) principles (Jacobsen et al., 2020) serve as a framework to ensure that data is well-managed, easily discoverable and usable for a broad range of purposes in research. These principles are particularly important in the context of data management to facilitate open science, collaboration, and reproducibility.

• Findable: Data should be easily discoverable by both humans and machines. This is typically achieved through proper metadata, traceable source links and searchable resources. Applying this to MIDI data, each subset of MIDI files collected from public domain sources is accompanied by clear and consistent metadata via our GitHub and Hugging Face hub webpages. For example, organizing the source links of each data subset, as done with the GigaMIDI dataset, ensures that each source can be easily traced and referenced, improving discoverability.

• Accessible: Once found, data should be easily retrievable using standard protocols. Accessibility does not necessarily imply open access, but it does mean that data should be available under well-defined conditions. For the GigaMIDI dataset, hosting the data on platforms like Hugging Face Hub improves accessibility, as these platforms provide efficient data retrieval mechanisms, especially for large-scale datasets. Ensuring that MIDI data is accessible for public use while respecting any applicable licenses supports wider research and analysis in music computing.

• Interoperable: Data should be structured in such a way that it can be integrated with other datasets and used by various applications. MIDI data, being a widely accepted format in music research, is inherently interoperable, especially when standardized metadata and file formats are used. By ensuring that the GigaMIDI dataset complies with widely adopted standards and supports integration with state-of-the-art libraries in symbolic music processing, such as Symusic and MidiTok, the dataset enhances its utility for music researchers and practitioners working across different platforms and systems.

• Reusable: Data should be well-documented and licensed to be reused in future research. Reusability is ensured through proper metadata, clear licenses, and documentation of provenance. In the case of GigaMIDI, aggregating all subsets from public domain sources and linking them to the original sources strengthens the reproducibility and traceability of the data. This practice allows future researchers to not only use the dataset but also verify and expand upon it by referring to the original data sources.

Developing ethical and responsible AI systems for music requires adherence to core principles of fairness, transparency, and accountability. The creation of the GigaMIDI dataset reflects a commitment to these values, emphasizing the promotion of ethical practices in data usage and accessibility. Our work aligns with prominent initiatives promoting ethical approaches to AI in music, such as AI for Music Initiatives, which advocates for principles guiding the ethical creation of music with AI, supported by the Metacreation Lab for Creative AI and the Centre for Digital Music, which provide critical guidelines for the responsible development and deployment of AI systems in music. Similarly, the Fairly Trained initiative highlights the importance of ethical standards in data curation and model training, principles that are integral to the design of the GigaMIDI dataset. These frameworks have shaped the methodologies used in this study, from dataset creation and validation to algorithmic design and system evaluation. By engaging with these initiatives, this research not only contributes to advancing AI in music but also reinforces the ethical use of data for the benefit of the broader music computing and MIR communities.

Acknowledgements

We gratefully acknowledge the support and contributions that have directly or indirectly aided this research. This work was supported in part by funding from the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Social Sciences and Humanities Research Council of Canada (SSHRC). We also extend our gratitude to the School of Interactive Arts and Technology (SIAT) at Simon Fraser University (SFU) for providing resources and an enriching research environment. Additionally, we thank the Centre for Digital Music (C4DM) at Queen Mary University of London (QMUL) for fostering collaborative opportunities and supporting our engagement with interdisciplinary research initiatives. We also acknowledge the support of EPSRC UKRI Centre for Doctoral Training in AI and Music (Grant EP/S022694/1) and UKRI - Innovate UK (Project number 10102804).

Special thanks are extended to Dr. Cale Plut for his meticulous manual curation of musical styles and to Dr. Nathan Fradet for his invaluable assistance in developing the HuggingFace Hub website for the GigaMIDI dataset, ensuring it is accessible and user-friendly for music computing and MIR researchers. We also sincerely thank our research interns, Paul Triana and Davide Rizzotti, for their thorough proofreading of the manuscript, as well as the TISMIR reviewers who helped us improve our manuscript.

Finally, we express our heartfelt appreciation to the individuals and communities who generously shared their MIDI files for research purposes. Their contributions have been instrumental in advancing this work and fostering collaborative knowledge in the field.
