
Histology images from uniform tumor regions in TCGA Whole Slide Images (TCGA-UT-Internal, TCGA-UT-External)


This repository provides a benchmarking framework for the TCGA histology image dataset originally published on Zenodo. It includes predefined train/validation/test splits and example code for foundation model evaluation.

Task

Classification of 31 different cancer types from tumor histopathological images.

Original Dataset Description

This dataset contains 1,608,060 image patches of hematoxylin & eosin stained histological samples from various human cancers. The data was collected and processed as follows:

  • Source: TCGA dataset from 32 solid cancer types (GDC legacy database, downloaded between December 1, 2016, and June 19, 2017)
  • Initial data: 9,662 diagnostic slides from 7,951 patients in SVS format
  • Annotation: At least three representative tumor regions were selected as polygons by two trained pathologists
  • Quality control: 926 slides were removed due to poor staining, low resolution, out-of-focus issues, absence of cancerous regions, or incorrect cancer types
  • Final dataset: 8,736 diagnostic slides from 7,175 patients
  • Patch extraction: 10 patches at 0.5 μm/pixel resolution (128 × 128 μm, i.e., 256 × 256 pixels) were randomly cropped from each annotated region

Note: Additional resolution levels are available in the original Zenodo dataset. Please refer to the Zenodo repository for the complete dataset.

The TCGA barcode prefix (TCGA-XX-XXXX) identifies the patient. For details, see the TCGA Barcode documentation.
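
For example, the patient-level identifier can be recovered from a full sample barcode by keeping its first three fields (a minimal sketch; the helper name is illustrative and not part of this repository):

def patient_id_from_barcode(barcode: str) -> str:
    """Return the patient-level TCGA barcode (project-TSS-participant)."""
    # e.g., "TCGA-02-0001-01B-02D-0182-06" -> "TCGA-02-0001"
    return "-".join(barcode.split("-")[:3])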

Updates in This Version

The dataset has been modified and organized for benchmarking purposes:

  1. Label Consolidation:

    • Colon Adenocarcinoma (COAD) and Rectum Adenocarcinoma (READ) have been merged due to their histological similarity
  2. Structured Splits:

    Internal Split (70:15:15): TCGA-UT-Internal

    • Ensures no patient overlap between train, validation, and test sets (an illustrative patient-level splitting sketch follows below)
    • Approximate distribution: 70% train, 15% validation, 15% test

    External Split: TCGA-UT-External

    • Separates data based on medical facilities to evaluate cross-institutional generalization
    • No facility overlap between train, validation, and test sets
    • Maintains similar class distributions across splits
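
As an illustration of how patient-level leakage can be avoided (a sketch only, not the exact procedure used to build TCGA-UT-Internal), splitting can be grouped by patient barcode, for example with scikit-learn:

from sklearn.model_selection import GroupShuffleSplit

# `patches` and `patients` are illustrative placeholders: one entry per patch,
# where `patients` holds the patient barcode each patch belongs to.
def split_by_patient(patches, patients, test_size=0.15, seed=42):
    gss = GroupShuffleSplit(n_splits=1, test_size=test_size, random_state=seed)
    train_idx, held_out_idx = next(gss.split(patches, groups=patients))
    # Because splitting is grouped by patient, no patient appears on both sides.
    return train_idx, held_out_idx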

Dataset Details

Internal Split: TCGA-UT-Internal

case train (patches) valid (patches) test (patches) train (patients) valid (patients) test (patients)
Adrenocortical_carcinoma 3480 750 750 35 8 8
Bladder_Urothelial_Carcinoma 6990 1500 1500 202 43 44
Brain_Lower_Grade_Glioma 16480 3530 3520 326 70 71
Breast_invasive_carcinoma 16580 3550 3560 513 110 111
Cervical_squamous_cell_carcinoma_and_endocervical_adenocarcinoma 4380 930 960 140 30 31
Cholangiocarcinoma 630 120 150 21 4 5
Colon_Rectum_adenocarcinoma 7020 1510 1500 190 41 41
Esophageal_carcinoma 2360 510 510 78 17 17
Glioblastoma_multiforme 16620 3570 3550 254 54 55
Head_and_Neck_squamous_cell_carcinoma 8250 1770 1770 221 48 48
Kidney_Chromophobe 1710 360 390 57 12 13
Kidney_renal_clear_cell_carcinoma 8160 1740 1750 269 58 58
Kidney_renal_papillary_cell_carcinoma 4750 1020 1020 149 32 33
Liver_hepatocellular_carcinoma 5860 1250 1260 190 41 41
Lung_adenocarcinoma 11520 2470 2470 303 65 66
Lung_squamous_cell_carcinoma 11590 2490 2480 305 66 66
Lymphoid_Neoplasm_Diffuse_Large_B-cell_Lymphoma 570 120 150 19 4 5
Mesothelioma 1470 320 300 42 9 10
Ovarian_serous_cystadenocarcinoma 1740 390 390 58 13 13
Pancreatic_adenocarcinoma 2850 620 620 88 19 19
Pheochromocytoma_and_Paraganglioma 930 210 210 30 7 7
Prostate_adenocarcinoma 6870 1470 1470 212 45 46
Sarcoma 9440 2010 2030 149 32 32
Skin_Cutaneous_Melanoma 7040 1510 1510 226 48 49
Stomach_adenocarcinoma 6770 1450 1450 182 39 39
Testicular_Germ_Cell_Tumors 4210 900 900 92 20 20
Thymoma 2520 540 540 59 13 13
Thyroid_carcinoma 7950 1710 1700 259 56 56
Uterine_Carcinosarcoma 1470 320 330 34 7 8
Uterine_Corpus_Endometrial_Carcinoma 8730 1890 1860 266 57 58
Uveal_Melanoma 1140 240 260 38 8 9
Total 190080 40770 40860 5007 1076 1092

External Split: TCGA-UT-External

case train (patches) valid (patches) test (patches) train (patients) valid (patients) test (patients)
Adrenocortical_carcinoma 4500 390 90 45 5 1
Bladder_Urothelial_Carcinoma 6990 1500 1500 190 50 49
Brain_Lower_Grade_Glioma 16430 3540 3560 332 80 55
Breast_invasive_carcinoma 16560 3570 3560 509 116 109
Cervical_squamous_cell_carcinoma_and_endocervical_adenocarcinoma 4380 930 960 145 31 25
Cholangiocarcinoma 660 150 90 22 5 3
Colon_Rectum_adenocarcinoma 7020 1500 1510 197 39 36
Esophageal_carcinoma 2360 510 510 78 17 17
Glioblastoma_multiforme 16630 3810 3300 244 76 43
Head_and_Neck_squamous_cell_carcinoma 8260 1750 1780 224 51 42
Kidney_Chromophobe 1740 270 450 58 9 15
Kidney_renal_clear_cell_carcinoma 8170 1710 1770 269 57 59
Kidney_renal_papillary_cell_carcinoma 4750 1020 1020 146 34 34
Liver_hepatocellular_carcinoma 5870 1300 1200 189 43 40
Lung_adenocarcinoma 11530 2470 2460 288 77 69
Lung_squamous_cell_carcinoma 11580 2490 2490 296 68 73
Lymphoid_Neoplasm_Diffuse_Large_B-cell_Lymphoma 600 90 150 20 3 5
Mesothelioma 1470 300 320 43 10 8
Ovarian_serous_cystadenocarcinoma 2220 120 180 74 4 6
Pancreatic_adenocarcinoma 2860 600 630 85 20 21
Pheochromocytoma_and_Paraganglioma 1170 90 90 38 3 3
Prostate_adenocarcinoma 6870 1470 1470 226 49 28
Sarcoma 9490 2070 1920 154 28 31
Skin_Cutaneous_Melanoma 7030 1530 1500 233 40 50
Stomach_adenocarcinoma 6990 1330 1350 187 37 36
Testicular_Germ_Cell_Tumors 4600 630 780 96 10 26
Thymoma 2520 540 540 54 18 13
Thyroid_carcinoma 7980 1650 1730 259 54 58
Uterine_Carcinosarcoma 1470 330 320 37 7 5
Uterine_Corpus_Endometrial_Carcinoma 8730 1890 1860 272 48 61
Uveal_Melanoma 1250 120 270 42 4 9
Total 192680 39670 39360 5052 1093 1030

Foundation Model Benchmarking

We provide example implementations using several state-of-the-art pathology foundation models; the benchmark results below cover UNI, UNI2, Virchow2, H-Optimus, GigaPath, and CONCH.

See licenses/references.txt for model citations.
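
As a rough illustration of the feature-extraction step, a model hosted on the Hugging Face Hub can be loaded through timm; the hub ID below is a placeholder, and the actual model/transform definitions belong in get_model_transform in extract_train.py:

import timm
import torch
from PIL import Image

# Placeholder hub ID; substitute a foundation model your account has access to.
model = timm.create_model("hf-hub:ORG/MODEL-NAME", pretrained=True, num_classes=0)
model.eval()

# Derive the model's preprocessing transform from its pretrained configuration.
transform = timm.data.create_transform(**timm.data.resolve_data_config({}, model=model))

with torch.no_grad():
    img = Image.open("patch.jpg").convert("RGB")   # one image patch
    features = model(transform(img).unsqueeze(0))  # (1, feature_dim) embedding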

Benchmark Results

Note: The provided script is a simplified example of training code. In practice, hyperparameter tuning and additional techniques were employed to achieve the following results.

Internal Split Results

Model Accuracy Balanced Accuracy
UNI2 0.8498 0.8500
H-Optimus 0.8498 0.8398
Virchow2 0.8456 0.8355
UNI 0.8142 0.7923
GigaPath 0.8162 0.7877
CONCH 0.7670 0.7301

External Split Results

Model Accuracy Balanced Accuracy
UNI2 0.7648 0.7262
H-Optimus 0.7845 0.7213
Virchow2 0.7745 0.6922
UNI 0.7373 0.6581
GigaPath 0.7246 0.6377
CONCH 0.6991 0.5974
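
Both metrics can be reproduced with scikit-learn; balanced accuracy is the mean of per-class recalls, which down-weights the dominant classes in this imbalanced dataset:

from sklearn.metrics import accuracy_score, balanced_accuracy_score

# Toy labels and predictions; in practice use the test-set labels and the
# linear classifier's predictions.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]

print("Accuracy:", accuracy_score(y_true, y_pred))                    # 4/6
print("Balanced accuracy:", balanced_accuracy_score(y_true, y_pred))  # mean of per-class recalls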

Getting Started

  1. Clone this repository:
git clone [repository-url]
  2. Install dependencies:
pip install -r requirements.txt
  3. Log in to Hugging Face:
  • The first time you run the program, you must log in with a Hugging Face account that has access to the dataset and to the model you wish to use (see the snippet below).
  4. (Optional) Setup:
  • A notebook, setup.ipynb, is provided for repository cloning, environment setup, and code execution. It has been confirmed to work in Google Colaboratory.
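
One way to authenticate is through the huggingface_hub library (the shell equivalent is huggingface-cli login); the token must belong to an account that has accepted the access conditions for the dataset and the chosen model:

from huggingface_hub import login

# Prompts for (or accepts) a Hugging Face access token.
login()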

Troubleshooting

Dependencies Installation

While requirements.txt specifies version numbers for dependencies, some installations might require additional steps or alternative approaches depending on your system configuration:

  1. SPAMS Library Installation

    • If the standard SPAMS installation fails, try:
    pip install spams-bin
    
    • On some systems, you might also need to install additional packages:
    pip install PyOpenGL PyOpenGL_accelerate
    
  2. Version Compatibility

    • While we specify exact versions in requirements.txt, some dependencies might require different versions based on your hardware configuration
    • If you encounter compatibility issues, try installing without version constraints and test functionality

Dataset Label Data Type Issues

When building the dataset, an error may occur because the label read from the metadata is not already an integer type. If you encounter this issue, try modifying line 83 in extract_train.py as follows:

From:

label = torch.tensor(self.labels[idx], dtype=torch.long)

To:

label = torch.tensor(int(self.labels[idx]), dtype=torch.long)

Data Loading Example

The dataset uses WebDataset format for efficient loading. Here's an example from extract_train.py:

import os
import webdataset as wds

# Shard lists per split; `split` is the split name used in the shard filenames
# and `file_range` is the number of valid/test shard files.
patterns = {
    'train': [os.path.join(work_dir, f"data/dataset_{split}_train_part{str(i).zfill(3)}.tar") for i in range(39)],
    'valid': [os.path.join(work_dir, f"data/dataset_{split}_valid_part{str(i).zfill(3)}.tar") for i in range(file_range)],
    'test': [os.path.join(work_dir, f"data/dataset_{split}_test_part{str(i).zfill(3)}.tar") for i in range(file_range)],
}
# Stream the tar shards, shuffle samples in a buffer, decode JPEGs to PIL images,
# and map each (image, metadata) pair to a (transformed image, encoded label) tuple.
dataset = wds.WebDataset(patterns[mode], shardshuffle=False) \
    .shuffle(buffer_size, seed=42) \
    .decode("pil").to_tuple("jpg", "json") \
    .map_tuple(func_transform, lambda x: encode_labels([x["label"]], label_encoder))
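
Since a WebDataset is an iterable-style dataset, it can be batched with a standard PyTorch DataLoader; the batch size and worker count below are illustrative:

from torch.utils.data import DataLoader

# Shuffling is already handled by the .shuffle(...) stage above, so no sampler is needed.
loader = DataLoader(dataset, batch_size=256, num_workers=4)

for images, labels in loader:
    pass  # e.g., forward pass through the foundation model to extract features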

Configuration and Usage

  1. Configure your experiment in config.yaml:
model_name: "h_optimus"    # Model selection: "h_optimus", etc.
split_type: "internal"     # Split type: "internal" or "external"
device: "cuda"            # Computation device: "cuda" or "cpu"
feature_exist: True       # Skip feature extraction if features already exist
max_iter: 1000           # Maximum iterations for training
cost: 0.0001             # Cost parameter for linear classifier

Configuration parameters:

  • model_name: Foundation model to use for feature extraction
  • split_type: Dataset split strategy
  • device: Computation device (GPU/CPU)
  • feature_exist: Skip feature extraction if True and features are already available
  • max_iter: Maximum training iterations for the linear classifier
  • cost: Regularization parameter for the linear classifier
  2. Define models and transforms in extract_train.py:
def get_model_transform(model_name):
    # Add your model and transform definitions here
    pass
  3. Run the experiment:
python extract_train.py

This will:

  • Extract features using the specified foundation model
  • Save features to H5 files
  • Perform linear probing
  • Output accuracy and balanced accuracy metrics
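
For reference, the linear-probing stage can be sketched as follows; the H5 layout, file names, and the mapping of cost to the classifier's regularization parameter are assumptions, and the actual implementation is in extract_train.py:

import h5py
import numpy as np
import yaml
from sklearn.linear_model import LogisticRegression

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

# Assumed H5 layout: one file per split containing "features" and "labels" datasets.
def load_split(path):
    with h5py.File(path, "r") as f:
        return np.asarray(f["features"]), np.asarray(f["labels"])

X_train, y_train = load_split("features_train.h5")
X_test, y_test = load_split("features_test.h5")

# Linear probe: logistic regression on frozen features, assuming `cost` maps to
# the inverse regularization strength C and `max_iter` bounds the solver iterations.
clf = LogisticRegression(C=cfg["cost"], max_iter=cfg["max_iter"])
clf.fit(X_train, y_train)

print("Test accuracy:", clf.score(X_test, y_test))  # balanced accuracy as shown earlier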

License

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (CC BY-NC-SA 4.0).

  • For non-commercial use: Please use the dataset under CC-BY-NC-SA
  • For commercial use: Please contact us at ishum-prm@m.u-tokyo.ac.jp

Citation

If you use this dataset, please cite the original paper:

@article{komura2022universal,
  title={Universal encoding of pan-cancer histology by deep texture representations},
  author={Komura, D. and Kawabe, A. and Fukuta, K. and Sano, K. and Umezaki, T. and Koda, H. and Suzuki, R. and Tominaga, K. and Ochi, M. and Konishi, H. and Masakado, F. and Saito, N. and Sato, Y. and Onoyama, T. and Nishida, S. and Furuya, G. and Katoh, H. and Yamashita, H. and Kakimi, K. and Seto, Y. and Ushiku, T. and Fukayama, M. and Ishikawa, S.},
  journal={Cell Reports},
  volume={38},
  pages={110424},
  year={2022},
  doi={10.1016/j.celrep.2022.110424}
}