metadata
language:
  - en
tags:
  - spatial-transcriptomics
  - histology
  - pathology
  - transcriptomics
  - machine-learning
size_categories:
  - 1K<n<10K
license: cc
extra_gated_prompt: >-
  By agreeing, you consent to sharing your contact information (email and
  username) with the repository authors.
extra_gated_fields:
  Full name (first and last): text
  Current affiliation (no abbreviations): text
  Type of Affiliation: text
  Current and official institutional email:
    type: text
    help: >-
      **This must match your primary email in your Hugging Face account. Emails
      from @gmail, @hotmail, and @qq domains will be denied.**
  Please explain your intended research use: text
  I agree to all terms outlined above: checkbox
  I agree to use this model for non-commercial, academic purposes only: checkbox
  I agree not to distribute the model:
    type: checkbox
    help: >-
      If another user within my organization wishes to use the dataset, they
      must register as an individual user.
extra_gated_heading: Acknowledge license to accept the repository
extra_gated_description: >-
  By agreeing, you consent to sharing your contact information (email and
  username) with the repository authors and confirm that you will not use the
  dataset for harmful, unethical, or malicious purposes. Our team may take 3-5
  days to process your request.

Dataset card for TCGA digital spatial transcriptomics data

This repository contains results from the paper "DeepSpot: Leveraging Spatial Context for Enhanced Spatial Transcriptomics Prediction from H&E Images".

Authors: Kalin Nonchev, Sebastian Dawo, Karina Selina, Holger Moch, Sonali Andani, Tumor Profiler Consortium, Viktor Hendrik Koelzer, and Gunnar Rätsch

What is TCGA digital spatial transcriptomics?

We trained a model using available spatial transcriptomics data to predict gene expression for both fresh frozen (FF) and formalin-fixed paraffin-embedded (FFPE) slides from TCGA SKCM (skin melanoma) and TCGA KIRC (kidney cancer) datasets. More information can be found at: https://github.com/ratschlab/DeepSpot.

Graphical summary (figure): DeepSpot predicts spatial transcriptomics from H&E images by leveraging recent foundation models in pathology and spatial multi-level tissue context.

  1. DeepSpot is trained to predict 5,000 genes, with hyperparameters optimized using cross-validation.
  2. DeepSpot can be used for de novo spatial transcriptomics prediction or for correcting existing spatial transcriptomics data.
  3. Validation involves nested leave-one-out patient cross-validation and out-of-distribution testing. We predicted spatial transcriptomics from TCGA slide images, aggregated the data into pseudo-bulk RNA profiles, and compared them with the available ground-truth bulk RNA-seq.
  4. DeepSpot generated 1,792 TCGA spatial transcriptomics samples with over 37 million spots from melanoma or kidney cancer patients, enriching the available spatial transcriptomics data for TCGA samples and providing valuable insights into the molecular landscapes of cancer tissues.

The data includes spatial transcriptomics for:

  • TCGA SKCM
    • 472 FF slides
    • 276 FFPE slides
  • TCGA KIRC
    • 528 FF slides
    • 516 FFPE slides

Folder tree:

HF
├── TCGA_KIRC
│   ├── FF
│   └── FFPE
└── TCGA_SKCM
    ├── FF
    └── FFPE

How to start?

pip install datasets
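
The loading and plotting sketches further down also assume scanpy (for reading the .h5ad files) and squidpy (for spatial plots) are installed:

pip install scanpy squidpy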

Log in to Hugging Face

from huggingface_hub import login
login(token="YOUR HUGGINGFACE TOKEN")

Download the entire TCGA digital spatial transcriptomics data

import datasets

local_dir='TCGA_data' # will be downloaded to this folder

# Note that the full dataset is around 2TB of data

dataset = datasets.load_dataset(
    'nonchev/TCGA_digital_spatial_transcriptomics', 
    cache_dir=local_dir,
    patterns='*'
)
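
If you prefer to mirror the raw .h5ad.gz files directly instead of going through the datasets library, a minimal sketch using huggingface_hub's snapshot_download (the repository id is the one from the example above):

from huggingface_hub import snapshot_download

# Mirror the repository files (~2TB in total) into a local folder
snapshot_download(
    repo_id="nonchev/TCGA_digital_spatial_transcriptomics",
    repo_type="dataset",
    local_dir="TCGA_data",                     # target folder
    # allow_patterns=["TCGA_KIRC/FFPE/*"],     # uncomment to fetch only a subset
)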

Download a subset of the TCGA digital spatial transcriptomics data:

import datasets

local_dir='TCGA_data' # will be downloaded to this folder

cancer_type = ['TCGA_KIRC']  # or ['TCGA_SKCM']

# Other options: ['TCGA_KIRC/FF'], ['TCGA_KIRC/FFPE'], or select by slide type, e.g. ['FFPE']

dataset = datasets.load_dataset(
    'nonchev/TCGA_digital_spatial_transcriptomics', 
    cache_dir=local_dir,
    patterns=cancer_type
)
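
To check which slide files exist before picking a subset, a small sketch using huggingface_hub (the filter on TCGA_KIRC/FFPE is just an example):

from huggingface_hub import list_repo_files

# List every file in the dataset repository
files = list_repo_files("nonchev/TCGA_digital_spatial_transcriptomics", repo_type="dataset")

# For example, keep only FFPE slides from TCGA KIRC
kirc_ffpe = [f for f in files if f.startswith("TCGA_KIRC/FFPE/")]
print(len(kirc_ffpe), kirc_ffpe[:3])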

Data organization

Each file is of the form {slide_id}.h5ad.gz and can be loaded as:

import scanpy as sc

# Load the data
adata = sc.read_h5ad("../path/to/slide_id.h5ad.gz") 
# Note: Since the data is compressed, loading it may take more time.
# It is recommended to uncompress the data if sufficient storage is available.
adata
AnnData object with n_obs × n_vars = 1358 × 5000
    obs: 'x_array', 'y_array', 'x_pixel', 'y_pixel', 'barcode', 'predicted_label'
    uns: '20x_slide', 'scaled_slide_info', 'spatial'
    obsm: 'spatial'

where:

.obs

  • x_array and y_array represent the spot coordinates on the image grid.
  • x_pixel and y_pixel correspond to the center spot coordinates on the slide, scaled to 20x magnification.
  • predicted_label is the label transferred from the training data, obtained by fitting a random forest model on the provided manual annotations for Melanoma or the cluster labels for Kidney Cancer.
  • The remaining columns, named after the labels, contain the probability of the spot being assigned to the corresponding label (see the sketch after this list).
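
A minimal sketch of inspecting these fields, assuming the .obs layout described above (the file path is a placeholder):

import scanpy as sc

adata = sc.read_h5ad("path/to/slide_id.h5ad.gz")

# Spot coordinates and the transferred label
print(adata.obs[["x_array", "y_array", "x_pixel", "y_pixel", "predicted_label"]].head())

# All other .obs columns hold per-label assignment probabilities
known = {"x_array", "y_array", "x_pixel", "y_pixel", "barcode", "predicted_label"}
prob_cols = [c for c in adata.obs.columns if c not in known]
print(adata.obs[prob_cols].head())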

.uns

  • 20x_slide - original slide image downloaded from TCGA and scaled to 20x magnification
  • scaled_slide_info - metadata describing the scaled slide
  • spatial - metadata required for squidpy.pl.spatial_scatter (see the plotting sketch below)
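
A minimal plotting sketch, assuming squidpy is installed and using the predicted_label column from .obs (the gene name is a placeholder for any of the 5,000 predicted genes):

import scanpy as sc
import squidpy as sq

adata = sc.read_h5ad("path/to/slide_id.h5ad.gz")

# Plot predicted spot labels using the spatial metadata in .uns['spatial']
sq.pl.spatial_scatter(adata, color="predicted_label")

# Or color spots by the predicted expression of a single gene (placeholder name)
sq.pl.spatial_scatter(adata, color="GENE_OF_INTEREST")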

NB: To distinguish in-tissue spots from the background, tiles with a mean RGB value above 200 (near white) were discarded (a sketch of this check is shown below). Additional preprocessing may be needed to remove potential image artifacts.
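
For reference, the near-white criterion described above can be written as a simple check (a sketch; tile is a hypothetical H×W×3 RGB array for one spot tile, not a function provided with the dataset):

import numpy as np

def is_background(tile: np.ndarray, threshold: float = 200.0) -> bool:
    """Flag a tile as background if its mean RGB intensity is near white."""
    return float(tile.mean()) > threshold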

How to cite:

tbd