lens-ai/adversarial-clip-vit-base-patch32_pcam_finetuned
Dataset preview: each row has an `image` column (96×96 px) and a `label` column (class label with 2 classes: 0 = negative, 1 = positive).
This dataset contains adversarial examples generated using various attack techniques on PatchCamelyon (PCAM) images. The adversarial images were crafted to fool the fine-tuned model:
lens-ai/clip-vit-base-patch32_pcam_finetuned.
Researchers and engineers can use this dataset to evaluate model robustness and to benchmark adversarial defenses. The files are organized as follows:
organized_dataset/
├── train/
│   ├── 0/                      # Negative samples (adversarial images only)
│   │   └── adv_0_labelfalse_pred1_SquareAttack.png
│   └── 1/                      # Positive samples (adversarial images only)
│       └── adv_1_labeltrue_pred0_SquareAttack.png
├── originals/                  # Original images
│   ├── orig_0_labelfalse_SquareAttack.png
│   └── orig_1_labeltrue_SquareAttack.png
├── perturbations/              # Perturbation masks
│   ├── perturbation_0_SquareAttack.png
│   └── perturbation_1_SquareAttack.png
└── dataset.json
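As a quick sanity check on a local copy, the folder layout above can be walked directly; the sketch below (paths assume the tree shown) counts the files in each directory:

from pathlib import Path

root = Path("organized_dataset")

# Count adversarial images in each class folder (0 = negative, 1 = positive)
for class_dir in sorted((root / "train").iterdir()):
    if class_dir.is_dir():
        print(f"class {class_dir.name}: {len(list(class_dir.glob('*.png')))} adversarial images")

# Originals and perturbation masks live in parallel folders
print("originals:", len(list((root / "originals").glob("*.png"))))
print("perturbations:", len(list((root / "perturbations").glob("*.png"))))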
Each adversarial example consists of:
train/{0,1}/adv_{id}_label{true/false}_pred{pred_label}_{attack_name}.png → Adversarial image with the model's prediction
originals/orig_{id}_label{true/false}_{attack_name}.png → Original image before perturbation
perturbations/perturbation_{id}_{attack_name}.png → The perturbation applied to the original image (see the filename-parsing sketch below)
The dataset.json file contains detailed metadata for each sample, including:
{
  "attack": "SquareAttack",
  "type": "black_box_attacks",
  "perturbation": "perturbations/perturbation_1_SquareAttack.png",
  "adversarial": "train/0/adv_1_labelfalse_pred1_SquareAttack.png",
  "original": "originals/orig_1_labelfalse_SquareAttack.png",
  "label": 0,
  "prediction": 1
}
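The same fields are also encoded in the file names described above. The following sketch (the regex and helper function are illustrative, not part of the dataset) recovers them from an adversarial image path:

import re
from pathlib import Path

# Pattern for adversarial files: adv_{id}_label{true/false}_pred{pred_label}_{attack_name}.png
ADV_PATTERN = re.compile(r"adv_(?P<id>\d+)_label(?P<label>true|false)_pred(?P<pred>\d+)_(?P<attack>.+)")

def parse_adversarial_name(path):
    """Extract id, true label, predicted label, and attack name from a filename."""
    match = ADV_PATTERN.fullmatch(Path(path).stem)
    if match is None:
        return None
    return {
        "id": int(match.group("id")),
        "label": 1 if match.group("label") == "true" else 0,
        "prediction": int(match.group("pred")),
        "attack": match.group("attack"),
    }

print(parse_adversarial_name("train/0/adv_1_labelfalse_pred1_SquareAttack.png"))
# -> {'id': 1, 'label': 0, 'prediction': 1, 'attack': 'SquareAttack'}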
The dataset contains both black-box and white-box adversarial attacks. Black-box attacks (such as SquareAttack, HopSkipJump, SimBA, and the Boundary attack) only query the model's outputs and do not require access to model gradients; white-box attacks do require gradient access.
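For context, black-box examples of this kind can be generated with the Adversarial Robustness Toolbox (ART). The sketch below is illustrative only: it wraps a stand-in classifier and uses assumed hyperparameters (eps, max_iter), not the exact model or settings used to build this dataset:

import numpy as np
import torch.nn as nn
from art.attacks.evasion import SquareAttack
from art.estimators.classification import PyTorchClassifier

# Stand-in classifier with 2 outputs; in practice this would be the fine-tuned
# PCAM model (loading it is outside the scope of this sketch).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 2))

classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(3, 224, 224),
    nb_classes=2,
    clip_values=(0.0, 1.0),
)

# SquareAttack is black-box: it only queries the classifier's outputs
attack = SquareAttack(estimator=classifier, eps=8 / 255, max_iter=100)

x = np.random.rand(4, 3, 224, 224).astype(np.float32)  # batch of images in [0, 1]
x_adv = attack.generate(x=x)
print("perturbation L-inf norm:", np.abs(x_adv - x).max())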
import json
import torch
from torchvision import transforms
from PIL import Image
from pathlib import Path

# Load the dataset information
with open('organized_dataset/dataset.json', 'r') as f:
    dataset_info = json.load(f)["train"]["rows"]  # Access the rows in the train split

# Define transformation
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor()
])

# Function to load and process images
def load_image(image_path):
    img = Image.open(image_path).convert("RGB")
    return transform(img)

# Example: loading a set of related images (original, adversarial, and perturbation)
for entry in dataset_info:
    # Load adversarial image (path keys follow the metadata example above)
    adv_path = Path('organized_dataset') / entry['adversarial']
    adv_image = load_image(adv_path)

    # Load original image
    orig_path = Path('organized_dataset') / entry['original']
    orig_image = load_image(orig_path)

    # Load perturbation if available
    if entry.get('perturbation'):
        pert_path = Path('organized_dataset') / entry['perturbation']
        pert_image = load_image(pert_path)

    # Access metadata
    attack_type = entry['attack']
    label = entry['label']
    prediction = entry['prediction']

    print(f"Attack: {attack_type}")
    print(f"True Label: {label}")
    print(f"Model Prediction: {prediction}")
    print(f"Image shape: {adv_image.shape}")  # Should be (3, 224, 224)
Success rate (in %) of each attack against the target model:
{
  "HopSkipJump": {"success_rate": 14},
  "Zoo_Attack": {"success_rate": 22},
  "SimBA": {"success_rate": 99},
  "Boundary_Attack": {"success_rate": 98},
  "SpatialTransformation_Attack": {"success_rate": 99}
}
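If you prefer to consume the repository through the Hugging Face datasets library rather than the raw files, loading it might look like the sketch below (the split name and the image/label columns are assumptions based on the dataset preview above):

from datasets import load_dataset

# Repository id as it appears on the Hub; split and column names assumed from the preview
ds = load_dataset("lens-ai/adversarial-clip-vit-base-patch32_pcam_finetuned", split="train")

example = ds[0]
print(example["image"].size)  # PIL image (96x96)
print(example["label"])       # 0 (negative) or 1 (positive)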
@article{lensai2025adversarial,
  title={Adversarial PCAM Dataset},
  author={LensAI Team},
  year={2025},
  url={https://huggingface.co/datasets/lens-ai/adversarial_pcam}
}