protectai-prompt-injection-onnx

protectai-prompt-injection-onnx is a prompt injection risk classifier from protectai/deberta-v3-base-prompt-injection-v2, packaged in ONNX format.

The classifier can be used to evaluate prompt injection risk in a prompt.

Model Description

  • Developed by: protectai
  • Quantized by: llmware
  • Model type: deberta
  • Parameters: 184 million
  • Model Parent: protectai/deberta-v3-base-prompt-injection-v2
  • Language(s) (NLP): English
  • License: Apache 2.0
  • Uses: Prompt safety
  • RAG Benchmark Accuracy Score: NA
  • Quantization: int4

Model Card Contact

llmware on github

llmware on hf

llmware website

Downloads last month
8
Inference Providers NEW
This model is not currently available via any of the supported third-party Inference Providers, and HF Inference API has been turned off for this model.

Model tree for llmware/protectai-prompt-injection-onnx

Collection including llmware/protectai-prompt-injection-onnx