Glyphward vs LLM Guard

LLM Guard is the most credible open-source text prompt-injection library available, stewarded by Protect AI and shipped MIT-licensed. It is also, by its charter, a text library: no image scanners, no audio scanners, and self-hosted only. Glyphward is a managed multimodal prompt-injection (PI) scanner: image and audio first, a free tier with paid plans from $29/mo, and a single API call to integrate. They are genuinely different products, often deployed together.

TL;DR

Use LLM Guard if your threat is text PI, you want zero vendor lock-in, and you have the engineering capacity to host and maintain it. Use Glyphward if your threat is image or audio PI and you want to skip the hosting and maintenance. Most teams we talk to run LLM Guard's PromptInjection scanner on text I/O and call Glyphward for uploads — that is the intended stack, and both products are stronger for it.

What each product actually is

LLM Guard is a Python library — pip install llm-guard — that runs inside your service. It exposes an input-scanner pipeline and an output-scanner pipeline. Scanners include PromptInjection (using Hugging Face classifiers such as deberta-v3-base-prompt-injection-v2), Toxicity, Secrets, BanSubstrings, Anonymize, and more. You compose the scanners you need, you host the models, you scale the inference, you maintain it as the threat landscape moves.
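Composing that pipeline looks roughly like the sketch below, assuming `llm-guard` is installed (the underlying Hugging Face classifier downloads on first use, so the import is kept lazy). The `PromptInjection` scanner and its `(sanitized, is_valid, risk_score)` return shape are the library's API as documented at the time of writing; the `verdict` helper and its thresholds are our own illustration, not part of LLM Guard.

```python
def scan_user_text(prompt: str, threshold: float = 0.8):
    """Run LLM Guard's PromptInjection input scanner on one prompt.

    Requires `pip install llm-guard`; the classifier weights are
    fetched on first use, hence the lazy import.
    """
    from llm_guard.input_scanners import PromptInjection

    scanner = PromptInjection(threshold=threshold)
    # Every LLM Guard scanner returns (sanitized_text, is_valid, risk_score).
    sanitized, is_valid, risk_score = scanner.scan(prompt)
    return sanitized, is_valid, risk_score


def verdict(is_valid: bool, risk_score: float) -> str:
    """Illustrative policy (ours, not the library's): hard-block
    invalid prompts, flag borderline ones for review."""
    if not is_valid:
        return "block"
    return "flag" if risk_score >= 0.5 else "allow"
```

Typical use is `verdict(*scan_user_text(user_input)[1:])` at the edge of your request handler, before the prompt reaches the model.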

Glyphward is a managed HTTPS API. You POST an image or audio file and receive a 0–100 risk score plus bounding-box coordinates on flagged pixels or waveform windows. We run the models (FigStep- and AgentTypo-trained text-in-image heads for pixels, waveform-anomaly plus Whisper-small transcript ensemble for audio), we keep them current as new attack vectors land, we publish the confusion matrix per release.
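Calling it is one request. The sketch below is stdlib-only Python; the hostname, auth header, and JSON field names (`regions`, `risk`) are illustrative placeholders rather than a published contract, so check the API docs for the real shapes.

```python
import json
import urllib.request


def scan_upload(path: str, api_key: str,
                url: str = "https://api.glyphward.example/v1/scan") -> dict:
    """POST raw image/audio bytes to the scan endpoint and return the
    parsed JSON verdict. URL and headers here are placeholders."""
    with open(path, "rb") as f:
        body = f.read()
    req = urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/octet-stream",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def regions_over(result: dict, min_risk: int = 70) -> list:
    """Pick out flagged bounding boxes / waveform windows at or above
    a risk cutoff on the 0-100 scale (cutoff is an assumed tuning point)."""
    return [r for r in result.get("regions", []) if r.get("risk", 0) >= min_risk]
```

`regions_over` is the part you would tune: block the upload outright on a high overall score, or route individual flagged regions to review.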

Honest feature table

| | LLM Guard | Glyphward |
| --- | --- | --- |
| Licence | MIT, open source | Commercial, managed service |
| Text PI detection | Yes — PromptInjection scanner | Out of scope |
| Image PI (typographic, FigStep) | Not shipped | OCR + CLIP + text-in-image head |
| Audio PI (WhisperInject) | Not shipped | Waveform + transcript ensemble |
| Hosting | Self-host (your GPU/CPU) | Managed (our infra) |
| Keeping models current | Your responsibility | Ours (roadmap per release) |
| Latency | Depends on your hardware | Sub-200 ms p95 target |
| Cost | Infra + eng time | $0 / $29 / $99 per month |
| Data residency | 100% in your VPC | Retention configurable; free tier discards bytes after feature extraction |

Where Glyphward wins

Image and audio PI coverage that LLM Guard does not ship, a managed service that keeps detector models current as new attack vectors land, and a sub-200 ms p95 latency target with nothing for you to host or scale.

Where LLM Guard wins

Text PI detection, an MIT licence with zero vendor lock-in, full data residency in your own VPC, and a marginal cost of nothing beyond the infrastructure and engineering time you already spend.

When to pick which

Pick LLM Guard if your threat is text, you have engineering bandwidth, and either cost-at-scale or data-residency requirements push you toward self-hosting. This is especially true in regulated industries (finance, healthcare), where a managed API can be a procurement non-starter.

Pick Glyphward if your threat is image or audio uploads, or if you want a scanner you don't have to operate. Most indie AI builders we see — avatar SaaS, voice agents, screenshot-reading agents — do not want to run a second inference stack for guardrails.

"Run both" is the default recommendation for teams with mixed surfaces: LLM Guard's input scanners on the text path, Glyphward on the upload path. Two independent layers, each specialised.

Integration sketch (running both)

In practice this looks like an upload handler that calls Glyphward's POST /v1/scan, blocks or flags on the risk score, extracts text (OCR, transcript) downstream, and feeds the extracted text through an LLM Guard PromptInjection scanner before it reaches your LLM. You have two independent detectors for two independent stages of the pipeline. When one misses, the other is there.
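Stripped of I/O, that handler is plain orchestration. In the sketch below the scanner calls are injected as callables (stand-ins for a Glyphward client and an LLM Guard `PromptInjection` scan), and the block threshold of 70 on the 0–100 scale is an assumed tuning point, not a recommendation.

```python
from typing import Callable


def guard_upload(
    file_bytes: bytes,
    scan_media: Callable[[bytes], dict],   # stand-in for Glyphward POST /v1/scan
    extract_text: Callable[[bytes], str],  # your OCR / transcript step
    scan_text: Callable[[str], tuple],     # stand-in for LLM Guard scanner.scan
    block_at: int = 70,
) -> dict:
    """Two independent detectors for two independent pipeline stages."""
    # Stage 1: pixels / waveform, before any text is extracted.
    media = scan_media(file_bytes)
    if media.get("risk", 0) >= block_at:
        return {"allowed": False, "stage": "media", "risk": media["risk"]}

    # Stage 2: whatever text survives extraction goes through the text scanner.
    text = extract_text(file_bytes)
    _, is_valid, risk_score = scan_text(text)
    if not is_valid:
        return {"allowed": False, "stage": "text", "risk": risk_score}

    return {"allowed": True, "text": text}
```

Because both scanners are injected, the same function is trivial to unit-test with stubs before wiring in real clients.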

A worked example lives in our free-tier API docs. The whole integration is a few dozen lines.

FAQ

Does LLM Guard have an image or audio scanner I missed?

Not in the main project at the time of writing. The scanner list in the README is text-scope. If that changes, we will update this page; Protect AI's roadmap is theirs to announce.

Can I self-host Glyphward?

Not at v1. The compounding-corpus effect needs signature sharing across customers. Offline / self-hosted mode is on the Team-tier roadmap for compliance-constrained buyers; if that is a blocker, tell us and we will give you a position in the queue.

Will running both double my bill?

No. LLM Guard is free software — the cost is the GPU/CPU you already run inference on, plus engineering time. Glyphward starts at $0 and is a flat-rate monthly API. Running both does not compound.

What about latency?

LLM Guard's latency depends on your hardware and which scanners you run; a typical PromptInjection call on a warm model is a few tens of milliseconds. Glyphward targets sub-200 ms p95 on our managed infra. For most real-world upload paths, the network hop is a smaller variable than the hash-and-extract pipeline.

What do you do with uploaded bytes?

Free tier: perceptual hash + detector features extracted, bytes discarded. Paid tiers: you choose between day-1 deletion and 30-day opt-in retention for compare reports. We never train third-party models on user uploads and we never sell the corpus.
