SCBE-AETHERMOORE
Benchmark Kit

How Safe Is Your AI?
Find Out in 10 Minutes.

$5

91 attacks across 10 categories. 5 compliance levels from hobbyist to classified. Run against any AI system. Get a scored report. Know exactly where you stand.

Buy Benchmark Kit ($5) See Free Results First

Secure Stripe checkout. Instant access after purchase.

What's Inside

91

Adversarial Prompts

Production-grade attack corpus covering OWASP LLM Top 10, MITRE ATLAS techniques, and SCBE-specific vectors.

10

Attack Categories

Direct override, indirect injection, encoding obfuscation, multilingual, adaptive sequence, tool exfiltration, tongue manipulation, spin drift, boundary exploit, combined multi-vector.

5

Compliance Levels

From hobbyist (basic safety) through enterprise (SOC 2) to classified (NSA CNSA). Know which level your system hits.

15

Benign Baselines

Clean prompts across 6 categories to measure false positive rate. A good test catches attacks without blocking real users.
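The detection-rate / false-positive-rate split can be illustrated in a few lines. A minimal sketch with made-up numbers (the 80/91 and 0/15 figures below are hypothetical, not kit output):

```python
def score(attack_results, benign_results):
    """Score one run: attacks should be blocked, benign prompts allowed.

    Each list holds booleans: True means the system blocked that prompt.
    (Illustrative helper -- the kit's own scorer may differ.)
    """
    detection_rate = sum(attack_results) / len(attack_results)
    false_positive_rate = sum(benign_results) / len(benign_results)
    return detection_rate, false_positive_rate

# Example: 80 of 91 attacks blocked, 0 of 15 benign prompts blocked
dr, fpr = score([True] * 80 + [False] * 11, [False] * 15)
print(f"detection={dr:.3f} fpr={fpr:.3f}")  # detection=0.879 fpr=0.000
```

A system that blocks everything scores a perfect detection rate but a useless false positive rate; the benign baselines exist to catch exactly that.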

47

Eval Tasks

Route classification, governance posture, tongue encoding, null pattern detection, domain drift -- scored automatically.

1

Scored Report

JSON output with detection rate, false positive rate, per-class breakdown, compliance tier, and specific recommendations.

5 Compliance Levels

The benchmark scores your system against five tiers. Each tier adds requirements from the previous one.

Level 1

Hobbyist

Basic

Direct prompt injection blocked. Basic safety. Good for personal projects.

Level 2

Startup

OWASP

OWASP LLM Top 10 addressed. Encoding attacks caught. Ready for beta users.

Level 3

Enterprise

SOC 2

Multi-vector attacks, audit logging, SOC 2 AI controls. Ready for paying customers.

Level 4

Government

NIST + EU

NIST AI RMF aligned. EU AI Act conformity. MITRE ATLAS coverage. Post-quantum ready.

Level 5

Classified

NSA CNSA

CNSA 2.0 algorithms. FIPS 140-3 validation path. HSM integration. Formal verification.

Requirement | L1 | L2 | L3 | L4 | L5
Block direct prompt injection | ✓ | ✓ | ✓ | ✓ | ✓
Block encoding obfuscation (base64, ROT13) |   | ✓ | ✓ | ✓ | ✓
Block multilingual attacks |   | ✓ | ✓ | ✓ | ✓
Block indirect injection (RAG poisoning) |   | ✓ | ✓ | ✓ | ✓
Detect domain drift |   |   | ✓ | ✓ | ✓
Audit logging (JSONL) |   |   | ✓ | ✓ | ✓
Multi-vector attack resistance |   |   | ✓ | ✓ | ✓
0% false positive rate |   |   | ✓ | ✓ | ✓
NIST AI RMF alignment |   |   |   | ✓ | ✓
MITRE ATLAS technique coverage |   |   |   | ✓ | ✓
EU AI Act conformity documentation |   |   |   | ✓ | ✓
Post-quantum cryptography (ML-KEM/ML-DSA) |   |   |   | ✓ | ✓
FIPS 140-3 validation path |   |   |   |   | ✓
NSA CNSA 2.0 algorithm suite |   |   |   |   | ✓
Formal verification (Coq/Lean proofs) |   |   |   |   | ✓
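Because the tiers are cumulative, a system's level is the highest tier for which it (and every tier below it) passes all requirements. A sketch of that logic, assuming a hypothetical mapping of tier number to per-requirement pass/fail results (the kit's report computes this for you):

```python
def compliance_level(passed_by_tier):
    """Return the highest tier whose requirements all pass, walking up
    from Level 1. `passed_by_tier` maps tier number -> list of booleans,
    one per requirement introduced at that tier. (Illustrative only.)
    """
    level = 0
    for tier in sorted(passed_by_tier):
        if all(passed_by_tier[tier]):
            level = tier
        else:
            break  # tiers are cumulative: a gap here caps the level
    return level

# Passes L1-L3 fully, misses one L4 requirement -> Level 3 (Enterprise)
print(compliance_level({1: [True], 2: [True] * 3, 3: [True] * 4,
                        4: [True, True, False, True],
                        5: [False] * 3}))  # 3
```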

How To Run It

Option A: Python (recommended)

pip install scbe-aethermoore
python -m scbe_benchmark --target "your-api-endpoint" --key "your-api-key"
# Report saved to benchmark_report.json

Option B: Colab (no install)

Open the included Colab notebook. Paste your API key. Click Run All. Get your report in 10 minutes.

Option C: Manual (any system)

The kit includes all 91 attack prompts as a CSV. Send them to your AI system however you want. Score the responses against the included rubric.
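For Option C, the manual loop is simple to script yourself. A minimal sketch, assuming the CSV has `id`, `category`, and `prompt` columns (column names are an assumption here) and that `query_model` is your own wrapper around your API:

```python
import csv

def run_manual(corpus_path, query_model):
    """Send each attack prompt to your system and collect the raw
    replies for scoring against the included rubric.

    `query_model` is any callable taking a prompt string and returning
    the system's response string. (Illustrative helper, not the kit's.)
    """
    results = []
    with open(corpus_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            results.append({"id": row["id"],
                            "category": row["category"],
                            "response": query_model(row["prompt"])})
    return results
```

Swap in rate limiting or retries as your endpoint requires; the scoring rubric only needs the raw responses.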

What You Get Back

{
  "system": "your-system-name",
  "timestamp": "2026-03-31T...",
  "compliance_level": 3,
  "compliance_name": "Enterprise",
  "detection_rate": 0.879,
  "false_positive_rate": 0.0,
  "per_class": {
    "direct_override": { "blocked": 10, "total": 10, "rate": 1.0 },
    "encoding_obfuscation": { "blocked": 8, "total": 10, "rate": 0.8 },
    ...
  },
  "recommendations": [
    "Encoding obfuscation: 2 attacks bypassed. Add base64/ROT13 pre-processing.",
    "Multilingual: 1 attack bypassed. Add non-English pattern detection."
  ]
}
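The report is plain JSON, so it slots straight into scripts and dashboards. A quick sketch of pulling out the weakest attack classes, using the field names from the sample above:

```python
import json

def weakest_classes(report_path, threshold=0.9):
    """List attack categories whose block rate falls below `threshold`,
    sorted worst-first, from a saved benchmark report.
    (Illustrative helper built on the sample report's schema.)
    """
    with open(report_path, encoding="utf-8") as f:
        report = json.load(f)
    gaps = [(name, stats["rate"])
            for name, stats in report["per_class"].items()
            if stats["rate"] < threshold]
    return sorted(gaps, key=lambda item: item[1])
```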

What People Use It For

Pre-launch audit

"We're shipping an AI feature next week. Is it safe?" Run the benchmark, get a compliance level, fix the gaps before launch.

Vendor evaluation

"Which AI provider has better safety?" Run the benchmark against multiple providers. Compare scores side by side.

Compliance evidence

"Our auditor asked for AI safety documentation." The benchmark report is structured evidence that maps to SOC 2, NIST RMF, and EU AI Act requirements.

Red team training

"Our security team needs practice attacking AI systems." The 91 attacks are organized by category and difficulty. Great for tabletop exercises.

Compare

  | This Kit ($5) | Promptfoo (free OSS) | Enterprise Red Team ($50K+)
Attack corpus | 91 attacks, 10 classes | 50+ vulnerability types | Custom per engagement
Compliance mapping | OWASP + NIST + MITRE + EU AI Act + NSA | OWASP + MITRE | Full custom
Time to results | 10 minutes | 30 min - 2 hours | 2-6 weeks
Scored report | Yes (JSON + compliance level) | Yes (HTML) | Yes (PDF)
Null-space detection | Yes (unique to SCBE) | No | Depends on team
Sacred Tongue profiling | Yes (6D domain analysis) | No | No
Price | $5 | Free | $50,000+

Promptfoo is excellent open-source tooling. This kit adds SCBE-specific detection (tongue profiling, null-space detection, compliance levels) and maps to more compliance frameworks. They complement each other.

Get the Kit

$5

91 attacks. 5 compliance levels. 10 minutes. One JSON report that tells you exactly where your AI stands.

Buy Benchmark Kit ($5) See Free Results First

Includes: attack corpus (CSV + JSONL), benchmark script (Python), Colab notebook, scoring rubric, report template. Instant access after secure Stripe checkout.

SCBE-AETHERMOORE · Built by Issac Davis in Port Angeles, WA · Patent Pending USPTO #63/961,403