Hey — you probably got an email from me.
I'm Issac. I build AI governance tools that let regulated teams ship LLM features without getting sued, fined, or owned. Three products, three real prices, no sales deck.
This page exists so you can decide on your own terms. No calendar booking pressure, no gated PDFs, no "request a quote." Here's what's actually available:
Stop your chatbot from promising refunds it can't deliver. Moffatt v. Air Canada made this case law.
ISO 42001 Evidence $50K–150K/yearAudit-ready documentation for banks, insurers, and regulated enterprises facing ISO 42001, SR 11-7, and EU AI Act.
AI Red Team $5K–50K/engagement91 red team prompts · 0 false positives against your LLM. Branded PDF report. Remediation roadmap. 4-week delivery.
If you're wondering why you should trust me
I've been building this for a while. The framework underneath all three products is open source (MIT), patent pending, and ships with 99.42% combined AUC and 91/91 red team prompts blocked in CI. It's not a deck. It's running code you can clone.
Hey — you got an email from me about chatbot liability.
Quick version: after Moffatt v. Air Canada, any company running an AI chatbot is legally on the hook for what it promises. One wrong answer = a binding contract. I built CX Refund Guardrail to prevent that, specifically for mid-market teams who can't afford Decagon or Sierra.
What it actually does
It's policy-enforcement middleware that sits between your LLM and your customer. Every outgoing message gets scored against your actual policies before it ships. Anything outside your allowed envelope gets caught, redirected, or escalated to a human.
- Works with any LLM (OpenAI, Anthropic, local, custom fine-tunes)
- Under 100ms added latency per message
- Policies live in YAML or JSON, versioned in your repo
- Audit log of every decision, ready for legal/compliance
- 30-day trial, up to 1,000 messages, no credit card
What it costs
Full details, included features, and FAQ on the CX Guardrail page.
If you want to talk
Reply to the email, or book a 30-minute discovery call. I'll ask about your current chatbot setup, what your policy risks look like, and whether this is actually a fit. If it isn't, I'll tell you and point you somewhere better.
Hey — you got an email from me about AI audit readiness.
Three regulatory frameworks are converging on your AI deployments in 2026: ISO 42001, SR 11-7, and the EU AI Act. Your auditors are going to start asking for documented adversarial testing, risk assessments, and drift monitoring. Most banks and insurers already deployed LLMs and don't have that paper trail. I deliver it.
What I actually deliver
- Test evidence package — JSONL output of 91 red team prompts (0 false positives) run against your LLM, plus 150/150 compliance tests across 13 frameworks, mapped to ISO 42001 clauses
- Risk assessment report — formatted for SR 11-7 review, signed and timestamped
- Drift monitoring setup — ongoing adversarial probe suite with quarterly re-tests
- Audit response dossier — everything your compliance team hands to auditors, pre-formatted
I don't certify you — certification is a separate process. I deliver the evidence your compliance team presents to auditors. You ingest the output into Vanta, Drata, or whatever GRC platform you already use.
What it costs
Full scope, ISO clause mapping, and FAQ on the ISO 42001 Evidence page.
If you want to talk
Reply to the email, or book a 30-minute audit readiness review. I'll ask what LLMs you've deployed, what your auditors have already asked for, and whether a 3-week initial engagement makes sense for you. MNDA available immediately if we get into specifics.
Hey — you got an email from me about stress-testing your LLM.
If you've shipped an LLM feature — a chatbot, an agent, a RAG system — you need to know how it breaks before an attacker or a reporter does. I run the 91-prompt red team (0 false positives) against your application, deliver a branded PDF report mapped to your threat model, and hand you a prioritized remediation roadmap. Four weeks. Real findings.
What we test for
- Prompt injection — direct, indirect, multi-turn, context poisoning
- Jailbreaks — persona hijacks, instruction override, safety classifier bypass
- Data exfiltration — system prompt extraction, training data leak, PII recovery
- Agent abuse — tool misuse, privilege escalation, infinite loops, cost bombs
What the engagement looks like
Week 1 scoping call → Week 2 execution against your staging endpoint → Week 3 analysis and triage → Week 4 delivery call with your engineering and security team. Zero impact on production. NDA signed before any API keys change hands.
What it costs
Full scope, example findings, and FAQ on the Red Team page.
If you want to talk
Reply to the email, or book a 30-minute scoping call. I'll ask about your LLM setup, what kind of threats worry you most, and whether a Quick Scan is the right starting point. If your risk surface is bigger than that, we'll scope a Deep Engagement instead.
Why a solo builder and not a Fortune 500 vendor?
Because you don't need Fortune 500 pricing. Lakera, Robust Intelligence, and HiddenLayer sell to Fortune 500 at $150K-$1M per engagement. Harvey.ai wants a 20-seat minimum for legal AI. Decagon starts at six figures. If you're mid-market, you're not their customer — you're their future customer, three years from now.
I'm building for the teams in the middle: the 50-person SaaS company deploying OpenAI for support, the regional bank facing ISO 42001 pressure, the AI startup shipping agents to enterprise customers, the fintech that can't afford to hallucinate a refund. Real pricing, real delivery, no gated demos.
The framework I use is open source on GitHub. You can clone it, run the tests, read the patent, and decide whether the math checks out before we ever talk. I'd actually prefer you did.
About the name
SCBE-AETHERMOORE is the framework, not the company. "SCBE" is Symphonic Cipher Based Encryption — the hyperbolic geometric model that prices attacks out of affordability. "Aethermoore" is a reference to the world I built around the math, which also became a novel (The Six Tongues Protocol). Yes, I'm one of those people.
Not interested? No hard feelings. A few options:
- Just ignore the email — I don't run drip sequences or "just checking in" follow-ups more than twice.
- Reply with "not a fit" and I'll mark you as such and move on.
- Forward this page to someone on your team who might care more.
- If you think the framework is interesting but you're not a buyer, star the repo — that genuinely helps more than you'd think.