Confident Security didn’t just ask why AI privacy was broken; they built a cryptographically verifiable answer that stands up in court, in audits, and in a room full of zero-trust CISOs. No tape, no decoys. Just engineering that whispers provable privacy into the ears of banks, browsers, and sovereign AI agencies trying not to wake the beast of non-compliance.
Founded in 2024 by Jonathan Mortensen, PhD, a Stanford-trained informaticist with a resume that reads like a secure data pipeline, Confident Security just stepped out of stealth with $4.2 million in seed funding. The round was led by Decibel, with backing from South Park Commons, Ex Ante, and Swyx. Every one of those investors knows the difference between noise and signal. That’s not a metaphor; it’s the whole damn blueprint. CONFSEC, the company’s core product, is pitched as the Signal for AI.
Let’s break this down. You’ve got a global arms race of LLMs hoovering up data like it’s free popcorn, and every enterprise exec praying their prompts don’t end up as training fodder for some fine-tuned hallucination downstream. CONFSEC doesn’t hope that won’t happen; it proves it won’t. Built on the same principles as Apple’s Private Cloud Compute architecture, it’s the first enterprise-grade implementation of provably private AI inference outside Cupertino. That’s not “trust us.” That’s “audit this.”
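What does “audit this” look like in practice? Below is a minimal, illustrative sketch of attestation-gated prompting, assuming a hypothetical inference node that publishes a vendor-signed measurement of the code it runs; the client refuses to send anything until that measurement matches an allowlist of audited builds. Every name in it (ALLOWED_MEASUREMENTS, Attestation, the Ed25519 vendor key) is invented for illustration, and real TEE or TPM attestation involves hardware-rooted quotes, not a bare signature.

```python
import hashlib
from dataclasses import dataclass

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# Client-side allowlist of audited build hashes (hypothetical values).
ALLOWED_MEASUREMENTS = {
    hashlib.sha256(b"audited-inference-build-v1").hexdigest(),
}

@dataclass
class Attestation:
    measurement: str   # hash of the code the node claims to be running
    signature: bytes   # vendor signature over that measurement

def verify_attestation(att: Attestation, vendor_key: Ed25519PublicKey) -> bool:
    """Trust the node only if its measurement is both signed and allowlisted."""
    try:
        vendor_key.verify(att.signature, att.measurement.encode())
    except InvalidSignature:
        return False
    return att.measurement in ALLOWED_MEASUREMENTS

# Demo: the vendor signs a measurement, the client checks it before sending.
vendor_private = Ed25519PrivateKey.generate()
measurement = hashlib.sha256(b"audited-inference-build-v1").hexdigest()
att = Attestation(measurement, vendor_private.sign(measurement.encode()))

assert verify_attestation(att, vendor_private.public_key())
print("attestation verified: safe to encrypt the prompt to this node")
```

The move that matters: trust is checked against a published, auditable artifact instead of asserted in a privacy policy. That’s the core idea borrowed from Private Cloud Compute.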
The tech stack reads like a who’s-who of applied cryptography: Oblivious HTTP (OHTTP), blind signatures, trusted execution environments (TEEs), TPM attestation. Every prompt enters encrypted. Every inference is wrapped in usage-conditioned decryption, meaning a prompt can be decrypted only by software attested to use it and then discard it. Metadata? Vapor. Logs? Nonexistent. The system is stateless, verifiable, and built to satisfy the EU AI Act, U.S. privacy laws, and whichever regulation tomorrow’s lawyers dream up while doomscrolling.
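To make the metadata-goes-poof claim concrete, here’s a deliberately toy Python sketch of the OHTTP-style split of trust. It uses Fernet symmetric encryption as a stand-in for the HPKE encapsulation that real Oblivious HTTP (RFC 9458) performs, and it omits the blind-signature tokens that would authorize requests anonymously. Nothing here is CONFSEC’s actual API; every function and name is hypothetical.

```python
from cryptography.fernet import Fernet

# A single shared key stands in for the gateway's public HPKE key.
gateway_key = Fernet.generate_key()
gateway = Fernet(gateway_key)

def client_encapsulate(prompt: str) -> bytes:
    """Client seals the prompt so only the gateway can open it."""
    return Fernet(gateway_key).encrypt(prompt.encode())

def relay_forward(sealed: bytes, client_ip: str) -> bytes:
    """Relay sees network metadata (client_ip) but only opaque ciphertext.
    It drops the client's identity before forwarding."""
    return sealed

def gateway_infer(sealed: bytes) -> bytes:
    """Gateway decrypts and answers statelessly: no identity, no logs."""
    prompt = gateway.decrypt(sealed).decode()
    answer = f"echo: {prompt}"   # placeholder for the actual model call
    return gateway.encrypt(answer.encode())

sealed = client_encapsulate("summarize acme corp's Q3 numbers")
reply = gateway_infer(relay_forward(sealed, client_ip="203.0.113.7"))
print(gateway.decrypt(reply).decode())
```

The privacy comes from separation of duties, not from any single cipher: the relay and the gateway are operated by different parties, so neither one can ever reconstruct the (who, what) pair on its own.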
Mortensen and his team, pulling talent from Google, Apple, Databricks, Red Hat, and HashiCorp, aren’t selling fairy tales. They completed third-party security audits before launch. They’re already in pilot talks with banks, national AI programs, and major browsers. If you’re building AI that can’t afford leaks, not just for legal reasons but for ethical ones, you’re going to want what CONFSEC is serving.


