
Our Vision.

Using AI interpretability to make AI systems trustworthy and secure.

Our Story

Built to see inside the machine.

The Problem We Saw

Organizations were deploying AI at scale with no security primitive that understood what was happening inside — only what came out. Output monitoring isn't security. It's hope.

What We Are Building

Starseer applies AI interpretability to the security problem — giving teams the visibility, detection engineering, and runtime assurance to protect AI systems the way they actually work.

Inside The Model

Origin

Why we started Starseer.

"The security industry was treating AI like a black box it couldn't open. We'd spent careers opening black boxes."

The moment it clicked for me was during a tabletop exercise between security teams and AI teams. We were walking through an incident response scenario, and the AI teams hit a wall that was immediately obvious to every security person in the room: they had never thought of themselves as adversary targets. If something went wrong with a model, it was an engineering problem. Not someone messing with their stuff.

The security teams got that instinctively. But they hit their own wall just as fast. Everything they knew how to investigate assumed firewall logs, NetFlow, endpoint telemetry. When the conversation turned to "okay, so what does incident response actually look like here?" the answer was: we have API logs. That's it. We're searching for a needle in a haystack and we don't even know what the needle looks like.

That gap is where Starseer started.


I went looking for the AI equivalent of Sysmon or EDR telemetry and found mechanistic interpretability—a field pioneered by teams at Anthropic, Google DeepMind, and academia that analyzes what’s happening inside a model’s layers and activations, not just its outputs.

The challenge: most research is fragmented and hard to reproduce, with teams repeatedly reinventing setups and focusing narrowly on single models instead of scaling insights across architectures.

That’s when it clicked: this isn’t an AI problem—it’s a tooling problem. Security has long specialized in reverse-engineering black boxes; we just needed to bring that discipline to AI and build instrumentation that works at scale, across models, and repeatably.

I brought the idea to Carl, whose background in reverse engineering and zero-day research immediately confirmed the parallel: this wasn't unsolvable, just a familiar problem lacking the right tradecraft.

So we built Starseer: a platform grounded in AI interpretability, delivering model-level runtime detection and response, AI-native detection engineering, and pre-deployment model validation.

One mission: make AI systems interpretable and secure from the inside out.

Our Approach

Interpretability is the method. Security is the outcome.

Most AI security tools observe outputs and infer intent. We look inside. Here's what that difference means in practice.


Interpretability-grounded detection

Starseer uses mechanistic interpretability (activation analysis, circuit tracing, and behavioral probing) to reveal what AI models actually learn and do at inference, detecting hidden threats such as backdoors and covert capabilities that are invisible to traditional signals. A minimal sketch of the activation-analysis idea appears below.

core differentiator
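
Here is a minimal sketch of what activation analysis can look like in practice, assuming a PyTorch stand-in model, a forward hook, and a random probe direction. Every name and threshold below is illustrative, not Starseer's implementation.

import torch
import torch.nn as nn

class ToyModel(nn.Module):
    """Stand-in for a transformer block; real targets would be LLM layers."""
    def __init__(self, d: int = 64):
        super().__init__()
        self.layer1 = nn.Linear(d, d)
        self.layer2 = nn.Linear(d, d)

    def forward(self, x):
        return self.layer2(torch.relu(self.layer1(x)))

model = ToyModel()
captured = {}

def save_activation(module, inputs, output):
    # Forward hook: capture the intermediate activation for analysis.
    captured["layer1"] = output.detach()

model.layer1.register_forward_hook(save_activation)

# A linear probe is a direction in activation space associated with a
# behavior of interest. Random here; in practice it would be trained.
probe_direction = torch.randn(64)

x = torch.randn(1, 64)   # stand-in for an embedded prompt
_ = model(x)             # the forward pass populates `captured`

# Project the captured activation onto the probe direction and flag
# inputs whose internal state scores past an (illustrative) threshold.
score = (captured["layer1"] @ probe_direction).item()
print(f"probe score: {score:.3f}")
if abs(score) > 2.0:
    print("activation pattern flagged for review")

In a real deployment the probe would be trained on labeled internal states of the target model, and the hooks would attach to actual transformer layers rather than a toy MLP.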

Output monitoring alone

Tools that monitor AI outputs can detect anomalies in what a model says or does, but they're blind to why, and blind to threats that deliberately produce normal-looking outputs. Behavioral monitoring is necessary. It isn't sufficient.

industry baseline

Pre-deployment through runtime

Model Validation confirms that only approved models are in use. AI-DE engineers the detections that run at runtime. AI-EDR runs those detections against live AI endpoints and responds. Together, the three products cover the full surface with no handoff gaps between them. A sketch of one validation step appears below.

platform vision
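
As one concrete illustration of the pre-deployment step, the sketch below checks a model artifact's SHA-256 hash against an allowlist before it is served. The allowlist format, function names, and placeholder digest are assumptions made for illustration, not the Model Validation product.

import hashlib
from pathlib import Path

# Hypothetical allowlist: model name -> SHA-256 of the approved weights.
# The digest below is a placeholder, not a real checksum.
APPROVED_MODELS = {
    "example-model-v1": "0" * 64,
}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    # Stream the file through SHA-256 so large checkpoints fit in memory.
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_approved(model_name: str, weights_path: Path) -> bool:
    expected = APPROVED_MODELS.get(model_name)
    return expected is not None and sha256_of(weights_path) == expected

# Usage: refuse to serve a model that fails validation.
if not is_approved("example-model-v1", Path("weights.safetensors")):
    raise RuntimeError("model failed pre-deployment validation")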

The Team

The people building this.

Security practitioners, national lab researchers, AI researchers, and engineers who've seen the gap firsthand and decided to close it.


Tim Schulz

CEO & Co-founder

Carl Hurd

CTO & Co-founder


Yoni Klein

Head of Sales

Rachael Southworth

Head of Operations

Shawn Edwards

Founding AI Security Engineer


Lillian Warner

Technical Product Manager

Westin Perry

AI Engineer

Advisors

Shaped by people who've been there.

Our advisors bring deep backgrounds across AI safety research, enterprise security, and federal threat intelligence — the three domains Starseer sits at the intersection of.


Rob Joyce

Former Director of Cybersecurity, NSA

Dr. Gary McGraw

Berryville Institute of Machine Learning

Backed By

Investors who understand the mission.

We raised from investors who have conviction in AI security as a category, not just AI as a category. That distinction matters for how we build and who we build it with.