Introducing Moonbounce

New Name, Same Mission

You used to know us as Clavata. Get to know Moonbounce.

AI systems are making decisions. Who's making sure they're the right ones?

Every major shift in computing has outpaced its safeguards.

Cloud infrastructure needed security and identity. Online payments needed fraud detection. APIs needed rate limiting and observability. Each time, a fundamental change in how computing worked created a gap – and entirely new infrastructure had to emerge to close it.

AI has reached that same moment.

Adoption is widespread. Capabilities are proven. Deployment is accelerating across every industry. But the systems meant to govern AI behavior were built for a much slower world – one where iteration was slow by design, enforcement could vary by reviewer, and a problem could be cleaned up before it spread too far.

That world is gone.

The models aren’t the problem

When something goes wrong in an AI system, the instinct is usually to ask: was the model good enough?

That's the wrong question.

The harder issue – the one a better model doesn’t fix – is the gap between what an organization intends its AI to do and what it actually does. Those two things diverge constantly, across geographies, use cases, edge cases, and contexts that no one anticipated when the policy was written.

We've watched this play out for years in the trust & safety community. The world's largest platforms have spent billions on human review – and still achieved inconsistent results. Policy iteration cycles stretched for months. Enforcement varied by reviewer, by shift, by context. Bad actors adapted faster than systems could respond.

And that was before generative AI turned content creation into a continuous, high-velocity decision-making process.

What actually needs to change

The instinct, when enforcement fails, is to write better policies. More detailed guidelines, clearer rules. But the problem usually isn't that the rules are wrong – it's that they were never specified precisely enough to be executed consistently in the first place.

Consider a policy like: "Graphic conflict imagery is prohibited, except in cases of journalism or public interest reporting."

Reasonable enough. But when war breaks out overnight, what was fringe becomes the front page. Is a video of an airstrike documentation or provocation? Does the answer change based on who's sharing it, where, or why? And when the answer needs to change – how fast can your policy keep up?

When you can't answer those questions, you don't have a policy. You have a preference. And preferences don't scale.
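
To make the contrast concrete, here is a minimal sketch in Python (illustrative only, not Moonbounce's rule language; the classifier and publisher signals are hypothetical) of the same policy with every input the decision depends on made explicit:

    from dataclasses import dataclass

    @dataclass
    class Submission:
        has_graphic_conflict_imagery: bool  # hypothetical upstream classifier signal
        is_journalism: bool                 # hypothetical verified-publisher signal
        is_public_interest: bool            # hypothetical editorial designation

    def allowed(sub: Submission) -> bool:
        """Graphic conflict imagery is prohibited unless a named exception applies."""
        if not sub.has_graphic_conflict_imagery:
            return True
        return sub.is_journalism or sub.is_public_interest

    # Because the rule is code, edge cases become testable instead of debatable:
    assert allowed(Submission(True, True, False))       # airstrike footage from a newsroom
    assert not allowed(Submission(True, False, False))  # same footage, no exception applies

The syntax isn't the point. The point is that each question above now has an explicit, testable answer, and changing the answer means changing a line of logic rather than reinterpreting a document.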

The same failure mode that's plagued content moderation for a decade is now showing up in AI governance – in the guardrails organizations are building around agentic systems, customer-facing models, and AI-generated content. Values are expressed in documentation. Behavior is derived through interpretation. And interpretation, at machine speed, is a liability. Changing that has been our mission since before most people were paying attention.

Introducing Moonbounce

Today, we're launching as Moonbounce with $12M in funding led by Amplify Partners and StepStone Group (Nasdaq: STEP), with participation from angel investors PrimeSet and Josh Leslie, former CEO of Cumulus Networks and Gremlin. This funding, alongside our new brand, will propel our mission forward.

Our core offering remains the same: we’re a control layer for AI systems – built to narrow the delta between what organizations intend and what their systems actually do.

Better models still operate without explicit behavioral standards. They still require someone to define what "correct" looks like, enforce it consistently across contexts, and know when it's being violated.

That's what we do.

Organizations use Moonbounce to define behavioral standards precisely – not as values in a document, but as executable, testable logic – and enforce them in real time as decisions are being made. 

Most moderation is reactive. By the time a system flags a problem, it’s already happened. Moonbounce works upstream – intercepting violations before content ever reaches a user. Prevention, not detection.
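
As a rough sketch of that shape (the evaluate() call and Verdict type here are hypothetical, not Moonbounce's actual API), the policy check sits inline in the request path, so a blocked output is never delivered:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Verdict:
        allowed: bool
        rule: str  # which rule fired, so every decision is traceable afterward

    def respond(prompt: str,
                generate: Callable[[str], str],
                evaluate: Callable[[str], Verdict]) -> str:
        """Hold the model's output upstream until the policy check clears it."""
        draft = generate(prompt)   # candidate output, not yet visible to any user
        verdict = evaluate(draft)  # policy decision happens in the request path
        if not verdict.allowed:
            return f"Response withheld (rule: {verdict.rule})."
        return draft               # only compliant content reaches the user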

When context shifts, when regulations change, when new risks emerge, policies can be updated and deployed in hours instead of months. And because every decision is traceable, teams can explain outcomes, fix blind spots, and improve continuously rather than chasing problems after they've spread.

Moonbounce is already deployed across dating platforms, AI chat applications, and generative content sites, processing 50M+ pieces of content daily for a customer base of 250M+ monthly active users.

Why now

AI is being deployed at scale before its risks are fully understood. In many cases, we don't yet know what will go wrong. We haven't yet paid the price of experience.

That uncertainty isn't an argument for slowing down. It's an argument for building the infrastructure that makes it safe to move fast – the same way every prior computing shift eventually did. We want AI to win, but that only happens if it's trustworthy.

The question organizations face today isn't whether to deploy AI. It's whether they have a control layer capable of keeping behavior aligned with intent as these systems operate at scale, across contexts, and with increasing autonomy.

We built Moonbounce because most organizations don't have that yet.

They can.