Alice is active in the safety and alignment layer of the agent stack. As agents move from passive chat interfaces to active participants that use tools and execute code, the risk of unintended actions increases. Alice provides guardrails that constrain an agent's actions within safe boundaries.
For developers building in the agent ecosystem, Alice is relevant because it provides the infrastructure to monitor for drift and to evaluate model hardening. By offering runtime guardrails, Alice allows companies to deploy agents that interact with sensitive data or public users with a reduced risk of jailbreaking or prompt injection. The company champions the idea that safety cannot be a one-time check but must be a persistent part of the agent's operating environment.
Alice is the evolution of ActiveFence, a company that established itself as a primary provider of safety tools for user-generated content platforms. As the focus of the technology industry shifted toward generative AI and autonomous agents, the company rebranded to Alice to reflect a broader mission: securing the interactions between humans and machines, and increasingly, between machines themselves. The company is headquartered in New York and Tel Aviv and employs between 200 and 500 people.
The company operates in the layer between foundation models and the applications that use them. While foundation model labs like OpenAI or Anthropic conduct their own internal safety testing, Alice provides third-party evaluations and runtime protections that enterprises require to deploy AI without exposing themselves to reputational or operational risks. This includes model hardening evaluations, pre-deployment red-teaming, and drift detection.
The shift from ActiveFence to Alice is not just a marketing change; it represents a fundamental change in the nature of digital risk. In the previous era of the web, safety meant moderating what one human said to another on a social network. In the AI era, safety involves managing the unpredictable behavior of large language models (LLMs) and the agents built on top of them. An agent with the ability to access a user's calendar or execute code introduces a different class of vulnerability than a standard search bar.
Alice’s suite of solutions targets the entire lifecycle of an AI model. Before a model is released, they conduct evaluations to identify systemic weaknesses. Once a model is in production, their runtime guardrails act as a filter, intercepting potentially harmful inputs or outputs in real-time. This is particularly relevant for the agent ecosystem, where the "agentic" nature of the software means it can take actions in the real world.
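The runtime pattern described above, intercepting inputs before they reach the model and outputs before they reach the user, can be illustrated with a minimal sketch. This is a hypothetical example, not Alice's actual API: the `guard` and `run_agent` functions and the blocked-pattern rules are invented for illustration, and a production guardrail would use far more sophisticated detection than regular expressions.

```python
import re

# Illustrative rules only; real guardrails use trained classifiers,
# not a handful of regexes.
BLOCKED_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),  # crude prompt-injection signal
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like string leaking into output
]

def guard(text: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a piece of input or output text."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(text):
            return False, f"matched blocked pattern: {pattern.pattern}"
    return True, "ok"

def run_agent(prompt: str, model_call) -> str:
    """Wrap a model call with input-side and output-side checks."""
    allowed, reason = guard(prompt)           # filter the input in real time
    if not allowed:
        return f"[blocked input: {reason}]"
    response = model_call(prompt)
    allowed, reason = guard(response)         # filter the output in real time
    if not allowed:
        return f"[blocked output: {reason}]"
    return response
```

The point of the sketch is structural: the guardrail sits outside the model as a separate layer, so it keeps working even as the underlying model is swapped or drifts.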
Competitively, Alice sits alongside companies like Arthur, Robust Intelligence, and Arize, though its history in content moderation gives it a unique, data-driven perspective on harm. The company is backed by significant venture capital, having raised a $100 million Series B in 2021 as ActiveFence, with investors including Highland Europe and CRV. This capital base has allowed it to scale its engineering teams to meet the demand for model hardening as more enterprises move from prototypes to production deployments.
The challenge for Alice is the same one facing all AI safety companies: the speed of model development. As models become more multimodal and capable, the definitions of "safe" and "aligned" continue to shift. Alice's strategy relies on being an independent arbiter of that safety, providing a trust layer that is separate from the model providers themselves. Its approach covers the model lifecycle from initial hardening to ongoing monitoring, ensuring that as models evolve or experience data drift, the safety parameters remain intact. This independent verification is becoming a requirement for heavily regulated industries looking to adopt agentic workflows.
A trust and safety suite for model evaluations, red-teaming, and runtime guardrails.