Frontier Model Wrapper
// How the category uses it
The term is not used by upstarts, who describe their AI as proprietary. In practice, a meaningful portion of the AI-wrapper category runs on rented frontier models with a restaurant-domain system prompt.
A frontier model wrapper takes a public LLM API (OpenAI, Anthropic, Google), adds a domain-specific system prompt, and presents the output as proprietary AI. The system prompt is configurable; the model is not. When the underlying model is deprecated or changes behavior, the wrapper product changes with it. The wrapper company does not own that roadmap.
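The architecture described above can be sketched in a few lines. This is a minimal illustration, not any actual product's code: the model name, prompt text, and function names are all illustrative. The point it makes is structural: the only domain-specific asset is a prompt string, and the model identifier is a rented dependency.

```python
# Minimal sketch of a frontier model wrapper. All names are illustrative.
# The entire "proprietary AI" is the system prompt; the model is rented.

RESTAURANT_SYSTEM_PROMPT = (
    "You are a restaurant-operations assistant. Answer questions about "
    "scheduling, inventory, and food cost using standard industry practice."
)

def build_request(user_message: str, model: str = "gpt-4o") -> dict:
    """Assemble a chat-style request payload for a rented frontier model.

    Everything domain-specific lives in RESTAURANT_SYSTEM_PROMPT. If the
    provider deprecates `model` or changes its behavior, the product's
    behavior changes with it.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": RESTAURANT_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }

req = build_request("How do I cut food cost by two points?")
```

Sending `req` to any chat-completions-style endpoint is the whole product. Replicating it means copying one string and one default argument.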
// How superGM defines it
If your AI platform can be replicated this afternoon with a ChatGPT subscription and an hour of prompt engineering, the platform is a wrapper. That is not a criticism. It is a category description. Wrappers are useful. They are not defensible. They are not autonomous. They are a very good prompt.
// Why it matters
Operators evaluating AI platforms should ask which frontier model the intelligence runs on, what happens when that model is deprecated, and what happens if that model provider changes its pricing or policy. The answers describe the platform as a wrapper or as something else. In our experience, the category is more wrapper than it admits.
- AskColette: Conversational AI over a frontier model with a restaurant operations system prompt. Any engineer can replicate the core functionality.
- Many upstarts in the category: Without proprietary training data and without their own model, the AI layer is by definition a wrapper.
A product whose intelligence layer is a prompt over OpenAI or Anthropic. The product is the prompt.
Most operators who apply will not be selected.
We work with operators whose operation, culture, and competitive position fit what we built this for. We review every application individually. We select from the backlog.
If you are reading this because a competitor sent it to you, they may already be in production. We don’t confirm or deny active deployments.