Onplana Now Supports Azure OpenAI — Use Your Azure Credits for AI
Onplana now ships with dual AI providers: Claude (Anthropic) and GPT-4 via Azure OpenAI. Admins can switch providers, use their own Azure deployment, and keep inference inside their tenant.
Starting today, Onplana ships with dual AI provider support: you can power every AI feature — risk detection, plan generation, the streaming chat panel, natural-language task parsing, everything — with either Claude (Anthropic) or GPT-4 via Azure OpenAI. Admins flip providers from a single screen. No code changes, no redeploys.
This is the feature enterprise customers have been asking us for all year, and it unlocks three things we couldn't offer before.
Why this matters
1. Burn your existing Azure credits
If your organisation already has an Azure enterprise agreement, you're probably sitting on prepaid Azure credits. Those credits can now pay for AI inside Onplana. Point Onplana at your Azure OpenAI deployment and every AI call bills against your Azure spend — not against a separate Anthropic bill your finance team has to approve.
For mid-market customers, this often zeros out the AI line item entirely.
2. Keep inference inside your tenant
With Azure OpenAI, your prompt data never leaves your Azure tenant. No Anthropic API, no cross-region data transfer, no separate data-processing agreement to negotiate. The same DPA you already signed with Microsoft covers AI inference inside Onplana.
For customers in regulated industries (healthcare, financial services, public sector), this is the difference between "we can evaluate Onplana" and "we can actually deploy it."
3. Pick the right model for the job
Claude and GPT-4 are both excellent, but they're not identical. Some teams find Claude more careful on risk analysis; others prefer GPT-4's code generation. With dual-provider support, you can run A/B tests internally and pin specific AI endpoints to specific models — for example, use Claude for risk detection and GPT-4 for plan generation — from the per-endpoint override table in the admin panel.
What it looks like
Admins see a single panel at /admin/ai-usage → Configuration:
- Key Vault secrets status — three dots showing whether your Anthropic API key, Azure OpenAI key, and Azure OpenAI endpoint are provisioned
- Provider toggles — independent on/off switches for Claude and Azure OpenAI
- Tier deployments (Azure only) — three input boxes where you type the deployment names from your Azure resource. These feed automatically into the per-endpoint dropdown below.
- Per-endpoint model overrides — a table where every AI feature can be pinned to a specific concrete model id per provider
When both providers are enabled, a Primary provider radio picks which one gets called by default. Individual features can still override via the per-endpoint table.
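Conceptually, the resolution order is: per-endpoint override first, org-wide primary provider second. The sketch below illustrates that logic; the function, config keys, and model ids are illustrative assumptions, not Onplana's actual internals.

```python
# Sketch of provider resolution: a pinned per-endpoint override wins,
# otherwise the org-wide primary provider's default model is used.
# All names here are illustrative, not Onplana's real API.

def resolve_provider(endpoint: str, config: dict) -> tuple[str, str]:
    """Return (provider, model) for a given AI endpoint."""
    # 1. Per-endpoint override table wins if the admin pinned this endpoint.
    override = config.get("endpoint_overrides", {}).get(endpoint)
    if override:
        return override["provider"], override["model"]
    # 2. Otherwise fall back to the primary provider and its default model.
    primary = config["primary_provider"]
    return primary, config["default_models"][primary]

config = {
    "primary_provider": "azure_openai",
    "default_models": {"azure_openai": "gpt-4o", "anthropic": "claude-sonnet"},
    "endpoint_overrides": {
        "risk_detection": {"provider": "anthropic", "model": "claude-sonnet"},
    },
}

print(resolve_provider("risk_detection", config))   # pinned to Claude
print(resolve_provider("plan_generation", config))  # falls back to the primary
```

The key property is that disabling an override simply deletes a row from the table; the endpoint silently returns to the primary provider.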
How to turn it on
Self-hosted customers:
- Provision an Azure OpenAI resource in your Azure subscription
- Deploy three models (we recommend gpt-4o for balanced, gpt-5.4-mini for fast/cheap, and gpt-5.3-chat for powerful reasoning, but you can use any deployment names)
- Store three secrets in Azure Key Vault: anthropic-api-key, azure-openai-api-key, azure-openai-endpoint
- Set AZURE_KEYVAULT_URL in your Onplana app settings
- Admins enable Azure OpenAI from /admin/ai-usage → Configuration, type in your deployment names, and click Save
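As a concrete example, the Key Vault step might look like this with the Azure CLI (the vault name and secret values below are placeholders you'd replace with your own):

```shell
# Store the three secrets Onplana reads at startup.
# my-onplana-vault and the values in angle brackets are placeholders.
az keyvault secret set --vault-name my-onplana-vault \
  --name anthropic-api-key --value "<your-anthropic-key>"
az keyvault secret set --vault-name my-onplana-vault \
  --name azure-openai-api-key --value "<your-azure-openai-key>"
az keyvault secret set --vault-name my-onplana-vault \
  --name azure-openai-endpoint --value "https://<your-resource>.openai.azure.com/"
```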
Cloud-hosted customers: Get in touch with us — we'll provision your Azure OpenAI connection for you. Enterprise plans include setup assistance.
The credential story
One thing we've been opinionated about: secrets never appear in the admin UI. The three API credentials (anthropic-api-key, azure-openai-api-key, azure-openai-endpoint) live in Azure Key Vault and are provisioned out-of-band by ops via az keyvault secret set. The admin panel is read-only for secrets — it shows a green dot if each secret is present and reachable, and a grey dot if it's missing. This is the same pattern we use for other production credentials and keeps secret rotation where it belongs: in your cloud provider's identity plane, not in a web form.
Development environments fall back to environment variables so local contributors don't need Key Vault access.
What's next
This rollout sets up a few things we're planning for Q2:
- Per-organisation AI provider — right now the provider is org-wide; next iteration lets each organisation on a shared Onplana instance pick its own
- Automatic failover — if the primary provider returns a 5xx, fall back to the secondary automatically
- Cost dashboard enrichment — the admin AI usage dashboard already tracks per-model cost; we're adding per-provider comparison charts so finance can see exactly where AI spend is going
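The planned failover behaviour is simple to state: try the primary, and on a server-side error retry the same request against the secondary. A minimal sketch, with stand-in types since this feature hasn't shipped yet:

```python
# Sketch of the planned primary/secondary failover. These names are
# assumptions for illustration, not shipped Onplana code.

class ProviderServerError(Exception):
    """Stands in for a 5xx response from an AI provider."""

def call_with_failover(prompt, primary, secondary):
    """Call the primary provider; on a server error, retry on the secondary."""
    try:
        return primary(prompt)
    except ProviderServerError:
        # Primary returned a 5xx-style error; fall back automatically.
        return secondary(prompt)
```

Client errors (bad request, auth failure) would deliberately not trigger failover, since the secondary would fail the same way.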
If you have an Azure enterprise agreement and you've been waiting for this — it's live today. Talk to your admin, flip the toggle, and start billing AI against your Azure account.
Questions? Contact us or reply to any email from an Onplana team member.
Ready to make the switch?
Start your free Onplana account and import your existing projects in minutes.