All Models
5 Models · 4 Providers · PII Redacted

🌀Mistral Models

by Mistral AI (open- and closed-weights models)

Mistral AI's model family spans from compact open-weights models to powerful commercial variants. Compare Mistral API pricing and Mixtral cost across providers. Known for the pioneering Mixtral MoE architecture and the efficient Mistral 7B lineage.

From $0.10/M tokens
4 providers
28+ PII entity types redacted

Why deploy Mistral through AI Security Gateway?

Automatic PII Redaction

Every Mistral request is scanned for 28+ PII entity types — SSNs, credit cards, emails, API keys, and more — before it reaches any provider.

Smart Cost Routing

Mistral is available across 4 providers. Our Smart Router picks the cheapest one per request. 25% markup in Managed Mode, 0% with Pro BYOK.
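The Smart Router itself runs server-side inside AISG; the hypothetical sketch below only illustrates the selection rule, using the Mixtral 8x7B per-1M-token (input, output) rates from the pricing table further down. The `cheapest_provider` helper is illustrative, not part of any SDK.

```python
# Per-1M-token (input, output) USD rates for Mixtral 8x7B, per provider.
PRICES = {
    "Groq": (0.24, 0.24),
    "Together.ai": (0.60, 0.60),
    "DeepInfra": (0.20, 0.20),
    "Mistral AI": (0.70, 0.70),
}

def cheapest_provider(prices, input_tokens, output_tokens):
    """Return the provider with the lowest total cost for one request."""
    return min(
        prices,
        key=lambda p: (input_tokens * prices[p][0] + output_tokens * prices[p][1]) / 1e6,
    )

print(cheapest_provider(PRICES, 10_000, 2_000))  # DeepInfra
```

At these rates DeepInfra wins regardless of the input/output split; with asymmetric pricing the split matters, which is why the sketch takes both token counts.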

Zero Code Changes

Change two lines in your OpenAI SDK — base_url and api_key — and every request flows through AISG. Full backward compatibility.

Full Observability

Per-request logging of token counts, latency, DLP violations, and cost. Never wonder what your AI spend is again.
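AISG's own log schema is server-side; as a minimal client-side sketch, assuming only the standard OpenAI-style `usage` fields on a completed response, you can meter token counts and estimate per-request cost yourself. The `usage_summary` helper and the example prices are illustrative.

```python
def usage_summary(response, in_per_m, out_per_m):
    """Extract token counts from an OpenAI-style response dict and
    estimate request cost from per-1M-token USD prices."""
    usage = response["usage"]
    prompt = usage["prompt_tokens"]
    completion = usage["completion_tokens"]
    return {
        "prompt_tokens": prompt,
        "completion_tokens": completion,
        "total_tokens": prompt + completion,
        "est_cost_usd": (prompt * in_per_m + completion * out_per_m) / 1e6,
    }

# Example: Mistral Small at $0.10/M input, $0.30/M output
fake_response = {"usage": {"prompt_tokens": 1_200, "completion_tokens": 300}}
print(usage_summary(fake_response, 0.10, 0.30))
```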

Mistral Strengths

  • Mixtral MoE: high capability at lower compute cost
  • Strong multilingual support (European languages)
  • Open-weights variants available for self-hosting
  • Codestral: purpose-built for code generation
  • Efficient inference even on smaller variants

Available Mistral Models (5)

Mixtral 8x7B

oah/mixtral-8x7b
Open Source

Deploy Mixtral 8x7B with built-in PII redaction and Hub governance. Mixture-of-experts: 8 experts × 7B parameters. Available on Managed Credits and BYOK.

Groq · Together.ai · DeepInfra · Mistral AI
Input: $0.20/M · Output: $0.20/M

Mistral Large

oah/mistral-large
Closed Source

Deploy Mistral Large with built-in PII redaction and Hub governance. Available on Managed Credits and BYOK.

Mistral AI
Input: $0.50/M · Output: $1.50/M

Mistral Small

oah/mistral-small
Open Source

Deploy Mistral Small with built-in PII redaction and Hub governance. Available on Managed Credits and BYOK.

Mistral AI
Input: $0.10/M · Output: $0.30/M

Mistral Medium

oah/mistral-medium
Closed Source

Deploy Mistral Medium with built-in PII redaction and Hub governance. Available on Managed Credits and BYOK.

Mistral AI
Input: $0.40/M · Output: $2.00/M

Codestral

oah/codestral
Open Weights

Deploy Codestral with built-in PII redaction and Hub governance. Available on Managed Credits and BYOK.

Mistral AI
Input: $0.30/M · Output: $0.90/M

Mistral Pricing Comparison (per 1M tokens, USD)

Input / Output pricing by provider. Managed Mode adds a 25% markup; Pro BYOK adds none (0%).

Model           Virtual ID          Params  Context  Vision  Groq         Together.ai  DeepInfra    Mistral AI
Mixtral 8x7B    oah/mixtral-8x7b    8x7B    –        No      $0.24/$0.24  $0.60/$0.60  $0.20/$0.20  $0.70/$0.70
Mistral Large   oah/mistral-large   –       –        No      –            –            –            $0.50/$1.50
Mistral Small   oah/mistral-small   –       –        No      –            –            –            $0.10/$0.30
Mistral Medium  oah/mistral-medium  –       –        No      –            –            –            $0.40/$2.00
Codestral       oah/codestral       –       –        No      –            –            –            $0.30/$0.90
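The markup arithmetic above can be sketched in a few lines. This is an illustrative calculator, not an AISG API: `request_cost` and the example token counts are assumptions; the prices are the Mixtral 8x7B DeepInfra rates from the table.

```python
def request_cost(input_tokens, output_tokens, in_per_m, out_per_m, markup=0.0):
    """USD cost of one request given per-1M-token provider prices
    and an optional fractional markup (0.25 = Managed Mode's 25%)."""
    base = (input_tokens * in_per_m + output_tokens * out_per_m) / 1e6
    return base * (1.0 + markup)

# Mixtral 8x7B on DeepInfra ($0.20 in / $0.20 out), 10k prompt + 2k completion:
managed = request_cost(10_000, 2_000, 0.20, 0.20, markup=0.25)  # Managed Mode
byok = request_cost(10_000, 2_000, 0.20, 0.20)                  # Pro BYOK, 0%
print(f"managed=${managed:.4f} byok=${byok:.4f}")  # managed=$0.0030 byok=$0.0024
```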

Mistral Direct vs AI Security Gateway

What you get at each pricing tier. Hub adds security, governance, and multi-provider routing on top of raw API access.

Mode                     What You Pay                    PII Redaction  Budget Caps        Routing       Audit Trail
Direct to Mistral AI     Provider pricing only           None           None               Manual        None
Hub — Managed Mode       Provider + 25% markup           28+ PII types  Per-key hard caps  Smart Router  Full compliance log
Hub — Pro BYOK ($29/mo)  Direct to provider (0% markup)  28+ PII types  Per-key hard caps  Smart Router  Full compliance log

Popular Use Cases

1. Multilingual customer support (especially European languages)
2. Code generation with Codestral
3. Budget-conscious deployments requiring good multilingual performance
4. MoE deployments for cost/quality tradeoffs

Integration — 2 Lines

from openai import OpenAI

client = OpenAI(
    base_url="https://api.aisecuritygateway.ai/v1",
    api_key="your_hub_api_key"
)

# Use any virtual model name from the pricing table above
response = client.chat.completions.create(
    model="oah/mixtral-8x7b",
    messages=[{"role": "user", "content": "Hello!"}]
)

Use any virtual model name from the pricing table above (prefixed with oah/). Works with the standard OpenAI SDK. Every request is PII-scanned before it reaches the selected provider.

Frequently Asked Questions

What is the Mistral API pricing?
Mistral API pricing varies by model and provider. In Managed Mode, we add a 25% markup. With Pro BYOK, pay the provider directly at 0% markup. Check the pricing table above for current rates.
What is the Mixtral cost?
Mixtral cost depends on the provider (Groq, Together.ai, etc.). As a MoE model, it offers strong performance at lower cost than dense models of equivalent quality. See the pricing comparison above.
Mistral vs Llama — which should I use?
Mistral excels at European multilingual tasks and offers Codestral for coding. Llama has wider provider availability and larger model variants. Both run through AISG with identical PII protection and budget enforcement.
Which Mistral model is best for coding?
Codestral is Mistral's dedicated coding model and excels at code generation, completion, and explanation tasks.
Is Mixtral open-source?
Mixtral's open-weights variants are available under permissive licenses. Some newer Mistral models (like Mistral Large) are closed-source.

Deploy Mistral with Enterprise-Grade Security

Get started with 1,000,000 free credits. Every Mistral request is PII-scanned, cost-optimized, and fully logged — zero configuration.

Not ready yet? Get notified about Mistral updates.

Explore Other Model Families

Model registry last updated: 2026-04-18T17:41:46.389Z. Pricing shown is the lowest available rate across providers (per 1M tokens, USD). Actual pricing depends on provider and plan.