GPT-5.5 API: Getting Started with PII Protection
The 80-word answer
GPT-5.5 launched on April 23, 2026. You can call it today via the OpenAI API using the model string gpt-5.5. Its expanded agentic and reasoning capabilities mean developers will send it more sensitive context per conversation — making PII protection more critical, not less. Adding automatic redaction takes two lines of code: change your base URL and API key to route through an AI security gateway. The model never sees raw PII. Your code stays untouched.
What GPT-5.5 actually is
OpenAI released GPT-5.5 on April 23, 2026, describing it as their "smartest and most intuitive to use model" to date. The release comes five weeks after GPT-5.4 — a sign that OpenAI's release cadence is accelerating, not slowing down. Chief Research Officer Mark Chen highlighted "meaningful gains on scientific and technical research workflows" and enterprise use cases including agentic coding and knowledge work.
The model is available via API starting today and is accessible to Plus, Pro, Business, and Enterprise subscribers on ChatGPT. For developers, it's the same OpenAI API you already use — just a new model string.
GPT-5.5 is positioned as a step toward OpenAI's "super app" vision — combining ChatGPT, Codex, and an AI browser into a unified enterprise product. That direction matters for developers building on the API: it signals OpenAI is investing heavily in agentic, multi-step, and long-context use cases — exactly the workflows where sensitive data exposure risk compounds fastest.
Calling GPT-5.5 via the API
If you're already using the OpenAI Python SDK, calling GPT-5.5 is a one-line change: update the model string.
```python
from openai import OpenAI

client = OpenAI(api_key="sk-...")

response = client.chat.completions.create(
    model="gpt-5.5",  # <- updated model string
    messages=[
        {
            "role": "user",
            "content": "Summarize the key risks in this contract draft: ..."
        }
    ],
    temperature=0.3,
)

print(response.choices[0].message.content)
```

For Node.js / TypeScript:
```typescript
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const response = await client.chat.completions.create({
  model: "gpt-5.5",
  messages: [
    {
      role: "user",
      content: "Analyze this customer support ticket and suggest a response: ..."
    }
  ],
});

console.log(response.choices[0].message.content);
```

For REST calls directly, the endpoint and payload format are identical to every previous OpenAI model — only the model field changes. Streaming, function calling, vision, and tool use all work the same way.
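If you prefer raw HTTP, the same request can be sketched with only the Python standard library. This is a minimal illustration, assuming the standard chat completions endpoint and an OPENAI_API_KEY environment variable; payload construction is split out so it can be inspected without sending anything.

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(prompt: str) -> dict:
    # Same payload shape as every previous chat model; only "model" changes.
    return {
        "model": "gpt-5.5",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.3,
    }

def send(prompt: str) -> dict:
    # The network call is isolated here so the payload logic stays testable.
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```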
Why more capable means more PII risk
Every time OpenAI releases a significantly more capable model, developers move up the complexity ladder. Prompts that were simple Q&A in 2023 are now multi-turn agentic workflows reading from databases, browsing internal wikis, and processing customer records. GPT-5.5's expanded capabilities accelerate that shift.
Longer context = more data per prompt
GPT-5.5 handles longer context windows than its predecessors. Developers routinely fill available context — meaning each request sent to OpenAI contains more data than ever before. A single prompt can now carry an entire CRM thread, a medical intake form, a contract with counterparty details, or a Slack channel history. More context sent = more PII exposure per API call.
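To make that concrete, here is a rough, local-only sketch of gauging how much obvious PII a large prompt carries before it goes anywhere. The regex patterns are deliberately crude illustrations, far weaker than a production DLP engine, and the sample thread is invented:

```python
import re

# Toy patterns for illustration only -- a real DLP engine does far more.
PATTERNS = {
    "EMAIL_ADDRESS": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE_NUMBER": re.compile(r"\+?\d[\d -]{8,}\d"),
    "CREDIT_CARD": re.compile(r"\b(?:\d{4}[- ]){3}\d{4}\b"),
}

def pii_density(prompt: str) -> dict:
    # Count pattern hits per entity type, plus an overall total.
    counts = {name: len(p.findall(prompt)) for name, p in PATTERNS.items()}
    counts["total"] = sum(counts.values())
    return counts

crm_thread = (
    "From: jane@example.com\n"
    "Call me on +1-312-555-0198 about card 4111-1111-1111-1111."
)
print(pii_density(crm_thread))
```

The larger the context you pack in, the higher these counts climb, which is exactly why per-request scanning has to be automatic rather than manual.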
Agentic use cases process regulated data autonomously
The agentic coding and knowledge-work capabilities that OpenAI highlighted for GPT-5.5 mean developers will wire it to internal tools — databases, ticketing systems, HR platforms, financial records. When an agent reads a table to answer a question, it ingests whatever PII lives in that table. Without a redaction layer, all of that flows to OpenAI automatically, at scale, without any human reviewing each request.
Better reasoning = developers trust it with more sensitive tasks
GPT-5.5's benchmark gains in scientific research, mathematics, and knowledge work mean teams will route their most sensitive, high-value tasks to it — medical diagnosis assistance, legal document review, financial analysis. These are precisely the tasks where HIPAA, PCI-DSS, GDPR, and SOC 2 apply. Deploying a more capable model without updating your data governance is how breaches happen.
The “super app” direction means deeper vendor lock-in
OpenAI's stated direction is combining ChatGPT, Codex, and an AI browser into a single enterprise platform. The more integrated this becomes, the more of your users' data — and your employees' work context — flows through OpenAI infrastructure. Teams with compliance obligations need a data governance layer that works regardless of which provider they use, not one that assumes a single vendor.
The pattern is consistent across every major LLM release: more capability drives adoption in higher-stakes domains, and higher-stakes domains mean more regulated data. GPT-5.5 is a compelling model. It's also a reason to make sure your data protection layer is in place before you deploy it.
Adding PII protection in two lines of code
Retrofitting PII protection into an application after the fact is expensive — you're hunting down every place where a prompt is constructed and adding redaction logic to each one. The right architecture is a gateway: one place where all LLM traffic passes through, scanned and cleaned before it reaches any model provider. New models like GPT-5.5 just work automatically because the protection lives at the transport layer, not in application code.
Here's the same GPT-5.5 call from Section 2, routed through AI Security Gateway:
```python
from openai import OpenAI

# Before: raw traffic to OpenAI
# client = OpenAI(api_key="sk-...")

# After: route through AISG (2-line change, nothing else moves)
client = OpenAI(
    api_key="your-aisg-api-key",
    base_url="https://api.aisecuritygateway.ai/v1",
)

response = client.chat.completions.create(
    model="gpt-5.5",
    messages=[
        {
            "role": "user",
            "content": (
                "Summarize this support ticket: "
                "Customer John Baker (john@acmecorp.com, +1-312-555-0198) "
                "is disputing charge $1,240 on card 4111-1111-1111-1111 "
                "from invoice INV-20934. Please draft a resolution."
            )
        }
    ],
)

# Response includes aisg_metadata showing what was detected
meta = response.model_extra.get("aisg_metadata", {})
print(f"PII detected: {meta.get('pii_detected')}")
print(f"Entities: {meta.get('entity_types_detected')}")
print(response.choices[0].message.content)
```

What GPT-5.5 actually receives after the gateway processes that request:
```text
"Summarize this support ticket: Customer [PERSON_1]
([EMAIL_1], [PHONE_1]) is disputing charge $1,240
on card [CREDIT_CARD_1] from invoice INV-20934.
Please draft a resolution."
```

The response metadata confirms what was caught:
```json
{
  "aisg_metadata": {
    "provider_selected": "openai",
    "model": "gpt-5.5",
    "latency_ms": 18,
    "dlp_latency_ms": 14,
    "pii_detected": true,
    "dlp_action": "redact",
    "violations_count": 4,
    "entity_types_detected": [
      "PERSON",
      "EMAIL_ADDRESS",
      "PHONE_NUMBER",
      "CREDIT_CARD"
    ]
  }
}
```

The DLP scan adds approximately 14-18ms to the request — well within the noise of a GPT-5.5 API call that takes 600ms to 3 seconds to complete. Protection overhead is under 3% of total round-trip time.
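A useful property of labelled placeholders such as [PERSON_1] and [EMAIL_1] is that they are reversible if you hold a mapping. As a hypothetical illustration (the mapping below is hand-written, not something the API response contains), a model draft that mentions placeholders can be rehydrated on your side before an internal user sees it:

```python
# Hypothetical local mapping -- in a real deployment this would come from
# whichever component performed the redaction, never from the model provider.
mapping = {
    "[PERSON_1]": "John Baker",
    "[EMAIL_1]": "john@acmecorp.com",
    "[PHONE_1]": "+1-312-555-0198",
}

def rehydrate(text: str, mapping: dict) -> str:
    # Swap each labelled placeholder back to its original value.
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

draft = "Dear [PERSON_1], we have opened a case for the dispute raised from [EMAIL_1]."
print(rehydrate(draft, mapping))
```

The model only ever reasons over placeholders; the real values stay inside your trust boundary.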
What gets detected automatically
The gateway scans every GPT-5.5 request across 13+ entity types out of the box, with no configuration required:
| Category | Entity types detected | Example |
|---|---|---|
| Identity | PERSON, EMAIL_ADDRESS, PHONE_NUMBER, US_SSN | Jane Smith, jane@co.com, 555-123-4567 |
| Financial | CREDIT_CARD, IBAN_CODE | 4111-1111-1111-1111 |
| Location | LOCATION, IP_ADDRESS | 123 Main St, 192.168.1.1 |
| Developer secrets | API_KEY, AWS_ACCESS_KEY, GITHUB_TOKEN, PRIVATE_KEY, SLACK_WEBHOOK | sk-abc123..., AKIA..., ghp_... |
| Prompt injection | PROMPT_INJECTION (always blocked, never redacted) | "Ignore previous instructions..." |
Detection behavior is configurable per project: set action: redact to replace sensitive values with labelled placeholders before forwarding, or action: block to reject requests that contain regulated data entirely. Prompt injection is always blocked regardless of policy.
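The decision logic can be pictured as a small policy function. This is a local sketch of the behavior described above, not the gateway's actual implementation; entity names match the table:

```python
def apply_policy(action: str, entities: list) -> str:
    """Return what happens to a request given the configured DLP action."""
    if "PROMPT_INJECTION" in entities:
        return "blocked"      # always blocked, regardless of policy
    if not entities:
        return "forwarded"    # clean request passes through untouched
    if action == "block":
        return "blocked"      # reject requests carrying sensitive data
    return "redacted"         # replace values with labelled placeholders

print(apply_policy("redact", ["PERSON", "EMAIL_ADDRESS"]))
print(apply_policy("block", ["CREDIT_CARD"]))
print(apply_policy("redact", ["PROMPT_INJECTION"]))
```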
Using GPT-5.5 in agentic pipelines safely
GPT-5.5's strongest gains are in agentic workflows — autonomous multi-step tasks where the model reads from tools and databases across many turns. These pipelines are where PII exposure is hardest to audit and easiest to miss. Here's a safe pattern:
```python
from openai import OpenAI

# Route all agent calls through the gateway
client = OpenAI(
    api_key="your-aisg-api-key",
    base_url="https://api.aisecuritygateway.ai/v1",
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_customer_record",
            "description": "Retrieve full customer record by ID",
            "parameters": {
                "type": "object",
                "properties": {
                    "customer_id": {"type": "string"}
                },
                "required": ["customer_id"]
            }
        }
    }
]

messages = [
    {
        "role": "user",
        "content": "Summarize the issue history for customer C-88421"
    }
]

# Each turn in the agent loop is scanned automatically.
# When get_customer_record returns data containing names,
# emails, or SSNs, the next assistant turn that echoes
# that data back is redacted before it leaves your infra.
response = client.chat.completions.create(
    model="gpt-5.5",
    messages=messages,
    tools=tools,
    tool_choice="auto",
)

print(response.choices[0].message)
```

Because the gateway intercepts at the transport layer, every turn of the agent loop is protected automatically — including tool results that contain customer data returned by your own functions. You don't need to audit every tool response for PII before inserting it back into the context. The gateway handles it.
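One step the snippet above stops short of is executing the tool call and feeding the result back. The helper below sketches that turn of the loop in the standard chat completions shape; the get_customer_record body and its return value are placeholders, and SimpleNamespace stands in for the SDK's tool_call object:

```python
import json
from types import SimpleNamespace

def get_customer_record(customer_id: str) -> dict:
    # Placeholder lookup -- a real implementation reads your own database.
    return {"customer_id": customer_id, "status": "open", "open_issues": 3}

TOOLS = {"get_customer_record": get_customer_record}

def run_tool_call(tool_call) -> dict:
    """Execute one tool call and build the 'tool' message to append."""
    fn = TOOLS[tool_call.function.name]
    args = json.loads(tool_call.function.arguments)
    result = fn(**args)
    return {
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": json.dumps(result),
    }

# Demo with a stand-in for the SDK's tool_call object:
fake_call = SimpleNamespace(
    id="call_1",
    function=SimpleNamespace(
        name="get_customer_record",
        arguments='{"customer_id": "C-88421"}',
    ),
)
print(run_tool_call(fake_call))
```

Append the returned message to the conversation and call the model again; because that next request also passes through the gateway, any PII inside the tool result is scanned before it leaves your infrastructure.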
GPT-5.5 vs GPT-5.4 vs GPT-4.1 — which should you use?
With three major OpenAI models now in the current generation, the routing decision matters for both performance and cost:
| Model | Best for | Relative cost | PII risk level |
|---|---|---|---|
| gpt-5.5 | Complex reasoning, agentic coding, scientific research, multi-step tasks | Highest | Highest — used for most sensitive tasks |
| gpt-5.4 | General-purpose production workloads, balanced reasoning and speed | High | High |
| gpt-4.1 | High-volume tasks, cost-sensitive applications, simpler instructions | Lower | Medium — lower stakes tasks typically |
| gpt-4o-mini | Classification, extraction, high-volume lightweight inference | Lowest | Lower — simpler inputs |
Smart routing tip
With AISG Cloud, the Smart Router automatically selects the cheapest eligible model and provider per request based on your configured task type and cost ceiling. You can set GPT-5.5 as the preferred model for specific project keys while routing bulk extraction tasks to gpt-4o-mini at a fraction of the cost — all with the same PII protection layer applied to every request regardless of routing outcome.
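As a rough mental model of that selection, the rule "cheapest eligible model under the ceiling" can be sketched locally. The prices and candidate lists below are placeholders, not real rates or the Smart Router's actual tables:

```python
# Placeholder relative prices per 1M tokens -- illustrative only.
PRICE = {"gpt-5.5": 30.0, "gpt-5.4": 15.0, "gpt-4.1": 5.0, "gpt-4o-mini": 0.5}

# Hypothetical task-type to candidate-model mapping.
CANDIDATES = {
    "agentic_coding": ["gpt-5.5", "gpt-5.4"],
    "extraction": ["gpt-4o-mini", "gpt-4.1"],
    "general": ["gpt-5.4", "gpt-4.1"],
}

def pick_model(task_type: str, cost_ceiling: float) -> str:
    # Keep only candidates under the ceiling, then take the cheapest.
    eligible = [m for m in CANDIDATES[task_type] if PRICE[m] <= cost_ceiling]
    if not eligible:
        raise ValueError("no model fits the cost ceiling")
    return min(eligible, key=PRICE.__getitem__)

print(pick_model("extraction", cost_ceiling=10.0))
```

The point of the sketch: routing bulk extraction away from flagship models is a pure configuration decision, while the PII layer stays identical for every outcome.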
Does OpenAI protect PII in GPT-5.5 natively?
This question comes up with every new OpenAI model release, and the answer remains the same: no. OpenAI's enterprise tier provides zero data retention (prompts are not stored after processing) and audit logging for compliance purposes. These are meaningful controls — but they are not PII redaction.
Zero data retention means OpenAI doesn't store your prompt after it returns a response. But the prompt still reaches OpenAI in full — names, SSNs, credit card numbers, API keys, all of it. The model processes the raw prompt text. That transmission event is what GDPR Article 28 (processor agreements), HIPAA (covered entity obligations), and PCI-DSS (cardholder data transmission rules) regulate.
PII filtering has to happen before the data leaves your infrastructure — which means at a gateway proxy in front of the model API. Gateway-layer redaction runs on your infrastructure (or in a SOC 2 certified cloud environment), creates an audit trail of what was detected, and ensures the model provider never receives regulated data in the first place. That's architecturally different from what OpenAI's enterprise controls offer, and both layers can coexist.
Multi-provider strategy: don't bet everything on GPT-5.5
GPT-5.5 outperforms Gemini 3.1 Pro and Claude Opus 4.5 on OpenAI's published benchmarks. Benchmarks are a starting point, not a deployment decision. In practice, different models excel at different task types, and pricing differentials between providers on equivalent tasks can be 3-10x.
More importantly, as OpenAI moves toward a “super app” strategy, the enterprise risk of single-provider dependency increases. If your entire AI stack runs through one vendor and that vendor has an outage, changes pricing, or deprecates a model on short notice, you have no fallback. Teams building production systems in 2026 treat multi-provider routing as a reliability requirement, not a nice-to-have.
```python
from openai import OpenAI

client = OpenAI(
    api_key="your-aisg-api-key",
    base_url="https://api.aisecuritygateway.ai/v1",
)

# Route explicitly to OpenAI for GPT-5.5
response_gpt55 = client.chat.completions.create(
    model="gpt-5.5",
    messages=[{"role": "user", "content": "..."}],
    extra_headers={"x-provider": "openai"},
)

# Route the same code to Anthropic for comparison
response_claude = client.chat.completions.create(
    model="claude-opus-4-5",
    messages=[{"role": "user", "content": "..."}],
    extra_headers={"x-provider": "anthropic"},
)

# Or let Smart Router pick the cheapest eligible provider
response_smart = client.chat.completions.create(
    model="gpt-5.5",
    messages=[{"role": "user", "content": "..."}],
    # No x-provider header: Smart Router selects automatically
)
```

Because the gateway is OpenAI-SDK-compatible, switching providers requires changing one header — no application code changes. PII protection applies uniformly regardless of which provider or model handles the request.
Frequently asked questions
How do I access GPT-5.5 via the API?
GPT-5.5 is available through the standard OpenAI API starting April 23, 2026. Use the model string gpt-5.5 in your existing OpenAI SDK calls. No SDK upgrade is required — the model string change is the only modification needed. It's available to Plus, Pro, Business, and Enterprise subscribers.
Does GPT-5.5 have built-in PII protection?
No. OpenAI's enterprise tier offers zero data retention (prompts are not stored after processing) but does not redact PII from prompts before the model processes them. The model still receives your raw prompt text. PII filtering must happen at a gateway proxy before the request leaves your infrastructure, which is the architectural role of an AI security gateway.
How much latency does adding PII protection add to GPT-5.5 calls?
The DLP scan adds approximately 14-20ms for typical prompts under 2,000 tokens. GPT-5.5 API calls take 600ms to 3+ seconds to complete depending on output length. Protection overhead is under 3% of total round-trip time — within the normal variance of any API call.
Can I use GPT-5.5 through AI Security Gateway with my existing OpenAI API key?
Yes. On the Pro BYOK plan, you add your OpenAI API key to your AISG account and all requests forward directly to OpenAI using your key — 0% markup. On the Managed Credits plan, AISG uses its own provider keys and deducts from your credit balance. Both plans apply the same PII redaction layer to every request.
What happens if a GPT-5.5 request contains prompt injection?
Prompt injection attempts are always blocked regardless of DLP policy setting. The gateway returns a 400 error with the detected injection pattern type. The request is never forwarded to GPT-5.5 or any other model. This applies to jailbreak patterns, instruction override attempts, system prompt extraction requests, and DAN-style roleplay attacks.
Is GPT-5.5 available for self-hosted deployments?
GPT-5.5 is an OpenAI-hosted model — there is no on-premises or self-hosted version available from OpenAI. However, you can self-host the AI security gateway layer (open-source Apache 2.0) and configure it to proxy your GPT-5.5 calls. This keeps your PII scanning infrastructure on-premises while still routing to OpenAI's hosted model. See the OSS self-hosting guide.
Use GPT-5.5 in production — with PII protection already on
Point your existing OpenAI client at AI Security Gateway and every GPT-5.5 request is automatically scanned and redacted before it reaches OpenAI. Two lines of code. No SDK changes. 1 million free credits to start — no credit card required.
- Works with GPT-5.5, GPT-5.4, GPT-4.1, and 300+ models across 8 providers
- BYOK — use your own OpenAI API key with 0% markup on Pro
- 13+ PII types detected and redacted automatically
- Prompt injection and jailbreak attempts always blocked
- Audit trail of every violation — entity type, count, timestamp
Want to self-host this?
AI Security Gateway is open source. Deploy the core AI security proxy on your own infrastructure — PII redaction, prompt injection blocking, and secret detection included. No account required.