
Commit 04c89b1

feat,docs(openai,azure): provide Responses API providerMetadata types at the message / reasoning level (#12010)
## Background

The Responses API attaches provider-specific metadata to messages and reasoning parts, but until now there were no exported, typed helpers for accessing this metadata in a type-safe way from client code. As a result, users had to rely on untyped or loosely typed access to `providerMetadata`, especially when working with response IDs, logprobs, service tier information, or reasoning-related metadata.

## Summary

This PR adds and documents typed providerMetadata helpers for the Responses API at the message and reasoning level.

- Export new providerMetadata types for both OpenAI and Azure:
  - `OpenaiResponsesProviderMetadata`
  - `OpenaiResponsesReasoningProviderMetadata`
  - `AzureResponsesProviderMetadata`
  - `AzureResponsesReasoningProviderMetadata`
- Ensure providerMetadata construction is type-checked internally using `satisfies`.
- Update the OpenAI and Azure provider documentation with:
  - Typed examples for accessing message-level providerMetadata.
  - New sections describing typed providerMetadata in reasoning parts.
- Update examples to demonstrate practical usage of the new types (e.g. `responseId`, `logprobs`, `serviceTier`, `reasoningEncryptedContent`).

No behavior changes are introduced; this change improves type safety, documentation, and developer ergonomics.

## Manual Verification

### Examples in `examples/ai-functions`

Edited the examples to add types and ran them:

- `pnpm tsx src/generate-text/azure-reasoning-encrypted-content.ts`
- `pnpm tsx src/generate-text/azure-reasoning.ts`
- `pnpm tsx src/generate-text/openai-logprobs.ts`
- `pnpm tsx src/generate-text/openai-reasoning-encrypted-content.ts`
- `pnpm tsx src/generate-text/openai-reasoning.ts`
- `pnpm tsx src/generate-text/openai-responses-previous-response-id.ts`
- `pnpm tsx src/stream-text/azure-fullstream-logprobs.ts`
- `pnpm tsx src/stream-text/azure-reasoning-encrypted-content.ts`
- `pnpm tsx src/stream-text/azure-reasoning.ts`
- `pnpm tsx src/stream-text/openai-fullstream-logprobs.ts`
- `pnpm tsx src/stream-text/openai-reasoning-encrypted-content.ts`
- `pnpm tsx src/stream-text/openai-reasoning.ts`
- `pnpm tsx src/stream-text/openai-responses-service-tier.ts`

### Examples in `examples/next-openai`

Edited the example to add types and verified it at:

- http://localhost:3000/chat-openai-previous-response-id

## Related Issues

#10266

Co-authored-by: tsuzaki430 <tsuzaki430@users.noreply.github.com>
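To make the pattern concrete, here is a minimal, self-contained sketch of the typed `providerMetadata` access this PR documents. The type definition and the metadata value below are local stand-ins written for illustration only; the real `OpenaiResponsesProviderMetadata` is exported from `@ai-sdk/openai` and its exact shape may differ.

```typescript
// Local stand-in for the exported OpenaiResponsesProviderMetadata type
// (an assumption for illustration; the real type ships from '@ai-sdk/openai').
type OpenaiResponsesProviderMetadata = {
  openai: {
    responseId?: string | null;
    logprobs?: unknown;
    serviceTier?: string;
  };
};

// Simulated value of `result.providerMetadata` from a generateText call.
const raw: unknown = {
  openai: { responseId: 'resp_123', serviceTier: 'default' },
};

// Narrow the loosely typed metadata with an explicit cast, then use optional
// chaining so absent metadata degrades to `undefined` instead of throwing.
const providerMetadata = raw as OpenaiResponsesProviderMetadata | undefined;
const openaiMeta = providerMetadata?.openai;

console.log(openaiMeta?.responseId); // resp_123
console.log(openaiMeta?.serviceTier); // default
```

The cast-plus-optional-chaining shape mirrors the documentation examples added in this commit; it keeps client code compiling even when a provider returns no metadata at all.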
1 parent 1e2dbc0 · commit 04c89b1

23 files changed: +618 -272 lines

.changeset/afraid-readers-begin.md

Lines changed: 12 additions & 0 deletions
````diff
@@ -0,0 +1,12 @@
+---
+'@ai-sdk/openai': patch
+'@ai-sdk/azure': patch
+---
+
+Provide Responses API providerMetadata types at the message / reasoning level.
+
+- Export the following types for use in client code:
+  - `OpenaiResponsesProviderMetadata`
+  - `OpenaiResponsesReasoningProviderMetadata`
+  - `AzureResponsesProviderMetadata`
+  - `AzureResponsesReasoningProviderMetadata`
````

content/providers/01-ai-sdk-providers/03-openai.mdx

Lines changed: 67 additions & 11 deletions
````diff
@@ -255,24 +255,34 @@ The following provider options are available:
 
 The OpenAI responses provider also returns provider-specific metadata:
 
+For Responses models, you can type this metadata using `OpenaiResponsesProviderMetadata`:
+
 ```ts
-const { providerMetadata } = await generateText({
-  model: openai.responses('gpt-5'),
+import { openai, type OpenaiResponsesProviderMetadata } from '@ai-sdk/openai';
+import { generateText } from 'ai';
+
+const result = await generateText({
+  model: openai('gpt-5'),
 });
 
-const openaiMetadata = providerMetadata?.openai;
-```
+const providerMetadata = result.providerMetadata as
+  | OpenaiResponsesProviderMetadata
+  | undefined;
 
-The following OpenAI-specific metadata is returned:
+const { responseId, logprobs, serviceTier } = providerMetadata?.openai ?? {};
 
-- **responseId** _string_
-  The ID of the response. Can be used to continue a conversation.
+// responseId can be used to continue a conversation (previousResponseId).
+console.log(responseId);
+```
 
-- **cachedPromptTokens** _number_
-  The number of prompt tokens that were a cache hit.
+The following OpenAI-specific metadata may be returned:
 
-- **reasoningTokens** _number_
-  The number of reasoning tokens that the model generated.
+- **responseId** _string | null | undefined_
+  The ID of the response. Can be used to continue a conversation.
+- **logprobs** _(optional)_
+  Log probabilities of output tokens (when enabled).
+- **serviceTier** _(optional)_
+  Service tier information returned by the API.
 
 #### Reasoning Output
 
````
````diff
@@ -929,6 +939,52 @@ for (const part of result.content) {
   API.
 </Note>
 
+#### Typed providerMetadata in Reasoning Parts
+
+When using the OpenAI Responses API, reasoning output parts can include provider metadata.
+To handle this metadata in a type-safe way, use `OpenaiResponsesReasoningProviderMetadata`.
+
+For reasoning parts, when `part.type === 'reasoning'`, the `providerMetadata` is provided in the form of `OpenaiResponsesReasoningProviderMetadata`.
+
+This metadata includes the following fields:
+
+- `itemId`
+  The ID of the reasoning item in the Responses API.
+- `reasoningEncryptedContent` (optional)
+  Encrypted reasoning content (only returned when requested via `include: ['reasoning.encrypted_content']`).
+
+```ts
+import {
+  openai,
+  type OpenaiResponsesReasoningProviderMetadata,
+  type OpenAIResponsesProviderOptions,
+} from '@ai-sdk/openai';
+import { generateText } from 'ai';
+
+const result = await generateText({
+  model: openai('gpt-5'),
+  prompt: 'How many "r"s are in the word "strawberry"?',
+  providerOptions: {
+    openai: {
+      store: false,
+      include: ['reasoning.encrypted_content'],
+    } satisfies OpenAIResponsesProviderOptions,
+  },
+});
+
+for (const part of result.content) {
+  if (part.type === 'reasoning') {
+    const providerMetadata = part.providerMetadata as
+      | OpenaiResponsesReasoningProviderMetadata
+      | undefined;
+
+    const { itemId, reasoningEncryptedContent } =
+      providerMetadata?.openai ?? {};
+    console.log(itemId, reasoningEncryptedContent);
+  }
+}
+```
+
 #### Typed providerMetadata in Source Document Parts
 
 For source document parts, when `part.type === 'source'` and `sourceType === 'document'`, the `providerMetadata` is provided as `OpenaiResponsesSourceDocumentProviderMetadata`.
````

content/providers/01-ai-sdk-providers/04-azure.mdx

Lines changed: 65 additions & 10 deletions
````diff
@@ -322,24 +322,34 @@ The following provider options are available:
 
 The Azure OpenAI provider also returns provider-specific metadata:
 
+For Responses models (`azure(deploymentName)`), you can type this metadata using `AzureResponsesProviderMetadata`:
+
 ```ts
-const { providerMetadata } = await generateText({
+import { azure, type AzureResponsesProviderMetadata } from '@ai-sdk/azure';
+import { generateText } from 'ai';
+
+const result = await generateText({
   model: azure('your-deployment-name'),
 });
 
-const openaiMetadata = providerMetadata?.openai;
-```
+const providerMetadata = result.providerMetadata as
+  | AzureResponsesProviderMetadata
+  | undefined;
 
-The following OpenAI-specific metadata is returned:
+const { responseId, logprobs, serviceTier } = providerMetadata?.azure ?? {};
 
-- **responseId** _string_
-  The ID of the response. Can be used to continue a conversation.
+// responseId can be used to continue a conversation (previousResponseId).
+console.log(responseId);
+```
 
-- **cachedPromptTokens** _number_
-  The number of prompt tokens that were a cache hit.
+The following Azure-specific metadata may be returned:
 
-- **reasoningTokens** _number_
-  The number of reasoning tokens that the model generated.
+- **responseId** _string | null | undefined_
+  The ID of the response. Can be used to continue a conversation.
+- **logprobs** _(optional)_
+  Log probabilities of output tokens (when enabled).
+- **serviceTier** _(optional)_
+  Service tier information returned by the API.
 
 <Note>
   The providerMetadata is only returned with the default responses API, and is
````
````diff
@@ -652,6 +662,51 @@
   API.
 </Note>
 
+#### Typed providerMetadata in Reasoning Parts
+
+When using the Azure OpenAI Responses API, reasoning output parts can include provider metadata.
+To handle this metadata in a type-safe way, use `AzureResponsesReasoningProviderMetadata`.
+
+For reasoning parts, when `part.type === 'reasoning'`, the `providerMetadata` is provided in the form of `AzureResponsesReasoningProviderMetadata`.
+
+This metadata includes the following fields:
+
+- `itemId`
+  The ID of the reasoning item in the Responses API.
+- `reasoningEncryptedContent` (optional)
+  Encrypted reasoning content (only returned when requested via `include: ['reasoning.encrypted_content']`).
+
+```ts
+import {
+  azure,
+  type AzureResponsesReasoningProviderMetadata,
+  type OpenAIResponsesProviderOptions,
+} from '@ai-sdk/azure';
+import { generateText } from 'ai';
+
+const result = await generateText({
+  model: azure('your-deployment-name'),
+  prompt: 'How many "r"s are in the word "strawberry"?',
+  providerOptions: {
+    azure: {
+      store: false,
+      include: ['reasoning.encrypted_content'],
+    } satisfies OpenAIResponsesProviderOptions,
+  },
+});
+
+for (const part of result.content) {
+  if (part.type === 'reasoning') {
+    const providerMetadata = part.providerMetadata as
+      | AzureResponsesReasoningProviderMetadata
+      | undefined;
+
+    const { itemId, reasoningEncryptedContent } = providerMetadata?.azure ?? {};
+    console.log(itemId, reasoningEncryptedContent);
+  }
+}
+```
+
 #### Typed providerMetadata in Source Document Parts
 
 For source document parts, when `part.type === 'source'` and `sourceType === 'document'`, the `providerMetadata` is provided as `AzureResponsesSourceDocumentProviderMetadata`.
````
Lines changed: 39 additions & 47 deletions
````diff
@@ -1,58 +1,50 @@
-import { generateText, stepCountIs, tool } from 'ai';
-import { z } from 'zod';
+import { generateText } from 'ai';
 import { run } from '../lib/run';
-import { azure } from '@ai-sdk/azure';
+import {
+  azure,
+  AzureResponsesReasoningProviderMetadata,
+  OpenAIResponsesProviderOptions,
+} from '@ai-sdk/azure';
 
 run(async () => {
   const result = await generateText({
-    model: azure.responses('gpt-5.1-codex-max'),
-    tools: {
-      calculator: tool({
-        description:
-          'A minimal calculator for basic arithmetic. Call it once per step.',
-        inputSchema: z.object({
-          a: z.number().describe('First operand.'),
-          b: z.number().describe('Second operand.'),
-          op: z
-            .enum(['add', 'subtract', 'multiply', 'divide'])
-            .default('add')
-            .describe('Arithmetic operation to perform.'),
-        }),
-        execute: async ({ a, b, op }) => {
-          switch (op) {
-            case 'add':
-              return { result: a + b };
-            case 'subtract':
-              return { result: a - b };
-            case 'multiply':
-              return { result: a * b };
-            case 'divide':
-              if (b === 0) {
-                return 'Cannot divide by zero.';
-              }
-              return { result: a / b };
-          }
-        },
-      }),
-    },
-    stopWhen: stepCountIs(20),
+    model: azure('gpt-5'),
+    prompt: 'How many "r"s are in the word "strawberry"?',
     providerOptions: {
       azure: {
-        reasoningEffort: 'high',
-        maxCompletionTokens: 32_000,
+        reasoningEffort: 'low',
+        reasoningSummary: 'detailed',
         store: false,
-        include: ['reasoning.encrypted_content'],
-        reasoningSummary: 'auto',
-      },
+        include: ['reasoning.encrypted_content'], // Use encrypted reasoning items
+      } satisfies OpenAIResponsesProviderOptions,
     },
-    messages: [
-      {
-        role: 'user',
-        content:
-          'Use the calculator tool to add 12 and 7, then multiply that sum by 3 then multiply by 10. Call the tool separately for each arithmetic step and only 1 tool call per step and report the final result.',
-      },
-    ],
   });
 
-  console.dir(result.response, { depth: Infinity });
+  for (const part of result.content) {
+    switch (part.type) {
+      case 'reasoning': {
+        console.log('--- reasoning ---');
+        console.log(part.text);
+        const providerMetadata = part.providerMetadata as
+          | AzureResponsesReasoningProviderMetadata
+          | undefined;
+        if (!providerMetadata) break;
+        const {
+          azure: { itemId, reasoningEncryptedContent },
+        } = providerMetadata;
+        console.log(`itemId: ${itemId}`);
+
+        // In the Responses API, explicitly setting store to false opts out of both conversation history and reasoning token storage.
+        // As a result, reasoningEncryptedContent is used to restore the reasoning tokens for the conversation history.
+        console.log(`reasoningEncryptedContent: ${reasoningEncryptedContent}`);
+        break;
+      }
+      case 'text': {
+        console.log('--- text ---');
+        console.log(part.text);
+        break;
+      }
+    }
+    console.log();
+  }
 });
````
Lines changed: 48 additions & 0 deletions
````diff
@@ -0,0 +1,48 @@
+import {
+  azure,
+  AzureResponsesReasoningProviderMetadata,
+  OpenAIResponsesProviderOptions,
+} from '@ai-sdk/azure';
+import { generateText } from 'ai';
+import { run } from '../lib/run';
+
+run(async () => {
+  const result = await generateText({
+    model: azure('gpt-5'),
+    prompt: 'How many "r"s are in the word "strawberry"?',
+    providerOptions: {
+      azure: {
+        reasoningEffort: 'low',
+        reasoningSummary: 'detailed',
+      } satisfies OpenAIResponsesProviderOptions,
+    },
+  });
+
+  for (const part of result.content) {
+    switch (part.type) {
+      case 'reasoning': {
+        console.log('--- reasoning ---');
+        console.log(part.text);
+        const providerMetadata = part.providerMetadata as
+          | AzureResponsesReasoningProviderMetadata
+          | undefined;
+        if (!providerMetadata) break;
+        const {
+          azure: { itemId, reasoningEncryptedContent },
+        } = providerMetadata;
+        console.log(`itemId: ${itemId}`);
+
+        // In the Responses API, store is set to true by default, so conversation history is cached.
+        // The reasoning tokens from that interaction are also cached, and as a result, reasoningEncryptedContent returns null.
+        console.log(`reasoningEncryptedContent: ${reasoningEncryptedContent}`);
+        break;
+      }
+      case 'text': {
+        console.log('--- text ---');
+        console.log(part.text);
+        break;
+      }
+    }
+    console.log();
+  }
+});
````
Lines changed: 25 additions & 3 deletions
````diff
@@ -1,10 +1,10 @@
-import { openai } from '@ai-sdk/openai';
+import { openai, OpenaiResponsesProviderMetadata } from '@ai-sdk/openai';
 import { generateText } from 'ai';
 import { run } from '../lib/run';
 
 run(async () => {
   const result = await generateText({
-    model: openai('gpt-3.5-turbo'),
+    model: openai('gpt-4.1-mini'),
     prompt: 'Invent a new holiday and describe its traditions.',
     providerOptions: {
       openai: {
@@ -13,5 +13,27 @@ run(async () => {
     },
   });
 
-  console.log(result.providerMetadata?.openai.logprobs);
+  const providerMetadata = result.providerMetadata as
+    | OpenaiResponsesProviderMetadata
+    | undefined;
+  if (!providerMetadata) return;
+  const {
+    openai: { responseId, logprobs, serviceTier },
+  } = providerMetadata;
+  responseId && console.log(`responseId: ${responseId}`);
+  serviceTier && console.log(`serviceTier: ${serviceTier}`);
+  if (!logprobs) return;
+  let printed = 0;
+  for (const logprob of logprobs) {
+    if (logprob != null) {
+      for (const token_info of logprob) {
+        console.log(
+          `token: ${token_info.token} , logprob: ${token_info.logprob} , top_logprobs: ${JSON.stringify(token_info.top_logprobs)}`,
+        );
+      }
+      console.log();
+      printed++;
+    }
+    if (printed >= 5) break; // Output only the first 5 entries to prevent excessive logging
+  }
 });
````
