Commit d3769ec
feat(ai): add experimental callbacks in generateText (#12654)
## Background

Our current telemetry setup was heavily integrated with OTel. This is the first step in decoupling the telemetry from the core functions.

## Summary

- Added callback `experimental_onStart` that exposes the data/events that happen at the very beginning
- Added callback `experimental_onStepStart`
- Added callback `experimental_onToolCallStart`
- Added callback `experimental_onToolCallFinish`
- Modified callback `onStepFinish`: turned into an event that returns [this](https://github.com/vercel/ai/pull/12654/changes#diff-d56ad94c7c3802f6ed5388ec6735018d179fa70735a40c8c79def79e5927d946R1024-R1045) information

## Manual Verification

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

- Add callbacks for `streamText()`
- Refactor `generateImage`
- Refactor `embed`
- https://github.com/vercel/ai/pull/12654/changes#r2829080416
1 parent 0f1e6de commit d3769ec

File tree: 14 files changed, +4186 −251 lines

.changeset/lazy-cougars-smoke.md

Lines changed: 5 additions & 0 deletions

```md
---
'ai': patch
---

feat(ai): add experimental callbacks in generateText
```

content/docs/03-agents/02-building-agents.mdx

Lines changed: 8 additions & 7 deletions

````diff
@@ -331,13 +331,14 @@ export async function POST(request: Request) {
 
 ### Track Step Progress
 
-Use `onStepFinish` to track each step's progress, including token usage:
+Use `onStepFinish` to track each step's progress, including token usage.
+The callback receives a `stepNumber` (zero-based) to identify which step just completed:
 
 ```ts
 const result = await myAgent.generate({
   prompt: 'Research and summarize the latest AI trends',
-  onStepFinish: async ({ usage, finishReason, toolCalls }) => {
-    console.log('Step completed:', {
+  onStepFinish: async ({ stepNumber, usage, finishReason, toolCalls }) => {
+    console.log(`Step ${stepNumber} completed:`, {
       inputTokens: usage.inputTokens,
       outputTokens: usage.outputTokens,
       finishReason,
@@ -352,18 +353,18 @@ You can also define `onStepFinish` in the constructor for agent-wide tracking. W
 ```ts
 const agent = new ToolLoopAgent({
   model: __MODEL__,
-  onStepFinish: async ({ usage }) => {
+  onStepFinish: async ({ stepNumber, usage }) => {
     // Agent-wide logging
-    console.log('Agent step:', usage.totalTokens);
+    console.log(`Agent step ${stepNumber}:`, usage.totalTokens);
   },
 });
 
 // Method-level callback runs after constructor callback
 const result = await agent.generate({
   prompt: 'Hello',
-  onStepFinish: async ({ usage }) => {
+  onStepFinish: async ({ stepNumber, usage }) => {
     // Per-call tracking (e.g., for billing)
-    await trackUsage(usage);
+    await trackUsage(stepNumber, usage);
   },
 });
 ```
````
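The per-call example above hands the step data to a `trackUsage` helper. A minimal sketch of what such a helper might look like, assuming an app-defined in-memory log (the helper, its field names, and the usage shape are illustrative, not part of the SDK):

```typescript
// Hypothetical trackUsage helper: aggregates per-step token usage,
// e.g. as a basis for billing records. Not part of the AI SDK.
type Usage = { inputTokens: number; outputTokens: number; totalTokens: number };

const usageLog: Array<{ stepNumber: number } & Usage> = [];

async function trackUsage(stepNumber: number, usage: Usage): Promise<void> {
  // A real implementation might write to a database or billing service.
  usageLog.push({ stepNumber, ...usage });
}

function totalTokensUsed(): number {
  return usageLog.reduce((sum, entry) => sum + entry.totalTokens, 0);
}

// Simulated callback invocations, as onStepFinish would produce them:
trackUsage(0, { inputTokens: 120, outputTokens: 40, totalTokens: 160 });
trackUsage(1, { inputTokens: 200, outputTokens: 80, totalTokens: 280 });
console.log(totalTokensUsed()); // 440
```

Because each callback invocation carries the step number, the log can later be grouped per step rather than only summed per call.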

content/docs/03-ai-sdk-core/05-generating-text.mdx

Lines changed: 54 additions & 0 deletions

````diff
@@ -105,6 +105,60 @@ const result = await generateText({
 });
 ```
 
+### Lifecycle callbacks (experimental)
+
+<Note type="warning">
+  Experimental callbacks are subject to breaking changes in incremental package
+  releases.
+</Note>
+
+`generateText` provides several experimental lifecycle callbacks that let you hook into different phases of the generation process.
+These are useful for logging, observability, debugging, and custom telemetry.
+Errors thrown inside these callbacks are silently caught and do not break the generation flow.
+
+```tsx
+import { generateText } from 'ai';
+__PROVIDER_IMPORT__;
+
+const result = await generateText({
+  model: __MODEL__,
+  prompt: 'What is the weather in San Francisco?',
+  tools: {
+    // ... your tools
+  },
+
+  experimental_onStart({ model, settings, functionId }) {
+    console.log('Generation started', { model, functionId });
+  },
+
+  experimental_onStepStart({ stepNumber, model, promptMessages }) {
+    console.log(`Step ${stepNumber} starting`, { model: model.modelId });
+  },
+
+  experimental_onToolCallStart({ toolName, toolCallId, input }) {
+    console.log(`Tool call starting: ${toolName}`, { toolCallId });
+  },
+
+  experimental_onToolCallFinish({ toolName, durationMs, error }) {
+    console.log(`Tool call finished: ${toolName} (${durationMs}ms)`, {
+      success: !error,
+    });
+  },
+
+  onStepFinish({ stepNumber, finishReason, usage }) {
+    console.log(`Step ${stepNumber} finished`, { finishReason, usage });
+  },
+});
+```
+
+The available lifecycle callbacks are:
+
+- **`experimental_onStart`**: Called once when the `generateText` operation begins, before any LLM calls. Receives model info, prompt, settings, and telemetry metadata.
+- **`experimental_onStepStart`**: Called before each step (LLM call). Receives the step number, model, prompt messages being sent, tools, and prior steps.
+- **`experimental_onToolCallStart`**: Called right before a tool's `execute` function runs. Receives the tool name, call ID, and input.
+- **`experimental_onToolCallFinish`**: Called right after a tool's `execute` function completes or errors. Receives the tool name, call ID, input, output (or undefined on error), error (or undefined on success), and `durationMs`.
+- **`onStepFinish`**: Called after each step finishes. Now also includes `stepNumber` (zero-based index of the completed step).
+
 ## `streamText`
 
 Depending on your model and prompt, it can take a large language model (LLM) up to a minute to finish generating its response. This delay can be unacceptable for interactive use cases such as chatbots or real-time applications, where users expect immediate responses.
````
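Since `experimental_onStepStart` fires before each LLM call and `onStepFinish` after it, the pair can drive simple per-step timing telemetry. A sketch of a hypothetical helper, assuming only the `stepNumber` field from the documented callback shapes (the helper itself is not part of the SDK):

```typescript
// Hypothetical per-step timer: records a start timestamp in
// experimental_onStepStart and computes elapsed time in onStepFinish.
// Only stepNumber from the documented callback payloads is used here.
function createStepTimer() {
  const startedAt = new Map<number, number>();
  const durations = new Map<number, number>();
  return {
    experimental_onStepStart({ stepNumber }: { stepNumber: number }) {
      startedAt.set(stepNumber, Date.now());
    },
    onStepFinish({ stepNumber }: { stepNumber: number }) {
      const start = startedAt.get(stepNumber);
      if (start !== undefined) durations.set(stepNumber, Date.now() - start);
    },
    durations,
  };
}

// The two methods would be spread into a generateText call;
// simulated here with direct invocations:
const timer = createStepTimer();
timer.experimental_onStepStart({ stepNumber: 0 });
timer.onStepFinish({ stepNumber: 0 });
console.log(timer.durations.get(0)); // elapsed milliseconds for step 0
```

Keying by step number (rather than a single shared variable) keeps the timings correct across multi-step tool-calling loops.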

content/docs/03-ai-sdk-core/15-tools-and-tool-calling.mdx

Lines changed: 44 additions & 2 deletions

````diff
@@ -304,17 +304,59 @@ is triggered when a step is finished,
 i.e. all text deltas, tool calls, and tool results for the step are available.
 When you have multiple steps, the callback is triggered for each step.
 
-```tsx highlight="5-7"
+The callback receives a `stepNumber` (zero-based) to identify which step just completed:
+
+```tsx highlight="5-8"
 import { generateText } from 'ai';
 
 const result = await generateText({
   // ...
-  onStepFinish({ text, toolCalls, toolResults, finishReason, usage }) {
+  onStepFinish({
+    stepNumber,
+    text,
+    toolCalls,
+    toolResults,
+    finishReason,
+    usage,
+  }) {
+    console.log(`Step ${stepNumber} finished (${finishReason})`);
     // your own logic, e.g. for saving the chat history or recording usage
   },
 });
 ```
 
+### Tool execution lifecycle callbacks
+
+You can use `experimental_onToolCallStart` and `experimental_onToolCallFinish` to observe tool execution.
+These callbacks are called right before and after each tool's `execute` function, giving you
+visibility into tool execution timing, inputs, outputs, and errors:
+
+```tsx highlight="5-14"
+import { generateText } from 'ai';
+
+const result = await generateText({
+  // ... model, tools, prompt
+  experimental_onToolCallStart({ toolName, toolCallId, input }) {
+    console.log(`Calling tool: ${toolName}`, { toolCallId, input });
+  },
+  experimental_onToolCallFinish({
+    toolName,
+    toolCallId,
+    output,
+    error,
+    durationMs,
+  }) {
+    if (error) {
+      console.error(`Tool ${toolName} failed after ${durationMs}ms:`, error);
+    } else {
+      console.log(`Tool ${toolName} completed in ${durationMs}ms`, { output });
+    }
+  },
+});
+```
+
+Errors thrown inside these callbacks are silently caught and do not break the generation flow.
+
 ### `prepareStep` callback
 
 The `prepareStep` callback is called before a step is started.
````
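Beyond per-call logging, the two tool callbacks can feed aggregate metrics such as call counts, failure counts, and total execution time. A sketch of a hypothetical collector, assuming the callback payload fields documented above (the collector itself is not part of the SDK):

```typescript
// Hypothetical tool-call metrics collector built on the experimental
// callbacks. Payload fields follow the docs above; the collector is
// an app-level sketch, not part of the AI SDK.
type ToolFinishEvent = {
  toolName: string;
  toolCallId: string;
  output?: unknown; // undefined on error
  error?: unknown; // undefined on success
  durationMs: number;
};

function createToolMetrics() {
  const stats = { calls: 0, failures: 0, totalDurationMs: 0 };
  return {
    experimental_onToolCallStart() {
      stats.calls += 1;
    },
    experimental_onToolCallFinish({ error, durationMs }: ToolFinishEvent) {
      if (error !== undefined) stats.failures += 1;
      stats.totalDurationMs += durationMs;
    },
    stats,
  };
}

// The two methods would be spread into a generateText call;
// simulated here with a direct invocation:
const metrics = createToolMetrics();
metrics.experimental_onToolCallStart();
metrics.experimental_onToolCallFinish({
  toolName: 'weather',
  toolCallId: 'call_1',
  output: { temp: 18 },
  durationMs: 12,
});
console.log(metrics.stats); // calls: 1, failures: 0, totalDurationMs: 12
```

Because callback errors are silently swallowed by the SDK, a collector like this should stay defensive (e.g. avoid throwing) so that metric bugs do not go unnoticed.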
