
Commit 765b013

feat(provider/google): add support for gemini-3.1-pro-preview (#12695)
## Background

Google released `gemini-3.1-pro-preview`, which needs to be added to the AI SDK provider packages and documentation. See: https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-1-pro/

## Summary

Adds `gemini-3.1-pro-preview` as a supported model ID to `@ai-sdk/google`, `@ai-sdk/google-vertex`, and `@ai-sdk/gateway`, and updates documentation and examples to reference the new model where it makes sense. Updates the `thinkingLevel` docs to note that Gemini 3.1 Pro supports the 'low', 'medium', and 'high' levels.

## Manual Verification

N/A

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

N/A

## Related Issues

N/A
1 parent b3e6f70 commit 765b013

File tree: 15 files changed (+30, -16 lines)


.changeset/flat-olives-talk.md

Lines changed: 7 additions & 0 deletions
````diff
@@ -0,0 +1,7 @@
+---
+'@ai-sdk/google-vertex': patch
+'@ai-sdk/gateway': patch
+'@ai-sdk/google': patch
+---
+
+feat(provider/google): add support for `gemini-3.1-pro-preview`
````

content/providers/01-ai-sdk-providers/15-google-generative-ai.mdx

Lines changed: 3 additions & 2 deletions
````diff
@@ -180,7 +180,7 @@ The following optional provider options are available for Google Generative AI m
 
 - **thinkingLevel** _'minimal' | 'low' | 'medium' | 'high'_
 
-  Optional. Controls the thinking depth for Gemini 3 models. Gemini 3 Pro supports 'low' and 'high', while Gemini 3 Flash supports all four levels: 'minimal', 'low', 'medium', and 'high'. Only supported by Gemini 3 models (`gemini-3-pro-preview` and later).
+  Optional. Controls the thinking depth for Gemini 3 models. Gemini 3.1 Pro supports 'low', 'medium', and 'high'; Gemini 3 Pro supports 'low' and 'high'; Gemini 3 Flash supports all four levels: 'minimal', 'low', 'medium', and 'high'. Only supported by Gemini 3 models.
 
 - **thinkingBudget** _number_
 
@@ -261,7 +261,7 @@ For Gemini 3 models, use the `thinkingLevel` parameter to control the depth of r
 import { google, GoogleLanguageModelOptions } from '@ai-sdk/google';
 import { generateText } from 'ai';
 
-const model = google('gemini-3-pro-preview');
+const model = google('gemini-3.1-pro-preview');
 
 const { text, reasoning } = await generateText({
   model: model,
@@ -1045,6 +1045,7 @@ The following Zod features are known to not work with Google Generative AI:
 
 | Model                                 | Image Input         | Object Generation   | Tool Usage          | Tool Streaming      | Google Search       | URL Context         |
 | ------------------------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
+| `gemini-3.1-pro-preview`              | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | `gemini-3-pro-preview`                | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | `gemini-3-pro-image-preview`          | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | `gemini-3-flash-preview`              | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
````
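The `thinkingLevel` support matrix described in this file's diff can be sketched as a small lookup table. This is an illustrative sketch only: the `supportedThinkingLevels` map and `supportsLevel` helper are hypothetical, not part of `@ai-sdk/google`; they simply encode the levels the updated docs list per Gemini 3 model.

```typescript
// Hypothetical helper: which thinkingLevel values each Gemini 3 model
// accepts, per the documentation change in this commit.
type ThinkingLevel = 'minimal' | 'low' | 'medium' | 'high';

const supportedThinkingLevels: Record<string, ThinkingLevel[]> = {
  'gemini-3.1-pro-preview': ['low', 'medium', 'high'],
  'gemini-3-pro-preview': ['low', 'high'],
  'gemini-3-flash-preview': ['minimal', 'low', 'medium', 'high'],
};

function supportsLevel(modelId: string, level: ThinkingLevel): boolean {
  // Unknown model IDs conservatively report no thinkingLevel support.
  return supportedThinkingLevels[modelId]?.includes(level) ?? false;
}

console.log(supportsLevel('gemini-3.1-pro-preview', 'medium')); // true
console.log(supportsLevel('gemini-3-pro-preview', 'medium')); // false
```

A check like this could guard provider options before a request is sent, assuming the level lists are kept in sync with the docs.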

content/providers/01-ai-sdk-providers/index.mdx

Lines changed: 2 additions & 0 deletions
````diff
@@ -45,9 +45,11 @@ Not all providers support all AI SDK features. Here's a quick comparison of the
 | [Anthropic](/providers/ai-sdk-providers/anthropic)                       | `claude-sonnet-4-0`        | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [Anthropic](/providers/ai-sdk-providers/anthropic)                       | `claude-3-7-sonnet-latest` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [Anthropic](/providers/ai-sdk-providers/anthropic)                       | `claude-3-5-haiku-latest`  | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+| [Google Generative AI](/providers/ai-sdk-providers/google-generative-ai) | `gemini-3.1-pro-preview`   | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [Google Generative AI](/providers/ai-sdk-providers/google-generative-ai) | `gemini-3-pro-preview`     | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [Google Generative AI](/providers/ai-sdk-providers/google-generative-ai) | `gemini-2.5-pro`           | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [Google Generative AI](/providers/ai-sdk-providers/google-generative-ai) | `gemini-2.5-flash`         | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+| [Google Vertex](/providers/ai-sdk-providers/google-vertex)               | `gemini-3.1-pro-preview`   | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [Google Vertex](/providers/ai-sdk-providers/google-vertex)               | `gemini-3-pro-preview`     | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [Google Vertex](/providers/ai-sdk-providers/google-vertex)               | `gemini-2.5-pro`           | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [Google Vertex](/providers/ai-sdk-providers/google-vertex)               | `gemini-2.5-flash`         | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
````

content/providers/03-community-providers/18-gemini-cli.mdx

Lines changed: 3 additions & 2 deletions
````diff
@@ -94,7 +94,7 @@ const model = gemini('gemini-2.5-pro');
 
 Supported models:
 
-- **gemini-3-pro-preview**: Latest model with enhanced reasoning (supports `thinkingLevel`)
+- **gemini-3.1-pro-preview**: Latest model with enhanced reasoning (supports `thinkingLevel`)
 - **gemini-3-flash-preview**: Fast Gemini 3 model (supports `thinkingLevel`)
 - **gemini-2.5-pro**: Production-ready model with 64K output tokens (supports `thinkingBudget`)
 - **gemini-2.5-flash**: Fast, efficient model with 64K output tokens (supports `thinkingBudget`)
@@ -118,7 +118,7 @@ const { text } = await generateText({
 ### Model Settings
 
 ```ts
-const model = gemini('gemini-3-pro-preview', {
+const model = gemini('gemini-3.1-pro-preview', {
   temperature: 0.7,
   topP: 0.95,
   topK: 40,
@@ -135,6 +135,7 @@ const model = gemini('gemini-3-pro-preview', {
 
 | Model                    | Image Input         | Object Generation   | Tool Usage          | Tool Streaming      |
 | ------------------------ | ------------------- | ------------------- | ------------------- | ------------------- |
+| `gemini-3.1-pro-preview` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | `gemini-3-pro-preview`   | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | `gemini-3-flash-preview` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | `gemini-2.5-pro`         | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
````
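The Model Settings snippet in this file's diff passes sampling settings (`temperature`, `topP`, `topK`) to the model constructor. As a hedged illustration, a pre-flight check for those settings might look like the sketch below; the `validateSettings` helper and the exact accepted ranges are assumptions, not part of the gemini-cli provider.

```typescript
// Hypothetical validator for sampling settings such as those passed to
// gemini('gemini-3.1-pro-preview', { temperature, topP, topK }).
// The ranges below are assumptions, not documented provider limits.
interface ModelSettings {
  temperature?: number;
  topP?: number;
  topK?: number;
}

function validateSettings(s: ModelSettings): string[] {
  const errors: string[] = [];
  if (s.temperature !== undefined && (s.temperature < 0 || s.temperature > 2)) {
    errors.push('temperature must be between 0 and 2');
  }
  if (s.topP !== undefined && (s.topP <= 0 || s.topP > 1)) {
    errors.push('topP must be in (0, 1]');
  }
  if (s.topK !== undefined && (!Number.isInteger(s.topK) || s.topK < 1)) {
    errors.push('topK must be a positive integer');
  }
  return errors;
}

console.log(validateSettings({ temperature: 0.7, topP: 0.95, topK: 40 })); // []
```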

content/providers/03-community-providers/31-opencode-sdk.mdx

Lines changed: 2 additions & 2 deletions
````diff
@@ -92,7 +92,7 @@ opencode(OpencodeModels['gpt-4o']); // openai/gpt-4o
 opencode(OpencodeModels['gpt-4o-mini']); // openai/gpt-4o-mini
 
 // Google Gemini
-opencode(OpencodeModels['gemini-3-pro']); // google/gemini-3-pro-preview
+opencode(OpencodeModels['gemini-3.1-pro-preview']); // google/gemini-3.1-pro-preview
 opencode(OpencodeModels['gemini-2.5-pro']); // google/gemini-2.5-pro
 opencode(OpencodeModels['gemini-2.5-flash']); // google/gemini-2.5-flash
 opencode(OpencodeModels['gemini-2.0-flash']); // google/gemini-2.0-flash
@@ -103,7 +103,7 @@ You can also use full model identifiers:
 ```ts
 opencode('openai/gpt-5.1-codex');
 opencode('openai/gpt-5.1-codex-max');
-opencode('google/gemini-3-pro-preview');
+opencode('google/gemini-3.1-pro-preview');
 ```
 
 ### Example
````
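The opencode snippets in this file's diff map short `OpencodeModels` keys to full `provider/model` identifiers. Below is a minimal sketch of that resolution pattern; the `modelAliases` map and `resolveModel` function are hypothetical stand-ins, not the real `OpencodeModels` export.

```typescript
// Hypothetical alias table mirroring the mapping shown in the diff above.
const modelAliases: Record<string, string> = {
  'gemini-3.1-pro-preview': 'google/gemini-3.1-pro-preview',
  'gemini-2.5-pro': 'google/gemini-2.5-pro',
  'gpt-4o': 'openai/gpt-4o',
};

function resolveModel(idOrAlias: string): string {
  // Full identifiers are already provider-prefixed; pass them through.
  if (idOrAlias.includes('/')) return idOrAlias;
  const resolved = modelAliases[idOrAlias];
  if (resolved === undefined) {
    throw new Error(`Unknown model alias: ${idOrAlias}`);
  }
  return resolved;
}

console.log(resolveModel('gemini-3.1-pro-preview')); // google/gemini-3.1-pro-preview
```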

content/providers/03-community-providers/47-apertis.mdx

Lines changed: 1 addition & 1 deletion
````diff
@@ -63,7 +63,7 @@ const model = apertis.chat('claude-sonnet-4.5');
 
 - **OpenAI**: `gpt-5.2`, `gpt-5.2-chat`, `gpt-5.2-pro`
 - **Anthropic**: `claude-opus-4-5-20251101`, `claude-sonnet-4.5`, `claude-haiku-4.5`
-- **Google**: `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-pro`
+- **Google**: `gemini-3.1-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-pro`
 - **Other**: `glm-4.7`, `minimax-m2.1`, and 470+ more
 
 ## Embedding Models
````

content/providers/03-community-providers/49-cencori.mdx

Lines changed: 1 addition & 1 deletion
````diff
@@ -64,7 +64,7 @@ const opus = cencori('claude-3-opus');
 
 // Google Gemini models
 const gemini = cencori('gemini-2.5-flash');
-const geminiPro = cencori('gemini-3-pro');
+const geminiPro = cencori('gemini-3.1-pro-preview');
 
 // Other providers
 const mistral = cencori('mistral-large');
````

examples/ai-functions/src/generate-text/google-vertex-tool-call.ts

Lines changed: 1 addition & 1 deletion
````diff
@@ -5,7 +5,7 @@ import { run } from '../lib/run';
 
 run(async () => {
   const { text } = await generateText({
-    model: vertex('gemini-3-pro-preview'),
+    model: vertex('gemini-3.1-pro-preview'),
     prompt: 'What is the weather in New York City? ',
     tools: {
       weather: tool({
````

examples/ai-functions/src/generate-text/openai-compatible-google-thought-signatures.ts

Lines changed: 1 addition & 1 deletion
````diff
@@ -12,7 +12,7 @@ run(async () => {
     },
   });
 
-  const model = googleOpenAI.chatModel('gemini-3-pro-preview');
+  const model = googleOpenAI.chatModel('gemini-3.1-pro-preview');
 
   const tools = {
     check_flight: tool({
````

examples/ai-functions/src/stream-text/google-multiturn-tool-error.ts

Lines changed: 3 additions & 3 deletions
````diff
@@ -15,7 +15,7 @@ run(async () => {
 
   console.log('=== turn 1: tool call that will naturally fail ===');
   const turn1 = streamText({
-    model: google('gemini-3-pro-preview'),
+    model: google('gemini-3.1-pro-preview'),
    tools: {
      readuserdata: tool({
        description: 'read user data from file',
@@ -127,7 +127,7 @@ run(async () => {
 
   try {
     const turn2 = streamText({
-      model: google('gemini-3-pro-preview'),
+      model: google('gemini-3.1-pro-preview'),
      messages: messagesForTurn2,
      includeRawChunks: true,
      tools: {
@@ -181,7 +181,7 @@ run(async () => {
   ];
 
   const turn3 = streamText({
-    model: google('gemini-3-pro-preview'),
+    model: google('gemini-3.1-pro-preview'),
    messages: messagesForTurn3,
    includeRawChunks: true,
    tools: {
````
