Description
Summary
When a Gemini agent runs before an OpenAI agent in a Strands Swarm, Gemini's `reasoningContent` blocks are stored in the shared conversation history. When that history is later passed to `OpenAIResponsesModel._format_request_messages`, the method raises `TypeError: content_type=<reasoningContent> | unsupported type` because the content-type filter only skips `toolResult` and `toolUse` blocks — not `reasoningContent`.
Library version: strands-agents 1.33.0
Python: 3.12
Steps to Reproduce
```python
from strands import Agent
from strands.multiagent import Swarm
from strands.models.gemini import GeminiModel
from strands.models.openai_responses import OpenAIResponsesModel
from google.genai import types as _gtypes

# GOOGLE_API_KEY and OPENAI_API_KEY are assumed to be defined elsewhere
# (e.g. read from the environment).

gemini = Agent(
    model=GeminiModel(
        model_id="gemini-3.1-pro-preview",
        client_args={"api_key": GOOGLE_API_KEY},
        gemini_tools=[_gtypes.Tool(google_search=_gtypes.GoogleSearch())],
        params={
            "max_output_tokens": 512,
            "tool_config": _gtypes.ToolConfig(
                include_server_side_tool_invocations=True
            ),
        },
    ),
    system_prompt="Be brief.",
)
gemini.name = "Gemini"

openai_agent = Agent(
    model=OpenAIResponsesModel(
        client_args={"api_key": OPENAI_API_KEY},
        model_id="gpt-4.1",
        params={"max_output_tokens": 256},
    ),
    system_prompt="Be brief.",
)
openai_agent.name = "OpenAI"

# Gemini first → OpenAI second
swarm = Swarm([gemini, openai_agent], entry_point=gemini, max_handoffs=2)
result = swarm("In one sentence say what AI is, then call handoff_to_agent to pass to the next agent.")
```

Expected Behavior
Both agents execute successfully.
Actual Behavior
```
TypeError: content_type=<reasoningContent> | unsupported type
```

Additionally, in some runs (when Gemini invokes built-in tools such as Google Search), an intermittent secondary error is observed:

```
openai.APIError: An error occurred while processing the request.
```

The reverse ordering (OpenAI → Gemini) works correctly in both cases.
Root Cause
Where reasoningContent is produced
`streaming.py` `handle_content_block_stop` (lines 307–318) appends `reasoningContent` blocks to the agent's conversation history when a Gemini model streams a reasoning response. These are Gemini-specific and have no equivalent in OpenAI's API.
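For illustration, here is roughly what such a message looks like once it lands in the shared Swarm history. The nested `reasoningText` field names are an assumption based on strands' Bedrock-style content-block conventions — check `streaming.py` for the exact structure:

```python
# Hypothetical sketch of an assistant message in the shared history after a
# Gemini reasoning turn. The inner "reasoningText" shape is assumed, not
# taken from the strands source.
message = {
    "role": "assistant",
    "content": [
        # Gemini-specific reasoning trace, opaque to other providers:
        {"reasoningContent": {"reasoningText": {"text": "First, recall that AI..."}}},
        # The visible response text that all providers understand:
        {"text": "AI is the simulation of human intelligence by machines."},
    ],
}
```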
Where the crash occurs
`openai_responses.py` `_format_request_messages` (lines 469–472):

```python
formatted_contents = [
    cls._format_request_message_content(content, role=role)
    for content in contents
    if not any(block_type in content for block_type in ["toolResult", "toolUse"])
]
```

`reasoningContent` is not in the skip list. When `_format_request_message_content` receives `{"reasoningContent": {...}}`, none of the `"document"`, `"image"`, or `"text"` branches match, so it falls through to line 534:

```python
raise TypeError(f"content_type=<{next(iter(content))}> | unsupported type")
```
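The filter behavior can be reproduced standalone (this is a minimal mock of the list comprehension above, not the real method):

```python
# Simulated content blocks from a mixed-provider history.
contents = [
    {"toolUse": {"name": "handoff_to_agent"}},                 # skipped by the filter
    {"reasoningContent": {"reasoningText": {"text": "..."}}},  # NOT skipped
    {"text": "AI is ..."},                                     # supported
]

# The current skip list only covers the two tool block types.
surviving = [
    c for c in contents
    if not any(block_type in c for block_type in ["toolResult", "toolUse"])
]

print([next(iter(c)) for c in surviving])  # → ['reasoningContent', 'text']
```

The `reasoningContent` block survives the filter and reaches the unsupported-type branch, producing the `TypeError`.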
Proposed Fix
Add reasoningContent to the set of provider-specific block types that are dropped when formatting for a different provider:
```python
_PROVIDER_SPECIFIC_BLOCKS = {"toolResult", "toolUse", "reasoningContent"}

formatted_contents = [
    cls._format_request_message_content(content, role=role)
    for content in contents
    if not any(block_type in content for block_type in _PROVIDER_SPECIFIC_BLOCKS)
]
```

`reasoningContent` carries Gemini's internal reasoning traces and has no OpenAI equivalent. Dropping it silently is safe: the surrounding text blocks already contain the visible response content.
A more defensive long-term fix would be an allow-list approach (only pass known block types) rather than a deny-list, to protect against future provider-specific block types leaking across providers in mixed-model Swarms.
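A rough sketch of that allow-list approach, assuming the supported set mirrors the `"document"`, `"image"`, and `"text"` branches named above (tool blocks are handled separately in the real formatter, so this is illustrative only):

```python
# Hedged sketch: forward only block types the OpenAI formatter understands;
# any unknown provider-specific block is dropped instead of raising.
_OPENAI_SUPPORTED_BLOCKS = {"document", "image", "text"}

def is_supported(content: dict) -> bool:
    """Return True if the block's single top-level key is a known type."""
    return next(iter(content)) in _OPENAI_SUPPORTED_BLOCKS

contents = [
    {"reasoningContent": {"reasoningText": {"text": "..."}}},  # dropped
    {"text": "hello"},                                         # kept
]

print([next(iter(c)) for c in contents if is_supported(c)])  # → ['text']
```

This keeps `_format_request_messages` robust even if a future provider introduces yet another history-only block type.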