
[Bug] Sending an image to the LLM appears to trigger a 500 error, possibly an image JSON format compatibility issue #8174

@yuuuuuouo

Description


What happened

When the model is asked to look at an image, the request fails consistently, throwing a 500 InternalServerError with error code max_retries.
At first I assumed the problem was my plugin tool returning mcp.types.ImageContent, but I then tried sending an image to the bot directly in chat, without any tool, and hit the same error.
Once the conversation context contains image data (e.g. an assembled {"type": "image_url", ...} structure), continuing the conversation also fails; removing the image data from the context restores normal behavior.

Reproduce

I ran some cross-checks to narrow down the cause:
Method one: send an image directly
Send an image to the bot from the chat client to trigger the model's vision path; the console reports the error.
Method two: inject a tiny image into the context JSON
Forcibly inject a minimal 1x1 (solid black) Base64 image into the context, to rule out the possibility that an oversized image is causing the API relay to time out.
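For reference, the tiny test image from method two can be generated with the Python standard library alone. The message shape below follows the standard OpenAI-style image_url content part; the surrounding construction is only a sketch of what gets assembled into the context, not AstrBot's actual code:

```python
import base64
import struct
import zlib

def _chunk(ctype: bytes, data: bytes) -> bytes:
    # A PNG chunk: 4-byte big-endian length, type, data, CRC32 over type+data.
    return struct.pack(">I", len(data)) + ctype + data + struct.pack(">I", zlib.crc32(ctype + data))

def tiny_black_png() -> bytes:
    # IHDR: 1x1 pixels, 8-bit depth, color type 2 (RGB), no interlacing.
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 2, 0, 0, 0)
    # One scanline: filter byte 0, then a single black RGB pixel.
    idat = zlib.compress(b"\x00\x00\x00\x00")
    return (b"\x89PNG\r\n\x1a\n"
            + _chunk(b"IHDR", ihdr)
            + _chunk(b"IDAT", idat)
            + _chunk(b"IEND", b""))

b64 = base64.b64encode(tiny_black_png()).decode("ascii")

# OpenAI-style multimodal user message carrying the image as a data URI.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
    ],
}
```

Injecting a message like this into the context still reproduces the failure even though the whole payload is on the order of a hundred bytes, which points away from image size or proxy timeouts and toward how the image part is formatted or forwarded.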

AstrBot version, deployment method (e.g., Windows Docker Desktop deployment), provider used, and messaging platform used

AstrBot version: v4.24.2
LLM provider and model: tried several API relay providers and multiple models; all fail the same way.
Operating system: Windows 10
Messaging platform adapter: QQ

OS

Windows

Logs

The console error output:
LLM response error: All chat models failed: InternalServerError: Error code: 500 - {'error': {'message': 'Max retries reached:', 'type': 'server_error', 'param': '', 'code': 'max_retries'}}
[v4.24.2] [runners.tool_loop_agent_runner:555]: Chat Model 90/[k]claude-sonnet-4-6 request error: Error code: 500 - {'error': {'message': 'Max retries reached:', 'type': 'server_error', 'param': '', 'code': 'max_retries'}}
Traceback (most recent call last):
  File "C:\Users\Administrator\Desktop\AstrBotLauncher-0.2.0\AstrBot\astrbot\core\agent\runners\tool_loop_agent_runner.py", line 510, in _iter_llm_responses_with_fallback
    async for attempt in retrying:
  File "C:\Users\Administrator\Desktop\AstrBotLauncher-0.2.0\AstrBot\venv\Lib\site-packages\tenacity\asyncio\__init__.py", line 170, in __anext__
    do = await self.iter(retry_state=self.retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\Desktop\AstrBotLauncher-0.2.0\AstrBot\venv\Lib\site-packages\tenacity\asyncio\__init__.py", line 157, in iter
    result = await action(retry_state)
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\Desktop\AstrBotLauncher-0.2.0\AstrBot\venv\Lib\site-packages\tenacity\_utils.py", line 111, in inner
    return call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\Desktop\AstrBotLauncher-0.2.0\AstrBot\venv\Lib\site-packages\tenacity\__init__.py", line 393, in <lambda>
    self._add_action_func(lambda rs: rs.outcome.result())
                                     ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 401, in __get_result
    raise self._exception
  File "C:\Users\Administrator\Desktop\AstrBotLauncher-0.2.0\AstrBot\astrbot\core\agent\runners\tool_loop_agent_runner.py", line 514, in _iter_llm_responses_with_fallback
    async for resp in self._iter_llm_responses(
  File "C:\Users\Administrator\Desktop\AstrBotLauncher-0.2.0\AstrBot\astrbot\core\agent\runners\tool_loop_agent_runner.py", line 477, in _iter_llm_responses
    yield await self.provider.text_chat(**payload)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\Desktop\AstrBotLauncher-0.2.0\AstrBot\astrbot\core\provider\sources\openai_source.py", line 1214, in text_chat
    ) = await self._handle_api_error(
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\Desktop\AstrBotLauncher-0.2.0\AstrBot\astrbot\core\provider\sources\openai_source.py", line 1160, in _handle_api_error
    raise e
  File "C:\Users\Administrator\Desktop\AstrBotLauncher-0.2.0\AstrBot\astrbot\core\provider\sources\openai_source.py", line 1202, in text_chat
    llm_response = await self._query(payloads, func_tool)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\Desktop\AstrBotLauncher-0.2.0\AstrBot\astrbot\core\provider\sources\openai_source.py", line 596, in _query
    completion = await self.client.chat.completions.create(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\Desktop\AstrBotLauncher-0.2.0\AstrBot\venv\Lib\site-packages\openai\resources\chat\completions\completions.py", line 2714, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\Desktop\AstrBotLauncher-0.2.0\AstrBot\venv\Lib\site-packages\openai\_base_client.py", line 1884, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\Desktop\AstrBotLauncher-0.2.0\AstrBot\venv\Lib\site-packages\openai\_base_client.py", line 1669, in request
    raise self._make_status_error_from_response(err.response) from None
openai.InternalServerError: Error code: 500 - {'error': {'message': 'Max retries reached:', 'type': 'server_error', 'param': '', 'code': 'max_retries'}}

Are you willing to submit a PR?

  • Yes!



    Labels

    area:provider (The bug / feature is about AI Provider, Models, LLM Agent, LLM Agent Runner), bug (Something isn't working)
