fix: support gpt-5 (#8945)

* Update models.ts

* Update models.ts

* Update models.ts

* feat: add OpenAI verbosity setting for GPT-5 model

Introduces a new 'verbosity' option for the OpenAI GPT-5 model, allowing users to control the level of detail in model output. Updates settings state, migration logic, UI components, and i18n translations to support this feature.

* fix(models): correct the gpt-5 model detection logic to support model IDs containing gpt-5

* fix(i18n): correct translation errors in Traditional Chinese and Greek

* fix(models): refine the OpenAI reasoning model detection logic

* fix(OpenAIResponseAPIClient): stop adding stream_options for the Response API

* fix: update OpenAI model check and add verbosity setting

Changed GPT-5 model detection to use includes instead of strict equality. Added a default 'verbosity' property to the OpenAI settings in the migration logic.
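The relaxed detection can be sketched as a substring check. This is a simplified standalone version; the actual helper in models.ts takes a Model object and normalizes the ID first:

```typescript
// Sketch of the relaxed check: a substring match instead of strict equality,
// so dated or suffixed IDs such as "gpt-5-mini-2025-01" still qualify.
const isGPT5SeriesModel = (modelId: string): boolean =>
  modelId.toLowerCase().includes('gpt-5')
```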

* feat(models): add icons and configuration for the GPT-5 series models

Add icon files for GPT-5, GPT-5-chat, GPT-5-mini, and GPT-5-nano, and configure the corresponding model logos in models.ts.

* Merge branch 'main' into fix-gpt5

* Add verbosity setting to OpenAI API client

Introduces a getVerbosity method in BaseApiClient to retrieve verbosity from settings, and passes this value in the OpenAIResponseAPIClient request payload. This enables configurable response verbosity for OpenAI API interactions.
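The fallback behaviour of such a getter can be sketched as follows. This is a simplified, standalone version: the real method reads the value from the application's settings store, while here the stored value is passed in directly:

```typescript
type OpenAIVerbosity = 'low' | 'medium' | 'high'

// Sketch of the verbosity lookup: any missing or unrecognized stored value
// degrades to the 'medium' default instead of throwing.
function getVerbosity(stored: unknown): OpenAIVerbosity {
  if (stored === 'low' || stored === 'medium' || stored === 'high') {
    return stored
  }
  return 'medium'
}
```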

* Upgrade OpenAI package to 5.12.2 and update patch

Upgraded the OpenAI dependency from version 5.12.0 to 5.12.2 and updated the related patch files and references in package.json and yarn.lock. Also updated a log message in BaseApiClient.ts for clarity.

* fix: add type and property checks for tool call handling

Improves robustness by adding explicit checks for 'function' property and 'type' when parsing tool calls and estimating tokens. Also adds error handling for unknown tool call types in mcp-tools and updates related test logic.

* feat(model config): add gpt-5 model support and related configuration

- Add gpt5 support to the model types
- Add a detection function for the gpt5 series models
- Update the reasoning option configuration and i18n text
- Adjust the effort ratio values

* fix(ThinkingButton): add a minimal-to-low option fallback mapping for gpt-5 and later models
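The idea behind the fallback mapping, in a simplified standalone form (the actual table lives in ThinkingButton.tsx and covers a few more options, such as auto):

```typescript
type ThinkingOption = 'off' | 'minimal' | 'low' | 'medium' | 'high'

// Fallback table sketch: when a model does not support the selected option,
// substitute the nearest option it does support (e.g. minimal -> low).
const OPTION_FALLBACK: Record<ThinkingOption, ThinkingOption> = {
  off: 'low',
  minimal: 'low',
  low: 'high',
  medium: 'high',
  high: 'high'
}

const resolveOption = (option: ThinkingOption, supported: ThinkingOption[]): ThinkingOption =>
  supported.includes(option) ? option : OPTION_FALLBACK[option]
```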

* feat(i18n): update the Chinese translations for reasoning chain length and adjust the matching icons

Add the Chinese translation "微念" for the "minimal" reasoning chain length option, and adjust the lightbulb icon brightness used for each option.

* feat(i18n): add a "minimal" option to the reasoning effort setting and adjust the English copy

* fix: openai patch

* wip: OpenAISettingsGroup display

* fix: correct the rendering logic in the OpenAISettingsGroup component under the GPT-5 condition

* refactor(OpenAISettingsGroup): improve the grouping and divider logic for settings rows

* feat(model config): add gpt-5 to the visionAllowedModels list

* feat(model config): add gpt-5 to the function calling support list

Add gpt-5 and its variants to the FUNCTION_CALLING_MODELS support list, and add gpt-5-chat to the exclusion list.

* fix: add gpt-5-chat support to the OpenAI reasoning model check

* Update OpenAISettingsGroup.tsx

* feat(model support): add a check for models that support verbosity

Add an isSupportVerbosityModel function to determine whether a model supports verbosity.
Replace the previous isGPT5SeriesModel checks so the new function is used consistently.
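A standalone sketch of the new check (the real function operates on a Model object and normalizes the ID via getLowerBaseModelName):

```typescript
// Verbosity applies to the GPT-5 series except the chat variant,
// which is a plain conversational model without the verbosity control.
const isSupportVerbosityModel = (modelId: string): boolean => {
  const id = modelId.toLowerCase()
  return id.includes('gpt-5') && !id.includes('chat')
}
```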

* fix: correct the detection logic for models that support verbosity

Use getLowerBaseModelName to normalize model IDs so the comparison is case-insensitive.
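Roughly what the normalization does, as a hypothetical simplification of getLowerBaseModelName (the real helper may handle more separator cases):

```typescript
// Strip a provider prefix such as "openai/" and lowercase the rest,
// so "OpenAI/GPT-5" and "gpt-5" compare equal.
const getLowerBaseModelName = (modelId: string, separator = '/'): string => {
  const parts = modelId.split(separator)
  return parts[parts.length - 1].toLowerCase()
}
```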

* feat: add web search support for gpt-5 models, excluding the chat variant

* fix(models): add the missing 'off' option to the gpt5 supported options

* fix: add gpt-5 to the list of models that support the Flex service tier

* refactor(aiCore): improve the OpenAI verbosity type definition and usage

Remove the redundant OpenAIVerbosity import from OpenAIResponseAPIClient.
Declare OpenAIVerbosity as the explicit return type of getVerbosity in BaseApiClient.
Simplify the verbosity type assertion in OpenAIResponseAPIClient.

* fix(openai): only add the verbosity parameter for models that support it

* fix(i18n): correct inconsistent translations in the OpenAI settings

* fix: modify low effort ratio

* fix(openai): fix GPT-5 series models being unable to use minimal reasoning_effort when web search is enabled

* fix(openai): fix GPT-5 series models being unable to use minimal reasoning when web search is enabled
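The guard applied in both clients can be sketched as follows (simplified; the function name and flat signature are illustrative — the actual code mutates the reasoning payload in place):

```typescript
type ReasoningEffort = 'minimal' | 'low' | 'medium' | 'high'

// The API rejects minimal reasoning effort together with the web_search
// tool, so minimal is bumped to low before the request is built.
function resolveReasoningEffort(effort: ReasoningEffort, webSearchEnabled: boolean): ReasoningEffort {
  return effort === 'minimal' && webSearchEnabled ? 'low' : effort
}
```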

---------

Co-authored-by: icarus <eurfelux@gmail.com>
Co-authored-by: Phantom <59059173+EurFelux@users.noreply.github.com>
This commit is contained in:
Pleasure1234 2025-08-10 14:27:26 +08:00 committed by GitHub
parent 0b89e9a8f9
commit 27c9ceab9f
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
31 changed files with 355 additions and 91 deletions

View File

@ -1 +1,8 @@
NODE_OPTIONS=--max-old-space-size=8000
API_KEY="sk-xxx"
BASE_URL="https://api.siliconflow.cn/v1/"
MODEL="Qwen/Qwen3-235B-A22B-Instruct-2507"
CSLOGGER_MAIN_LEVEL=info
CSLOGGER_RENDERER_LEVEL=info
#CSLOGGER_MAIN_SHOW_MODULES=
#CSLOGGER_RENDERER_SHOW_MODULES=

View File

@ -216,7 +216,7 @@
"motion": "^12.10.5",
"notion-helper": "^1.3.22",
"npx-scope-finder": "^1.2.0",
"openai": "patch:openai@npm%3A5.12.0#~/.yarn/patches/openai-npm-5.12.0-a06a6369b2.patch",
"openai": "patch:openai@npm%3A5.12.2#~/.yarn/patches/openai-npm-5.12.2-30b075401c.patch",
"p-queue": "^8.1.0",
"pdf-lib": "^1.17.1",
"playwright": "^1.52.0",
@ -274,10 +274,8 @@
"@langchain/openai@npm:^0.3.16": "patch:@langchain/openai@npm%3A0.3.16#~/.yarn/patches/@langchain-openai-npm-0.3.16-e525b59526.patch",
"@langchain/openai@npm:>=0.1.0 <0.4.0": "patch:@langchain/openai@npm%3A0.3.16#~/.yarn/patches/@langchain-openai-npm-0.3.16-e525b59526.patch",
"libsql@npm:^0.4.4": "patch:libsql@npm%3A0.4.7#~/.yarn/patches/libsql-npm-0.4.7-444e260fb1.patch",
"openai@npm:^4.77.0": "patch:openai@npm%3A5.12.0#~/.yarn/patches/openai-npm-5.12.0-a06a6369b2.patch",
"pkce-challenge@npm:^4.1.0": "patch:pkce-challenge@npm%3A4.1.0#~/.yarn/patches/pkce-challenge-npm-4.1.0-fbc51695a3.patch",
"app-builder-lib@npm:26.0.13": "patch:app-builder-lib@npm%3A26.0.13#~/.yarn/patches/app-builder-lib-npm-26.0.13-a064c9e1d0.patch",
"openai@npm:^4.87.3": "patch:openai@npm%3A5.12.0#~/.yarn/patches/openai-npm-5.12.0-a06a6369b2.patch",
"app-builder-lib@npm:26.0.15": "patch:app-builder-lib@npm%3A26.0.15#~/.yarn/patches/app-builder-lib-npm-26.0.15-360e5b0476.patch",
"@langchain/core@npm:^0.3.26": "patch:@langchain/core@npm%3A0.3.44#~/.yarn/patches/@langchain-core-npm-0.3.44-41d5c3cb0a.patch",
"node-abi": "4.12.0",
@ -285,7 +283,9 @@
"vite": "npm:rolldown-vite@latest",
"atomically@npm:^1.7.0": "patch:atomically@npm%3A1.7.0#~/.yarn/patches/atomically-npm-1.7.0-e742e5293b.patch",
"file-stream-rotator@npm:^0.6.1": "patch:file-stream-rotator@npm%3A0.6.1#~/.yarn/patches/file-stream-rotator-npm-0.6.1-eab45fb13d.patch",
"windows-system-proxy@npm:^1.0.0": "patch:windows-system-proxy@npm%3A1.0.0#~/.yarn/patches/windows-system-proxy-npm-1.0.0-ff2a828eec.patch"
"windows-system-proxy@npm:^1.0.0": "patch:windows-system-proxy@npm%3A1.0.0#~/.yarn/patches/windows-system-proxy-npm-1.0.0-ff2a828eec.patch",
"openai@npm:^4.77.0": "patch:openai@npm%3A5.12.2#~/.yarn/patches/openai-npm-5.12.2-30b075401c.patch",
"openai@npm:^4.87.3": "patch:openai@npm%3A5.12.2#~/.yarn/patches/openai-npm-5.12.2-30b075401c.patch"
},
"packageManager": "yarn@4.9.1",
"lint-staged": {

View File

@ -23,6 +23,7 @@ import {
MemoryItem,
Model,
OpenAIServiceTiers,
OpenAIVerbosity,
Provider,
SystemProviderIds,
ToolCallResponse,
@ -233,6 +234,21 @@ export abstract class BaseApiClient<
return serviceTierSetting
}
protected getVerbosity(): OpenAIVerbosity {
try {
const state = window.store?.getState()
const verbosity = state?.settings?.openAI?.verbosity
if (verbosity && ['low', 'medium', 'high'].includes(verbosity)) {
return verbosity
}
} catch (error) {
logger.warn('Failed to get verbosity from state:', error as Error)
}
return 'medium'
}
protected getTimeout(model: Model) {
if (isSupportFlexServiceTierModel(model)) {
return 15 * 1000 * 60

View File

@ -6,6 +6,7 @@ import {
getOpenAIWebSearchParams,
getThinkModelType,
isDoubaoThinkingAutoModel,
isGPT5SeriesModel,
isGrokReasoningModel,
isNotSupportSystemMessageModel,
isQwenAlwaysThinkModel,
@ -391,10 +392,14 @@ export class OpenAIAPIClient extends OpenAIBaseClient<
): ToolCallResponse {
let parsedArgs: any
try {
if ('function' in toolCall) {
parsedArgs = JSON.parse(toolCall.function.arguments)
}
} catch {
if ('function' in toolCall) {
parsedArgs = toolCall.function.arguments
}
}
return {
id: toolCall.id,
toolCallId: toolCall.id,
@ -471,7 +476,10 @@ export class OpenAIAPIClient extends OpenAIBaseClient<
}
if ('tool_calls' in message && message.tool_calls) {
sum += message.tool_calls.reduce((acc, toolCall) => {
if (toolCall.type === 'function' && 'function' in toolCall) {
return acc + estimateTextTokens(JSON.stringify(toolCall.function.arguments))
}
return acc
}, 0)
}
return sum
@ -572,6 +580,13 @@ export class OpenAIAPIClient extends OpenAIBaseClient<
// Note: Some providers like Mistral don't support stream_options
const shouldIncludeStreamOptions = streamOutput && isSupportStreamOptionsProvider(this.provider)
const reasoningEffort = this.getReasoningEffort(assistant, model)
// minimal cannot be used with web_search tool
if (isGPT5SeriesModel(model) && reasoningEffort.reasoning_effort === 'minimal' && enableWebSearch) {
reasoningEffort.reasoning_effort = 'low'
}
const commonParams: OpenAISdkParams = {
model: model.id,
messages:
@ -587,7 +602,7 @@ export class OpenAIAPIClient extends OpenAIBaseClient<
// groq has a different service tier configuration that does not match the openai interface types
service_tier: this.getServiceTier(model) as OpenAIServiceTier,
...this.getProviderSpecificParameters(assistant, model),
...this.getReasoningEffort(assistant, model),
...reasoningEffort,
...getOpenAIWebSearchParams(model, enableWebSearch),
// OpenRouter usage tracking
...(this.provider.id === 'openrouter' ? { usage: { include: true } } : {}),
@ -901,8 +916,10 @@ export class OpenAIAPIClient extends OpenAIBaseClient<
type: 'function'
}
} else if (fun?.arguments) {
if (toolCalls[index] && toolCalls[index].type === 'function' && 'function' in toolCalls[index]) {
toolCalls[index].function.arguments += fun.arguments
}
}
} else {
toolCalls.push(toolCall)
}

View File

@ -2,12 +2,14 @@ import { loggerService } from '@logger'
import { GenericChunk } from '@renderer/aiCore/middleware/schemas'
import { CompletionsContext } from '@renderer/aiCore/middleware/types'
import {
isGPT5SeriesModel,
isOpenAIChatCompletionOnlyModel,
isOpenAILLMModel,
isSupportedReasoningEffortOpenAIModel,
isSupportVerbosityModel,
isVisionModel
} from '@renderer/config/models'
import { isSupportDeveloperRoleProvider, isSupportStreamOptionsProvider } from '@renderer/config/providers'
import { isSupportDeveloperRoleProvider } from '@renderer/config/providers'
import { estimateTextTokens } from '@renderer/services/TokenService'
import {
FileMetadata,
@ -304,8 +306,7 @@ export class OpenAIResponseAPIClient extends OpenAIBaseClient<
const content = this.convertResponseToMessageContent(output)
const newReqMessages = [...currentReqMessages, ...content, ...(toolResults || [])]
return newReqMessages
return [...currentReqMessages, ...content, ...(toolResults || [])]
}
override estimateMessageTokens(message: OpenAIResponseSdkMessageParam): number {
@ -442,7 +443,12 @@ export class OpenAIResponseAPIClient extends OpenAIBaseClient<
tools = tools.concat(extraTools)
const shouldIncludeStreamOptions = streamOutput && isSupportStreamOptionsProvider(this.provider)
const reasoningEffort = this.getReasoningEffort(assistant, model)
// minimal cannot be used with web_search tool
if (isGPT5SeriesModel(model) && reasoningEffort.reasoning?.effort === 'minimal' && enableWebSearch) {
reasoningEffort.reasoning.effort = 'low'
}
const commonParams: OpenAIResponseSdkParams = {
model: model.id,
@ -454,10 +460,16 @@ export class OpenAIResponseAPIClient extends OpenAIBaseClient<
top_p: this.getTopP(assistant, model),
max_output_tokens: maxTokens,
stream: streamOutput,
...(shouldIncludeStreamOptions ? { stream_options: { include_usage: true } } : {}),
tools: !isEmpty(tools) ? tools : undefined,
// groq has a different service tier configuration that does not match the openai interface types
service_tier: this.getServiceTier(model) as OpenAIServiceTier,
...(isSupportVerbosityModel(model)
? {
text: {
verbosity: this.getVerbosity()
}
}
: {}),
...(this.getReasoningEffort(assistant, model) as OpenAI.Reasoning),
// Only apply custom parameters in conversation scenarios, to avoid affecting translation, summarization, and other flows
// Note: user-defined parameters should always override other parameters

Binary file not shown.

After

Width:  |  Height:  |  Size: 30 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 28 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 28 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 26 KiB

View File

@ -56,6 +56,18 @@ export function MdiLightbulbOn10(props: SVGProps<SVGSVGElement>) {
)
}
export function MdiLightbulbOn30(props: SVGProps<SVGSVGElement>) {
return (
<svg xmlns="http://www.w3.org/2000/svg" width="1em" height="1em" viewBox="0 0 24 24" {...props}>
{/* Icon from Material Design Icons by Pictogrammers - https://github.com/Templarian/MaterialDesign/blob/master/LICENSE */}
<path
fill="currentColor"
d="M7 5.6L5.6 7L3.5 4.9L4.9 3.5L7 5.6M1 13H4V11H1V13M13 1H11V4H13V1M18 12C18 14.2 16.8 16.2 15 17.2V19C15 19.6 14.6 20 14 20H10C9.4 20 9 19.6 9 19V17.2C7.2 16.2 6 14.2 6 12C6 8.7 8.7 6 12 6S18 8.7 18 12M16 12C16 9.79 14.21 8 12 8S8 9.79 8 12C8 13.2 8.54 14.27 9.38 15H14.62C15.46 14.27 16 13.2 16 12M10 22C10 22.6 10.4 23 11 23H13C13.6 23 14 22.6 14 22V21H10V22M20 11V13H23V11H20M19.1 3.5L17 5.6L18.4 7L20.5 4.9L19.1 3.5Z"
/>
</svg>
)
}
export function MdiLightbulbOn50(props: SVGProps<SVGSVGElement>) {
return (
<svg xmlns="http://www.w3.org/2000/svg" width="1em" height="1em" viewBox="0 0 24 24" {...props}>
@ -67,6 +79,17 @@ export function MdiLightbulbOn50(props: SVGProps<SVGSVGElement>) {
)
}
export function MdiLightbulbOn80(props: SVGProps<SVGSVGElement>) {
return (
<svg xmlns="http://www.w3.org/2000/svg" width="1em" height="1em" viewBox="0 0 24 24" {...props}>
{/* Icon from Material Design Icons by Pictogrammers - https://github.com/Templarian/MaterialDesign/blob/master/LICENSE */}
<path
fill="currentColor"
d="M7 5.6L5.6 7L3.5 4.9L4.9 3.5L7 5.6M1 13H4V11H1V13M13 1H11V4H13V1M10 22C10 22.6 10.4 23 11 23H13C13.6 23 14 22.6 14 22V21H10V22M20 11V13H23V11H20M19.1 3.5L17 5.6L18.4 7L20.5 4.9L19.1 3.5M18 12C18 14.2 16.8 16.2 15 17.2V19C15 19.6 14.6 20 14 20H10C9.4 20 9 19.6 9 19V17.2C7.2 16.2 6 14.2 6 12C6 8.7 8.7 6 12 6S18 8.7 18 12M8.56 10H15.44C14.75 8.81 13.5 8 12 8S9.25 8.81 8.56 10Z"
/>
</svg>
)
}
export function MdiLightbulbOn90(props: SVGProps<SVGSVGElement>) {
return (
<svg xmlns="http://www.w3.org/2000/svg" width="1em" height="1em" viewBox="0 0 24 24" {...props}>
@ -77,3 +100,15 @@ export function MdiLightbulbOn90(props: SVGProps<SVGSVGElement>) {
</svg>
)
}
export function MdiLightbulbOn(props: SVGProps<SVGSVGElement>) {
// {/* Icon from Material Design Icons by Pictogrammers - https://github.com/Templarian/MaterialDesign/blob/master/LICENSE */}
return (
<svg xmlns="http://www.w3.org/2000/svg" width="1em" height="1em" viewBox="0 0 24 24" {...props}>
<path
fill="currentColor"
d="M12,6A6,6 0 0,1 18,12C18,14.22 16.79,16.16 15,17.2V19A1,1 0 0,1 14,20H10A1,1 0 0,1 9,19V17.2C7.21,16.16 6,14.22 6,12A6,6 0 0,1 12,6M14,21V22A1,1 0 0,1 13,23H11A1,1 0 0,1 10,22V21H14M20,11H23V13H20V11M1,11H4V13H1V11M13,1V4H11V1H13M4.92,3.5L7.05,5.64L5.63,7.05L3.5,4.93L4.92,3.5M16.95,5.63L19.07,3.5L20.5,4.93L18.37,7.05L16.95,5.63Z"
/>
</svg>
)
}

View File

@ -57,6 +57,10 @@ import {
} from '@renderer/assets/images/models/gpt_dark.png'
import ChatGPTImageModelLogo from '@renderer/assets/images/models/gpt_image_1.png'
import ChatGPTo1ModelLogo from '@renderer/assets/images/models/gpt_o1.png'
import GPT5ModelLogo from '@renderer/assets/images/models/gpt-5.png'
import GPT5ChatModelLogo from '@renderer/assets/images/models/gpt-5-chat.png'
import GPT5MiniModelLogo from '@renderer/assets/images/models/gpt-5-mini.png'
import GPT5NanoModelLogo from '@renderer/assets/images/models/gpt-5-nano.png'
import GrokModelLogo from '@renderer/assets/images/models/grok.png'
import GrokModelLogoDark from '@renderer/assets/images/models/grok_dark.png'
import GrypheModelLogo from '@renderer/assets/images/models/gryphe.png'
@ -185,6 +189,7 @@ const visionAllowedModels = [
'gpt-4.1(?:-[\\w-]+)?',
'gpt-4o(?:-[\\w-]+)?',
'gpt-4.5(?:-[\\w-]+)',
'gpt-5(?:-[\\w-]+)?',
'chatgpt-4o(?:-[\\w-]+)?',
'o1(?:-[\\w-]+)?',
'o3(?:-[\\w-]+)?',
@ -247,6 +252,7 @@ export const FUNCTION_CALLING_MODELS = [
'gpt-4',
'gpt-4.5',
'gpt-oss(?:-[\\w-]+)',
'gpt-5(?:-[\\w-]+)?',
'o(1|3|4)(?:-[\\w-]+)?',
'claude',
'qwen',
@ -269,7 +275,8 @@ const FUNCTION_CALLING_EXCLUDED_MODELS = [
'o1-preview',
'AIDC-AI/Marco-o1',
'gemini-1(?:\\.[\\w-]+)?',
'qwen-mt(?:-[\\w-]+)?'
'qwen-mt(?:-[\\w-]+)?',
'gpt-5-chat(?:-[\\w-]+)?'
]
export const FUNCTION_CALLING_REGEX = new RegExp(
@ -285,6 +292,7 @@ export const CLAUDE_SUPPORTED_WEBSEARCH_REGEX = new RegExp(
// Map from model type to supported reasoning_effort values
export const MODEL_SUPPORTED_REASONING_EFFORT: ReasoningEffortConfig = {
default: ['low', 'medium', 'high'] as const,
gpt5: ['minimal', 'low', 'medium', 'high'] as const,
grok: ['low', 'high'] as const,
gemini: ['low', 'medium', 'high', 'auto'] as const,
gemini_pro: ['low', 'medium', 'high', 'auto'] as const,
@ -299,18 +307,22 @@ export const MODEL_SUPPORTED_REASONING_EFFORT: ReasoningEffortConfig = {
// Map from model type to supported options
export const MODEL_SUPPORTED_OPTIONS: ThinkingOptionConfig = {
default: ['off', ...MODEL_SUPPORTED_REASONING_EFFORT.default] as const,
grok: [...MODEL_SUPPORTED_REASONING_EFFORT.grok] as const,
gpt5: ['off', ...MODEL_SUPPORTED_REASONING_EFFORT.gpt5] as const,
grok: MODEL_SUPPORTED_REASONING_EFFORT.grok,
gemini: ['off', ...MODEL_SUPPORTED_REASONING_EFFORT.gemini] as const,
gemini_pro: [...MODEL_SUPPORTED_REASONING_EFFORT.gemini_pro] as const,
gemini_pro: MODEL_SUPPORTED_REASONING_EFFORT.gemini_pro,
qwen: ['off', ...MODEL_SUPPORTED_REASONING_EFFORT.qwen] as const,
qwen_thinking: [...MODEL_SUPPORTED_REASONING_EFFORT.qwen_thinking] as const,
qwen_thinking: MODEL_SUPPORTED_REASONING_EFFORT.qwen_thinking,
doubao: ['off', ...MODEL_SUPPORTED_REASONING_EFFORT.doubao] as const,
hunyuan: ['off', ...MODEL_SUPPORTED_REASONING_EFFORT.hunyuan] as const,
zhipu: ['off', ...MODEL_SUPPORTED_REASONING_EFFORT.zhipu] as const,
perplexity: [...MODEL_SUPPORTED_REASONING_EFFORT.perplexity] as const
perplexity: MODEL_SUPPORTED_REASONING_EFFORT.perplexity
} as const
export const getThinkModelType = (model: Model): ThinkingModelType => {
if (isGPT5SeriesModel(model)) {
return 'gpt5'
}
if (isSupportedThinkingTokenGeminiModel(model)) {
if (GEMINI_FLASH_MODEL_REGEX.test(model.id)) {
return 'gemini'
@ -380,6 +392,10 @@ export function getModelLogo(modelId: string) {
'gpt-image': ChatGPTImageModelLogo,
'gpt-3': isLight ? ChatGPT35ModelLogo : ChatGPT35ModelLogoDark,
'gpt-4': isLight ? ChatGPT4ModelLogo : ChatGPT4ModelLogoDark,
'gpt-5$': GPT5ModelLogo,
'gpt-5-mini': GPT5MiniModelLogo,
'gpt-5-nano': GPT5NanoModelLogo,
'gpt-5-chat': GPT5ChatModelLogo,
gpts: isLight ? ChatGPT4ModelLogo : ChatGPT4ModelLogoDark,
'gpt-oss(?:-[\\w-]+)': isLight ? ChatGptModelLogo : ChatGptModelLogoDark,
'text-moderation': isLight ? ChatGptModelLogo : ChatGptModelLogoDark,
@ -2453,7 +2469,7 @@ export function isVisionModel(model: Model): boolean {
export function isOpenAIReasoningModel(model: Model): boolean {
const modelId = getLowerBaseModelName(model.id, '/')
return modelId.includes('o1') || modelId.includes('o3') || modelId.includes('o4') || modelId.includes('gpt-oss')
return isSupportedReasoningEffortOpenAIModel(model) || modelId.includes('o1') || modelId.includes('gpt-5-chat')
}
export function isOpenAILLMModel(model: Model): boolean {
@ -2479,6 +2495,7 @@ export function isOpenAIModel(model: Model): boolean {
return false
}
const modelId = getLowerBaseModelName(model.id)
return modelId.includes('gpt') || isOpenAIReasoningModel(model)
}
@ -2487,7 +2504,14 @@ export function isSupportFlexServiceTierModel(model: Model): boolean {
return false
}
const modelId = getLowerBaseModelName(model.id)
return (modelId.includes('o3') && !modelId.includes('o3-mini')) || modelId.includes('o4-mini')
return (
(modelId.includes('o3') && !modelId.includes('o3-mini')) || modelId.includes('o4-mini') || modelId.includes('gpt-5')
)
}
export function isSupportVerbosityModel(model: Model): boolean {
const modelId = getLowerBaseModelName(model.id)
return isGPT5SeriesModel(model) && !modelId.includes('chat')
}
export function isSupportedReasoningEffortOpenAIModel(model: Model): boolean {
@ -2495,7 +2519,9 @@ export function isSupportedReasoningEffortOpenAIModel(model: Model): boolean {
return (
(modelId.includes('o1') && !(modelId.includes('o1-preview') || modelId.includes('o1-mini'))) ||
modelId.includes('o3') ||
modelId.includes('o4')
modelId.includes('o4') ||
modelId.includes('gpt-oss') ||
(isGPT5SeriesModel(model) && !modelId.includes('chat'))
)
}
@ -2527,7 +2553,8 @@ export function isOpenAIWebSearchModel(model: Model): boolean {
(modelId.includes('gpt-4.1') && !modelId.includes('gpt-4.1-nano')) ||
(modelId.includes('gpt-4o') && !modelId.includes('gpt-4o-image')) ||
modelId.includes('o3') ||
modelId.includes('o4')
modelId.includes('o4') ||
(modelId.includes('gpt-5') && !modelId.includes('chat'))
)
}
@ -3133,17 +3160,14 @@ export const isQwenMTModel = (model: Model): boolean => {
}
export const isNotSupportedTextDelta = (model: Model): boolean => {
if (isQwenMTModel(model)) {
return true
}
return false
return isQwenMTModel(model)
}
export const isNotSupportSystemMessageModel = (model: Model): boolean => {
if (isQwenMTModel(model) || isGemmaModel(model)) {
return true
return isQwenMTModel(model) || isGemmaModel(model)
}
return false
export const isGPT5SeriesModel = (model: Model) => {
const modelId = getLowerBaseModelName(model.id)
return modelId.includes('gpt-5')
}

View File

@ -5,6 +5,7 @@
*/
import { loggerService } from '@logger'
import { ThinkingOption } from '@renderer/types'
import i18n from './index'
@ -266,13 +267,13 @@ export const getHttpMessageLabel = (key: string): string => {
return getLabel(key, httpMessageKeyMap)
}
const reasoningEffortOptionsKeyMap = {
auto: 'assistants.settings.reasoning_effort.default',
const reasoningEffortOptionsKeyMap: Record<ThinkingOption, string> = {
off: 'assistants.settings.reasoning_effort.off',
minimal: 'assistants.settings.reasoning_effort.minimal',
high: 'assistants.settings.reasoning_effort.high',
label: 'assistants.settings.reasoning_effort.label',
low: 'assistants.settings.reasoning_effort.low',
medium: 'assistants.settings.reasoning_effort.medium',
off: 'assistants.settings.reasoning_effort.off'
auto: 'assistants.settings.reasoning_effort.default'
} as const
export const getReasoningEffortOptionsLabel = (key: string): string => {

View File

@ -183,10 +183,11 @@
"prompt": "Prompt Settings",
"reasoning_effort": {
"default": "Default",
"high": "Think harder",
"high": "High",
"label": "Reasoning effort",
"low": "Think less",
"medium": "Think normally",
"low": "Low",
"medium": "Medium",
"minimal": "Minimal",
"off": "Off"
},
"regular_phrases": {
@ -3119,7 +3120,14 @@
"tip": "A summary of the reasoning performed by the model",
"title": "Summary Mode"
},
"title": "OpenAI Settings"
"title": "OpenAI Settings",
"verbosity": {
"high": "High",
"low": "Low",
"medium": "Medium",
"tip": "Control the level of detail in the model's output",
"title": "Level of detail"
}
},
"privacy": {
"enable_privacy_mode": "Anonymous reporting of errors and statistics",

View File

@ -187,6 +187,7 @@
"label": "思考連鎖の長さ",
"low": "少しの思考",
"medium": "普通の思考",
"minimal": "最小限の思考",
"off": "オフ"
},
"regular_phrases": {
@ -3119,7 +3120,14 @@
"tip": "モデルが行った推論の要約",
"title": "要約モード"
},
"title": "OpenAIの設定"
"title": "OpenAIの設定",
"verbosity": {
"high": "高",
"low": "低",
"medium": "中",
"tip": "モデル出力の詳細さを制御します",
"title": "詳細度"
}
},
"privacy": {
"enable_privacy_mode": "匿名エラーレポートとデータ統計の送信",

View File

@ -187,6 +187,7 @@
"label": "Настройки размышлений",
"low": "Меньше думать",
"medium": "Среднее",
"minimal": "Минимальный",
"off": "Выключить"
},
"regular_phrases": {
@ -3119,7 +3120,14 @@
"tip": "Резюме рассуждений, выполненных моделью",
"title": "Режим резюме"
},
"title": "Настройки OpenAI"
"title": "Настройки OpenAI",
"verbosity": {
"high": "Высокий",
"low": "Низкий",
"medium": "Средний",
"tip": "Управление степенью детализации вывода модели",
"title": "Подробность"
}
},
"privacy": {
"enable_privacy_mode": "Анонимная отчетность об ошибках и статистике",

View File

@ -187,6 +187,7 @@
"label": "思维链长度",
"low": "浮想",
"medium": "斟酌",
"minimal": "微念",
"off": "关闭"
},
"regular_phrases": {
@ -3119,7 +3120,14 @@
"tip": "模型执行的推理摘要",
"title": "摘要模式"
},
"title": "OpenAI 设置"
"title": "OpenAI 设置",
"verbosity": {
"high": "高",
"low": "低",
"medium": "中",
"tip": "控制模型输出的详细程度",
"title": "详细程度"
}
},
"privacy": {
"enable_privacy_mode": "匿名发送错误报告和数据统计",

View File

@ -187,6 +187,7 @@
"label": "思維鏈長度",
"low": "稍微思考",
"medium": "正常思考",
"minimal": "最少思考",
"off": "關閉"
},
"regular_phrases": {
@ -3119,7 +3120,14 @@
"tip": "模型所執行的推理摘要",
"title": "摘要模式"
},
"title": "OpenAI 設定"
"title": "OpenAI 設定",
"verbosity": {
"high": "高",
"low": "低",
"medium": "中",
"tip": "控制模型輸出的詳細程度",
"title": "詳細程度"
}
},
"privacy": {
"enable_privacy_mode": "匿名發送錯誤報告和資料統計",

View File

@ -187,6 +187,7 @@
"label": "Μήκος λογισμικού αλυσίδας",
"low": "Μικρό",
"medium": "Μεσαίο",
"minimal": "Ελάχιστο",
"off": "Απενεργοποίηση"
},
"regular_phrases": {
@ -3119,7 +3120,14 @@
"tip": "Περίληψη συλλογισμού που εκτελείται από το μοντέλο",
"title": "Λειτουργία περίληψης"
},
"title": "Ρυθμίσεις OpenAI"
"title": "Ρυθμίσεις OpenAI",
"verbosity": {
"high": "Υψηλό",
"low": "Χαμηλό",
"medium": "Μεσαίο",
"tip": "Ελέγχει τον βαθμό λεπτομέρειας της εξόδου του μοντέλου",
"title": "Λεπτομέρεια"
}
},
"privacy": {
"enable_privacy_mode": "Αποστολή ανώνυμων αναφορών σφαλμάτων και στατιστικών δεδομένων",

View File

@ -187,6 +187,7 @@
"label": "Longitud de Cadena de Razonamiento",
"low": "Corto",
"medium": "Medio",
"minimal": "Mínimo",
"off": "Apagado"
},
"regular_phrases": {
@ -3119,7 +3120,14 @@
"tip": "Resumen de la inferencia realizada por el modelo",
"title": "Modo de resumen"
},
"title": "Configuración de OpenAI"
"title": "Configuración de OpenAI",
"verbosity": {
"high": "Alto",
"low": "Bajo",
"medium": "Medio",
"tip": "Controla el nivel de detalle de la salida del modelo",
"title": "Nivel de detalle"
}
},
"privacy": {
"enable_privacy_mode": "Enviar informes de errores y estadísticas de forma anónima",

View File

@ -187,6 +187,7 @@
"label": "Longueur de la chaîne de raisonnement",
"low": "Court",
"medium": "Moyen",
"minimal": "Minimal",
"off": "Off"
},
"regular_phrases": {
@ -3119,7 +3120,14 @@
"tip": "Résumé des inférences effectuées par le modèle",
"title": "Mode de résumé"
},
"title": "Paramètres OpenAI"
"title": "Paramètres OpenAI",
"verbosity": {
"high": "Élevé",
"low": "Faible",
"medium": "Moyen",
"tip": "Contrôle le niveau de détail de la sortie du modèle",
"title": "Niveau de détail"
}
},
"privacy": {
"enable_privacy_mode": "Envoyer des rapports d'erreurs et des statistiques de manière anonyme",

View File

@ -187,6 +187,7 @@
"label": "Comprimento da Cadeia de Raciocínio",
"low": "Curto",
"medium": "Médio",
"minimal": "Mínimo",
"off": "Desligado"
},
"regular_phrases": {
@ -3119,7 +3120,14 @@
"tip": "Resumo do raciocínio executado pelo modelo",
"title": "Modo de Resumo"
},
"title": "Configurações do OpenAI"
"title": "Configurações do OpenAI",
"verbosity": {
"high": "Alto",
"low": "Baixo",
"medium": "Médio",
"tip": "Controla o nível de detalhe da saída do modelo",
"title": "Nível de detalhe"
}
},
"privacy": {
"enable_privacy_mode": "Enviar relatórios de erro e estatísticas de forma anônima",

View File

@ -1,9 +1,10 @@
import {
MdiLightbulbAutoOutline,
MdiLightbulbOffOutline,
MdiLightbulbOn10,
MdiLightbulbOn,
MdiLightbulbOn30,
MdiLightbulbOn50,
MdiLightbulbOn90
MdiLightbulbOn80
} from '@renderer/components/Icons/SVGIcon'
import { useQuickPanel } from '@renderer/components/QuickPanel'
import { getThinkModelType, isDoubaoThinkingAutoModel, MODEL_SUPPORTED_OPTIONS } from '@renderer/config/models'
@ -28,6 +29,7 @@ interface Props {
// Option fallback map: the substitute option to use when an option is not supported
const OPTION_FALLBACK: Record<ThinkingOption, ThinkingOption> = {
off: 'low', // off -> low (for Gemini Pro models)
minimal: 'low', // minimal -> low (for gpt-5 and after)
low: 'high',
medium: 'high', // medium -> high (for Grok models)
high: 'high',
@ -74,12 +76,14 @@ const ThinkingButton: FC<Props> = ({ ref, model, assistant, ToolbarButton }): Re
const iconColor = isActive ? 'var(--color-link)' : 'var(--color-icon)'
switch (true) {
case option === 'minimal':
return <MdiLightbulbOn30 width={18} height={18} style={{ color: iconColor, marginTop: -2 }} />
case option === 'low':
return <MdiLightbulbOn10 width={18} height={18} style={{ color: iconColor, marginTop: -2 }} />
case option === 'medium':
return <MdiLightbulbOn50 width={18} height={18} style={{ color: iconColor, marginTop: -2 }} />
case option === 'medium':
return <MdiLightbulbOn80 width={18} height={18} style={{ color: iconColor, marginTop: -2 }} />
case option === 'high':
return <MdiLightbulbOn90 width={18} height={18} style={{ color: iconColor, marginTop: -2 }} />
return <MdiLightbulbOn width={18} height={18} style={{ color: iconColor, marginTop: -2 }} />
case option === 'auto':
return <MdiLightbulbAutoOutline width={18} height={18} style={{ color: iconColor, marginTop: -2 }} />
case option === 'off':

View File

@ -1,11 +1,15 @@
import Selector from '@renderer/components/Selector'
import { isSupportedReasoningEffortOpenAIModel, isSupportFlexServiceTierModel } from '@renderer/config/models'
import {
isSupportedReasoningEffortOpenAIModel,
isSupportFlexServiceTierModel,
isSupportVerbosityModel
} from '@renderer/config/models'
import { isSupportServiceTierProvider } from '@renderer/config/providers'
import { useProvider } from '@renderer/hooks/useProvider'
import { SettingDivider, SettingRow } from '@renderer/pages/settings'
import { CollapsibleSettingGroup } from '@renderer/pages/settings/SettingGroup'
import { RootState, useAppDispatch } from '@renderer/store'
import { setOpenAISummaryText } from '@renderer/store/settings'
import { setOpenAISummaryText, setOpenAIVerbosity } from '@renderer/store/settings'
import {
GroqServiceTiers,
Model,
@ -15,6 +19,7 @@ import {
ServiceTier,
SystemProviderIds
} from '@renderer/types'
import { OpenAIVerbosity } from '@types'
import { Tooltip } from 'antd'
import { CircleHelp } from 'lucide-react'
import { FC, useCallback, useEffect, useMemo } from 'react'
@ -31,6 +36,7 @@ interface Props {
const OpenAISettingsGroup: FC<Props> = ({ model, providerId, SettingGroup, SettingRowTitleSmall }) => {
const { t } = useTranslation()
const { provider, updateProvider } = useProvider(providerId)
const verbosity = useSelector((state: RootState) => state.settings.openAI.verbosity)
const summaryText = useSelector((state: RootState) => state.settings.openAI.summaryText)
const serviceTierMode = provider.serviceTier
const dispatch = useAppDispatch()
@ -39,6 +45,7 @@ const OpenAISettingsGroup: FC<Props> = ({ model, providerId, SettingGroup, Setti
isSupportedReasoningEffortOpenAIModel(model) &&
!model.id.includes('o1-pro') &&
(provider.type === 'openai-response' || provider.id === 'aihubmix')
const isSupportVerbosity = isSupportVerbosityModel(model)
const isSupportServiceTier = isSupportServiceTierProvider(provider)
const isSupportedFlexServiceTier = isSupportFlexServiceTierModel(model)
@@ -56,6 +63,13 @@ const OpenAISettingsGroup: FC<Props> = ({ model, providerId, SettingGroup, Setti
     [updateProvider]
   )
+  const setVerbosity = useCallback(
+    (value: OpenAIVerbosity) => {
+      dispatch(setOpenAIVerbosity(value))
+    },
+    [dispatch]
+  )
   const summaryTextOptions = [
     {
       value: 'auto',
@@ -71,6 +85,21 @@ const OpenAISettingsGroup: FC<Props> = ({ model, providerId, SettingGroup, Setti
     }
   ]
+  const verbosityOptions = [
+    {
+      value: 'low',
+      label: t('settings.openai.verbosity.low')
+    },
+    {
+      value: 'medium',
+      label: t('settings.openai.verbosity.medium')
+    },
+    {
+      value: 'high',
+      label: t('settings.openai.verbosity.high')
+    }
+  ]
   const serviceTierOptions = useMemo(() => {
     let baseOptions: { value: ServiceTier; label: string }[]
     if (provider.id === SystemProviderIds.groq) {
@@ -131,7 +160,7 @@ const OpenAISettingsGroup: FC<Props> = ({ model, providerId, SettingGroup, Setti
     }
   }, [provider.id, serviceTierMode, serviceTierOptions, setServiceTierMode])
-  if (!isOpenAIReasoning && !isSupportServiceTier) {
+  if (!isOpenAIReasoning && !isSupportServiceTier && !isSupportVerbosity) {
     return null
   }
@@ -139,6 +168,7 @@ const OpenAISettingsGroup: FC<Props> = ({ model, providerId, SettingGroup, Setti
     <CollapsibleSettingGroup title={t('settings.openai.title')} defaultExpanded={true}>
       <SettingGroup>
         {isSupportServiceTier && (
+          <>
             <SettingRow>
               <SettingRowTitleSmall>
                 {t('settings.openai.service_tier.title')}{' '}
@@ -155,10 +185,11 @@ const OpenAISettingsGroup: FC<Props> = ({ model, providerId, SettingGroup, Setti
                 placeholder={t('settings.openai.service_tier.auto')}
               />
             </SettingRow>
+            {(isOpenAIReasoning || isSupportVerbosity) && <SettingDivider />}
+          </>
         )}
         {isOpenAIReasoning && (
           <>
-            <SettingDivider />
             <SettingRow>
               <SettingRowTitleSmall>
                 {t('settings.openai.summary_text_mode.title')}{' '}
@@ -174,8 +205,26 @@ const OpenAISettingsGroup: FC<Props> = ({ model, providerId, SettingGroup, Setti
                 options={summaryTextOptions}
               />
             </SettingRow>
+            {isSupportVerbosity && <SettingDivider />}
           </>
         )}
+        {isSupportVerbosity && (
+          <SettingRow>
+            <SettingRowTitleSmall>
+              {t('settings.openai.verbosity.title')}{' '}
+              <Tooltip title={t('settings.openai.verbosity.tip')}>
+                <CircleHelp size={14} style={{ marginLeft: 4 }} color="var(--color-text-2)" />
+              </Tooltip>
+            </SettingRowTitleSmall>
+            <Selector
+              value={verbosity}
+              onChange={(value) => {
+                setVerbosity(value as OpenAIVerbosity)
+              }}
+              options={verbosityOptions}
+            />
+          </SettingRow>
+        )}
       </SettingGroup>
       <SettingDivider />
     </CollapsibleSettingGroup>
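For reviewers, here is a self-contained sketch of how the stored verbosity setting could flow into a request. All names here (`getVerbosity`, `buildRequestExtras`, the `text.verbosity` payload shape, the `includes('gpt-5')` model check) are illustrative assumptions based on the PR description, not the app's actual API:

```typescript
// Hypothetical sketch: read verbosity from settings and merge it into a
// Responses-API-style payload for GPT-5 family models only.
type OpenAIVerbosity = 'high' | 'medium' | 'low'

interface OpenAISettings {
  summaryText: string
  serviceTier: string
  verbosity: OpenAIVerbosity
}

// Mirrors the getVerbosity accessor the PR describes: read from settings,
// falling back to 'medium' when the field is missing (assumption).
function getVerbosity(settings: Partial<OpenAISettings>): OpenAIVerbosity {
  return settings.verbosity ?? 'medium'
}

function buildRequestExtras(modelId: string, settings: Partial<OpenAISettings>) {
  // Assumption: only GPT-5 family models accept the verbosity control,
  // detected with includes() rather than strict equality (per this PR).
  if (!modelId.includes('gpt-5')) return {}
  return { text: { verbosity: getVerbosity(settings) } }
}
```

The `'medium'` fallback matches the default used by the migration and initial state further down in this diff.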


@@ -1222,8 +1222,10 @@ const mockOpenaiApiClient
             type: 'function'
           }
         } else if (fun?.arguments) {
+          if (toolCalls[index] && toolCalls[index].type === 'function' && 'function' in toolCalls[index]) {
             toolCalls[index].function.arguments += fun.arguments
+          }
         }
       } else {
         toolCalls.push(toolCall)
       }
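The guard above matters because streamed tool calls arrive as deltas, and only `'function'`-typed entries carry a `.function` whose `arguments` string accumulates chunk by chunk. A minimal sketch (the `ToolCall` union is illustrative, not the app's actual types):

```typescript
// Illustrative union: a streamed tool call is either a classic function
// call or some other shape without a .function member.
type ToolCall =
  | { type: 'function'; id: string; function: { name: string; arguments: string } }
  | { type: 'custom'; id: string; input: string }

function appendArguments(toolCalls: ToolCall[], index: number, chunk: string): void {
  const call = toolCalls[index]
  // Guard both existence and shape before touching .function, exactly as
  // the fix above does; an unguarded access would throw on unknown shapes.
  if (call && call.type === 'function' && 'function' in call) {
    call.function.arguments += chunk
  }
}

const calls: ToolCall[] = [
  { type: 'function', id: '1', function: { name: 'search', arguments: '{"q":' } }
]
appendArguments(calls, 0, '"gpt-5"}')
```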


@@ -60,7 +60,7 @@ const persistedReducer = persistReducer(
   {
     key: 'cherry-studio',
     storage,
-    version: 129,
+    version: 130,
     blacklist: ['runtime', 'messages', 'messageBlocks', 'tabs'],
     migrate
   },
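The version bump is what triggers the new migration step: redux-persist runs every migration keyed above the persisted version, in order. A toy model of that flow (assumption: the real app uses redux-persist's `createMigrate`; `applyMigrations` and the stand-in `'130'` step below are hypothetical):

```typescript
// Toy model of versioned migrations: each key is a target version, and
// every step keyed above the persisted version runs once, in order.
type Migration = (state: Record<string, unknown>) => Record<string, unknown>

const migrations: Record<string, Migration> = {
  // Hypothetical stand-in for the real '130' step: backfill verbosity
  // without overwriting a value the user already has.
  '130': (s) => ({ verbosity: 'medium', ...s })
}

function applyMigrations(
  state: Record<string, unknown>,
  persistedVersion: number,
  currentVersion: number
): Record<string, unknown> {
  let next = state
  for (let v = persistedVersion + 1; v <= currentVersion; v++) {
    const step = migrations[String(v)]
    if (step) next = step(next)
  }
  return next
}

// A store persisted at 129 gains the new field; one already at 130 does not rerun it.
const upgraded = applyMigrations({ summaryText: 'off' }, 129, 130)
```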


@@ -1438,7 +1438,8 @@ const migrateConfig = {
     try {
       state.settings.openAI = {
         summaryText: 'off',
-        serviceTier: 'auto'
+        serviceTier: 'auto',
+        verbosity: 'medium'
       }
       state.settings.codeExecution = {
@@ -1530,7 +1531,8 @@ const migrateConfig = {
     if (!state.settings.openAI) {
       state.settings.openAI = {
         summaryText: 'off',
-        serviceTier: 'auto'
+        serviceTier: 'auto',
+        verbosity: 'medium'
       }
     }
     return state
@@ -2072,12 +2074,22 @@ const migrateConfig = {
         updateProvider(state, p.id, { apiOptions: changes })
       }
     })
     return state
   } catch (error) {
     logger.error('migrate 129 error', error as Error)
     return state
   }
  },
+  '130': (state: RootState) => {
+    try {
+      if (state.settings && state.settings.openAI && !state.settings.openAI.verbosity) {
+        state.settings.openAI.verbosity = 'medium'
+      }
+      return state
+    } catch (error) {
+      logger.error('migrate 130 error', error as Error)
+      return state
+    }
+  }
 }
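The `'130'` step above can be exercised in isolation. A minimal re-creation, assuming a state shape like `{ settings: { openAI: { … } } }` (the interfaces below are simplified stand-ins for the app's `RootState`):

```typescript
// Simplified shapes; field names follow the diff above.
interface OpenAIState { summaryText: string; serviceTier: string; verbosity?: string }
interface RootStateLike { settings?: { openAI?: OpenAIState } }

const migrate130 = (state: RootStateLike): RootStateLike => {
  // Backfill the new field only when it is absent, so an explicit user
  // choice persisted earlier survives the migration.
  if (state.settings && state.settings.openAI && !state.settings.openAI.verbosity) {
    state.settings.openAI.verbosity = 'medium'
  }
  return state
}

const legacy: RootStateLike = { settings: { openAI: { summaryText: 'off', serviceTier: 'auto' } } }
migrate130(legacy)
```

Note that the guard also makes the step a no-op on states where `settings` or `openAI` is missing entirely, which is why the surrounding try/catch should rarely fire.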


@@ -15,6 +15,7 @@ import {
 } from '@renderer/types'
 import { uuid } from '@renderer/utils'
 import { UpgradeChannel } from '@shared/config/constant'
+import { OpenAIVerbosity } from '@types'
 import { RemoteSyncState } from './backup'
@@ -194,6 +195,7 @@ export interface SettingsState {
     summaryText: OpenAISummaryText
     /** @deprecated This setting has now been migrated to the Provider object */
     serviceTier: OpenAIServiceTier
+    verbosity: OpenAIVerbosity
   }
   // Notification
   notification: {
@@ -365,7 +367,8 @@ export const initialState: SettingsState = {
   // OpenAI
   openAI: {
     summaryText: 'off',
-    serviceTier: 'auto'
+    serviceTier: 'auto',
+    verbosity: 'medium'
   },
   notification: {
     assistant: false,
@@ -775,6 +778,9 @@ const settingsSlice = createSlice({
     setOpenAISummaryText: (state, action: PayloadAction<OpenAISummaryText>) => {
       state.openAI.summaryText = action.payload
     },
+    setOpenAIVerbosity: (state, action: PayloadAction<OpenAIVerbosity>) => {
+      state.openAI.verbosity = action.payload
+    },
     setNotificationSettings: (state, action: PayloadAction<SettingsState['notification']>) => {
       state.notification = action.payload
     },
@@ -939,6 +945,7 @@ export const {
   setEnableBackspaceDeleteModel,
   setDisableHardwareAcceleration,
   setOpenAISummaryText,
+  setOpenAIVerbosity,
   setNotificationSettings,
   // Local backup settings
   setLocalBackupDir,


@@ -52,10 +52,11 @@ export type AssistantSettingCustomParameters = {
   type: 'string' | 'number' | 'boolean' | 'json'
 }
-export type ReasoningEffortOption = 'low' | 'medium' | 'high' | 'auto'
+export type ReasoningEffortOption = NonNullable<OpenAI.ReasoningEffort> | 'auto'
 export type ThinkingOption = ReasoningEffortOption | 'off'
 export type ThinkingModelType =
   | 'default'
+  | 'gpt5'
   | 'grok'
   | 'gemini'
   | 'gemini_pro'
@@ -87,6 +88,7 @@ export function isThinkModelType(type: string): type is ThinkingModelType {
 }
 export const EFFORT_RATIO: EffortRatio = {
+  minimal: 0.05,
   low: 0.05,
   medium: 0.5,
   high: 0.8,
@@ -946,6 +948,8 @@ export interface StoreSyncAction {
   }
 }
+export type OpenAIVerbosity = 'high' | 'medium' | 'low'
 export type OpenAISummaryText = 'auto' | 'concise' | 'detailed' | 'off'
 export const OpenAIServiceTiers = {


@@ -78,8 +78,10 @@ export function openAIToolsToMcpTool(
   try {
     if ('name' in toolCall) {
       toolName = toolCall.name
-    } else {
+    } else if (toolCall.type === 'function' && 'function' in toolCall) {
       toolName = toolCall.function.name
+    } else {
+      throw new Error('Unknown tool call type')
     }
   } catch (error) {
     logger.error(`Error parsing tool call: ${toolCall}`, error as Error)
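Extracted into a standalone sketch, the fixed branching looks like this; the `AnyToolCall` union below only approximates the OpenAI tool-call shapes involved (response-API calls carry a top-level `name`, chat-completions calls nest it under `function`):

```typescript
// Approximate shapes; the real types come from the openai package.
type AnyToolCall =
  | { name: string; arguments: string }                                  // response-API style
  | { type: 'function'; function: { name: string; arguments: string } }  // chat-completions style
  | { type: 'custom'; input: string }                                    // anything else

function extractToolName(toolCall: AnyToolCall): string {
  if ('name' in toolCall) {
    return toolCall.name
  } else if (toolCall.type === 'function' && 'function' in toolCall) {
    return toolCall.function.name
  } else {
    // Failing loudly on unknown shapes beats silently reading .function
    // off a call that does not have one.
    throw new Error('Unknown tool call type')
  }
}
```

The added `else` branch is what the surrounding try/catch then logs, turning a confusing `undefined` property access into an explicit error.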


@@ -7786,7 +7786,7 @@ __metadata:
     notion-helper: "npm:^1.3.22"
     npx-scope-finder: "npm:^1.2.0"
     officeparser: "npm:^4.2.0"
-    openai: "patch:openai@npm%3A5.12.0#~/.yarn/patches/openai-npm-5.12.0-a06a6369b2.patch"
+    openai: "patch:openai@npm%3A5.12.2#~/.yarn/patches/openai-npm-5.12.2-30b075401c.patch"
     os-proxy-config: "npm:^1.1.2"
     p-queue: "npm:^8.1.0"
     pdf-lib: "npm:^1.17.1"
@@ -16688,9 +16688,9 @@ __metadata:
   languageName: node
   linkType: hard
-"openai@npm:5.12.0":
-  version: 5.12.0
-  resolution: "openai@npm:5.12.0"
+"openai@npm:5.12.2":
+  version: 5.12.2
+  resolution: "openai@npm:5.12.2"
   peerDependencies:
     ws: ^8.18.0
     zod: ^3.23.8
@@ -16701,13 +16701,13 @@ __metadata:
       optional: true
   bin:
     openai: bin/cli
-  checksum: 10c0/adab04e90cae8f393f76c007f98c0636af97a280fb05766b0cee5ab202c802db01c113d0ce0dfea42e1a1fe3b08c9a3881b6eea9a0b0703375f487688aaca1fc
+  checksum: 10c0/7737b9b24edc81fcf9e6dcfb18a196cc0f8e29b6e839adf06a2538558c03908e3aa4cd94901b1a7f4a9dd62676fe9e34d6202281b2395090d998618ea1614c0c
   languageName: node
   linkType: hard
-"openai@patch:openai@npm%3A5.12.0#~/.yarn/patches/openai-npm-5.12.0-a06a6369b2.patch":
-  version: 5.12.0
-  resolution: "openai@patch:openai@npm%3A5.12.0#~/.yarn/patches/openai-npm-5.12.0-a06a6369b2.patch::version=5.12.0&hash=d96796"
+"openai@patch:openai@npm%3A5.12.2#~/.yarn/patches/openai-npm-5.12.2-30b075401c.patch":
+  version: 5.12.2
+  resolution: "openai@patch:openai@npm%3A5.12.2#~/.yarn/patches/openai-npm-5.12.2-30b075401c.patch::version=5.12.2&hash=ad5d10"
   peerDependencies:
     ws: ^8.18.0
     zod: ^3.23.8
@@ -16718,7 +16718,7 @@ __metadata:
       optional: true
   bin:
     openai: bin/cli
-  checksum: 10c0/207f70a43839d34f6ad3322a4bdf6d755ac923ca9c6b5fb49bd13263d816c5acb1a501228b9124b1f72eae2f7efffc8890e2d901907b3c8efc2fee3f8a273cec
+  checksum: 10c0/2964a1c88a98cf169c9b73e8cd6776c03c8f3103fee30961c6953e5d995ad57a697e2179615999356809349186df6496abae105928ff7ce0229e5016dec87cb3
  languageName: node
  linkType: hard