Compare commits

...

10 Commits

Author SHA1 Message Date
LiuVaayne
04d1d1e0c7
Merge ef22e74794 into 8ab375161d 2025-12-18 20:33:44 +08:00
George·Dong
8ab375161d
fix: disable reasoning mode for translation to improve efficiency (#11998)
* fix: disable reasoning mode for translation to improve efficiency

- Change the getDefaultTranslateAssistant function to set the default reasoning option to 'none'
- Avoid the 'default' option introduced by PR #11942 re-enabling thinking mode for translation
- Significantly improves translation speed and performance
- Matches the business logic that translation scenarios do not need complex reasoning

* fix(AssistantService): adjust reasoning effort

Set reasoning effort to 'none' only if supported by the model; otherwise use 'default'.
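
A minimal sketch of that fallback (getModelSupportedReasoningEffortOptions is a real helper in this codebase; the wrapper shown here is illustrative, not the actual AssistantService code):

import { getModelSupportedReasoningEffortOptions } from '@renderer/config/models'
import type { Model } from '@renderer/types'

// Prefer disabling reasoning for translation; fall back to 'default'
// for models that do not expose a 'none' option.
function getTranslateReasoningEffort(model: Model): 'none' | 'default' {
  const options = getModelSupportedReasoningEffortOptions(model)
  return options.includes('none') ? 'none' : 'default'
}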

---------

Co-authored-by: icarus <eurfelux@gmail.com>
2025-12-18 20:16:09 +08:00
GeekMr
42260710d8
fix(azure): restore deployment-based URLs for non-v1 apiVersion and add tests (#11966)
* fix: support Azure OpenAI deployment URLs

* test: stabilize renderer setup

---------

Co-authored-by: William Wang <WilliamOnline1721@hotmail.com>
2025-12-18 18:12:26 +08:00
kangfenmao
5e8646c6a5 fix: update API path for image generation requests in OpenAIBaseClient 2025-12-18 14:45:30 +08:00
Phantom
7e93e8b9b2
feat(gemini): add support for Gemini 3 Flash and Pro model detection (#11984)
* feat(gemini): update model types and add support for gemini3 variants

add new model type identifiers for gemini3 flash and pro variants
implement utility functions to detect gemini3 flash and pro models
update reasoning configuration and tests for new gemini variants

* docs(i18n): update chinese translation for minimal_description

* chore: update @ai-sdk/google and @ai-sdk/google-vertex dependencies

- Update @ai-sdk/google to version 2.0.49 with patch for model path fix
- Update @ai-sdk/google-vertex to version 3.0.94 with updated dependencies

* feat(gemini): add thinking level mapping for Gemini 3 models

Implement mapping between reasoning effort options and Gemini's thinking levels. Enable thinking config for Gemini 3 models to support advanced reasoning features.
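A rough usage sketch of how such a level reaches the provider through the AI SDK (model id and level are illustrative, and thinkingLevel assumes an @ai-sdk/google version with Gemini 3 support; the real mapping function appears in the reasoning.ts diff further down):

import { google } from '@ai-sdk/google'
import { generateText } from 'ai'

const { text } = await generateText({
  model: google('gemini-3-flash-preview'),
  providerOptions: {
    google: {
      // 'high' stands in for whatever the user's reasoning effort maps to
      thinkingConfig: { thinkingLevel: 'high', includeThoughts: true }
    }
  },
  prompt: 'Summarize the release notes.'
})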

* chore: update yarn.lock with patched @ai-sdk/google dependency

* test(reasoning): update tests for Gemini model type classification and reasoning options

Update test cases to reflect new Gemini model type classifications (gemini2_flash, gemini3_flash, gemini2_pro, gemini3_pro) and their corresponding reasoning effort options. Add tests for Gemini 3 models and adjust existing ones to match current behavior.

* docs(reasoning): remove outdated TODO comment about model support
2025-12-18 14:35:36 +08:00
SuYao
eb7a2cc85a
feat: add support for Xiaomi MiMo model (#11961)
* feat: add support for Xiaomi MiMo model

- Implemented support for the MiMo model in reasoning logic.
- Added MiMo model configuration in default models.
- Included MiMo logos for both models and providers.
- Updated provider configurations to include Xiaomi MiMo.
- Enhanced reasoning effort and options to accommodate MiMo.
- Added migration logic for state management to include MiMo.
- Updated versioning in store to reflect changes.

* chore(i18n): add specific provider name

* fix(provider): add xiaomi mimo anthropic apihost

* chore: url

* fix: add tool use capability
2025-12-18 13:49:09 +08:00
dependabot[bot]
fd6986076a
chore(deps): bump jws from 4.0.0 to 4.0.1 (#11977)
Bumps [jws](https://github.com/brianloveswords/node-jws) from 4.0.0 to 4.0.1.
- [Release notes](https://github.com/brianloveswords/node-jws/releases)
- [Changelog](https://github.com/auth0/node-jws/blob/master/CHANGELOG.md)
- [Commits](https://github.com/brianloveswords/node-jws/compare/v4.0.0...v4.0.1)

---
updated-dependencies:
- dependency-name: jws
  dependency-version: 4.0.1
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-18 13:34:39 +08:00
LiuVaayne
6309cc179d
feat(mcp): add Nowledge Mem builtin MCP server (#11875)
* feat(mcp): add Nowledge Mem builtin MCP server

Add @cherry/nowLedgeMem as a new builtin MCP server that connects
to the local Nowledge Mem service over HTTP at 127.0.0.1:14242/mcp (see the connection sketch below).

- Add nowLedgeMem to BuiltinMCPServerNames type definitions
- Add HTTP transport handling in MCPService with APP header
- Add server config to builtinMCPServers array
- Add i18n translations (en-us, zh-cn, zh-tw)
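
A minimal standalone connection sketch, assuming the @modelcontextprotocol/sdk TypeScript client and a Nowledge Mem service already listening locally (the in-app version in the McpService diff below additionally routes fetch through Electron's net module):

import { Client } from '@modelcontextprotocol/sdk/client/index.js'
import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js'

const transport = new StreamableHTTPClientTransport(new URL('http://127.0.0.1:14242/mcp'), {
  requestInit: { headers: { APP: 'Cherry Studio' } }
})
const client = new Client({ name: 'example-client', version: '1.0.0' })

await client.connect(transport)
console.log(await client.listTools()) // fails unless Nowledge Mem is running on port 14242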

* Fix Nowledge Mem server name typos across codebase

* 🌐 i18n: add missing translations for Nowledge Mem and Git Bash settings

Translate [to be translated] markers across 8 locale files:
- zh-tw, de-de, fr-fr, es-es, pt-pt, ru-ru: nowledgeMem description
- fr-fr, es-es, pt-pt, ru-ru, el-gr, ja-jp: xhigh reasoning chain option
- el-gr, ja-jp: Git Bash configuration strings

* 🐛 fix: address PR review comments for Nowledge Mem MCP

- Fix log message typo: use server.name instead of hardcoded "NowLedgeMem"
- Rename i18n key from "nowledgeMem" to "nowledge_mem" for consistency
- Update descriptions to warn about external dependency requirement
2025-12-18 13:34:06 +08:00
SuYao
c04529a23c
refactor: improve budget calculation logic (#11973)
* refactor: improve budget calculation logic

* Update src/renderer/src/aiCore/utils/__tests__/reasoning.test.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src/renderer/src/aiCore/utils/__tests__/reasoning.test.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* [WIP] Address feedback on budget calculation logic refactor (#11974)

* Initial plan

* fix: revert budget calculation to linear interpolation formula

Reverted the budget calculation in getAnthropicThinkingBudget from
`tokenLimit.max * effortRatio` back to the original linear interpolation
formula `(tokenLimit.max - tokenLimit.min) * effortRatio + tokenLimit.min`.

The new formula was causing lower budgets for all effort ratios (e.g.,
LOW effort changed from 2609 to 1638 tokens, a 37% reduction). The linear
interpolation formula ensures budgets range from min (at effortRatio=0) to
max (at effortRatio=1), matching the behavior in other parts of the codebase
(lines 221, 597).

Updated tests to reflect the correct expected values with the linear
interpolation formula.
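
A small numeric sketch of the two variants (token limits are illustrative, chosen to roughly reproduce the numbers above):

const EFFORT_RATIO_LOW = 0.05
const tokenLimit = { min: 1024, max: 32768 }

// Restored linear interpolation: min at effortRatio 0, max at effortRatio 1
const interpolated = Math.floor((tokenLimit.max - tokenLimit.min) * EFFORT_RATIO_LOW + tokenLimit.min) // 2611

// Rejected variant: scales max alone, collapsing toward zero at low ratios
const scaled = Math.floor(tokenLimit.max * EFFORT_RATIO_LOW) // 1638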

Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>

* fix(test): reasoning

* fix: test

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
2025-12-18 13:30:41 +08:00
Vaayne
ef22e74794 🐛 fix: agent OAuth support and fast model settings
- Fix agent OAuth authentication by routing through proxy API server
- Update model environment variables to use ANTHROPIC_DEFAULT_*_MODEL
- Allow forwarding of sentry-trace, baggage, content-length, connection headers

Fixes #10785, #11014
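
A sketch of the resulting agent environment (host, port, and model id are illustrative; the local proxy API server issues the key and handles OAuth upstream):

ANTHROPIC_API_KEY=<key issued by the proxy API server>
ANTHROPIC_AUTH_TOKEN=<same proxy-issued key>
ANTHROPIC_BASE_URL=http://127.0.0.1:23333/anthropic
ANTHROPIC_MODEL=claude-sonnet-4-5
ANTHROPIC_DEFAULT_OPUS_MODEL=claude-sonnet-4-5
ANTHROPIC_DEFAULT_SONNET_MODEL=claude-sonnet-4-5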
2025-10-30 16:15:18 +08:00
39 changed files with 775 additions and 148 deletions

View File

@@ -1,5 +1,5 @@
diff --git a/dist/index.js b/dist/index.js
index 51ce7e423934fb717cb90245cdfcdb3dae6780e6..0f7f7009e2f41a79a8669d38c8a44867bbff5e1f 100644
index d004b415c5841a1969705823614f395265ea5a8a..6b1e0dad4610b0424393ecc12e9114723bbe316b 100644
--- a/dist/index.js
+++ b/dist/index.js
@@ -474,7 +474,7 @@ function convertToGoogleGenerativeAIMessages(prompt, options) {
@@ -12,7 +12,7 @@ index 51ce7e423934fb717cb90245cdfcdb3dae6780e6..0f7f7009e2f41a79a8669d38c8a44867
// src/google-generative-ai-options.ts
diff --git a/dist/index.mjs b/dist/index.mjs
index f4b77e35c0cbfece85a3ef0d4f4e67aa6dde6271..8d2fecf8155a226006a0bde72b00b6036d4014b6 100644
index 1780dd2391b7f42224a0b8048c723d2f81222c44..1f12ed14399d6902107ce9b435d7d8e6cc61e06b 100644
--- a/dist/index.mjs
+++ b/dist/index.mjs
@@ -480,7 +480,7 @@ function convertToGoogleGenerativeAIMessages(prompt, options) {
@@ -24,3 +24,14 @@ index f4b77e35c0cbfece85a3ef0d4f4e67aa6dde6271..8d2fecf8155a226006a0bde72b00b603
}
// src/google-generative-ai-options.ts
@@ -1909,8 +1909,7 @@ function createGoogleGenerativeAI(options = {}) {
}
var google = createGoogleGenerativeAI();
export {
- VERSION,
createGoogleGenerativeAI,
- google
+ google, VERSION
};
//# sourceMappingURL=index.mjs.map
\ No newline at end of file

View File

@@ -114,8 +114,8 @@
"@ai-sdk/anthropic": "^2.0.49",
"@ai-sdk/cerebras": "^1.0.31",
"@ai-sdk/gateway": "^2.0.15",
"@ai-sdk/google": "patch:@ai-sdk/google@npm%3A2.0.43#~/.yarn/patches/@ai-sdk-google-npm-2.0.43-689ed559b3.patch",
"@ai-sdk/google-vertex": "^3.0.79",
"@ai-sdk/google": "patch:@ai-sdk/google@npm%3A2.0.49#~/.yarn/patches/@ai-sdk-google-npm-2.0.49-84720f41bd.patch",
"@ai-sdk/google-vertex": "^3.0.94",
"@ai-sdk/huggingface": "^0.0.10",
"@ai-sdk/mistral": "^2.0.24",
"@ai-sdk/openai": "patch:@ai-sdk/openai@npm%3A2.0.85#~/.yarn/patches/@ai-sdk-openai-npm-2.0.85-27483d1d6a.patch",
@@ -416,7 +416,8 @@
"@langchain/openai@npm:>=0.2.0 <0.7.0": "patch:@langchain/openai@npm%3A1.0.0#~/.yarn/patches/@langchain-openai-npm-1.0.0-474d0ad9d4.patch",
"@ai-sdk/openai@npm:^2.0.42": "patch:@ai-sdk/openai@npm%3A2.0.85#~/.yarn/patches/@ai-sdk-openai-npm-2.0.85-27483d1d6a.patch",
"@ai-sdk/google@npm:^2.0.40": "patch:@ai-sdk/google@npm%3A2.0.40#~/.yarn/patches/@ai-sdk-google-npm-2.0.40-47e0eeee83.patch",
"@ai-sdk/openai-compatible@npm:^1.0.27": "patch:@ai-sdk/openai-compatible@npm%3A1.0.27#~/.yarn/patches/@ai-sdk-openai-compatible-npm-1.0.27-06f74278cf.patch"
"@ai-sdk/openai-compatible@npm:^1.0.27": "patch:@ai-sdk/openai-compatible@npm%3A1.0.27#~/.yarn/patches/@ai-sdk-openai-compatible-npm-1.0.27-06f74278cf.patch",
"@ai-sdk/google@npm:2.0.49": "patch:@ai-sdk/google@npm%3A2.0.49#~/.yarn/patches/@ai-sdk-google-npm-2.0.49-84720f41bd.patch"
},
"packageManager": "yarn@4.9.1",
"lint-staged": {

View File

@@ -7,15 +7,7 @@ import type { Provider } from '@types'
import type { Response } from 'express'
const logger = loggerService.withContext('MessagesService')
const EXCLUDED_FORWARD_HEADERS: ReadonlySet<string> = new Set([
'host',
'x-api-key',
'authorization',
'sentry-trace',
'baggage',
'content-length',
'connection'
])
const EXCLUDED_FORWARD_HEADERS: ReadonlySet<string> = new Set(['host', 'x-api-key', 'authorization'])
export interface ValidationResult {
isValid: boolean

View File

@@ -249,6 +249,26 @@ class McpService {
StdioClientTransport | SSEClientTransport | InMemoryTransport | StreamableHTTPClientTransport
> => {
// Create appropriate transport based on configuration
// Special case for nowledgeMem - uses HTTP transport instead of in-memory
if (isBuiltinMCPServer(server) && server.name === BuiltinMCPServerNames.nowledgeMem) {
const nowledgeMemUrl = 'http://127.0.0.1:14242/mcp'
const options: StreamableHTTPClientTransportOptions = {
fetch: async (url, init) => {
return net.fetch(typeof url === 'string' ? url : url.toString(), init)
},
requestInit: {
headers: {
...defaultAppHeaders(),
APP: 'Cherry Studio'
}
},
authProvider
}
getServerLogger(server).debug(`Using StreamableHTTPClientTransport for ${server.name}`)
return new StreamableHTTPClientTransport(new URL(nowledgeMemUrl), options)
}
if (isBuiltinMCPServer(server) && server.name !== BuiltinMCPServerNames.mcpAutoInstall) {
getServerLogger(server).debug(`Using in-memory transport`)
const [clientTransport, serverTransport] = InMemoryTransport.createLinkedPair()

View File

@@ -115,12 +115,12 @@ class ClaudeCodeService implements AgentServiceInterface {
const env = {
...loginShellEnvWithoutProxies,
// TODO: fix the proxy api server
// ANTHROPIC_API_KEY: apiConfig.apiKey,
// ANTHROPIC_AUTH_TOKEN: apiConfig.apiKey,
// ANTHROPIC_BASE_URL: `http://${apiConfig.host}:${apiConfig.port}/${modelInfo.provider.id}`,
ANTHROPIC_API_KEY: modelInfo.provider.apiKey,
ANTHROPIC_AUTH_TOKEN: modelInfo.provider.apiKey,
ANTHROPIC_BASE_URL: modelInfo.provider.anthropicApiHost?.trim() || modelInfo.provider.apiHost,
ANTHROPIC_API_KEY: apiConfig.apiKey,
ANTHROPIC_AUTH_TOKEN: apiConfig.apiKey,
ANTHROPIC_BASE_URL: `http://${apiConfig.host}:${apiConfig.port}/${modelInfo.provider.id}`,
// ANTHROPIC_API_KEY: modelInfo.provider.apiKey,
// ANTHROPIC_AUTH_TOKEN: modelInfo.provider.apiKey,
// ANTHROPIC_BASE_URL: modelInfo.provider.anthropicApiHost?.trim() || modelInfo.provider.apiHost,
ANTHROPIC_MODEL: modelInfo.modelId,
ANTHROPIC_DEFAULT_OPUS_MODEL: modelInfo.modelId,
ANTHROPIC_DEFAULT_SONNET_MODEL: modelInfo.modelId,

View File

@@ -69,7 +69,7 @@ export abstract class OpenAIBaseClient<
const sdk = await this.getSdkInstance()
const response = (await sdk.request({
method: 'post',
path: '/images/generations',
path: '/v1/images/generations',
signal,
body: {
model,

View File

@@ -79,7 +79,7 @@ vi.mock('@renderer/services/AssistantService', () => ({
import { getProviderByModel } from '@renderer/services/AssistantService'
import type { Model, Provider } from '@renderer/types'
import { formatApiHost } from '@renderer/utils/api'
import { isCherryAIProvider, isPerplexityProvider } from '@renderer/utils/provider'
import { isAzureOpenAIProvider, isCherryAIProvider, isPerplexityProvider } from '@renderer/utils/provider'
import { COPILOT_DEFAULT_HEADERS, COPILOT_EDITOR_VERSION, isCopilotResponsesModel } from '../constants'
import { getActualProvider, providerToAiSdkConfig } from '../providerConfig'
@@ -133,6 +133,17 @@ const createPerplexityProvider = (): Provider => ({
isSystem: false
})
const createAzureProvider = (apiVersion: string): Provider => ({
id: 'azure-openai',
type: 'azure-openai',
name: 'Azure OpenAI',
apiKey: 'test-key',
apiHost: 'https://example.openai.azure.com/openai',
apiVersion,
models: [],
isSystem: true
})
describe('Copilot responses routing', () => {
beforeEach(() => {
;(globalThis as any).window = {
@@ -504,3 +515,46 @@ describe('Stream options includeUsage configuration', () => {
expect(config.providerId).toBe('github-copilot-openai-compatible')
})
})
describe('Azure OpenAI traditional API routing', () => {
beforeEach(() => {
;(globalThis as any).window = {
...(globalThis as any).window,
keyv: createWindowKeyv()
}
mockGetState.mockReturnValue({
settings: {
openAI: {
streamOptions: {
includeUsage: undefined
}
}
}
})
vi.mocked(isAzureOpenAIProvider).mockImplementation((provider) => provider.type === 'azure-openai')
})
it('uses deployment-based URLs when apiVersion is a date version', () => {
const provider = createAzureProvider('2024-02-15-preview')
const config = providerToAiSdkConfig(provider, createModel('gpt-4o', 'GPT-4o', provider.id))
expect(config.providerId).toBe('azure')
expect(config.options.apiVersion).toBe('2024-02-15-preview')
expect(config.options.useDeploymentBasedUrls).toBe(true)
})
it('does not force deployment-based URLs for apiVersion v1/preview', () => {
const v1Provider = createAzureProvider('v1')
const v1Config = providerToAiSdkConfig(v1Provider, createModel('gpt-4o', 'GPT-4o', v1Provider.id))
expect(v1Config.providerId).toBe('azure-responses')
expect(v1Config.options.apiVersion).toBe('v1')
expect(v1Config.options.useDeploymentBasedUrls).toBeUndefined()
const previewProvider = createAzureProvider('preview')
const previewConfig = providerToAiSdkConfig(previewProvider, createModel('gpt-4o', 'GPT-4o', previewProvider.id))
expect(previewConfig.providerId).toBe('azure-responses')
expect(previewConfig.options.apiVersion).toBe('preview')
expect(previewConfig.options.useDeploymentBasedUrls).toBeUndefined()
})
})

View File

@@ -214,6 +214,15 @@ export function providerToAiSdkConfig(actualProvider: Provider, model: Model): A
} else if (aiSdkProviderId === 'azure') {
extraOptions.mode = 'chat'
}
if (isAzureOpenAIProvider(actualProvider)) {
const apiVersion = actualProvider.apiVersion?.trim()
if (apiVersion) {
extraOptions.apiVersion = apiVersion
if (!['preview', 'v1'].includes(apiVersion)) {
extraOptions.useDeploymentBasedUrls = true
}
}
}
// bedrock
if (aiSdkProviderId === 'bedrock') {
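
For reference, this flag switches between Azure OpenAI's two request shapes (resource and deployment names illustrative): with useDeploymentBasedUrls set, requests use the classic per-deployment route

https://my-resource.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-02-15-preview

while 'v1' and 'preview' keep the unified route, with the model passed in the request body:

https://my-resource.openai.azure.com/openai/v1/chat/completions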

View File

@@ -11,6 +11,7 @@ import { beforeEach, describe, expect, it, vi } from 'vitest'
import {
getAnthropicReasoningParams,
getAnthropicThinkingBudget,
getBedrockReasoningParams,
getCustomParameters,
getGeminiReasoningParams,
@@ -89,7 +90,8 @@ vi.mock('@renderer/config/models', async (importOriginal) => {
isQwenAlwaysThinkModel: vi.fn(() => false),
isSupportedThinkingTokenHunyuanModel: vi.fn(() => false),
isSupportedThinkingTokenModel: vi.fn(() => false),
isGPT51SeriesModel: vi.fn(() => false)
isGPT51SeriesModel: vi.fn(() => false),
findTokenLimit: vi.fn(actual.findTokenLimit)
}
})
@@ -649,7 +651,7 @@ describe('reasoning utils', () => {
expect(result).toEqual({
thinking: {
type: 'enabled',
budgetTokens: 2048
budgetTokens: 4096
}
})
})
@@ -729,7 +731,7 @@
const result = getGeminiReasoningParams(assistant, model)
expect(result).toEqual({
thinkingConfig: {
thinkingBudget: 16448,
thinkingBudget: expect.any(Number),
includeThoughts: true
}
})
@@ -893,7 +895,7 @@
expect(result).toEqual({
reasoningConfig: {
type: 'enabled',
budgetTokens: 2048
budgetTokens: 4096
}
})
})
@@ -994,4 +996,89 @@
})
})
})
describe('getAnthropicThinkingBudget', () => {
it('should return undefined when reasoningEffort is undefined', async () => {
const result = getAnthropicThinkingBudget(4096, undefined, 'claude-3-7-sonnet')
expect(result).toBeUndefined()
})
it('should return undefined when reasoningEffort is none', async () => {
const result = getAnthropicThinkingBudget(4096, 'none', 'claude-3-7-sonnet')
expect(result).toBeUndefined()
})
it('should return undefined when tokenLimit is not found', async () => {
const { findTokenLimit } = await import('@renderer/config/models')
vi.mocked(findTokenLimit).mockReturnValue(undefined)
const result = getAnthropicThinkingBudget(4096, 'medium', 'unknown-model')
expect(result).toBeUndefined()
})
it('should calculate budget correctly when maxTokens is provided', async () => {
const { findTokenLimit } = await import('@renderer/config/models')
vi.mocked(findTokenLimit).mockReturnValue({ min: 1024, max: 32768 })
const result = getAnthropicThinkingBudget(4096, 'medium', 'claude-3-7-sonnet')
// EFFORT_RATIO['medium'] = 0.5
// budget = Math.floor((32768 - 1024) * 0.5 + 1024)
// = Math.floor(31744 * 0.5 + 1024) = Math.floor(15872 + 1024) = 16896
// budgetTokens = Math.min(16896, 4096) = 4096
// result = Math.max(1024, 4096) = 4096
expect(result).toBe(4096)
})
it('should use tokenLimit.max when maxTokens is undefined', async () => {
const { findTokenLimit } = await import('@renderer/config/models')
vi.mocked(findTokenLimit).mockReturnValue({ min: 1024, max: 32768 })
const result = getAnthropicThinkingBudget(undefined, 'medium', 'claude-3-7-sonnet')
// When maxTokens is undefined, budget is not constrained by maxTokens
// EFFORT_RATIO['medium'] = 0.5
// budget = Math.floor((32768 - 1024) * 0.5 + 1024)
// = Math.floor(31744 * 0.5 + 1024) = Math.floor(15872 + 1024) = 16896
// result = Math.max(1024, 16896) = 16896
expect(result).toBe(16896)
})
it('should enforce minimum budget of 1024', async () => {
const { findTokenLimit } = await import('@renderer/config/models')
vi.mocked(findTokenLimit).mockReturnValue({ min: 100, max: 1000 })
const result = getAnthropicThinkingBudget(500, 'low', 'claude-3-7-sonnet')
// EFFORT_RATIO['low'] = 0.05
// budget = Math.floor((1000 - 100) * 0.05 + 100)
// = Math.floor(900 * 0.05 + 100) = Math.floor(45 + 100) = 145
// budgetTokens = Math.min(145, 500) = 145
// result = Math.max(1024, 145) = 1024
expect(result).toBe(1024)
})
it('should respect effort ratio for high reasoning effort', async () => {
const { findTokenLimit } = await import('@renderer/config/models')
vi.mocked(findTokenLimit).mockReturnValue({ min: 1024, max: 32768 })
const result = getAnthropicThinkingBudget(8192, 'high', 'claude-3-7-sonnet')
// EFFORT_RATIO['high'] = 0.8
// budget = Math.floor((32768 - 1024) * 0.8 + 1024)
// = Math.floor(31744 * 0.8 + 1024) = Math.floor(25395.2 + 1024) = 26419
// budgetTokens = Math.min(26419, 8192) = 8192
// result = Math.max(1024, 8192) = 8192
expect(result).toBe(8192)
})
it('should use full token limit when maxTokens is undefined and reasoning effort is high', async () => {
const { findTokenLimit } = await import('@renderer/config/models')
vi.mocked(findTokenLimit).mockReturnValue({ min: 1024, max: 32768 })
const result = getAnthropicThinkingBudget(undefined, 'high', 'claude-3-7-sonnet')
// When maxTokens is undefined, budget is not constrained by maxTokens
// EFFORT_RATIO['high'] = 0.8
// budget = Math.floor((32768 - 1024) * 0.8 + 1024)
// = Math.floor(31744 * 0.8 + 1024) = Math.floor(25395.2 + 1024) = 26419
// result = Math.max(1024, 26419) = 26419
expect(result).toBe(26419)
})
})
})

View File

@@ -29,13 +29,14 @@ import {
isSupportedThinkingTokenDoubaoModel,
isSupportedThinkingTokenGeminiModel,
isSupportedThinkingTokenHunyuanModel,
isSupportedThinkingTokenMiMoModel,
isSupportedThinkingTokenModel,
isSupportedThinkingTokenQwenModel,
isSupportedThinkingTokenZhipuModel
} from '@renderer/config/models'
import { getStoreSetting } from '@renderer/hooks/useSettings'
import { getAssistantSettings, getProviderByModel } from '@renderer/services/AssistantService'
import type { Assistant, Model } from '@renderer/types'
import type { Assistant, Model, ReasoningEffortOption } from '@renderer/types'
import { EFFORT_RATIO, isSystemProvider, SystemProviderIds } from '@renderer/types'
import type { OpenAIReasoningSummary } from '@renderer/types/aiCoreTypes'
import type { ReasoningEffortOptionalParams } from '@renderer/types/sdk'
@@ -409,6 +410,12 @@ export function getReasoningEffort(assistant: Assistant, model: Model): Reasonin
return { thinking: { type: 'enabled' } }
}
if (isSupportedThinkingTokenMiMoModel(model)) {
return {
thinking: { type: 'enabled' }
}
}
// Default case: no special thinking settings
return {}
}
@@ -480,16 +487,14 @@ export function getAnthropicThinkingBudget(
return undefined
}
const budgetTokens = Math.max(
1024,
Math.floor(
Math.min(
(tokenLimit.max - tokenLimit.min) * effortRatio + tokenLimit.min,
(maxTokens || DEFAULT_MAX_TOKENS) * effortRatio
)
)
)
return budgetTokens
const budget = Math.floor((tokenLimit.max - tokenLimit.min) * effortRatio + tokenLimit.min)
let budgetTokens = budget
if (maxTokens !== undefined) {
budgetTokens = Math.min(budget, maxTokens)
}
return Math.max(1024, budgetTokens)
}
/**
@@ -534,20 +539,25 @@ export function getAnthropicReasoningParams(
return {}
}
// type GoogleThinkingLevel = NonNullable<GoogleGenerativeAIProviderOptions['thinkingConfig']>['thinkingLevel']
type GoogleThinkingLevel = NonNullable<GoogleGenerativeAIProviderOptions['thinkingConfig']>['thinkingLevel']
// function mapToGeminiThinkingLevel(reasoningEffort: ReasoningEffortOption): GoogelThinkingLevel {
// switch (reasoningEffort) {
// case 'low':
// return 'low'
// case 'medium':
// return 'medium'
// case 'high':
// return 'high'
// default:
// return 'medium'
// }
// }
function mapToGeminiThinkingLevel(reasoningEffort: ReasoningEffortOption): GoogleThinkingLevel {
switch (reasoningEffort) {
case 'default':
return undefined
case 'minimal':
return 'minimal'
case 'low':
return 'low'
case 'medium':
return 'medium'
case 'high':
return 'high'
default:
logger.warn('Unknown thinking level for Gemini. Fallback to medium instead.', { reasoningEffort })
return 'medium'
}
}
/**
* Gemini
@@ -580,15 +590,15 @@ export function getGeminiReasoningParams(
}
}
// TODO: many relay providers do not support this yet
// https://ai.google.dev/gemini-api/docs/gemini-3?thinking=high#new_api_features_in_gemini_3
// if (isGemini3ThinkingTokenModel(model)) {
// return {
// thinkingConfig: {
// thinkingLevel: mapToGeminiThinkingLevel(reasoningEffort)
// }
// }
// }
if (isGemini3ThinkingTokenModel(model)) {
return {
thinkingConfig: {
includeThoughts: true,
thinkingLevel: mapToGeminiThinkingLevel(reasoningEffort)
}
}
}
const effortRatio = EFFORT_RATIO[reasoningEffort]

View File

@@ -0,0 +1,17 @@
<svg width="100" height="100" viewBox="0 0 100 100" fill="none" xmlns="http://www.w3.org/2000/svg">
<g transform="translate(10, 42) scale(1.35)">
<!-- m -->
<path d="M1.2683 15.9987C0.9317 15.998 0.6091 15.8638 0.3713 15.6256C0.1335 15.3873 0 15.0644 0 14.7278V7.165C0.0148 6.83757 0.1554 6.52848 0.3924 6.30203C0.6293 6.07559 0.9445 5.94922 1.2722 5.94922C1.6 5.94922 1.9152 6.07559 2.1521 6.30203C2.3891 6.52848 2.5296 6.83757 2.5445 7.165V14.7278C2.5442 14.895 2.5109 15.0606 2.4466 15.215C2.3822 15.3693 2.2881 15.5095 2.1696 15.6276C2.0511 15.7456 1.9105 15.8391 1.7559 15.9028C1.6012 15.9665 1.4356 15.9991 1.2683 15.9987Z" fill="currentColor"/>
<path d="M14.8841 15.9993C14.5468 15.9993 14.2232 15.8655 13.9845 15.6272C13.7457 15.389 13.6112 15.0657 13.6105 14.7284V4.67881L8.9888 9.45281C8.7538 9.69657 8.4315 9.83697 8.0929 9.84312C7.7544 9.84928 7.4272 9.72069 7.1835 9.48563C6.9397 9.25058 6.7993 8.92832 6.7931 8.58976C6.7901 8.42211 6.8201 8.25551 6.8814 8.09947C6.9428 7.94342 7.0342 7.80098 7.1506 7.68028L13.9703 0.661082C14.1463 0.478921 14.3728 0.35354 14.6207 0.301033C14.8685 0.248526 15.1264 0.271291 15.3612 0.366403C15.5961 0.461516 15.7971 0.624637 15.9385 0.834827C16.08 1.04502 16.1554 1.29268 16.1551 1.54603V14.7284C16.1551 15.0655 16.0212 15.3887 15.7828 15.6271C15.5444 15.8654 15.2212 15.9993 14.8841 15.9993Z" fill="currentColor"/>
<path d="M8.0748 9.82621C7.9058 9.82749 7.7383 9.79518 7.5818 9.73117C7.4254 9.66716 7.2833 9.57272 7.1636 9.45332L0.3571 2.4315C0.1224 2.18948 -0.0065 1.86414 -0.0014 1.52705C0.0038 1.18996 0.1427 0.868726 0.3847 0.634023C0.6267 0.399319 0.9521 0.270369 1.2892 0.27554C1.6262 0.280711 1.9475 0.419579 2.1822 0.661595L8.9887 7.66767C9.1623 7.84735 9.2792 8.07413 9.3249 8.31977C9.3706 8.56541 9.343 8.81906 9.2456 9.04914C9.1482 9.27922 8.9852 9.47557 8.7771 9.61374C8.5689 9.75191 8.3247 9.8258 8.0748 9.82621Z" fill="currentColor"/>
<!-- i -->
<path d="M20.3539 15.9997C20.0169 15.9997 19.6936 15.8658 19.4552 15.6274C19.2169 15.3891 19.083 15.0658 19.083 14.7287V1.54636C19.083 1.20928 19.2169 0.886001 19.4552 0.647648C19.6936 0.409296 20.0169 0.275391 20.3539 0.275391C20.691 0.275391 21.0143 0.409296 21.2526 0.647648C21.491 0.886001 21.6249 1.20928 21.6249 1.54636V14.7287C21.6249 14.8956 21.592 15.0609 21.5282 15.2151C21.4643 15.3693 21.3707 15.5094 21.2526 15.6274C21.1346 15.7454 20.9945 15.839 20.8403 15.9029C20.6861 15.9668 20.5208 15.9997 20.3539 15.9997Z" fill="currentColor"/>
<!-- m -->
<path d="M25.8263 15.9992C25.4893 15.9992 25.166 15.8653 24.9276 15.627C24.6893 15.3886 24.5554 15.0654 24.5554 14.7283V7.1655C24.5554 6.82842 24.6893 6.50514 24.9276 6.26679C25.166 6.02844 25.4893 5.89453 25.8263 5.89453C26.1634 5.89453 26.4867 6.02844 26.7251 6.26679C26.9634 6.50514 27.0973 6.82842 27.0973 7.1655V14.7283C27.0973 15.0654 26.9634 15.3886 26.7251 15.627C26.4867 15.8653 26.1634 15.9992 25.8263 15.9992Z" fill="currentColor"/>
<path d="M39.4394 16.0004C39.1023 16.0004 38.779 15.8664 38.5406 15.6281C38.3023 15.3897 38.1684 15.0665 38.1684 14.7294V4.67982L33.5467 9.45382C33.3117 9.69584 32.9901 9.83457 32.6523 9.83949C32.3156 9.84442 31.9894 9.71513 31.7474 9.48008C31.5054 9.24503 31.3674 8.92346 31.3623 8.58613C31.3573 8.24879 31.4863 7.92331 31.7214 7.6813L38.5284 0.662093C38.7044 0.483575 38.9304 0.361405 39.1767 0.311007C39.4233 0.260609 39.6787 0.284243 39.9114 0.378925C40.1437 0.473608 40.3427 0.635093 40.4837 0.842994C40.6247 1.05089 40.7007 1.29589 40.7027 1.54704V14.7294C40.7017 15.0649 40.5687 15.3866 40.3327 15.6246C40.0957 15.8625 39.7747 15.9976 39.4394 16.0004Z" fill="currentColor"/>
<path d="M32.6324 9.82618C32.4634 9.82746 32.2964 9.79516 32.1394 9.73115C31.9834 9.66713 31.8414 9.57269 31.7214 9.45329L24.9151 2.43147C24.7921 2.31326 24.6942 2.1715 24.6271 2.01463C24.5601 1.85777 24.5253 1.68901 24.5249 1.51842C24.5244 1.34783 24.5583 1.1789 24.6246 1.02169C24.6908 0.864476 24.788 0.722207 24.9104 0.603357C25.0327 0.484507 25.1778 0.391509 25.3369 0.329905C25.4959 0.268302 25.6658 0.239353 25.8363 0.244785C26.0068 0.250217 26.1745 0.289918 26.3293 0.361522C26.4841 0.433126 26.623 0.535168 26.7375 0.661566L33.5467 7.66764C33.7204 7.84732 33.8374 8.0741 33.8824 8.31974C33.9284 8.56538 33.9014 8.81903 33.8034 9.04911C33.7064 9.27919 33.5434 9.47554 33.3354 9.61371C33.1267 9.75189 32.8824 9.82577 32.6324 9.82618Z" fill="currentColor"/>
<!-- o -->
<path d="M50.9434 15.9814C49.5534 15.9865 48.1864 15.6287 46.9774 14.9433C45.7674 14.2579 44.7584 13.2687 44.0484 12.0735C43.3384 10.8783 42.9534 9.5185 42.9304 8.12863C42.9074 6.73875 43.2474 5.36692 43.9164 4.1488C44.0844 3.86356 44.3564 3.65487 44.6754 3.56707C44.9944 3.47927 45.3344 3.51928 45.6244 3.67859C45.9144 3.8379 46.1314 4.10397 46.2274 4.42026C46.3244 4.73656 46.2944 5.07816 46.1434 5.3725C45.5764 6.40664 45.3594 7.59693 45.5264 8.76468C45.6924 9.93243 46.2334 11.0147 47.0674 11.8489C47.9014 12.6831 48.9834 13.2244 50.1514 13.3914C51.3184 13.5584 52.5094 13.3421 53.5434 12.7751C53.8384 12.6125 54.1864 12.5738 54.5104 12.6676C54.8344 12.7614 55.1074 12.98 55.2704 13.2753C55.4324 13.5706 55.4714 13.9184 55.3774 14.2422C55.2834 14.566 55.0654 14.8393 54.7694 15.0019C53.5974 15.6455 52.2814 15.9824 50.9434 15.9814Z" fill="currentColor"/>
<path d="M56.8104 12.5052C56.5944 12.5044 56.3834 12.4484 56.1954 12.3424C55.9014 12.1795 55.6824 11.9066 55.5894 11.5833C55.4954 11.26 55.5324 10.9126 55.6944 10.6171C56.2614 9.58297 56.4784 8.39268 56.3114 7.22493C56.1454 6.05718 55.6044 4.97496 54.7704 4.14073C53.9364 3.30649 52.8544 2.76525 51.6864 2.59825C50.5194 2.43125 49.3284 2.64749 48.2944 3.21452C48.1474 3.30059 47.9854 3.3564 47.8164 3.37863C47.6484 3.40087 47.4774 3.38908 47.3134 3.34397C47.1494 3.29886 46.9964 3.22134 46.8624 3.116C46.7294 3.01066 46.6184 2.87964 46.5364 2.73069C46.4544 2.58174 46.4034 2.41788 46.3864 2.24882C46.3684 2.07975 46.3854 1.90891 46.4354 1.7464C46.4854 1.58389 46.5674 1.43301 46.6764 1.3027C46.7854 1.17238 46.9194 1.06527 47.0704 0.987704C48.5874 0.155491 50.3324 -0.162266 52.0454 0.0821474C53.7574 0.326561 55.3454 1.11995 56.5684 2.34319C57.7914 3.56642 58.5844 5.15347 58.8294 6.86604C59.0734 8.5786 58.7554 10.3242 57.9234 11.8408C57.8144 12.0411 57.6534 12.2084 57.4574 12.3253C57.2624 12.4422 57.0384 12.5043 56.8104 12.5052Z" fill="currentColor"/>
</g>
</svg>


View File

@@ -0,0 +1,17 @@
<svg width="100" height="100" viewBox="0 0 100 100" fill="none" xmlns="http://www.w3.org/2000/svg">
<g transform="translate(10, 42) scale(1.35)">
<!-- m -->
<path d="M1.2683 15.9987C0.9317 15.998 0.6091 15.8638 0.3713 15.6256C0.1335 15.3873 0 15.0644 0 14.7278V7.165C0.0148 6.83757 0.1554 6.52848 0.3924 6.30203C0.6293 6.07559 0.9445 5.94922 1.2722 5.94922C1.6 5.94922 1.9152 6.07559 2.1521 6.30203C2.3891 6.52848 2.5296 6.83757 2.5445 7.165V14.7278C2.5442 14.895 2.5109 15.0606 2.4466 15.215C2.3822 15.3693 2.2881 15.5095 2.1696 15.6276C2.0511 15.7456 1.9105 15.8391 1.7559 15.9028C1.6012 15.9665 1.4356 15.9991 1.2683 15.9987Z" fill="currentColor"/>
<path d="M14.8841 15.9993C14.5468 15.9993 14.2232 15.8655 13.9845 15.6272C13.7457 15.389 13.6112 15.0657 13.6105 14.7284V4.67881L8.9888 9.45281C8.7538 9.69657 8.4315 9.83697 8.0929 9.84312C7.7544 9.84928 7.4272 9.72069 7.1835 9.48563C6.9397 9.25058 6.7993 8.92832 6.7931 8.58976C6.7901 8.42211 6.8201 8.25551 6.8814 8.09947C6.9428 7.94342 7.0342 7.80098 7.1506 7.68028L13.9703 0.661082C14.1463 0.478921 14.3728 0.35354 14.6207 0.301033C14.8685 0.248526 15.1264 0.271291 15.3612 0.366403C15.5961 0.461516 15.7971 0.624637 15.9385 0.834827C16.08 1.04502 16.1554 1.29268 16.1551 1.54603V14.7284C16.1551 15.0655 16.0212 15.3887 15.7828 15.6271C15.5444 15.8654 15.2212 15.9993 14.8841 15.9993Z" fill="currentColor"/>
<path d="M8.0748 9.82621C7.9058 9.82749 7.7383 9.79518 7.5818 9.73117C7.4254 9.66716 7.2833 9.57272 7.1636 9.45332L0.3571 2.4315C0.1224 2.18948 -0.0065 1.86414 -0.0014 1.52705C0.0038 1.18996 0.1427 0.868726 0.3847 0.634023C0.6267 0.399319 0.9521 0.270369 1.2892 0.27554C1.6262 0.280711 1.9475 0.419579 2.1822 0.661595L8.9887 7.66767C9.1623 7.84735 9.2792 8.07413 9.3249 8.31977C9.3706 8.56541 9.343 8.81906 9.2456 9.04914C9.1482 9.27922 8.9852 9.47557 8.7771 9.61374C8.5689 9.75191 8.3247 9.8258 8.0748 9.82621Z" fill="currentColor"/>
<!-- i -->
<path d="M20.3539 15.9997C20.0169 15.9997 19.6936 15.8658 19.4552 15.6274C19.2169 15.3891 19.083 15.0658 19.083 14.7287V1.54636C19.083 1.20928 19.2169 0.886001 19.4552 0.647648C19.6936 0.409296 20.0169 0.275391 20.3539 0.275391C20.691 0.275391 21.0143 0.409296 21.2526 0.647648C21.491 0.886001 21.6249 1.20928 21.6249 1.54636V14.7287C21.6249 14.8956 21.592 15.0609 21.5282 15.2151C21.4643 15.3693 21.3707 15.5094 21.2526 15.6274C21.1346 15.7454 20.9945 15.839 20.8403 15.9029C20.6861 15.9668 20.5208 15.9997 20.3539 15.9997Z" fill="currentColor"/>
<!-- m -->
<path d="M25.8263 15.9992C25.4893 15.9992 25.166 15.8653 24.9276 15.627C24.6893 15.3886 24.5554 15.0654 24.5554 14.7283V7.1655C24.5554 6.82842 24.6893 6.50514 24.9276 6.26679C25.166 6.02844 25.4893 5.89453 25.8263 5.89453C26.1634 5.89453 26.4867 6.02844 26.7251 6.26679C26.9634 6.50514 27.0973 6.82842 27.0973 7.1655V14.7283C27.0973 15.0654 26.9634 15.3886 26.7251 15.627C26.4867 15.8653 26.1634 15.9992 25.8263 15.9992Z" fill="currentColor"/>
<path d="M39.4394 16.0004C39.1023 16.0004 38.779 15.8664 38.5406 15.6281C38.3023 15.3897 38.1684 15.0665 38.1684 14.7294V4.67982L33.5467 9.45382C33.3117 9.69584 32.9901 9.83457 32.6523 9.83949C32.3156 9.84442 31.9894 9.71513 31.7474 9.48008C31.5054 9.24503 31.3674 8.92346 31.3623 8.58613C31.3573 8.24879 31.4863 7.92331 31.7214 7.6813L38.5284 0.662093C38.7044 0.483575 38.9304 0.361405 39.1767 0.311007C39.4233 0.260609 39.6787 0.284243 39.9114 0.378925C40.1437 0.473608 40.3427 0.635093 40.4837 0.842994C40.6247 1.05089 40.7007 1.29589 40.7027 1.54704V14.7294C40.7017 15.0649 40.5687 15.3866 40.3327 15.6246C40.0957 15.8625 39.7747 15.9976 39.4394 16.0004Z" fill="currentColor"/>
<path d="M32.6324 9.82618C32.4634 9.82746 32.2964 9.79516 32.1394 9.73115C31.9834 9.66713 31.8414 9.57269 31.7214 9.45329L24.9151 2.43147C24.7921 2.31326 24.6942 2.1715 24.6271 2.01463C24.5601 1.85777 24.5253 1.68901 24.5249 1.51842C24.5244 1.34783 24.5583 1.1789 24.6246 1.02169C24.6908 0.864476 24.788 0.722207 24.9104 0.603357C25.0327 0.484507 25.1778 0.391509 25.3369 0.329905C25.4959 0.268302 25.6658 0.239353 25.8363 0.244785C26.0068 0.250217 26.1745 0.289918 26.3293 0.361522C26.4841 0.433126 26.623 0.535168 26.7375 0.661566L33.5467 7.66764C33.7204 7.84732 33.8374 8.0741 33.8824 8.31974C33.9284 8.56538 33.9014 8.81903 33.8034 9.04911C33.7064 9.27919 33.5434 9.47554 33.3354 9.61371C33.1267 9.75189 32.8824 9.82577 32.6324 9.82618Z" fill="currentColor"/>
<!-- o -->
<path d="M50.9434 15.9814C49.5534 15.9865 48.1864 15.6287 46.9774 14.9433C45.7674 14.2579 44.7584 13.2687 44.0484 12.0735C43.3384 10.8783 42.9534 9.5185 42.9304 8.12863C42.9074 6.73875 43.2474 5.36692 43.9164 4.1488C44.0844 3.86356 44.3564 3.65487 44.6754 3.56707C44.9944 3.47927 45.3344 3.51928 45.6244 3.67859C45.9144 3.8379 46.1314 4.10397 46.2274 4.42026C46.3244 4.73656 46.2944 5.07816 46.1434 5.3725C45.5764 6.40664 45.3594 7.59693 45.5264 8.76468C45.6924 9.93243 46.2334 11.0147 47.0674 11.8489C47.9014 12.6831 48.9834 13.2244 50.1514 13.3914C51.3184 13.5584 52.5094 13.3421 53.5434 12.7751C53.8384 12.6125 54.1864 12.5738 54.5104 12.6676C54.8344 12.7614 55.1074 12.98 55.2704 13.2753C55.4324 13.5706 55.4714 13.9184 55.3774 14.2422C55.2834 14.566 55.0654 14.8393 54.7694 15.0019C53.5974 15.6455 52.2814 15.9824 50.9434 15.9814Z" fill="currentColor"/>
<path d="M56.8104 12.5052C56.5944 12.5044 56.3834 12.4484 56.1954 12.3424C55.9014 12.1795 55.6824 11.9066 55.5894 11.5833C55.4954 11.26 55.5324 10.9126 55.6944 10.6171C56.2614 9.58297 56.4784 8.39268 56.3114 7.22493C56.1454 6.05718 55.6044 4.97496 54.7704 4.14073C53.9364 3.30649 52.8544 2.76525 51.6864 2.59825C50.5194 2.43125 49.3284 2.64749 48.2944 3.21452C48.1474 3.30059 47.9854 3.3564 47.8164 3.37863C47.6484 3.40087 47.4774 3.38908 47.3134 3.34397C47.1494 3.29886 46.9964 3.22134 46.8624 3.116C46.7294 3.01066 46.6184 2.87964 46.5364 2.73069C46.4544 2.58174 46.4034 2.41788 46.3864 2.24882C46.3684 2.07975 46.3854 1.90891 46.4354 1.7464C46.4854 1.58389 46.5674 1.43301 46.6764 1.3027C46.7854 1.17238 46.9194 1.06527 47.0704 0.987704C48.5874 0.155491 50.3324 -0.162266 52.0454 0.0821474C53.7574 0.326561 55.3454 1.11995 56.5684 2.34319C57.7914 3.56642 58.5844 5.15347 58.8294 6.86604C59.0734 8.5786 58.7554 10.3242 57.9234 11.8408C57.8144 12.0411 57.6534 12.2084 57.4574 12.3253C57.2624 12.4422 57.0384 12.5043 56.8104 12.5052Z" fill="currentColor"/>
</g>
</svg>


View File

@@ -695,15 +695,20 @@ describe('getThinkModelType - Comprehensive Coverage', () => {
})
describe('Gemini models', () => {
it('should return gemini for Flash models', () => {
expect(getThinkModelType(createModel({ id: 'gemini-2.5-flash-latest' }))).toBe('gemini')
expect(getThinkModelType(createModel({ id: 'gemini-flash-latest' }))).toBe('gemini')
expect(getThinkModelType(createModel({ id: 'gemini-flash-lite-latest' }))).toBe('gemini')
it('should return gemini2_flash for Flash models', () => {
expect(getThinkModelType(createModel({ id: 'gemini-2.5-flash-latest' }))).toBe('gemini2_flash')
})
it('should return gemini3_flash for Gemini 3 Flash models', () => {
expect(getThinkModelType(createModel({ id: 'gemini-3-flash-preview' }))).toBe('gemini3_flash')
expect(getThinkModelType(createModel({ id: 'gemini-flash-latest' }))).toBe('gemini3_flash')
})
it('should return gemini_pro for Pro models', () => {
expect(getThinkModelType(createModel({ id: 'gemini-2.5-pro-latest' }))).toBe('gemini_pro')
expect(getThinkModelType(createModel({ id: 'gemini-pro-latest' }))).toBe('gemini_pro')
it('should return gemini2_pro for Gemini 2.5 Pro models', () => {
expect(getThinkModelType(createModel({ id: 'gemini-2.5-pro-latest' }))).toBe('gemini2_pro')
})
it('should return gemini3_pro for Gemini 3 Pro models', () => {
expect(getThinkModelType(createModel({ id: 'gemini-3-pro-preview' }))).toBe('gemini3_pro')
expect(getThinkModelType(createModel({ id: 'gemini-pro-latest' }))).toBe('gemini3_pro')
})
})
@@ -810,7 +815,7 @@
name: 'gemini-2.5-flash-latest'
})
)
).toBe('gemini')
).toBe('gemini2_flash')
})
it('should use id result when id matches', () => {
@@ -835,7 +840,7 @@
it('should handle case insensitivity correctly', () => {
expect(getThinkModelType(createModel({ id: 'GPT-5.1' }))).toBe('gpt5_1')
expect(getThinkModelType(createModel({ id: 'Gemini-2.5-Flash-Latest' }))).toBe('gemini')
expect(getThinkModelType(createModel({ id: 'Gemini-2.5-Flash-Latest' }))).toBe('gemini2_flash')
expect(getThinkModelType(createModel({ id: 'DeepSeek-V3.1' }))).toBe('deepseek_hybrid')
})
@@ -855,7 +860,7 @@
it('should handle models with version suffixes', () => {
expect(getThinkModelType(createModel({ id: 'gpt-5-preview-2024' }))).toBe('gpt5')
expect(getThinkModelType(createModel({ id: 'o3-mini-2024' }))).toBe('o')
expect(getThinkModelType(createModel({ id: 'gemini-2.5-flash-latest-001' }))).toBe('gemini')
expect(getThinkModelType(createModel({ id: 'gemini-2.5-flash-latest-001' }))).toBe('gemini2_flash')
})
it('should prioritize GPT-5.1 over GPT-5 detection', () => {
@@ -955,6 +960,14 @@ describe('Gemini Models', () => {
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-3-flash-preview',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'google/gemini-3-pro-preview',
@@ -996,6 +1009,31 @@
group: ''
})
).toBe(true)
// Version with date suffixes
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-3-flash-preview-09-2025',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-3-pro-preview-09-2025',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-3-flash-exp-1234',
name: '',
provider: '',
group: ''
})
).toBe(true)
// Version with decimals
expect(
isSupportedThinkingTokenGeminiModel({
@@ -1015,7 +1053,8 @@
).toBe(true)
})
it('should return true for gemini-3 image models', () => {
it('should return true for gemini-3-pro-image models only', () => {
// Only gemini-3-pro-image models should return true
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-3-pro-image-preview',
@ -1024,6 +1063,17 @@ describe('Gemini Models', () => {
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-3-pro-image',
name: '',
provider: '',
group: ''
})
).toBe(true)
})
it('should return false for other gemini-3 image models', () => {
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-3.0-flash-image-preview',
@@ -1086,6 +1136,22 @@
group: ''
})
).toBe(false)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-3-flash-preview-tts',
name: '',
provider: '',
group: ''
})
).toBe(false)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-3-pro-tts',
name: '',
provider: '',
group: ''
})
).toBe(false)
})
it('should return false for older gemini models', () => {
@@ -1811,7 +1877,7 @@ describe('getModelSupportedReasoningEffortOptions', () => {
describe('Gemini models', () => {
it('should return correct options for Gemini Flash models', () => {
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gemini-2.5-flash-latest' }))).toEqual([
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gemini-2.5-flash' }))).toEqual([
'default',
'none',
'low',
@@ -1819,36 +1885,46 @@
'high',
'auto'
])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gemini-flash-latest' }))).toEqual([
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gemini-3-flash-preview' }))).toEqual([
'default',
'none',
'minimal',
'low',
'medium',
'high',
'auto'
'high'
])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gemini-flash-latest' }))).toEqual([
'default',
'minimal',
'low',
'medium',
'high'
])
})
it('should return correct options for Gemini Pro models', () => {
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gemini-2.5-pro-latest' }))).toEqual([
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gemini-2.5-pro' }))).toEqual([
'default',
'low',
'medium',
'high',
'auto'
])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gemini-3-pro-preview' }))).toEqual([
'default',
'low',
'high'
])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gemini-pro-latest' }))).toEqual([
'default',
'low',
'medium',
'high',
'auto'
'high'
])
})
it('should return correct options for Gemini 3 models', () => {
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gemini-3-flash' }))).toEqual([
'default',
'minimal',
'low',
'medium',
'high'
@@ -1856,7 +1932,6 @@
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gemini-3-pro-preview' }))).toEqual([
'default',
'low',
'medium',
'high'
])
})
@@ -2078,7 +2153,7 @@
const geminiModel = createModel({ id: 'gemini-2.5-flash-latest' })
const geminiResult = getModelSupportedReasoningEffortOptions(geminiModel)
expect(geminiResult).toEqual(MODEL_SUPPORTED_OPTIONS.gemini)
expect(geminiResult).toEqual(MODEL_SUPPORTED_OPTIONS.gemini2_flash)
})
})
})

View File

@@ -20,6 +20,8 @@ import {
getModelSupportedVerbosity,
groupQwenModels,
isAnthropicModel,
isGemini3FlashModel,
isGemini3ProModel,
isGeminiModel,
isGemmaModel,
isGenerateImageModels,
@@ -432,6 +434,101 @@ describe('model utils', () => {
})
})
describe('isGemini3FlashModel', () => {
it('detects gemini-3-flash model', () => {
expect(isGemini3FlashModel(createModel({ id: 'gemini-3-flash' }))).toBe(true)
})
it('detects gemini-3-flash-preview model', () => {
expect(isGemini3FlashModel(createModel({ id: 'gemini-3-flash-preview' }))).toBe(true)
})
it('detects gemini-3-flash with version suffixes', () => {
expect(isGemini3FlashModel(createModel({ id: 'gemini-3-flash-latest' }))).toBe(true)
expect(isGemini3FlashModel(createModel({ id: 'gemini-3-flash-preview-09-2025' }))).toBe(true)
expect(isGemini3FlashModel(createModel({ id: 'gemini-3-flash-exp-1234' }))).toBe(true)
})
it('detects gemini-flash-latest alias', () => {
expect(isGemini3FlashModel(createModel({ id: 'gemini-flash-latest' }))).toBe(true)
expect(isGemini3FlashModel(createModel({ id: 'Gemini-Flash-Latest' }))).toBe(true)
})
it('detects gemini-3-flash with uppercase', () => {
expect(isGemini3FlashModel(createModel({ id: 'Gemini-3-Flash' }))).toBe(true)
expect(isGemini3FlashModel(createModel({ id: 'GEMINI-3-FLASH-PREVIEW' }))).toBe(true)
})
it('excludes gemini-3-flash-image models', () => {
expect(isGemini3FlashModel(createModel({ id: 'gemini-3-flash-image-preview' }))).toBe(false)
expect(isGemini3FlashModel(createModel({ id: 'gemini-3-flash-image' }))).toBe(false)
})
it('returns false for non-flash gemini-3 models', () => {
expect(isGemini3FlashModel(createModel({ id: 'gemini-3-pro' }))).toBe(false)
expect(isGemini3FlashModel(createModel({ id: 'gemini-3-pro-preview' }))).toBe(false)
expect(isGemini3FlashModel(createModel({ id: 'gemini-3-pro-image-preview' }))).toBe(false)
})
it('returns false for other gemini models', () => {
expect(isGemini3FlashModel(createModel({ id: 'gemini-2-flash' }))).toBe(false)
expect(isGemini3FlashModel(createModel({ id: 'gemini-2-flash-preview' }))).toBe(false)
expect(isGemini3FlashModel(createModel({ id: 'gemini-2.5-flash-preview-09-2025' }))).toBe(false)
})
it('returns false for null/undefined models', () => {
expect(isGemini3FlashModel(null)).toBe(false)
expect(isGemini3FlashModel(undefined)).toBe(false)
})
})
describe('isGemini3ProModel', () => {
it('detects gemini-3-pro model', () => {
expect(isGemini3ProModel(createModel({ id: 'gemini-3-pro' }))).toBe(true)
})
it('detects gemini-3-pro-preview model', () => {
expect(isGemini3ProModel(createModel({ id: 'gemini-3-pro-preview' }))).toBe(true)
})
it('detects gemini-3-pro with version suffixes', () => {
expect(isGemini3ProModel(createModel({ id: 'gemini-3-pro-latest' }))).toBe(true)
expect(isGemini3ProModel(createModel({ id: 'gemini-3-pro-preview-09-2025' }))).toBe(true)
expect(isGemini3ProModel(createModel({ id: 'gemini-3-pro-exp-1234' }))).toBe(true)
})
it('detects gemini-pro-latest alias', () => {
expect(isGemini3ProModel(createModel({ id: 'gemini-pro-latest' }))).toBe(true)
expect(isGemini3ProModel(createModel({ id: 'Gemini-Pro-Latest' }))).toBe(true)
})
it('detects gemini-3-pro with uppercase', () => {
expect(isGemini3ProModel(createModel({ id: 'Gemini-3-Pro' }))).toBe(true)
expect(isGemini3ProModel(createModel({ id: 'GEMINI-3-PRO-PREVIEW' }))).toBe(true)
})
it('excludes gemini-3-pro-image models', () => {
expect(isGemini3ProModel(createModel({ id: 'gemini-3-pro-image-preview' }))).toBe(false)
expect(isGemini3ProModel(createModel({ id: 'gemini-3-pro-image' }))).toBe(false)
expect(isGemini3ProModel(createModel({ id: 'gemini-3-pro-image-latest' }))).toBe(false)
})
it('returns false for non-pro gemini-3 models', () => {
expect(isGemini3ProModel(createModel({ id: 'gemini-3-flash' }))).toBe(false)
expect(isGemini3ProModel(createModel({ id: 'gemini-3-flash-preview' }))).toBe(false)
})
it('returns false for other gemini models', () => {
expect(isGemini3ProModel(createModel({ id: 'gemini-2-pro' }))).toBe(false)
expect(isGemini3ProModel(createModel({ id: 'gemini-2.5-pro-preview-09-2025' }))).toBe(false)
})
it('returns false for null/undefined models', () => {
expect(isGemini3ProModel(null)).toBe(false)
expect(isGemini3ProModel(undefined)).toBe(false)
})
})
describe('isZhipuModel', () => {
it('detects Zhipu models by provider', () => {
expect(isZhipuModel(createModel({ provider: 'zhipu' }))).toBe(true)

View File

@@ -1791,5 +1791,13 @@ export const SYSTEM_MODELS: Record<SystemProviderId | 'defaultModel', Model[]> =
provider: 'cerebras',
group: 'qwen'
}
],
mimo: [
{
id: 'mimo-v2-flash',
name: 'Mimo V2 Flash',
provider: 'mimo',
group: 'Mimo'
}
]
}

View File

@@ -103,6 +103,7 @@ import MicrosoftModelLogo from '@renderer/assets/images/models/microsoft.png'
import MicrosoftModelLogoDark from '@renderer/assets/images/models/microsoft_dark.png'
import MidjourneyModelLogo from '@renderer/assets/images/models/midjourney.png'
import MidjourneyModelLogoDark from '@renderer/assets/images/models/midjourney_dark.png'
import MiMoModelLogo from '@renderer/assets/images/models/mimo.svg'
import {
default as MinicpmModelLogo,
default as MinicpmModelLogoDark
@@ -301,7 +302,8 @@ export function getModelLogoById(modelId: string): string | undefined {
bytedance: BytedanceModelLogo,
ling: LingModelLogo,
ring: LingModelLogo,
'(V_1|V_1_TURBO|V_2|V_2A|V_2_TURBO|DESCRIBE|UPSCALE)': IdeogramModelLogo
'(V_1|V_1_TURBO|V_2|V_2A|V_2_TURBO|DESCRIBE|UPSCALE)': IdeogramModelLogo,
mimo: MiMoModelLogo
} as const satisfies Record<string, string>
for (const key in logoMap) {

View File

@@ -20,7 +20,7 @@ import {
isOpenAIReasoningModel,
isSupportedReasoningEffortOpenAIModel
} from './openai'
import { GEMINI_FLASH_MODEL_REGEX, isGemini3ThinkingTokenModel } from './utils'
import { GEMINI_FLASH_MODEL_REGEX, isGemini3FlashModel, isGemini3ProModel } from './utils'
import { isTextToImageModel } from './vision'
// Reasoning models
@@ -43,15 +43,17 @@ export const MODEL_SUPPORTED_REASONING_EFFORT = {
gpt52pro: ['medium', 'high', 'xhigh'] as const,
grok: ['low', 'high'] as const,
grok4_fast: ['auto'] as const,
gemini: ['low', 'medium', 'high', 'auto'] as const,
gemini3: ['low', 'medium', 'high'] as const,
gemini_pro: ['low', 'medium', 'high', 'auto'] as const,
gemini2_flash: ['low', 'medium', 'high', 'auto'] as const,
gemini2_pro: ['low', 'medium', 'high', 'auto'] as const,
gemini3_flash: ['minimal', 'low', 'medium', 'high'] as const,
gemini3_pro: ['low', 'high'] as const,
qwen: ['low', 'medium', 'high'] as const,
qwen_thinking: ['low', 'medium', 'high'] as const,
doubao: ['auto', 'high'] as const,
doubao_no_auto: ['high'] as const,
doubao_after_251015: ['minimal', 'low', 'medium', 'high'] as const,
hunyuan: ['auto'] as const,
mimo: ['auto'] as const,
zhipu: ['auto'] as const,
perplexity: ['low', 'medium', 'high'] as const,
deepseek_hybrid: ['auto'] as const
@@ -72,14 +74,16 @@ export const MODEL_SUPPORTED_OPTIONS: ThinkingOptionConfig = {
gpt52pro: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.gpt52pro] as const,
grok: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.grok] as const,
grok4_fast: ['default', 'none', ...MODEL_SUPPORTED_REASONING_EFFORT.grok4_fast] as const,
gemini: ['default', 'none', ...MODEL_SUPPORTED_REASONING_EFFORT.gemini] as const,
gemini_pro: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.gemini_pro] as const,
gemini3: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.gemini3] as const,
gemini2_flash: ['default', 'none', ...MODEL_SUPPORTED_REASONING_EFFORT.gemini2_flash] as const,
gemini2_pro: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.gemini2_pro] as const,
gemini3_flash: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.gemini3_flash] as const,
gemini3_pro: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.gemini3_pro] as const,
qwen: ['default', 'none', ...MODEL_SUPPORTED_REASONING_EFFORT.qwen] as const,
qwen_thinking: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.qwen_thinking] as const,
doubao: ['default', 'none', ...MODEL_SUPPORTED_REASONING_EFFORT.doubao] as const,
doubao_no_auto: ['default', 'none', ...MODEL_SUPPORTED_REASONING_EFFORT.doubao_no_auto] as const,
doubao_after_251015: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.doubao_after_251015] as const,
mimo: ['default', 'none', ...MODEL_SUPPORTED_REASONING_EFFORT.mimo] as const,
hunyuan: ['default', 'none', ...MODEL_SUPPORTED_REASONING_EFFORT.hunyuan] as const,
zhipu: ['default', 'none', ...MODEL_SUPPORTED_REASONING_EFFORT.zhipu] as const,
perplexity: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.perplexity] as const,
@@ -100,8 +104,7 @@ const _getThinkModelType = (model: Model): ThinkingModelType => {
const modelId = getLowerBaseModelName(model.id)
if (isOpenAIDeepResearchModel(model)) {
return 'openai_deep_research'
}
if (isGPT51SeriesModel(model)) {
} else if (isGPT51SeriesModel(model)) {
if (modelId.includes('codex')) {
thinkingModelType = 'gpt5_1_codex'
if (isGPT51CodexMaxModel(model)) {
@@ -129,16 +132,18 @@
} else if (isGrok4FastReasoningModel(model)) {
thinkingModelType = 'grok4_fast'
} else if (isSupportedThinkingTokenGeminiModel(model)) {
if (GEMINI_FLASH_MODEL_REGEX.test(model.id)) {
thinkingModelType = 'gemini'
if (isGemini3FlashModel(model)) {
thinkingModelType = 'gemini3_flash'
} else if (isGemini3ProModel(model)) {
thinkingModelType = 'gemini3_pro'
} else if (GEMINI_FLASH_MODEL_REGEX.test(model.id)) {
thinkingModelType = 'gemini2_flash'
} else {
thinkingModelType = 'gemini_pro'
thinkingModelType = 'gemini2_pro'
}
if (isGemini3ThinkingTokenModel(model)) {
thinkingModelType = 'gemini3'
}
} else if (isSupportedReasoningEffortGrokModel(model)) thinkingModelType = 'grok'
else if (isSupportedThinkingTokenQwenModel(model)) {
} else if (isSupportedReasoningEffortGrokModel(model)) {
thinkingModelType = 'grok'
} else if (isSupportedThinkingTokenQwenModel(model)) {
if (isQwenAlwaysThinkModel(model)) {
thinkingModelType = 'qwen_thinking'
}
@@ -151,10 +156,17 @@
} else {
thinkingModelType = 'doubao_no_auto'
}
} else if (isSupportedThinkingTokenHunyuanModel(model)) thinkingModelType = 'hunyuan'
else if (isSupportedReasoningEffortPerplexityModel(model)) thinkingModelType = 'perplexity'
else if (isSupportedThinkingTokenZhipuModel(model)) thinkingModelType = 'zhipu'
else if (isDeepSeekHybridInferenceModel(model)) thinkingModelType = 'deepseek_hybrid'
} else if (isSupportedThinkingTokenHunyuanModel(model)) {
thinkingModelType = 'hunyuan'
} else if (isSupportedReasoningEffortPerplexityModel(model)) {
thinkingModelType = 'perplexity'
} else if (isSupportedThinkingTokenZhipuModel(model)) {
thinkingModelType = 'zhipu'
} else if (isDeepSeekHybridInferenceModel(model)) {
thinkingModelType = 'deepseek_hybrid'
} else if (isSupportedThinkingTokenMiMoModel(model)) {
thinkingModelType = 'mimo'
}
return thinkingModelType
}
@@ -263,7 +275,8 @@ function _isSupportedThinkingTokenModel(model: Model): boolean {
isSupportedThinkingTokenClaudeModel(model) ||
isSupportedThinkingTokenDoubaoModel(model) ||
isSupportedThinkingTokenHunyuanModel(model) ||
isSupportedThinkingTokenZhipuModel(model)
isSupportedThinkingTokenZhipuModel(model) ||
isSupportedThinkingTokenMiMoModel(model)
)
}
@@ -561,6 +574,11 @@ export const isSupportedThinkingTokenZhipuModel = (model: Model): boolean => {
return ['glm-4.5', 'glm-4.6'].some((id) => modelId.includes(id))
}
export const isSupportedThinkingTokenMiMoModel = (model: Model): boolean => {
const modelId = getLowerBaseModelName(model.id, '/')
return ['mimo-v2-flash'].some((id) => modelId.includes(id))
}
export const isDeepSeekHybridInferenceModel = (model: Model) => {
const { idResult, nameResult } = withModelIdAndNameAsId(model, (model) => {
const modelId = getLowerBaseModelName(model.id)
@@ -599,6 +617,8 @@ export const isZhipuReasoningModel = (model?: Model): boolean => {
return isSupportedThinkingTokenZhipuModel(model) || modelId.includes('glm-z1')
}
export const isMiMoReasoningModel = isSupportedThinkingTokenMiMoModel
export const isStepReasoningModel = (model?: Model): boolean => {
if (!model) {
return false
@@ -649,6 +669,7 @@ export function isReasoningModel(model?: Model): boolean {
isDeepSeekHybridInferenceModel(model) ||
isLingReasoningModel(model) ||
isMiniMaxReasoningModel(model) ||
isMiMoReasoningModel(model) ||
modelId.includes('magistral') ||
modelId.includes('pangu-pro-moe') ||
modelId.includes('seed-oss') ||

View File

@@ -30,7 +30,8 @@ export const FUNCTION_CALLING_MODELS = [
'kimi-k2(?:-[\\w-]+)?',
'ling-\\w+(?:-[\\w-]+)?',
'ring-\\w+(?:-[\\w-]+)?',
'minimax-m2'
'minimax-m2',
'mimo-v2-flash'
] as const
const FUNCTION_CALLING_EXCLUDED_MODELS = [

View File

@@ -267,3 +267,43 @@ export const isGemini3ThinkingTokenModel = (model: Model) => {
const modelId = getLowerBaseModelName(model.id)
return isGemini3Model(model) && !modelId.includes('image')
}
/**
* Check if the model is a Gemini 3 Flash model
* Matches: gemini-3-flash, gemini-3-flash-preview, gemini-3-flash-preview-09-2025, gemini-flash-latest (alias)
* Excludes: gemini-3-flash-image-preview
* @param model - The model to check
* @returns true if the model is a Gemini 3 Flash model
*/
export const isGemini3FlashModel = (model: Model | undefined | null): boolean => {
if (!model) {
return false
}
const modelId = getLowerBaseModelName(model.id)
// Check for gemini-flash-latest alias (currently points to gemini-3-flash, may change in future)
if (modelId === 'gemini-flash-latest') {
return true
}
// Check for gemini-3-flash with optional suffixes, excluding image variants
return /gemini-3-flash(?!-image)(?:-[\w-]+)*$/i.test(modelId)
}
/**
* Check if the model is a Gemini 3 Pro model
* Matches: gemini-3-pro, gemini-3-pro-preview, gemini-3-pro-preview-09-2025, gemini-pro-latest (alias)
* Excludes: gemini-3-pro-image-preview
* @param model - The model to check
* @returns true if the model is a Gemini 3 Pro model
*/
export const isGemini3ProModel = (model: Model | undefined | null): boolean => {
if (!model) {
return false
}
const modelId = getLowerBaseModelName(model.id)
// Check for gemini-pro-latest alias (currently points to gemini-3-pro, may change in future)
if (modelId === 'gemini-pro-latest') {
return true
}
// Check for gemini-3-pro with optional suffixes, excluding image variants
return /gemini-3-pro(?!-image)(?:-[\w-]+)*$/i.test(modelId)
}
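Both guards hinge on the negative lookahead. A quick sanity sketch exercising the same regex outside the codebase:

const flash = /gemini-3-flash(?!-image)(?:-[\w-]+)*$/i
flash.test('gemini-3-flash')                 // true
flash.test('gemini-3-flash-preview-09-2025') // true
flash.test('gemini-3-flash-image-preview')   // false (lookahead rejects it)
flash.test('gemini-flash-latest')            // false (covered by the alias branch instead)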


@@ -31,6 +31,7 @@ import JinaProviderLogo from '@renderer/assets/images/providers/jina.png'
import LanyunProviderLogo from '@renderer/assets/images/providers/lanyun.png'
import LMStudioProviderLogo from '@renderer/assets/images/providers/lmstudio.png'
import LongCatProviderLogo from '@renderer/assets/images/providers/longcat.png'
import MiMoProviderLogo from '@renderer/assets/images/providers/mimo.svg'
import MinimaxProviderLogo from '@renderer/assets/images/providers/minimax.png'
import MistralProviderLogo from '@renderer/assets/images/providers/mistral.png'
import ModelScopeProviderLogo from '@renderer/assets/images/providers/modelscope.png'
@@ -695,6 +696,17 @@ export const SYSTEM_PROVIDERS_CONFIG: Record<SystemProviderId, SystemProvider> =
models: SYSTEM_MODELS.cerebras,
isSystem: true,
enabled: false
},
mimo: {
id: 'mimo',
name: 'Xiaomi MiMo',
type: 'openai',
apiKey: '',
apiHost: 'https://api.xiaomimimo.com',
anthropicApiHost: 'https://api.xiaomimimo.com/anthropic',
models: SYSTEM_MODELS.mimo,
isSystem: true,
enabled: false
}
} as const
@@ -763,7 +775,8 @@ export const PROVIDER_LOGO_MAP: AtLeast<SystemProviderId, string> = {
huggingface: HuggingfaceProviderLogo,
sophnet: SophnetProviderLogo,
gateway: AIGatewayProviderLogo,
cerebras: CerebrasProviderLogo
cerebras: CerebrasProviderLogo,
mimo: MiMoProviderLogo
} as const
export function getProviderLogo(providerId: string) {
@@ -1434,5 +1447,16 @@ export const PROVIDER_URLS: Record<SystemProviderId, ProviderUrls> = {
docs: 'https://inference-docs.cerebras.ai/introduction',
models: 'https://inference-docs.cerebras.ai/models/overview'
}
},
mimo: {
api: {
url: 'https://api.xiaomimimo.com'
},
websites: {
official: 'https://platform.xiaomimimo.com/',
apiKey: 'https://platform.xiaomimimo.com/#/console/usage',
docs: 'https://platform.xiaomimimo.com/#/docs/welcome',
models: 'https://platform.xiaomimimo.com/'
}
}
}


@@ -88,7 +88,8 @@ const providerKeyMap = {
huggingface: 'provider.huggingface',
sophnet: 'provider.sophnet',
gateway: 'provider.ai-gateway',
cerebras: 'provider.cerebras'
cerebras: 'provider.cerebras',
mimo: 'provider.mimo'
} as const
/**
@@ -330,7 +331,8 @@ const builtInMcpDescriptionKeyMap: Record<BuiltinMCPServerName, string> = {
[BuiltinMCPServerNames.difyKnowledge]: 'settings.mcp.builtinServersDescriptions.dify_knowledge',
[BuiltinMCPServerNames.python]: 'settings.mcp.builtinServersDescriptions.python',
[BuiltinMCPServerNames.didiMCP]: 'settings.mcp.builtinServersDescriptions.didi_mcp',
[BuiltinMCPServerNames.browser]: 'settings.mcp.builtinServersDescriptions.browser'
[BuiltinMCPServerNames.browser]: 'settings.mcp.builtinServersDescriptions.browser',
[BuiltinMCPServerNames.nowledgeMem]: 'settings.mcp.builtinServersDescriptions.nowledge_mem'
} as const
export const getBuiltInMcpServerDescriptionLabel = (key: string): string => {


@@ -2643,6 +2643,7 @@
"lanyun": "LANYUN",
"lmstudio": "LM Studio",
"longcat": "LongCat AI",
"mimo": "Xiaomi MiMo",
"minimax": "MiniMax",
"mistral": "Mistral",
"modelscope": "ModelScope",
@@ -3939,6 +3940,7 @@
"mcp_auto_install": "Automatically install MCP service (beta)",
"memory": "Persistent memory implementation based on a local knowledge graph. This enables the model to remember user-related information across different conversations. Requires configuring the MEMORY_FILE_PATH environment variable.",
"no": "No description",
"nowledge_mem": "Requires Nowledge Mem app running locally. Keeps AI chats, tools, notes, agents, and files in private memory on your computer. Download from https://mem.nowledge.co/",
"python": "Execute Python code in a secure sandbox environment. Run Python with Pyodide, supporting most standard libraries and scientific computing packages",
"sequentialthinking": "A MCP server implementation that provides tools for dynamic and reflective problem solving through structured thinking processes"
},


@@ -560,7 +560,7 @@
"medium": "斟酌",
"medium_description": "中强度推理",
"minimal": "微念",
"minimal_description": "最小程度的思考",
"minimal_description": "最小程度的推理",
"off": "关闭",
"off_description": "禁用推理",
"xhigh": "穷究",
@@ -2643,6 +2643,7 @@
"lanyun": "蓝耘科技",
"lmstudio": "LM Studio",
"longcat": "龙猫",
"mimo": "Xiaomi MiMo",
"minimax": "MiniMax",
"mistral": "Mistral",
"modelscope": "ModelScope 魔搭",
@@ -3939,6 +3940,7 @@
"mcp_auto_install": "自动安装 MCP 服务(测试版)",
"memory": "基于本地知识图谱的持久性记忆基础实现。这使得模型能够在不同对话间记住用户的相关信息。需要配置 MEMORY_FILE_PATH 环境变量。",
"no": "无描述",
"nowledge_mem": "需要本地运行 Nowledge Mem 应用。将 AI 对话、工具、笔记、智能体和文件保存在本地计算机的私有记忆中。请从 https://mem.nowledge.co/ 下载",
"python": "在安全的沙盒环境中执行 Python 代码。使用 Pyodide 运行 Python支持大多数标准库和科学计算包",
"sequentialthinking": "一个 MCP 服务器实现,提供了通过结构化思维过程进行动态和反思性问题解决的工具"
},


@@ -2643,6 +2643,7 @@
"lanyun": "藍耘",
"lmstudio": "LM Studio",
"longcat": "龍貓",
"mimo": "[to be translated]:Xiaomi MiMo",
"minimax": "MiniMax",
"mistral": "Mistral",
"modelscope": "ModelScope 魔搭",
@@ -3939,6 +3940,7 @@
"mcp_auto_install": "自動安裝 MCP 服務(測試版)",
"memory": "基於本機知識圖譜的持久性記憶基礎實做。這使得模型能夠在不同對話間記住使用者的相關資訊。需要設定 MEMORY_FILE_PATH 環境變數。",
"no": "無描述",
"nowledge_mem": "需要本機執行 Nowledge Mem 應用程式。將 AI 對話、工具、筆記、代理和檔案保存在電腦上的私人記憶體中。請從 https://mem.nowledge.co/ 下載",
"python": "在安全的沙盒環境中執行 Python 程式碼。使用 Pyodide 執行 Python支援大多數標準函式庫和科學計算套件",
"sequentialthinking": "一個 MCP 伺服器實做,提供了透過結構化思維過程進行動態和反思性問題解決的工具"
},


@@ -2643,6 +2643,7 @@
"lanyun": "Lanyun Technologie",
"lmstudio": "LM Studio",
"longcat": "Meißner Riesenhamster",
"mimo": "[to be translated]:Xiaomi MiMo",
"minimax": "MiniMax",
"mistral": "Mistral",
"modelscope": "ModelScope",
@@ -3939,6 +3940,7 @@
"mcp_auto_install": "MCP-Service automatisch installieren (Beta-Version)",
"memory": "MCP-Server mit persistenter Erinnerungsbasis auf lokalem Wissensgraphen, der Informationen über verschiedene Dialoge hinweg speichert. MEMORY_FILE_PATH-Umgebungsvariable muss konfiguriert werden",
"no": "Keine Beschreibung",
"nowledge_mem": "Erfordert lokal laufende Nowledge Mem App. Speichert KI-Chats, Tools, Notizen, Agenten und Dateien in einem privaten Speicher auf Ihrem Computer. Download unter https://mem.nowledge.co/",
"python": "Python-Code in einem sicheren Sandbox-Umgebung ausführen. Verwendung von Pyodide für Python, Unterstützung für die meisten Standardbibliotheken und wissenschaftliche Pakete",
"sequentialthinking": "MCP-Server-Implementierung mit strukturiertem Denkprozess, der dynamische und reflektierende Problemlösungen ermöglicht"
},


@@ -2643,6 +2643,7 @@
"lanyun": "Λανιούν Τεχνολογία",
"lmstudio": "LM Studio",
"longcat": "Τσίρο",
"mimo": "[to be translated]:Xiaomi MiMo",
"minimax": "MiniMax",
"mistral": "Mistral",
"modelscope": "ModelScope Magpie",
@@ -3939,6 +3940,7 @@
"mcp_auto_install": "Αυτόματη εγκατάσταση υπηρεσίας MCP (προβολή)",
"memory": "Βασική υλοποίηση μόνιμης μνήμης με βάση τοπικό γράφημα γνώσης. Αυτό επιτρέπει στο μοντέλο να θυμάται πληροφορίες σχετικές με τον χρήστη ανάμεσα σε διαφορετικές συνομιλίες. Απαιτείται η ρύθμιση της μεταβλητής περιβάλλοντος MEMORY_FILE_PATH.",
"no": "Χωρίς περιγραφή",
"nowledge_mem": "[to be translated]:Requires Nowledge Mem app running locally. Keeps AI chats, tools, notes, agents, and files in private memory on your computer. Download from https://mem.nowledge.co/",
"python": "Εκτελέστε κώδικα Python σε ένα ασφαλές περιβάλλον sandbox. Χρησιμοποιήστε το Pyodide για να εκτελέσετε Python, υποστηρίζοντας την πλειονότητα των βιβλιοθηκών της τυπικής βιβλιοθήκης και των πακέτων επιστημονικού υπολογισμού",
"sequentialthinking": "ένας εξυπηρετητής MCP που υλοποιείται, παρέχοντας εργαλεία για δυναμική και αναστοχαστική επίλυση προβλημάτων μέσω δομημένων διαδικασιών σκέψης"
},


@@ -2643,6 +2643,7 @@
"lanyun": "Tecnología Lanyun",
"lmstudio": "Estudio LM",
"longcat": "Totoro",
"mimo": "[to be translated]:Xiaomi MiMo",
"minimax": "Minimax",
"mistral": "Mistral",
"modelscope": "ModelScope Módulo",
@@ -3939,6 +3940,7 @@
"mcp_auto_install": "Instalación automática del servicio MCP (versión beta)",
"memory": "Implementación básica de memoria persistente basada en un grafo de conocimiento local. Esto permite que el modelo recuerde información relevante del usuario entre diferentes conversaciones. Es necesario configurar la variable de entorno MEMORY_FILE_PATH.",
"no": "sin descripción",
"nowledge_mem": "[to be translated]:Requires Nowledge Mem app running locally. Keeps AI chats, tools, notes, agents, and files in private memory on your computer. Download from https://mem.nowledge.co/",
"python": "Ejecuta código Python en un entorno sandbox seguro. Usa Pyodide para ejecutar Python, compatible con la mayoría de las bibliotecas estándar y paquetes de cálculo científico.",
"sequentialthinking": "Una implementación de servidor MCP que proporciona herramientas para la resolución dinámica y reflexiva de problemas mediante un proceso de pensamiento estructurado"
},


@@ -2643,6 +2643,7 @@
"lanyun": "Technologie Lan Yun",
"lmstudio": "Studio LM",
"longcat": "Mon voisin Totoro",
"mimo": "[to be translated]:Xiaomi MiMo",
"minimax": "MiniMax",
"mistral": "Mistral",
"modelscope": "ModelScope MoDa",
@@ -3939,6 +3940,7 @@
"mcp_auto_install": "Installation automatique du service MCP (version bêta)",
"memory": "Implémentation de base de mémoire persistante basée sur un graphe de connaissances local. Cela permet au modèle de se souvenir des informations relatives à l'utilisateur entre différentes conversations. Nécessite la configuration de la variable d'environnement MEMORY_FILE_PATH.",
"no": "sans description",
"nowledge_mem": "[to be translated]:Requires Nowledge Mem app running locally. Keeps AI chats, tools, notes, agents, and files in private memory on your computer. Download from https://mem.nowledge.co/",
"python": "Exécutez du code Python dans un environnement bac à sable sécurisé. Utilisez Pyodide pour exécuter Python, prenant en charge la plupart des bibliothèques standard et des packages de calcul scientifique.",
"sequentialthinking": "Un serveur MCP qui fournit des outils permettant une résolution dynamique et réflexive des problèmes à travers un processus de pensée structuré"
},


@@ -2643,6 +2643,7 @@
"lanyun": "LANYUN",
"lmstudio": "LM Studio",
"longcat": "トトロ",
"mimo": "[to be translated]:Xiaomi MiMo",
"minimax": "MiniMax",
"mistral": "Mistral",
"modelscope": "ModelScope",
@@ -3939,6 +3940,7 @@
"mcp_auto_install": "MCPサービスの自動インストールベータ版",
"memory": "ローカルのナレッジグラフに基づく永続的なメモリの基本的な実装です。これにより、モデルは異なる会話間でユーザーの関連情報を記憶できるようになります。MEMORY_FILE_PATH 環境変数の設定が必要です。",
"no": "説明なし",
"nowledge_mem": "Nowledge Mem アプリをローカルで実行する必要があります。AI チャット、ツール、ート、エージェント、ファイルをコンピューター上のプライベートメモリに保存します。https://mem.nowledge.co/ からダウンロードしてください",
"python": "安全なサンドボックス環境でPythonコードを実行します。Pyodideを使用してPythonを実行し、ほとんどの標準ライブラリと科学計算パッケージをサポートしています。",
"sequentialthinking": "構造化された思考プロセスを通じて動的かつ反省的な問題解決を行うためのツールを提供するMCPサーバーの実装"
},


@@ -2643,6 +2643,7 @@
"lanyun": "Lanyun Tecnologia",
"lmstudio": "Estúdio LM",
"longcat": "Totoro",
"mimo": "[to be translated]:Xiaomi MiMo",
"minimax": "Minimax",
"mistral": "Mistral",
"modelscope": "ModelScope MôDá",
@@ -3939,6 +3940,7 @@
"mcp_auto_install": "Instalação automática do serviço MCP (beta)",
"memory": "Implementação base de memória persistente baseada em grafos de conhecimento locais. Isso permite que o modelo lembre informações relevantes do utilizador entre diferentes conversas. É necessário configurar a variável de ambiente MEMORY_FILE_PATH.",
"no": "sem descrição",
"nowledge_mem": "Requer a aplicação Nowledge Mem em execução localmente. Mantém conversas de IA, ferramentas, notas, agentes e ficheiros numa memória privada no seu computador. Transfira de https://mem.nowledge.co/",
"python": "Executar código Python num ambiente sandbox seguro. Utilizar Pyodide para executar Python, suportando a maioria das bibliotecas padrão e pacotes de computação científica",
"sequentialthinking": "Uma implementação de servidor MCP que fornece ferramentas para resolução dinâmica e reflexiva de problemas através de um processo de pensamento estruturado"
},


@@ -2643,6 +2643,7 @@
"lanyun": "LANYUN",
"lmstudio": "LM Studio",
"longcat": "Тоторо",
"mimo": "[to be translated]:Xiaomi MiMo",
"minimax": "MiniMax",
"mistral": "Mistral",
"modelscope": "ModelScope",
@@ -3939,6 +3940,7 @@
"mcp_auto_install": "Автоматическая установка службы MCP (бета-версия)",
"memory": "реализация постоянной памяти на основе локального графа знаний. Это позволяет модели запоминать информацию о пользователе между различными диалогами. Требуется настроить переменную среды MEMORY_FILE_PATH.",
"no": "без описания",
"nowledge_mem": "Требуется запущенное локально приложение Nowledge Mem. Хранит чаты ИИ, инструменты, заметки, агентов и файлы в приватной памяти на вашем компьютере. Скачать можно на https://mem.nowledge.co/",
"python": "Выполняйте код Python в безопасной песочнице. Запускайте Python с помощью Pyodide, поддерживается большинство стандартных библиотек и пакетов для научных вычислений",
"sequentialthinking": "MCP серверная реализация, предоставляющая инструменты для динамического и рефлексивного решения проблем посредством структурированного мыслительного процесса"
},


@@ -80,7 +80,8 @@ const ANTHROPIC_COMPATIBLE_PROVIDER_IDS = [
SystemProviderIds.minimax,
SystemProviderIds.silicon,
SystemProviderIds.qiniu,
SystemProviderIds.dmxapi
SystemProviderIds.dmxapi,
SystemProviderIds.mimo
] as const
type AnthropicCompatibleProviderId = (typeof ANTHROPIC_COMPATIBLE_PROVIDER_IDS)[number]


@@ -74,7 +74,9 @@ export function getDefaultTranslateAssistant(
throw new Error('Unknown target language')
}
const reasoningEffort = getModelSupportedReasoningEffortOptions(model)?.[0]
const supportedOptions = getModelSupportedReasoningEffortOptions(model)
// disable reasoning if it could be disabled, otherwise no configuration
const reasoningEffort = supportedOptions?.includes('none') ? 'none' : 'default'
const settings = {
temperature: 0.7,
reasoning_effort: reasoningEffort,
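The fallback is small enough to check in isolation. A runnable sketch with a hypothetical effort type (names are illustrative; the real option list comes from getModelSupportedReasoningEffortOptions):

type Effort = 'none' | 'minimal' | 'low' | 'medium' | 'high' | 'default'
const pickEffort = (supported?: Effort[]): Effort =>
  supported?.includes('none') ? 'none' : 'default'

pickEffort(['none', 'low', 'high']) // 'none'    -> reasoning disabled for translation
pickEffort(['low', 'high'])         // 'default' -> model keeps its default behaviour
pickEffort(undefined)               // 'default'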


@@ -183,6 +183,16 @@ export const builtinMCPServers: BuiltinMCPServer[] = [
provider: 'CherryAI',
installSource: 'builtin',
isTrusted: true
},
{
id: nanoid(),
name: BuiltinMCPServerNames.nowledgeMem,
reference: 'https://mem.nowledge.co/',
type: 'inMemory',
isActive: false,
provider: 'Nowledge',
installSource: 'builtin',
isTrusted: true
}
] as const


@@ -3046,6 +3046,7 @@ const migrateConfig = {
assistant.settings.reasoning_effort = 'default'
}
})
addProvider(state, 'mimo')
logger.info('migrate 187 success')
return state
} catch (error) {


@@ -94,14 +94,16 @@ const ThinkModelTypes = [
'gpt52pro',
'grok',
'grok4_fast',
'gemini',
'gemini_pro',
'gemini3',
'gemini2_flash',
'gemini2_pro',
'gemini3_flash',
'gemini3_pro',
'qwen',
'qwen_thinking',
'doubao',
'doubao_no_auto',
'doubao_after_251015',
'mimo',
'hunyuan',
'zhipu',
'perplexity',
@@ -751,7 +753,8 @@ export const BuiltinMCPServerNames = {
difyKnowledge: '@cherry/dify-knowledge',
python: '@cherry/python',
didiMCP: '@cherry/didi-mcp',
browser: '@cherry/browser'
browser: '@cherry/browser',
nowledgeMem: '@cherry/nowledge-mem'
} as const
export type BuiltinMCPServerName = (typeof BuiltinMCPServerNames)[keyof typeof BuiltinMCPServerNames]
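The as const object plus the indexed-access type derives a string-literal union, so the new entry flows into BuiltinMCPServerName automatically. A compact illustration with made-up names:

const Names = { a: '@x/a', b: '@x/b' } as const
type Name = (typeof Names)[keyof typeof Names] // '@x/a' | '@x/b'
const ok: Name = '@x/a'     // compiles
// const bad: Name = '@x/c' // type error: not in the union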


@@ -189,7 +189,8 @@ export const SystemProviderIdSchema = z.enum([
'huggingface',
'sophnet',
'gateway',
'cerebras'
'cerebras',
'mimo'
])
export type SystemProviderId = z.infer<typeof SystemProviderIdSchema>
@@ -258,7 +259,8 @@ export const SystemProviderIds = {
longcat: 'longcat',
huggingface: 'huggingface',
gateway: 'gateway',
cerebras: 'cerebras'
cerebras: 'cerebras',
mimo: 'mimo'
} as const satisfies Record<SystemProviderId, SystemProviderId>
type SystemProviderIdTypeMap = typeof SystemProviderIds


@@ -1,8 +1,15 @@
import '@testing-library/jest-dom/vitest'
import { createRequire } from 'node:module'
import { styleSheetSerializer } from 'jest-styled-components/serializer'
import { expect, vi } from 'vitest'
const require = createRequire(import.meta.url)
const bufferModule = require('buffer')
if (!bufferModule.SlowBuffer) {
bufferModule.SlowBuffer = bufferModule.Buffer
}
expect.addSnapshotSerializer(styleSheetSerializer)
// Mock LoggerService globally for renderer tests
@@ -48,3 +55,29 @@ vi.stubGlobal('api', {
writeWithId: vi.fn().mockResolvedValue(undefined)
}
})
if (typeof globalThis.localStorage === 'undefined' || typeof (globalThis.localStorage as any).getItem !== 'function') {
let store = new Map<string, string>()
const localStorageMock = {
getItem: (key: string) => store.get(key) ?? null,
setItem: (key: string, value: string) => {
store.set(key, String(value))
},
removeItem: (key: string) => {
store.delete(key)
},
clear: () => {
store.clear()
},
key: (index: number) => Array.from(store.keys())[index] ?? null,
get length() {
return store.size
}
}
vi.stubGlobal('localStorage', localStorageMock)
if (typeof window !== 'undefined') {
Object.defineProperty(window, 'localStorage', { value: localStorageMock })
}
}
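With the stub in place, renderer tests that touch persistence run unchanged in environments lacking a real localStorage; for example (illustrative):

localStorage.setItem('theme', 'dark')
localStorage.getItem('theme') // 'dark'
localStorage.length           // 1
localStorage.clear()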

yarn.lock

@@ -102,6 +102,18 @@ __metadata:
languageName: node
linkType: hard
"@ai-sdk/anthropic@npm:2.0.56":
version: 2.0.56
resolution: "@ai-sdk/anthropic@npm:2.0.56"
dependencies:
"@ai-sdk/provider": "npm:2.0.0"
"@ai-sdk/provider-utils": "npm:3.0.19"
peerDependencies:
zod: ^3.25.76 || ^4.1.8
checksum: 10c0/f2b6029c92443f831a2d124420e805d057668003067b1f677a4292d02f27aa3ad533374ea996d77ede7746a42c46fb94a8f2d8c0e7758a4555ea18c8b532052c
languageName: node
linkType: hard
"@ai-sdk/azure@npm:^2.0.87":
version: 2.0.87
resolution: "@ai-sdk/azure@npm:2.0.87"
@@ -166,42 +178,42 @@
languageName: node
linkType: hard
"@ai-sdk/google-vertex@npm:^3.0.79":
version: 3.0.79
resolution: "@ai-sdk/google-vertex@npm:3.0.79"
"@ai-sdk/google-vertex@npm:^3.0.94":
version: 3.0.94
resolution: "@ai-sdk/google-vertex@npm:3.0.94"
dependencies:
"@ai-sdk/anthropic": "npm:2.0.49"
"@ai-sdk/google": "npm:2.0.43"
"@ai-sdk/anthropic": "npm:2.0.56"
"@ai-sdk/google": "npm:2.0.49"
"@ai-sdk/provider": "npm:2.0.0"
"@ai-sdk/provider-utils": "npm:3.0.17"
google-auth-library: "npm:^9.15.0"
"@ai-sdk/provider-utils": "npm:3.0.19"
google-auth-library: "npm:^10.5.0"
peerDependencies:
zod: ^3.25.76 || ^4.1.8
checksum: 10c0/a86949b8d4a855409acdf7dc8d93ad9ea8ccf2bc3849acbe1ecbe4d6d66f06bcb5242f0df8eea24214e78732618b71ec8a019cbbeab16366f9ad3c860c5d8d30
checksum: 10c0/68e2ee9e6525a5e43f90304980e64bf2a4227fd3ce74a7bf17e5ace094ea1bca8f8f18a8cc332a492fee4b912568a768f7479a4eed8148b84e7de1adf4104ad0
languageName: node
linkType: hard
"@ai-sdk/google@npm:2.0.43":
version: 2.0.43
resolution: "@ai-sdk/google@npm:2.0.43"
"@ai-sdk/google@npm:2.0.49":
version: 2.0.49
resolution: "@ai-sdk/google@npm:2.0.49"
dependencies:
"@ai-sdk/provider": "npm:2.0.0"
"@ai-sdk/provider-utils": "npm:3.0.17"
"@ai-sdk/provider-utils": "npm:3.0.19"
peerDependencies:
zod: ^3.25.76 || ^4.1.8
checksum: 10c0/5a421a9746cf8cbdf3bb7fb49426453a4fe0e354ea55a0123e628afb7acf9bb19959d512c0f8e6d7dbefbfa7e1cef4502fc146149007258a8eeb57743ac5e9e5
checksum: 10c0/f3f8acfcd956edc7d807d22963d5eff0f765418f1f2c7d18615955ccdfcebb4d43cc26ce1f712c6a53572f1d8becc0773311b77b1f1bf1af87d675c5f017d5a4
languageName: node
linkType: hard
"@ai-sdk/google@patch:@ai-sdk/google@npm%3A2.0.43#~/.yarn/patches/@ai-sdk-google-npm-2.0.43-689ed559b3.patch":
version: 2.0.43
resolution: "@ai-sdk/google@patch:@ai-sdk/google@npm%3A2.0.43#~/.yarn/patches/@ai-sdk-google-npm-2.0.43-689ed559b3.patch::version=2.0.43&hash=4dde1e"
"@ai-sdk/google@patch:@ai-sdk/google@npm%3A2.0.49#~/.yarn/patches/@ai-sdk-google-npm-2.0.49-84720f41bd.patch":
version: 2.0.49
resolution: "@ai-sdk/google@patch:@ai-sdk/google@npm%3A2.0.49#~/.yarn/patches/@ai-sdk-google-npm-2.0.49-84720f41bd.patch::version=2.0.49&hash=406c25"
dependencies:
"@ai-sdk/provider": "npm:2.0.0"
"@ai-sdk/provider-utils": "npm:3.0.17"
"@ai-sdk/provider-utils": "npm:3.0.19"
peerDependencies:
zod: ^3.25.76 || ^4.1.8
checksum: 10c0/4cfd17e9c47f2b742d8a0b1ca3532b4dc48753088363b74b01a042f63652174fa9a3fbf655a23f823974c673121dffbd2d192bb0c1bf158da4e2bf498fc76527
checksum: 10c0/8d4d881583c2301dce8a4e3066af2ba7d99b30520b6219811f90271c93bf8a07dc23e752fa25ffd0e72c6ec56e97d40d32e04072a362accf7d01a745a2d2a352
languageName: node
linkType: hard
@@ -10051,8 +10063,8 @@ __metadata:
"@ai-sdk/anthropic": "npm:^2.0.49"
"@ai-sdk/cerebras": "npm:^1.0.31"
"@ai-sdk/gateway": "npm:^2.0.15"
"@ai-sdk/google": "patch:@ai-sdk/google@npm%3A2.0.43#~/.yarn/patches/@ai-sdk-google-npm-2.0.43-689ed559b3.patch"
"@ai-sdk/google-vertex": "npm:^3.0.79"
"@ai-sdk/google": "patch:@ai-sdk/google@npm%3A2.0.49#~/.yarn/patches/@ai-sdk-google-npm-2.0.49-84720f41bd.patch"
"@ai-sdk/google-vertex": "npm:^3.0.94"
"@ai-sdk/huggingface": "npm:^0.0.10"
"@ai-sdk/mistral": "npm:^2.0.24"
"@ai-sdk/openai": "patch:@ai-sdk/openai@npm%3A2.0.85#~/.yarn/patches/@ai-sdk-openai-npm-2.0.85-27483d1d6a.patch"
@@ -11246,7 +11258,7 @@ __metadata:
languageName: node
linkType: hard
"buffer-equal-constant-time@npm:1.0.1":
"buffer-equal-constant-time@npm:^1.0.1":
version: 1.0.1
resolution: "buffer-equal-constant-time@npm:1.0.1"
checksum: 10c0/fb2294e64d23c573d0dd1f1e7a466c3e978fe94a4e0f8183937912ca374619773bef8e2aceb854129d2efecbbc515bbd0cc78d2734a3e3031edb0888531bbc8e
@@ -15499,6 +15511,18 @@
languageName: node
linkType: hard
"gaxios@npm:^7.0.0":
version: 7.1.3
resolution: "gaxios@npm:7.1.3"
dependencies:
extend: "npm:^3.0.2"
https-proxy-agent: "npm:^7.0.1"
node-fetch: "npm:^3.3.2"
rimraf: "npm:^5.0.1"
checksum: 10c0/a4a1cdf9a392c0c22e9734a40dca5a77a2903f505b939a50f1e68e312458b1289b7993d2f72d011426e89657cae77a3aa9fc62fb140e8ba90a1faa31fdbde4d2
languageName: node
linkType: hard
"gcp-metadata@npm:^6.1.0":
version: 6.1.1
resolution: "gcp-metadata@npm:6.1.1"
@@ -15510,6 +15534,17 @@
languageName: node
linkType: hard
"gcp-metadata@npm:^8.0.0":
version: 8.1.2
resolution: "gcp-metadata@npm:8.1.2"
dependencies:
gaxios: "npm:^7.0.0"
google-logging-utils: "npm:^1.0.0"
json-bigint: "npm:^1.0.0"
checksum: 10c0/15a61231a9410dc11c2828d2c9fdc8b0a939f1af746195c44edc6f2ffea0acab52cef3a7b9828069a36fd5d68bda730f7328a415fe42a01258f6e249dfba6908
languageName: node
linkType: hard
"gensync@npm:^1.0.0-beta.2":
version: 1.0.0-beta.2
resolution: "gensync@npm:1.0.0-beta.2"
@@ -15733,7 +15768,22 @@
languageName: node
linkType: hard
"google-auth-library@npm:^9.14.2, google-auth-library@npm:^9.15.0, google-auth-library@npm:^9.15.1, google-auth-library@npm:^9.4.2":
"google-auth-library@npm:^10.5.0":
version: 10.5.0
resolution: "google-auth-library@npm:10.5.0"
dependencies:
base64-js: "npm:^1.3.0"
ecdsa-sig-formatter: "npm:^1.0.11"
gaxios: "npm:^7.0.0"
gcp-metadata: "npm:^8.0.0"
google-logging-utils: "npm:^1.0.0"
gtoken: "npm:^8.0.0"
jws: "npm:^4.0.0"
checksum: 10c0/49d3931d20b1f4a4d075216bf5518e2b3396dcf441a8f1952611cf3b6080afb1261c3d32009609047ee4a1cc545269a74b4957e6bba9cce840581df309c4b145
languageName: node
linkType: hard
"google-auth-library@npm:^9.14.2, google-auth-library@npm:^9.15.1, google-auth-library@npm:^9.4.2":
version: 9.15.1
resolution: "google-auth-library@npm:9.15.1"
dependencies:
@@ -15754,6 +15804,13 @@
languageName: node
linkType: hard
"google-logging-utils@npm:^1.0.0":
version: 1.1.3
resolution: "google-logging-utils@npm:1.1.3"
checksum: 10c0/e65201c7e96543bd1423b9324013736646b9eed60941e0bfa47b9bfd146d2f09cf3df1c99ca60b7d80a726075263ead049ee72de53372cb8458c3bc55c2c1e59
languageName: node
linkType: hard
"gopd@npm:^1.0.1, gopd@npm:^1.2.0":
version: 1.2.0
resolution: "gopd@npm:1.2.0"
@@ -15842,6 +15899,16 @@
languageName: node
linkType: hard
"gtoken@npm:^8.0.0":
version: 8.0.0
resolution: "gtoken@npm:8.0.0"
dependencies:
gaxios: "npm:^7.0.0"
jws: "npm:^4.0.0"
checksum: 10c0/058538e5bbe081d30ada5f1fd34d3a8194357c2e6ecbf7c8a98daeefbf13f7e06c15649c7dace6a1d4cc3bc6dc5483bd484d6d7adc5852021896d7c05c439f37
languageName: node
linkType: hard
"hachure-fill@npm:^0.5.2":
version: 0.5.2
resolution: "hachure-fill@npm:0.5.2"
@@ -17178,24 +17245,24 @@
languageName: node
linkType: hard
"jwa@npm:^2.0.0":
version: 2.0.0
resolution: "jwa@npm:2.0.0"
"jwa@npm:^2.0.1":
version: 2.0.1
resolution: "jwa@npm:2.0.1"
dependencies:
buffer-equal-constant-time: "npm:1.0.1"
buffer-equal-constant-time: "npm:^1.0.1"
ecdsa-sig-formatter: "npm:1.0.11"
safe-buffer: "npm:^5.0.1"
checksum: 10c0/6baab823b93c038ba1d2a9e531984dcadbc04e9eb98d171f4901b7a40d2be15961a359335de1671d78cb6d987f07cbe5d350d8143255977a889160c4d90fcc3c
checksum: 10c0/ab3ebc6598e10dc11419d4ed675c9ca714a387481466b10e8a6f3f65d8d9c9237e2826f2505280a739cf4cbcf511cb288eeec22b5c9c63286fc5a2e4f97e78cf
languageName: node
linkType: hard
"jws@npm:^4.0.0":
version: 4.0.0
resolution: "jws@npm:4.0.0"
version: 4.0.1
resolution: "jws@npm:4.0.1"
dependencies:
jwa: "npm:^2.0.0"
jwa: "npm:^2.0.1"
safe-buffer: "npm:^5.0.1"
checksum: 10c0/f1ca77ea5451e8dc5ee219cb7053b8a4f1254a79cb22417a2e1043c1eb8a569ae118c68f24d72a589e8a3dd1824697f47d6bd4fb4bebb93a3bdf53545e721661
checksum: 10c0/6be1ed93023aef570ccc5ea8d162b065840f3ef12f0d1bb3114cade844de7a357d5dc558201d9a65101e70885a6fa56b17462f520e6b0d426195510618a154d0
languageName: node
linkType: hard
@@ -22778,7 +22845,7 @@
languageName: node
linkType: hard
"rimraf@npm:^5.0.10":
"rimraf@npm:^5.0.1, rimraf@npm:^5.0.10":
version: 5.0.10
resolution: "rimraf@npm:5.0.10"
dependencies: