Mirror of https://github.com/CherryHQ/cherry-studio.git (synced 2026-01-02 02:09:03 +08:00)

Merge remote-tracking branch 'origin/main' into feat/proxy-api-server

Commit c6c7c240a3
@@ -134,56 +134,108 @@ artifactBuildCompleted: scripts/artifact-build-completed.js
releaseInfo:
releaseNotes: |
<!--LANG:en-->
What's New in v1.7.0-rc.3
A New Era of Intelligence with Cherry Studio 1.7.0

✨ New Features:
- Provider: Added Silicon provider support for Anthropic API compatibility
- Provider: AIHubMix support for nano banana
Today we're releasing Cherry Studio 1.7.0 — our most ambitious update yet, introducing Agent: autonomous AI that thinks, plans, and acts.

🐛 Bug Fixes:
- i18n: Clean up translation tags and untranslated strings
- Provider: Fixed Silicon provider code list
- Provider: Fixed Poe API reasoning parameters for GPT-5 and reasoning models
- Provider: Fixed duplicate /v1 in Anthropic API endpoints
- Provider: Fixed Azure provider handling in AI SDK integration
- Models: Added Claude Opus 4.5 pattern to THINKING_TOKEN_MAP
- Models: Improved Gemini reasoning and message handling
- Models: Fixed custom parameters for Gemini models
- Models: Fixed qwen-mt-flash text delta support
- Models: Fixed Groq verbosity setting
- UI: Fixed quota display and quota tips
- UI: Fixed web search button condition
- Settings: Fixed updateAssistantPreset reducer to properly update preset
- Settings: Respect enableMaxTokens setting when maxTokens is not configured
- SDK: Fixed header merging logic in AI SDK
For years, AI assistants have been reactive — waiting for your commands, responding to your questions. With Agent, we're changing that. Now, AI can truly work alongside you: understanding complex goals, breaking them into steps, and executing them independently.

⚡ Improvements:
- SDK: Upgraded @anthropic-ai/claude-agent-sdk to 0.1.53
This is what we've been building toward. And it's just the beginning.

🤖 Meet Agent
Imagine having a brilliant colleague who never sleeps. Give Agent a goal — write a report, analyze data, refactor code — and watch it work. It reasons through problems, breaks them into steps, calls the right tools, and adapts when things change.

- **Think → Plan → Act**: From goal to execution, fully autonomous
- **Deep Reasoning**: Multi-turn thinking that solves real problems
- **Tool Mastery**: File operations, web search, code execution, and more
- **Skill Plugins**: Extend with custom commands and capabilities
- **You Stay in Control**: Real-time approval for sensitive actions
- **Full Visibility**: Every thought, every decision, fully transparent

🌐 Expanding Ecosystem
- **New Providers**: HuggingFace, Mistral, CherryIN, AI Gateway, Intel OVMS, Didi MCP
- **New Models**: Claude 4.5 Haiku, DeepSeek v3.2, GLM-4.6, Doubao, Ling series
- **MCP Integration**: Alibaba Cloud, ModelScope, Higress, MCP.so, TokenFlux and more

📚 Smarter Knowledge Base
- **OpenMinerU**: Self-hosted document processing
- **Full-Text Search**: Find anything instantly across your notes
- **Enhanced Tool Selection**: Smarter configuration for better AI assistance

📝 Notes, Reimagined
- Full-text search with highlighted results
- AI-powered smart rename
- Export as image
- Auto-wrap for tables

🖼️ Image & OCR
- Intel OVMS painting capabilities
- Intel OpenVINO NPU-accelerated OCR

🌍 Now in 10+ Languages
- Added German support
- Enhanced internationalization

⚡ Faster & More Polished
- Electron 38 upgrade
- New MCP management interface
- Dozens of UI refinements

❤️ Fully Open Source
Commercial restrictions removed. Cherry Studio now follows standard AGPL v3 — free for teams of any size.

The Agent Era is here. We can't wait to see what you'll create.

<!--LANG:zh-CN-->
v1.7.0-rc.3 更新内容
Cherry Studio 1.7.0:开启智能新纪元

✨ 新功能:
- 提供商:新增 Silicon 提供商对 Anthropic API 的兼容性支持
- 提供商:AIHubMix 支持 nano banana
今天,我们正式发布 Cherry Studio 1.7.0 —— 迄今最具雄心的版本,带来全新的 Agent:能够自主思考、规划和行动的 AI。

🐛 问题修复:
- 国际化:清理翻译标签和未翻译字符串
- 提供商:修复 Silicon 提供商代码列表
- 提供商:修复 Poe API 对 GPT-5 和推理模型的推理参数
- 提供商:修复 Anthropic API 端点重复 /v1 问题
- 提供商:修复 Azure 提供商在 AI SDK 集成中的处理
- 模型:Claude Opus 4.5 添加到 THINKING_TOKEN_MAP
- 模型:改进 Gemini 推理和消息处理
- 模型:修复 Gemini 模型自定义参数
- 模型:修复 qwen-mt-flash text delta 支持
- 模型:修复 Groq verbosity 设置
- 界面:修复配额显示和配额提示
- 界面:修复 Web 搜索按钮条件
- 设置:修复 updateAssistantPreset reducer 正确更新 preset
- 设置:尊重 enableMaxTokens 设置
- SDK:修复 AI SDK 中 header 合并逻辑
多年来,AI 助手一直是被动的——等待你的指令,回应你的问题。Agent 改变了这一切。现在,AI 能够真正与你并肩工作:理解复杂目标,将其拆解为步骤,并独立执行。

⚡ 改进:
- SDK:升级 @anthropic-ai/claude-agent-sdk 到 0.1.53
这是我们一直在构建的未来。而这,仅仅是开始。

🤖 认识 Agent
想象一位永不疲倦的得力伙伴。给 Agent 一个目标——撰写报告、分析数据、重构代码——然后看它工作。它会推理问题、拆解步骤、调用工具,并在情况变化时灵活应对。

- **思考 → 规划 → 行动**:从目标到执行,全程自主
- **深度推理**:多轮思考,解决真实问题
- **工具大师**:文件操作、网络搜索、代码执行,样样精通
- **技能插件**:自定义命令,无限扩展
- **你掌控全局**:敏感操作,实时审批
- **完全透明**:每一步思考,每一个决策,清晰可见

🌐 生态持续壮大
- **新增服务商**:Hugging Face、Mistral、Perplexity、SophNet、AI Gateway、Cerebras AI
- **新增模型**:Gemini 3、Gemini 3 Pro(支持图像预览)、GPT-5.1、Claude Opus 4.5
- **MCP 集成**:百炼、魔搭、Higress、MCP.so、TokenFlux 等平台

📚 更智能的知识库
- **OpenMinerU**:本地自部署文档处理
- **全文搜索**:笔记内容一搜即达
- **增强工具选择**:更智能的配置,更好的 AI 协助

📝 笔记,焕然一新
- 全文搜索,结果高亮
- AI 智能重命名
- 导出为图片
- 表格自动换行

🖼️ 图像与 OCR
- Intel OVMS 绘图能力
- Intel OpenVINO NPU 加速 OCR

🌍 支持 10+ 种语言
- 新增德语支持
- 全面增强国际化

⚡ 更快、更精致
- 升级 Electron 38
- 新的 MCP 管理界面
- 数十处 UI 细节打磨

❤️ 完全开源
商用限制已移除。Cherry Studio 现遵循标准 AGPL v3 协议——任意规模团队均可自由使用。

Agent 纪元已至。期待你的创造。
<!--LANG:END-->

@@ -1,6 +1,6 @@
{
"name": "CherryStudio",
"version": "1.7.0-rc.3",
"version": "1.7.0",
"private": true,
"description": "A powerful AI assistant for producer.",
"main": "./out/main/index.js",
@@ -62,6 +62,7 @@
"test": "vitest run --silent",
"test:main": "vitest run --project main",
"test:renderer": "vitest run --project renderer",
"test:aicore": "vitest run --project aiCore",
"test:update": "yarn test:renderer --update",
"test:coverage": "vitest run --coverage --silent",
"test:ui": "vitest --ui",
@@ -164,7 +165,7 @@
"@modelcontextprotocol/sdk": "^1.17.5",
"@mozilla/readability": "^0.6.0",
"@notionhq/client": "^2.2.15",
"@openrouter/ai-sdk-provider": "^1.2.5",
"@openrouter/ai-sdk-provider": "^1.2.8",
"@opentelemetry/api": "^1.9.0",
"@opentelemetry/core": "2.0.0",
"@opentelemetry/exporter-trace-otlp-http": "^0.200.0",

@@ -3,12 +3,13 @@
* Provides realistic mock responses for all provider types
*/

import { jsonSchema, type ModelMessage, type Tool } from 'ai'
import type { ModelMessage, Tool } from 'ai'
import { jsonSchema } from 'ai'

/**
* Standard test messages for all scenarios
*/
export const testMessages = {
export const testMessages: Record<string, ModelMessage[]> = {
simple: [{ role: 'user' as const, content: 'Hello, how are you?' }],

conversation: [
@@ -45,7 +46,7 @@ export const testMessages = {
{ role: 'assistant' as const, content: '15 * 23 = 345' },
{ role: 'user' as const, content: 'Now divide that by 5' }
]
} satisfies Record<string, ModelMessage[]>
}

/**
* Standard test tools for tool calling scenarios
@@ -138,68 +139,17 @@ export const testTools: Record<string, Tool> = {
}
}

/**
* Mock streaming chunks for different providers
*/
export const mockStreamingChunks = {
text: [
{ type: 'text-delta' as const, textDelta: 'Hello' },
{ type: 'text-delta' as const, textDelta: ', ' },
{ type: 'text-delta' as const, textDelta: 'this ' },
{ type: 'text-delta' as const, textDelta: 'is ' },
{ type: 'text-delta' as const, textDelta: 'a ' },
{ type: 'text-delta' as const, textDelta: 'test.' }
],

withToolCall: [
{ type: 'text-delta' as const, textDelta: 'Let me check the weather for you.' },
{
type: 'tool-call-delta' as const,
toolCallType: 'function' as const,
toolCallId: 'call_123',
toolName: 'getWeather',
argsTextDelta: '{"location":'
},
{
type: 'tool-call-delta' as const,
toolCallType: 'function' as const,
toolCallId: 'call_123',
toolName: 'getWeather',
argsTextDelta: ' "San Francisco, CA"}'
},
{
type: 'tool-call' as const,
toolCallType: 'function' as const,
toolCallId: 'call_123',
toolName: 'getWeather',
args: { location: 'San Francisco, CA' }
}
],

withFinish: [
{ type: 'text-delta' as const, textDelta: 'Complete response.' },
{
type: 'finish' as const,
finishReason: 'stop' as const,
usage: {
promptTokens: 10,
completionTokens: 5,
totalTokens: 15
}
}
]
}

/**
* Mock complete responses for non-streaming scenarios
* Note: AI SDK v5 uses inputTokens/outputTokens instead of promptTokens/completionTokens
*/
export const mockCompleteResponses = {
simple: {
text: 'This is a simple response.',
finishReason: 'stop' as const,
usage: {
promptTokens: 15,
completionTokens: 8,
inputTokens: 15,
outputTokens: 8,
totalTokens: 23
}
},
@@ -215,8 +165,8 @@ export const mockCompleteResponses = {
],
finishReason: 'tool-calls' as const,
usage: {
promptTokens: 25,
completionTokens: 12,
inputTokens: 25,
outputTokens: 12,
totalTokens: 37
}
},
@@ -225,14 +175,15 @@
text: 'Response with warnings.',
finishReason: 'stop' as const,
usage: {
promptTokens: 10,
completionTokens: 5,
inputTokens: 10,
outputTokens: 5,
totalTokens: 15
},
warnings: [
{
type: 'unsupported-setting' as const,
message: 'Temperature parameter not supported for this model'
setting: 'temperature',
details: 'Temperature parameter not supported for this model'
}
]
}
@@ -285,47 +236,3 @@ export const mockImageResponses = {
warnings: []
}
}

/**
* Mock error responses
*/
export const mockErrors = {
invalidApiKey: {
name: 'APIError',
message: 'Invalid API key provided',
statusCode: 401
},

rateLimitExceeded: {
name: 'RateLimitError',
message: 'Rate limit exceeded. Please try again later.',
statusCode: 429,
headers: {
'retry-after': '60'
}
},

modelNotFound: {
name: 'ModelNotFoundError',
message: 'The requested model was not found',
statusCode: 404
},

contextLengthExceeded: {
name: 'ContextLengthError',
message: "This model's maximum context length is 4096 tokens",
statusCode: 400
},

timeout: {
name: 'TimeoutError',
message: 'Request timed out after 30000ms',
code: 'ETIMEDOUT'
},

networkError: {
name: 'NetworkError',
message: 'Network connection failed',
code: 'ECONNREFUSED'
}
}

packages/aiCore/src/__tests__/mocks/ai-sdk-provider.ts (new file, 35 lines)
@@ -0,0 +1,35 @@
/**
 * Mock for @cherrystudio/ai-sdk-provider
 * This mock is used in tests to avoid importing the actual package
 */

export type CherryInProviderSettings = {
  apiKey?: string
  baseURL?: string
}

// oxlint-disable-next-line no-unused-vars
export const createCherryIn = (_options?: CherryInProviderSettings) => ({
  // oxlint-disable-next-line no-unused-vars
  languageModel: (_modelId: string) => ({
    specificationVersion: 'v1',
    provider: 'cherryin',
    modelId: 'mock-model',
    doGenerate: async () => ({ text: 'mock response' }),
    doStream: async () => ({ stream: (async function* () {})() })
  }),
  // oxlint-disable-next-line no-unused-vars
  chat: (_modelId: string) => ({
    specificationVersion: 'v1',
    provider: 'cherryin-chat',
    modelId: 'mock-model',
    doGenerate: async () => ({ text: 'mock response' }),
    doStream: async () => ({ stream: (async function* () {})() })
  }),
  // oxlint-disable-next-line no-unused-vars
  textEmbeddingModel: (_modelId: string) => ({
    specificationVersion: 'v1',
    provider: 'cherryin',
    modelId: 'mock-embedding-model'
  })
})
packages/aiCore/src/__tests__/setup.ts (new file, 9 lines)
@@ -0,0 +1,9 @@
/**
 * Vitest Setup File
 * Global test configuration and mocks for @cherrystudio/ai-core package
 */

// Mock Vite SSR helper to avoid Node environment errors
;(globalThis as any).__vite_ssr_exportName__ = (_name: string, value: any) => value

// Note: @cherrystudio/ai-sdk-provider is mocked via alias in vitest.config.ts
packages/aiCore/src/core/options/__tests__/factory.test.ts (new file, 109 lines)
@@ -0,0 +1,109 @@
import { describe, expect, it } from 'vitest'

import { createOpenAIOptions, createOpenRouterOptions, mergeProviderOptions } from '../factory'

describe('mergeProviderOptions', () => {
  it('deep merges provider options for the same provider', () => {
    const reasoningOptions = createOpenRouterOptions({
      reasoning: {
        enabled: true,
        effort: 'medium'
      }
    })
    const webSearchOptions = createOpenRouterOptions({
      plugins: [{ id: 'web', max_results: 5 }]
    })

    const merged = mergeProviderOptions(reasoningOptions, webSearchOptions)

    expect(merged.openrouter).toEqual({
      reasoning: {
        enabled: true,
        effort: 'medium'
      },
      plugins: [{ id: 'web', max_results: 5 }]
    })
  })

  it('preserves options from other providers while merging', () => {
    const openRouter = createOpenRouterOptions({
      reasoning: { enabled: true }
    })
    const openAI = createOpenAIOptions({
      reasoningEffort: 'low'
    })
    const merged = mergeProviderOptions(openRouter, openAI)

    expect(merged.openrouter).toEqual({ reasoning: { enabled: true } })
    expect(merged.openai).toEqual({ reasoningEffort: 'low' })
  })

  it('overwrites primitive values with later values', () => {
    const first = createOpenAIOptions({
      reasoningEffort: 'low',
      user: 'user-123'
    })
    const second = createOpenAIOptions({
      reasoningEffort: 'high',
      maxToolCalls: 5
    })

    const merged = mergeProviderOptions(first, second)

    expect(merged.openai).toEqual({
      reasoningEffort: 'high', // overwritten by second
      user: 'user-123', // preserved from first
      maxToolCalls: 5 // added from second
    })
  })

  it('overwrites arrays with later values instead of merging', () => {
    const first = createOpenRouterOptions({
      models: ['gpt-4', 'gpt-3.5-turbo']
    })
    const second = createOpenRouterOptions({
      models: ['claude-3-opus', 'claude-3-sonnet']
    })

    const merged = mergeProviderOptions(first, second)

    // Array is completely replaced, not merged
    expect(merged.openrouter?.models).toEqual(['claude-3-opus', 'claude-3-sonnet'])
  })

  it('deeply merges nested objects while overwriting primitives', () => {
    const first = createOpenRouterOptions({
      reasoning: {
        enabled: true,
        effort: 'low'
      },
      user: 'user-123'
    })
    const second = createOpenRouterOptions({
      reasoning: {
        effort: 'high',
        max_tokens: 500
      },
      user: 'user-456'
    })

    const merged = mergeProviderOptions(first, second)

    expect(merged.openrouter).toEqual({
      reasoning: {
        enabled: true, // preserved from first
        effort: 'high', // overwritten by second
        max_tokens: 500 // added from second
      },
      user: 'user-456' // overwritten by second
    })
  })

  it('replaces arrays instead of merging them', () => {
    const first = createOpenRouterOptions({ plugins: [{ id: 'old' }] })
    const second = createOpenRouterOptions({ plugins: [{ id: 'new' }] })
    const merged = mergeProviderOptions(first, second)
    // @ts-expect-error type-check for openrouter options is skipped. see function signature of createOpenRouterOptions
    expect(merged.openrouter?.plugins).toEqual([{ id: 'new' }])
  })
})
@@ -26,13 +26,65 @@ export function createGenericProviderOptions<T extends string>(
return { [provider]: options } as Record<T, Record<string, any>>
}

type PlainObject = Record<string, any>

const isPlainObject = (value: unknown): value is PlainObject => {
return typeof value === 'object' && value !== null && !Array.isArray(value)
}

function deepMergeObjects<T extends PlainObject>(target: T, source: PlainObject): T {
const result: PlainObject = { ...target }
Object.entries(source).forEach(([key, value]) => {
if (isPlainObject(value) && isPlainObject(result[key])) {
result[key] = deepMergeObjects(result[key], value)
} else {
result[key] = value
}
})
return result as T
}

/**
* Merge options from multiple providers
* @param optionsMap Object containing options for multiple providers
* @returns The merged TypedProviderOptions
* Deep-merge multiple provider-specific options.
* Nested objects are recursively merged; primitive values are overwritten.
*
* When the same key appears in multiple options:
* - If both values are plain objects: they are deeply merged (recursive merge)
* - If values are primitives/arrays: the later value overwrites the earlier one
*
* @example
* mergeProviderOptions(
*   { openrouter: { reasoning: { enabled: true, effort: 'low' }, user: 'user-123' } },
*   { openrouter: { reasoning: { effort: 'high', max_tokens: 500 }, models: ['gpt-4'] } }
* )
* // Result: {
* //   openrouter: {
* //     reasoning: { enabled: true, effort: 'high', max_tokens: 500 },
* //     user: 'user-123',
* //     models: ['gpt-4']
* //   }
* // }
*
* @param optionsMap Objects containing options for multiple providers
* @returns Fully merged TypedProviderOptions
*/
export function mergeProviderOptions(...optionsMap: Partial<TypedProviderOptions>[]): TypedProviderOptions {
return Object.assign({}, ...optionsMap)
return optionsMap.reduce<TypedProviderOptions>((acc, options) => {
if (!options) {
return acc
}
Object.entries(options).forEach(([providerId, providerOptions]) => {
if (!providerOptions) {
return
}
if (acc[providerId]) {
acc[providerId] = deepMergeObjects(acc[providerId] as PlainObject, providerOptions as PlainObject)
} else {
acc[providerId] = providerOptions as any
}
})
return acc
}, {} as TypedProviderOptions)
}

/**

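The substance of this change is the move away from Object.assign, which merges only the top level: a second options object for the same provider key replaced the first one wholesale. A minimal illustration of the difference (plain TypeScript, values chosen to mirror the tests above):

// Old behavior — shallow merge drops nested keys for the same provider:
const shallow = Object.assign(
  {},
  { openrouter: { reasoning: { enabled: true } } },
  { openrouter: { plugins: [{ id: 'web' }] } }
)
// shallow => { openrouter: { plugins: [{ id: 'web' }] } }   (reasoning lost)

// New behavior — mergeProviderOptions deep-merges per provider:
// => { openrouter: { reasoning: { enabled: true }, plugins: [{ id: 'web' }] } }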
@@ -19,15 +19,20 @@ describe('Provider Schemas', () => {
expect(Array.isArray(baseProviders)).toBe(true)
expect(baseProviders.length).toBeGreaterThan(0)

// These are the actual base providers defined in schemas.ts
const expectedIds = [
'openai',
'openai-responses',
'openai-chat',
'openai-compatible',
'anthropic',
'google',
'xai',
'azure',
'deepseek'
'azure-responses',
'deepseek',
'openrouter',
'cherryin',
'cherryin-chat'
]
const actualIds = baseProviders.map((p) => p.id)
expectedIds.forEach((id) => {

@@ -232,11 +232,13 @@ describe('RuntimeExecutor.generateImage', () => {

expect(pluginCallOrder).toEqual(['onRequestStart', 'transformParams', 'transformResult', 'onRequestEnd'])

// transformParams receives params without model (model is handled separately)
// and context with core fields + dynamic fields (requestId, startTime, etc.)
expect(testPlugin.transformParams).toHaveBeenCalledWith(
{ prompt: 'A test image' },
expect.objectContaining({ prompt: 'A test image' }),
expect.objectContaining({
providerId: 'openai',
modelId: 'dall-e-3'
model: 'dall-e-3'
})
)

@@ -273,11 +275,12 @@ describe('RuntimeExecutor.generateImage', () => {

await executorWithPlugin.generateImage({ model: 'dall-e-3', prompt: 'A test image' })

// resolveModel receives model id and context with core fields
expect(modelResolutionPlugin.resolveModel).toHaveBeenCalledWith(
'dall-e-3',
expect.objectContaining({
providerId: 'openai',
modelId: 'dall-e-3'
model: 'dall-e-3'
})
)

@@ -339,12 +342,11 @@ describe('RuntimeExecutor.generateImage', () => {
.generateImage({ model: 'invalid-model', prompt: 'A test image' })
.catch((error) => error)

expect(thrownError).toBeInstanceOf(ImageGenerationError)
expect(thrownError.message).toContain('Failed to generate image:')
// Error is thrown from pluginEngine directly as ImageModelResolutionError
expect(thrownError).toBeInstanceOf(ImageModelResolutionError)
expect(thrownError.message).toContain('Failed to resolve image model: invalid-model')
expect(thrownError.providerId).toBe('openai')
expect(thrownError.modelId).toBe('invalid-model')
expect(thrownError.cause).toBeInstanceOf(ImageModelResolutionError)
expect(thrownError.cause.message).toContain('Failed to resolve image model: invalid-model')
})

it('should handle ImageModelResolutionError without provider', async () => {
@@ -362,8 +364,9 @@ describe('RuntimeExecutor.generateImage', () => {
const apiError = new Error('API request failed')
vi.mocked(aiGenerateImage).mockRejectedValue(apiError)

// Error propagates directly from pluginEngine without wrapping
await expect(executor.generateImage({ model: 'dall-e-3', prompt: 'A test image' })).rejects.toThrow(
'Failed to generate image:'
'API request failed'
)
})

@@ -376,8 +379,9 @@ describe('RuntimeExecutor.generateImage', () => {
vi.mocked(aiGenerateImage).mockRejectedValue(noImageError)
vi.mocked(NoImageGeneratedError.isInstance).mockReturnValue(true)

// Error propagates directly from pluginEngine
await expect(executor.generateImage({ model: 'dall-e-3', prompt: 'A test image' })).rejects.toThrow(
'Failed to generate image:'
'No image generated'
)
})

@@ -398,15 +402,17 @@ describe('RuntimeExecutor.generateImage', () => {
[errorPlugin]
)

// Error propagates directly from pluginEngine
await expect(executorWithPlugin.generateImage({ model: 'dall-e-3', prompt: 'A test image' })).rejects.toThrow(
'Failed to generate image:'
'Generation failed'
)

// onError receives the original error and context with core fields
expect(errorPlugin.onError).toHaveBeenCalledWith(
error,
expect.objectContaining({
providerId: 'openai',
modelId: 'dall-e-3'
model: 'dall-e-3'
})
)
})
@@ -419,9 +425,10 @@ describe('RuntimeExecutor.generateImage', () => {
const abortController = new AbortController()
setTimeout(() => abortController.abort(), 10)

// Error propagates directly from pluginEngine
await expect(
executor.generateImage({ model: 'dall-e-3', prompt: 'A test image', abortSignal: abortController.signal })
).rejects.toThrow('Failed to generate image:')
).rejects.toThrow('Operation was aborted')
})
})

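For orientation, the plugin-context assertions above only partially match the object, but they imply a shape along these lines — a hypothetical sketch inferred from the tests, not the actual exported type:

interface AiRequestContextSketch {
  providerId: string        // e.g. 'openai'
  model: string             // resolved model id, e.g. 'dall-e-3' (previously asserted as modelId)
  requestId?: string        // dynamic field, added per request
  startTime?: number        // dynamic field, added per request
  [extra: string]: unknown  // other dynamic fields the engine may attach
}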
@@ -17,10 +17,14 @@ import type { AiPlugin } from '../../plugins'
import { globalRegistryManagement } from '../../providers/RegistryManagement'
import { RuntimeExecutor } from '../executor'

// Mock AI SDK
vi.mock('ai', () => ({
generateText: vi.fn()
}))
// Mock AI SDK - use importOriginal to keep jsonSchema and other non-mocked exports
vi.mock('ai', async (importOriginal) => {
const actual = (await importOriginal()) as Record<string, unknown>
return {
...actual,
generateText: vi.fn()
}
})

vi.mock('../../providers/RegistryManagement', () => ({
globalRegistryManagement: {
@@ -409,11 +413,12 @@ describe('RuntimeExecutor.generateText', () => {
})
).rejects.toThrow('Generation failed')

// onError receives the original error and context with core fields
expect(errorPlugin.onError).toHaveBeenCalledWith(
error,
expect.objectContaining({
providerId: 'openai',
modelId: 'gpt-4'
model: 'gpt-4'
})
)
})

@@ -11,10 +11,14 @@ import type { AiPlugin } from '../../plugins'
import { globalRegistryManagement } from '../../providers/RegistryManagement'
import { RuntimeExecutor } from '../executor'

// Mock AI SDK
vi.mock('ai', () => ({
streamText: vi.fn()
}))
// Mock AI SDK - use importOriginal to keep jsonSchema and other non-mocked exports
vi.mock('ai', async (importOriginal) => {
const actual = (await importOriginal()) as Record<string, unknown>
return {
...actual,
streamText: vi.fn()
}
})

vi.mock('../../providers/RegistryManagement', () => ({
globalRegistryManagement: {
@@ -153,7 +157,7 @@ describe('RuntimeExecutor.streamText', () => {
describe('Max Tokens Parameter', () => {
const maxTokensValues = [10, 50, 100, 500, 1000, 2000, 4000]

it.each(maxTokensValues)('should support maxTokens=%s', async (maxTokens) => {
it.each(maxTokensValues)('should support maxOutputTokens=%s', async (maxOutputTokens) => {
const mockStream = {
textStream: (async function* () {
yield 'Response'
@@ -168,12 +172,13 @@ describe('RuntimeExecutor.streamText', () => {
await executor.streamText({
model: 'gpt-4',
messages: testMessages.simple,
maxOutputTokens: maxTokens
maxOutputTokens
})

// Parameters are passed through without transformation
expect(streamText).toHaveBeenCalledWith(
expect.objectContaining({
maxTokens
maxOutputTokens
})
)
})
@@ -513,11 +518,12 @@ describe('RuntimeExecutor.streamText', () => {
})
).rejects.toThrow('Stream error')

// onError receives the original error and context with core fields
expect(errorPlugin.onError).toHaveBeenCalledWith(
error,
expect.objectContaining({
providerId: 'openai',
modelId: 'gpt-4'
model: 'gpt-4'
})
)
})

@@ -1,12 +1,20 @@
import path from 'node:path'
import { fileURLToPath } from 'node:url'

import { defineConfig } from 'vitest/config'

const __dirname = path.dirname(fileURLToPath(import.meta.url))

export default defineConfig({
test: {
globals: true
globals: true,
setupFiles: [path.resolve(__dirname, './src/__tests__/setup.ts')]
},
resolve: {
alias: {
'@': './src'
'@': path.resolve(__dirname, './src'),
// Mock external packages that may not be available in test environment
'@cherrystudio/ai-sdk-provider': path.resolve(__dirname, './src/__tests__/mocks/ai-sdk-provider.ts')
}
},
esbuild: {

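Design note on this config: resolving '@cherrystudio/ai-sdk-provider' to the local stub via a resolve alias substitutes the mock at module-resolution time, so every import of the package — direct or transitive — picks it up without per-test vi.mock calls; the setup file then only needs the Vite SSR shim noted above.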
@@ -212,8 +212,9 @@ export class ToolCallChunkHandler {
description: toolName,
type: 'builtin'
} as BaseTool
} else if ((mcpTool = this.mcpTools.find((t) => t.name === toolName) as MCPTool)) {
} else if ((mcpTool = this.mcpTools.find((t) => t.id === toolName) as MCPTool)) {
// For client-side MCP tools, keep the existing logic
// toolName is mcpTool.id (registered with id as key in convertMcpToolsToAiSdkTools)
logger.info(`[ToolCallChunkHandler] Handling client-side MCP tool: ${toolName}`)
// mcpTool = this.mcpTools.find((t) => t.name === toolName) as MCPTool
// if (!mcpTool) {

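This fix and the openAIToolsToMcpTool change further down rest on the same invariant: tools are registered under mcpTool.id, so call-time lookups must match on id, not name. A minimal sketch of the registration side — a hypothetical helper body for illustration; the real convertMcpToolsToAiSdkTools lives elsewhere and its field names may differ:

// Hypothetical sketch: key the AI SDK tool map by id, not by name,
// so that the toolName reported in tool-call chunks is the id.
function convertMcpToolsToAiSdkToolsSketch(mcpTools: MCPTool[]): Record<string, unknown> {
  const tools: Record<string, unknown> = {}
  for (const mcpTool of mcpTools) {
    tools[mcpTool.id] = { /* AI SDK tool definition built from mcpTool */ }
  }
  return tools
}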
@@ -46,6 +46,7 @@ import type {
GeminiSdkRawOutput,
GeminiSdkToolCall
} from '@renderer/types/sdk'
import { getTrailingApiVersion, withoutTrailingApiVersion } from '@renderer/utils'
import { isToolUseModeFunction } from '@renderer/utils/assistant'
import {
geminiFunctionCallToMcpTool,
@@ -163,6 +164,10 @@ export class GeminiAPIClient extends BaseApiClient<
return models
}

override getBaseURL(): string {
return withoutTrailingApiVersion(super.getBaseURL())
}

override async getSdkInstance() {
if (this.sdkInstance) {
return this.sdkInstance
@@ -188,6 +193,13 @@ export class GeminiAPIClient extends BaseApiClient<
if (this.provider.isVertex) {
return 'v1'
}

// Extract trailing API version from the URL
const trailingVersion = getTrailingApiVersion(this.provider.apiHost || '')
if (trailingVersion) {
return trailingVersion
}

return 'v1beta'
}

@@ -7,7 +7,7 @@ import { isAwsBedrockProvider, isVertexProvider } from '@renderer/utils/provider'
// https://docs.claude.com/en/docs/build-with-claude/extended-thinking#interleaved-thinking
const INTERLEAVED_THINKING_HEADER = 'interleaved-thinking-2025-05-14'
// https://docs.claude.com/en/docs/build-with-claude/context-windows#1m-token-context-window
const CONTEXT_100M_HEADER = 'context-1m-2025-08-07'
// const CONTEXT_100M_HEADER = 'context-1m-2025-08-07'
// https://docs.cloud.google.com/vertex-ai/generative-ai/docs/partner-models/claude/web-search
const WEBSEARCH_HEADER = 'web-search-2025-03-05'

@@ -25,7 +25,9 @@ export function addAnthropicHeaders(assistant: Assistant, model: Model): string[
if (isVertexProvider(provider) && assistant.enableWebSearch) {
anthropicHeaders.push(WEBSEARCH_HEADER)
}
anthropicHeaders.push(CONTEXT_100M_HEADER)
// We may add it by user preference in assistant.settings instead of always adding it.
// See #11540, #11397
// anthropicHeaders.push(CONTEXT_100M_HEADER)
}
return anthropicHeaders
}

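The comment sketches the intended follow-up: gate the 1M-context beta header on an assistant-level preference instead of sending it unconditionally. A hypothetical version of that gate — the setting name is invented for illustration and is not a real field:

// Hypothetical: 'enable1MContext' is an illustrative setting name only.
if (assistant.settings?.enable1MContext) {
  anthropicHeaders.push(CONTEXT_100M_HEADER)
}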
@@ -10,6 +10,7 @@ import {
} from '@ant-design/icons'
import { loggerService } from '@logger'
import { download } from '@renderer/utils/download'
import { convertImageToPng } from '@renderer/utils/image'
import type { ImageProps as AntImageProps } from 'antd'
import { Dropdown, Image as AntImage, Space } from 'antd'
import { Base64 } from 'js-base64'
@@ -33,39 +34,38 @@ const ImageViewer: React.FC<ImageViewerProps> = ({ src, style, ...props }) => {
// Copy the image to the clipboard
const handleCopyImage = async (src: string) => {
try {
let blob: Blob

if (src.startsWith('data:')) {
// Handle base64-encoded images
const match = src.match(/^data:(image\/\w+);base64,(.+)$/)
if (!match) throw new Error('Invalid base64 image format')
const mimeType = match[1]
const byteArray = Base64.toUint8Array(match[2])
const blob = new Blob([byteArray], { type: mimeType })
await navigator.clipboard.write([new ClipboardItem({ [mimeType]: blob })])
blob = new Blob([byteArray], { type: mimeType })
} else if (src.startsWith('file://')) {
// Handle local file paths
const bytes = await window.api.fs.read(src)
const mimeType = mime.getType(src) || 'application/octet-stream'
const blob = new Blob([bytes], { type: mimeType })
await navigator.clipboard.write([
new ClipboardItem({
[mimeType]: blob
})
])
blob = new Blob([bytes], { type: mimeType })
} else {
// Handle URL-based images
const response = await fetch(src)
const blob = await response.blob()

await navigator.clipboard.write([
new ClipboardItem({
[blob.type]: blob
})
])
blob = await response.blob()
}

// Always convert to PNG for compatibility (the clipboard API does not accept JPEG)
const pngBlob = await convertImageToPng(blob)

const item = new ClipboardItem({
'image/png': pngBlob
})
await navigator.clipboard.write([item])

window.toast.success(t('message.copy.success'))
} catch (error) {
logger.error('Failed to copy image:', error as Error)
const err = error as Error
logger.error(`Failed to copy image: ${err.message}`, { stack: err.stack })
window.toast.error(t('message.copy.failed'))
}
}

@@ -102,10 +102,12 @@ const ThinkingBlock: React.FC<Props> = ({ block }) => {
)
}

const normalizeThinkingTime = (value?: number) => (typeof value === 'number' && Number.isFinite(value) ? value : 0)

const ThinkingTimeSeconds = memo(
({ blockThinkingTime, isThinking }: { blockThinkingTime: number; isThinking: boolean }) => {
const { t } = useTranslation()
const [displayTime, setDisplayTime] = useState(blockThinkingTime)
const [displayTime, setDisplayTime] = useState(normalizeThinkingTime(blockThinkingTime))

const timer = useRef<NodeJS.Timeout | null>(null)

@@ -121,7 +123,7 @@ const ThinkingTimeSeconds = memo(
clearInterval(timer.current)
timer.current = null
}
setDisplayTime(blockThinkingTime)
setDisplayTime(normalizeThinkingTime(blockThinkingTime))
}

return () => {
@@ -132,10 +134,10 @@ const ThinkingTimeSeconds = memo(
}
}, [isThinking, blockThinkingTime])

const thinkingTimeSeconds = useMemo(
() => ((displayTime < 1000 ? 100 : displayTime) / 1000).toFixed(1),
[displayTime]
)
const thinkingTimeSeconds = useMemo(() => {
const safeTime = normalizeThinkingTime(displayTime)
return ((safeTime < 1000 ? 100 : safeTime) / 1000).toFixed(1)
}, [displayTime])

return isThinking
? t('chat.thinking', {

@@ -255,6 +255,20 @@ describe('ThinkingBlock', () => {
unmount()
})
})

it('should clamp invalid thinking times to a safe default', () => {
const testCases = [undefined, Number.NaN, Number.POSITIVE_INFINITY]

testCases.forEach((thinking_millsec) => {
const block = createThinkingBlock({
thinking_millsec: thinking_millsec as any,
status: MessageBlockStatus.SUCCESS
})
const { unmount } = renderThinkingBlock(block)
expect(getThinkingTimeText()).toHaveTextContent('0.1s')
unmount()
})
})
})

describe('collapse behavior', () => {

@@ -7,11 +7,13 @@ import {
formatApiKeys,
formatAzureOpenAIApiHost,
formatVertexApiHost,
getTrailingApiVersion,
hasAPIVersion,
maskApiKey,
routeToEndpoint,
splitApiKeyString,
validateApiHost
validateApiHost,
withoutTrailingApiVersion
} from '../api'

vi.mock('@renderer/store', () => {
@@ -305,4 +307,90 @@ describe('api', () => {
)
})
})

describe('getTrailingApiVersion', () => {
it('extracts trailing API version from URL', () => {
expect(getTrailingApiVersion('https://api.example.com/v1')).toBe('v1')
expect(getTrailingApiVersion('https://api.example.com/v2')).toBe('v2')
})

it('extracts trailing API version with alpha/beta suffix', () => {
expect(getTrailingApiVersion('https://api.example.com/v2alpha')).toBe('v2alpha')
expect(getTrailingApiVersion('https://api.example.com/v3beta')).toBe('v3beta')
})

it('extracts trailing API version with trailing slash', () => {
expect(getTrailingApiVersion('https://api.example.com/v1/')).toBe('v1')
expect(getTrailingApiVersion('https://api.example.com/v2beta/')).toBe('v2beta')
})

it('returns undefined when API version is in the middle of path', () => {
expect(getTrailingApiVersion('https://api.example.com/v1/chat')).toBeUndefined()
expect(getTrailingApiVersion('https://api.example.com/v1/completions')).toBeUndefined()
})

it('returns undefined when no trailing version exists', () => {
expect(getTrailingApiVersion('https://api.example.com')).toBeUndefined()
expect(getTrailingApiVersion('https://api.example.com/api')).toBeUndefined()
})

it('extracts trailing version from complex URLs', () => {
expect(getTrailingApiVersion('https://api.example.com/service/v1')).toBe('v1')
expect(getTrailingApiVersion('https://gateway.ai.cloudflare.com/v1/xxx/google-ai-studio/v1beta')).toBe('v1beta')
})

it('only extracts the trailing version when multiple versions exist', () => {
expect(getTrailingApiVersion('https://api.example.com/v1/service/v2')).toBe('v2')
expect(
getTrailingApiVersion('https://gateway.ai.cloudflare.com/v1/xxxxxx/google-ai-studio/google-ai-studio/v1beta')
).toBe('v1beta')
})

it('returns undefined for empty string', () => {
expect(getTrailingApiVersion('')).toBeUndefined()
})
})

describe('withoutTrailingApiVersion', () => {
it('removes trailing API version from URL', () => {
expect(withoutTrailingApiVersion('https://api.example.com/v1')).toBe('https://api.example.com')
expect(withoutTrailingApiVersion('https://api.example.com/v2')).toBe('https://api.example.com')
})

it('removes trailing API version with alpha/beta suffix', () => {
expect(withoutTrailingApiVersion('https://api.example.com/v2alpha')).toBe('https://api.example.com')
expect(withoutTrailingApiVersion('https://api.example.com/v3beta')).toBe('https://api.example.com')
})

it('removes trailing API version with trailing slash', () => {
expect(withoutTrailingApiVersion('https://api.example.com/v1/')).toBe('https://api.example.com')
expect(withoutTrailingApiVersion('https://api.example.com/v2beta/')).toBe('https://api.example.com')
})

it('does not remove API version in the middle of path', () => {
expect(withoutTrailingApiVersion('https://api.example.com/v1/chat')).toBe('https://api.example.com/v1/chat')
expect(withoutTrailingApiVersion('https://api.example.com/v1/completions')).toBe(
'https://api.example.com/v1/completions'
)
})

it('returns URL unchanged when no trailing version exists', () => {
expect(withoutTrailingApiVersion('https://api.example.com')).toBe('https://api.example.com')
expect(withoutTrailingApiVersion('https://api.example.com/api')).toBe('https://api.example.com/api')
})

it('handles complex URLs with version at the end', () => {
expect(withoutTrailingApiVersion('https://api.example.com/service/v1')).toBe('https://api.example.com/service')
})

it('handles URLs with multiple versions but only removes the trailing one', () => {
expect(withoutTrailingApiVersion('https://api.example.com/v1/service/v2')).toBe(
'https://api.example.com/v1/service'
)
})

it('returns empty string unchanged', () => {
expect(withoutTrailingApiVersion('')).toBe('')
})
})
})

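The tests pin the contract down tightly: a version segment is "v" plus digits with an optional alpha/beta suffix, matched only at the very end of the URL, with an optional trailing slash. A minimal sketch consistent with every case above — an assumption for illustration, not necessarily the shipped implementation:

const TRAILING_API_VERSION = /\/(v\d+(?:alpha|beta)?)\/?$/

export const getTrailingApiVersion = (url: string): string | undefined =>
  TRAILING_API_VERSION.exec(url)?.[1]   // e.g. '.../v2beta/' -> 'v2beta'

export const withoutTrailingApiVersion = (url: string): string =>
  url.replace(TRAILING_API_VERSION, '') // e.g. '.../service/v1' -> '.../service'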
@@ -3,11 +3,13 @@ export {
formatAzureOpenAIApiHost,
formatVertexApiHost,
getAiSdkBaseUrl,
getTrailingApiVersion,
hasAPIVersion,
routeToEndpoint,
SUPPORTED_ENDPOINT_LIST,
SUPPORTED_IMAGE_ENDPOINT_LIST,
validateApiHost,
withoutTrailingApiVersion,
withoutTrailingSlash
} from '@shared/api'

@@ -566,3 +566,54 @@ export const makeSvgSizeAdaptive = (element: Element): Element => {

return element
}

/**
 * Convert an image Blob to a PNG Blob
 * @param blob The original image Blob
 * @returns Promise<Blob> The converted PNG Blob
 */
export const convertImageToPng = async (blob: Blob): Promise<Blob> => {
  if (blob.type === 'image/png') {
    return blob
  }

  return new Promise((resolve, reject) => {
    const img = new Image()
    const url = URL.createObjectURL(blob)

    img.onload = () => {
      try {
        const canvas = document.createElement('canvas')
        canvas.width = img.width
        canvas.height = img.height
        const ctx = canvas.getContext('2d')

        if (!ctx) {
          URL.revokeObjectURL(url)
          reject(new Error('Failed to get canvas context'))
          return
        }

        ctx.drawImage(img, 0, 0)
        canvas.toBlob((pngBlob) => {
          URL.revokeObjectURL(url)
          if (pngBlob) {
            resolve(pngBlob)
          } else {
            reject(new Error('Failed to convert image to png'))
          }
        }, 'image/png')
      } catch (error) {
        URL.revokeObjectURL(url)
        reject(error)
      }
    }

    img.onerror = () => {
      URL.revokeObjectURL(url)
      reject(new Error('Failed to load image for conversion'))
    }

    img.src = url
  })
}

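Usage mirrors the ImageViewer change above: normalize to PNG first, then hand the clipboard a single image/png ClipboardItem, since the async clipboard API generally accepts PNG but not JPEG:

const pngBlob = await convertImageToPng(blob)
await navigator.clipboard.write([new ClipboardItem({ 'image/png': pngBlob })])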
@@ -90,7 +90,8 @@ export function openAIToolsToMcpTool(
return undefined
}
const tools = mcpTools.filter((mcpTool) => {
return mcpTool.id === toolName || mcpTool.name === toolName
// toolName is mcpTool.id (registered with id as function name)
return mcpTool.id === toolName
})
if (tools.length > 1) {
logger.warn(`Multiple MCP Tools found for tool call: ${toolName}`)

@@ -254,6 +254,17 @@ const HomeWindow: FC<{ draggable?: boolean }> = ({ draggable = true }) => {

let blockId: string | null = null
let thinkingBlockId: string | null = null
let thinkingStartTime: number | null = null

const resolveThinkingDuration = (duration?: number) => {
if (typeof duration === 'number' && Number.isFinite(duration)) {
return duration
}
if (thinkingStartTime !== null) {
return Math.max(0, performance.now() - thinkingStartTime)
}
return 0
}

setIsLoading(true)
setIsOutputted(false)
@@ -291,6 +302,7 @@ const HomeWindow: FC<{ draggable?: boolean }> = ({ draggable = true }) => {
case ChunkType.THINKING_START:
{
setIsOutputted(true)
thinkingStartTime = performance.now()
if (thinkingBlockId) {
store.dispatch(
updateOneBlock({ id: thinkingBlockId, changes: { status: MessageBlockStatus.STREAMING } })
@@ -315,9 +327,13 @@ const HomeWindow: FC<{ draggable?: boolean }> = ({ draggable = true }) => {
{
setIsOutputted(true)
if (thinkingBlockId) {
if (thinkingStartTime === null) {
thinkingStartTime = performance.now()
}
const thinkingDuration = resolveThinkingDuration(chunk.thinking_millsec)
throttledBlockUpdate(thinkingBlockId, {
content: chunk.text,
thinking_millsec: chunk.thinking_millsec
thinking_millsec: thinkingDuration
})
}
}
@@ -325,14 +341,17 @@ const HomeWindow: FC<{ draggable?: boolean }> = ({ draggable = true }) => {
case ChunkType.THINKING_COMPLETE:
{
if (thinkingBlockId) {
const thinkingDuration = resolveThinkingDuration(chunk.thinking_millsec)
cancelThrottledBlockUpdate(thinkingBlockId)
store.dispatch(
updateOneBlock({
id: thinkingBlockId,
changes: { status: MessageBlockStatus.SUCCESS, thinking_millsec: chunk.thinking_millsec }
changes: { status: MessageBlockStatus.SUCCESS, thinking_millsec: thinkingDuration }
})
)
}
thinkingStartTime = null
thinkingBlockId = null
}
break
case ChunkType.TEXT_START:
@@ -404,6 +423,8 @@ const HomeWindow: FC<{ draggable?: boolean }> = ({ draggable = true }) => {
if (!isAborted) {
throw new Error(chunk.error.message)
}
thinkingStartTime = null
thinkingBlockId = null
}
// fall through
case ChunkType.BLOCK_COMPLETE:

@@ -41,8 +41,19 @@ export const processMessages = async (

let textBlockId: string | null = null
let thinkingBlockId: string | null = null
let thinkingStartTime: number | null = null
let textBlockContent: string = ''

const resolveThinkingDuration = (duration?: number) => {
if (typeof duration === 'number' && Number.isFinite(duration)) {
return duration
}
if (thinkingStartTime !== null) {
return Math.max(0, performance.now() - thinkingStartTime)
}
return 0
}

const assistantMessage = getAssistantMessage({
assistant,
topic
@@ -79,6 +90,7 @@ export const processMessages = async (
switch (chunk.type) {
case ChunkType.THINKING_START:
{
thinkingStartTime = performance.now()
if (thinkingBlockId) {
store.dispatch(
updateOneBlock({ id: thinkingBlockId, changes: { status: MessageBlockStatus.STREAMING } })
@@ -102,9 +114,13 @@ export const processMessages = async (
case ChunkType.THINKING_DELTA:
{
if (thinkingBlockId) {
if (thinkingStartTime === null) {
thinkingStartTime = performance.now()
}
const thinkingDuration = resolveThinkingDuration(chunk.thinking_millsec)
throttledBlockUpdate(thinkingBlockId, {
content: chunk.text,
thinking_millsec: chunk.thinking_millsec
thinking_millsec: thinkingDuration
})
}
onStream()
@@ -113,6 +129,7 @@ export const processMessages = async (
case ChunkType.THINKING_COMPLETE:
{
if (thinkingBlockId) {
const thinkingDuration = resolveThinkingDuration(chunk.thinking_millsec)
cancelThrottledBlockUpdate(thinkingBlockId)
store.dispatch(
updateOneBlock({
@@ -120,12 +137,13 @@ export const processMessages = async (
changes: {
content: chunk.text,
status: MessageBlockStatus.SUCCESS,
thinking_millsec: chunk.thinking_millsec
thinking_millsec: thinkingDuration
}
})
)
thinkingBlockId = null
}
thinkingStartTime = null
}
break
case ChunkType.TEXT_START:
@@ -190,6 +208,7 @@ export const processMessages = async (
case ChunkType.ERROR:
{
const blockId = textBlockId || thinkingBlockId
thinkingStartTime = null
if (blockId) {
store.dispatch(
updateOneBlock({

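Concretely, with performance.now() stubbed to return 1000, 1500, then 2000 (as in the test that follows): THINKING_START records start = 1000; a THINKING_DELTA arriving without thinking_millsec resolves to 1500 − 1000 = 500 ms; THINKING_COMPLETE resolves to 2000 − 1000 = 1000 ms — exactly the values asserted below.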
@@ -284,6 +284,54 @@ describe('processMessages', () => {
})
})

describe('thinking timer fallback', () => {
it('should use local timer when thinking_millsec is missing', async () => {
const nowValues = [1000, 1500, 2000]
let nowIndex = 0
const performanceSpy = vi.spyOn(performance, 'now').mockImplementation(() => {
const value = nowValues[Math.min(nowIndex, nowValues.length - 1)]
nowIndex += 1
return value
})

const mockChunks = [
{ type: ChunkType.THINKING_START },
{ type: ChunkType.THINKING_DELTA, text: 'Thinking...' },
{ type: ChunkType.THINKING_COMPLETE, text: 'Done thinking' },
{ type: ChunkType.TEXT_START },
{ type: ChunkType.TEXT_COMPLETE, text: 'Final answer' },
{ type: ChunkType.BLOCK_COMPLETE }
]

vi.mocked(fetchChatCompletion).mockImplementation(async ({ onChunkReceived }: any) => {
for (const chunk of mockChunks) {
await onChunkReceived(chunk)
}
})

await processMessages(
mockAssistant,
mockTopic,
'test prompt',
mockSetAskId,
mockOnStream,
mockOnFinish,
mockOnError
)

const thinkingDeltaCall = vi.mocked(throttledBlockUpdate).mock.calls.find(([id]) => id === 'thinking-block-1')
const deltaPayload = thinkingDeltaCall?.[1] as { thinking_millsec?: number } | undefined
expect(deltaPayload?.thinking_millsec).toBe(500)

const thinkingCompleteUpdate = vi
.mocked(updateOneBlock)
.mock.calls.find(([payload]) => (payload as any)?.changes?.thinking_millsec !== undefined)
expect((thinkingCompleteUpdate?.[0] as any)?.changes?.thinking_millsec).toBe(1000)

performanceSpy.mockRestore()
})
})

describe('stream with exceptions', () => {
it('should handle error chunks properly', async () => {
const mockError = new Error('Stream processing error')

@@ -2,15 +2,15 @@
"extends": "@electron-toolkit/tsconfig/tsconfig.node.json",
"include": [
"electron.vite.config.*",
"src/main/**/*",
"src/preload/**/*",
"src/main/env.d.ts",
"src/renderer/src/types/*",
"packages/shared/**/*",
"packages/aiCore/src/**/*",
"scripts",
"packages/mcp-trace/**/*",
"src/main/**/*",
"src/main/env.d.ts",
"src/preload/**/*",
"src/renderer/src/services/traceApi.ts",
"src/renderer/src/types/*",
"packages/aiCore/src/**/*",
"packages/mcp-trace/**/*",
"packages/shared/**/*",
"packages/ai-sdk-provider/**/*"
],
"compilerOptions": {

@@ -1,16 +1,16 @@
{
"extends": "@electron-toolkit/tsconfig/tsconfig.web.json",
"include": [
"src/renderer/src/**/*",
"src/preload/*.d.ts",
"local/src/renderer/**/*",
"packages/shared/**/*",
"tests/__mocks__/**/*",
"packages/mcp-trace/**/*",
"packages/aiCore/src/**/*",
"src/renderer/src/**/*",
"src/main/integration/cherryai/index.js",
"src/preload/*.d.ts",
"tests/__mocks__/**/*",
"packages/aiCore/src/**/*",
"packages/ai-sdk-provider/**/*",
"packages/extension-table-plus/**/*",
"packages/ai-sdk-provider/**/*"
"packages/mcp-trace/**/*",
"packages/shared/**/*",
],
"compilerOptions": {
"composite": true,

@@ -44,6 +44,18 @@ export default defineConfig({
environment: 'node',
include: ['scripts/**/*.{test,spec}.{ts,tsx}', 'scripts/**/__tests__/**/*.{test,spec}.{ts,tsx}']
}
},
// Unit test configuration for the aiCore package
{
extends: 'packages/aiCore/vitest.config.ts',
test: {
name: 'aiCore',
environment: 'node',
include: [
'packages/aiCore/**/*.{test,spec}.{ts,tsx}',
'packages/aiCore/**/__tests__/**/*.{test,spec}.{ts,tsx}'
]
}
}
],
// Globally shared configuration

yarn.lock (10 lines changed)
@@ -5044,15 +5044,15 @@ __metadata:
languageName: node
linkType: hard

"@openrouter/ai-sdk-provider@npm:^1.2.5":
version: 1.2.5
resolution: "@openrouter/ai-sdk-provider@npm:1.2.5"
"@openrouter/ai-sdk-provider@npm:^1.2.8":
version: 1.2.8
resolution: "@openrouter/ai-sdk-provider@npm:1.2.8"
dependencies:
"@openrouter/sdk": "npm:^0.1.8"
peerDependencies:
ai: ^5.0.0
zod: ^3.24.1 || ^v4
checksum: 10c0/f422f767ff8fcba2bb2fca32e5e2df163abae3c754f98416830654c5135db3aed5d4f941bfa0005109d202053a2e6a4a6b997940eb154ac964c87dd85dbe82e1
checksum: 10c0/a1508d8d538f601f0b7f5f96da32ddbd3c156742a20b427742963d8ac2cee26ce857ad7c64df743efce632b1602b19c81dcd03ebc24ae5a371211a65ead1c181
languageName: node
linkType: hard

@@ -10050,7 +10050,7 @@ __metadata:
"@mozilla/readability": "npm:^0.6.0"
"@napi-rs/system-ocr": "patch:@napi-rs/system-ocr@npm%3A1.0.2#~/.yarn/patches/@napi-rs-system-ocr-npm-1.0.2-59e7a78e8b.patch"
"@notionhq/client": "npm:^2.2.15"
"@openrouter/ai-sdk-provider": "npm:^1.2.5"
"@openrouter/ai-sdk-provider": "npm:^1.2.8"
"@opentelemetry/api": "npm:^1.9.0"
"@opentelemetry/core": "npm:2.0.0"
"@opentelemetry/exporter-trace-otlp-http": "npm:^0.200.0"