Compare commits

...

31 Commits

Author SHA1 Message Date
suyao
f20db1ac84
chore: improve comment 2025-12-18 20:37:49 +08:00
suyao
1723d72b29
refactor: reasoning cache implementation and update import paths 2025-12-18 20:25:44 +08:00
suyao
9a72a8df2c
refactor: rename directory 2025-12-18 20:17:39 +08:00
suyao
ae2712c963
Merge remote-tracking branch 'origin/main' into feat/proxy-api-server 2025-12-18 18:19:10 +08:00
GeekMr
42260710d8
fix(azure): restore deployment-based URLs for non-v1 apiVersion and add tests (#11966)
* fix: support Azure OpenAI deployment URLs

* test: stabilize renderer setup

---------

Co-authored-by: William Wang <WilliamOnline1721@hotmail.com>
2025-12-18 18:12:26 +08:00
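
A hedged sketch of the routing this fix restores (the helper and URL shapes are illustrative; only the apiVersion check mirrors the providerToAiSdkConfig diff further down):

```ts
// Sketch only: non-v1 apiVersion values fall back to deployment-based URLs.
// apiVersion / useDeploymentBasedUrls appear in the diff below; the URL
// shapes are assumptions about Azure OpenAI's two API surfaces.
function buildAzureChatUrl(resource: string, deployment: string, apiVersion?: string): string {
  const version = apiVersion?.trim()
  if (version && !['preview', 'v1'].includes(version)) {
    // Classic per-deployment endpoint with an api-version query parameter.
    return `https://${resource}.openai.azure.com/openai/deployments/${deployment}/chat/completions?api-version=${version}`
  }
  // 'v1' and 'preview' use the unified /openai/v1 surface.
  return `https://${resource}.openai.azure.com/openai/v1/chat/completions`
}
```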
suyao
9c1f538f15
test(unified-messages): add unit tests for convertAnthropicToolsToAiSdk and convertAnthropicToAiMessages functions 2025-12-18 17:04:32 +08:00
suyao
c877a3c4a5
test(aisdkToAnthropicSSE): add comprehensive tests for AiSdkToAnthropicSSE event processing and utility functions 2025-12-18 16:58:19 +08:00
suyao
90ed074ecd
test(token): export CountTokensInput interface and estimateTokenCount function for improved token estimation 2025-12-18 16:26:47 +08:00
suyao
45d404e127
feat(tokens): enhance token estimation and refactor count_tokens endpoint for improved handling 2025-12-18 16:19:17 +08:00
suyao
2c910322f8
feat(token): enhance token estimation using tokenx library for improved accuracy and image support 2025-12-18 15:59:27 +08:00
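
The heuristics these two commits introduce appear in full in the count_tokens diff near the bottom of this page; in miniature:

```ts
import { approximateTokenSize } from 'tokenx'

// Miniature version of the estimation in the count_tokens diff below:
// text goes through tokenx, images and tool blocks get flat estimates.
const textTokens = approximateTokenSize('What is in this image?')
const urlImageTokens = 1000 // URL images: default estimate
const base64ImageTokens = Math.floor((400 * 0.75) / 100) // base64: ~decoded bytes / 100
const total = textTokens + urlImageTokens + base64ImageTokens + 3 // +3 role overhead
console.log(total)
```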
suyao
5304b585b9
chore: improve comments 2025-12-18 15:50:23 +08:00
suyao
e89af9042c
test(schema): add comprehensive tests for jsonSchemaToZod function 2025-12-18 15:39:21 +08:00
suyao
08777e0746
chore: lint 2025-12-18 15:26:23 +08:00
suyao
f4a1eeed0e
Merge remote-tracking branch 'origin/main' into feat/proxy-api-server 2025-12-18 15:22:39 +08:00
suyao
eb57f50cfe
feat(model): enhance parseModelId to handle identifiers without provider prefix and improve edge case handling 2025-12-18 15:19:59 +08:00
suyao
4173fcbb98
feat(model): add parseModelId function to handle model identifiers with colons 2025-12-18 15:16:34 +08:00
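
A hedged sketch of the behavior described by the two parseModelId commits above (the repo's actual signature and edge-case handling may differ): only the first ':' splits provider from model, so colons inside the model id survive.

```ts
interface ParsedModelId {
  provider?: string
  modelId: string
}

function parseModelId(raw: string): ParsedModelId {
  const idx = raw.indexOf(':')
  if (idx <= 0) {
    // No provider prefix (or an empty one): the whole string is the model id.
    return { modelId: idx === 0 ? raw.slice(1) : raw }
  }
  return { provider: raw.slice(0, idx), modelId: raw.slice(idx + 1) }
}

// parseModelId('openrouter:anthropic/claude-3.5:beta')
//   -> { provider: 'openrouter', modelId: 'anthropic/claude-3.5:beta' }
```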
kangfenmao
5e8646c6a5
fix: update API path for image generation requests in OpenAIBaseClient 2025-12-18 14:45:30 +08:00
Phantom
7e93e8b9b2
feat(gemini): add support for Gemini 3 Flash and Pro model detection (#11984)
* feat(gemini): update model types and add support for gemini3 variants

add new model type identifiers for gemini3 flash and pro variants
implement utility functions to detect gemini3 flash and pro models
update reasoning configuration and tests for new gemini variants

* docs(i18n): update chinese translation for minimal_description

* chore: update @ai-sdk/google and @ai-sdk/google-vertex dependencies

- Update @ai-sdk/google to version 2.0.49 with patch for model path fix
- Update @ai-sdk/google-vertex to version 3.0.94 with updated dependencies

* feat(gemini): add thinking level mapping for Gemini 3 models

Implement mapping between reasoning effort options and Gemini's thinking levels. Enable thinking config for Gemini 3 models to support advanced reasoning features.

* chore: update yarn.lock with patched @ai-sdk/google dependency

* test(reasoning): update tests for Gemini model type classification and reasoning options

Update test cases to reflect new Gemini model type classifications (gemini2_flash, gemini3_flash, gemini2_pro, gemini3_pro) and their corresponding reasoning effort options. Add tests for Gemini 3 models and adjust existing ones to match current behavior.

* docs(reasoning): remove outdated TODO comment about model support
2025-12-18 14:35:36 +08:00
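
A sketch of the effort-to-thinking-level mapping this PR describes. The ThinkingLevel values ('low' | 'high') are an assumption about Gemini 3's thinkingConfig, not copied from the repo:

```ts
type ReasoningEffort = 'minimal' | 'low' | 'medium' | 'high'
type ThinkingLevel = 'low' | 'high'

function toGeminiThinkingLevel(effort: ReasoningEffort): ThinkingLevel {
  // Collapse the four effort options onto Gemini's two thinking levels.
  return effort === 'medium' || effort === 'high' ? 'high' : 'low'
}
```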
suyao
b33e595955
Merge remote-tracking branch 'origin/main' into feat/proxy-api-server 2025-12-18 14:10:42 +08:00
SuYao
eb7a2cc85a
feat: add support for Xiaomi MiMo model (#11961)
* feat: add support for Xiaomi MiMo model

- Implemented support for the MiMo model in reasoning logic.
- Added MiMo model configuration in default models.
- Included MiMo logos for both models and providers.
- Updated provider configurations to include Xiaomi MiMo.
- Enhanced reasoning effort and options to accommodate MiMo.
- Added migration logic for state management to include MiMo.
- Updated versioning in store to reflect changes.

* chore(i18n): add specific provider name

* fix(provider): add xiaomi mimo anthropic apihost

* chore: url

* fix: add tool use capability
2025-12-18 13:49:09 +08:00
dependabot[bot]
fd6986076a
chore(deps): bump jws from 4.0.0 to 4.0.1 (#11977)
Bumps [jws](https://github.com/brianloveswords/node-jws) from 4.0.0 to 4.0.1.
- [Release notes](https://github.com/brianloveswords/node-jws/releases)
- [Changelog](https://github.com/auth0/node-jws/blob/master/CHANGELOG.md)
- [Commits](https://github.com/brianloveswords/node-jws/compare/v4.0.0...v4.0.1)

---
updated-dependencies:
- dependency-name: jws
  dependency-version: 4.0.1
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-18 13:34:39 +08:00
LiuVaayne
6309cc179d
feat(mcp): add Nowledge Mem builtin MCP server (#11875)
* feat(mcp): add Nowledge Mem builtin MCP server

Add @cherry/nowLedgeMem as a new builtin MCP server that connects
to local Nowledge Mem service via HTTP at 127.0.0.1:14242/mcp.

- Add nowLedgeMem to BuiltinMCPServerNames type definitions
- Add HTTP transport handling in MCPService with APP header
- Add server config to builtinMCPServers array
- Add i18n translations (en-us, zh-cn, zh-tw)

* Fix Nowledge Mem server name typos across codebase

* 🌐 i18n: add missing translations for Nowledge Mem and Git Bash settings

Translate [to be translated] markers across 8 locale files:
- zh-tw, de-de, fr-fr, es-es, pt-pt, ru-ru: nowledgeMem description
- fr-fr, es-es, pt-pt, ru-ru, el-gr, ja-jp: xhigh reasoning chain option
- el-gr, ja-jp: Git Bash configuration strings

* 🐛 fix: address PR review comments for Nowledge Mem MCP

- Fix log message typo: use server.name instead of hardcoded "NowLedgeMem"
- Rename i18n key from "nowledgeMem" to "nowledge_mem" for consistency
- Update descriptions to warn about external dependency requirement
2025-12-18 13:34:06 +08:00
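
An illustrative shape for the builtin server entry described above; the field names are assumptions modeled on common MCP configs, while the name and URL come from the commit message:

```ts
const nowledgeMem = {
  name: '@cherry/nowLedgeMem',
  type: 'http' as const, // HTTP transport, per the commit message
  baseUrl: 'http://127.0.0.1:14242/mcp'
}
```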
SuYao
c04529a23c
refactor: improve budget calculation logic (#11973)
* refactor: improve budget calculation logic

* Update src/renderer/src/aiCore/utils/__tests__/reasoning.test.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src/renderer/src/aiCore/utils/__tests__/reasoning.test.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* [WIP] Address feedback on budget calculation logic refactor (#11974)

* Initial plan

* fix: revert budget calculation to linear interpolation formula

Reverted the budget calculation in getAnthropicThinkingBudget from
`tokenLimit.max * effortRatio` back to the original linear interpolation
formula `(tokenLimit.max - tokenLimit.min) * effortRatio + tokenLimit.min`.

The new formula was causing lower budgets for all effort ratios (e.g.,
LOW effort changed from 2609 to 1638 tokens, a 37% reduction). The linear
interpolation formula ensures budgets range from min (at effortRatio=0) to
max (at effortRatio=1), matching the behavior in other parts of the codebase
(lines 221, 597).

Updated tests to reflect the correct expected values with the linear
interpolation formula.

Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>

* fix(test): reasoning

* fix: test

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
2025-12-18 13:30:41 +08:00
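
The restored linear-interpolation formula, as stated in the commit message (TokenLimit and the helper name are illustrative):

```ts
interface TokenLimit {
  min: number
  max: number
}

function anthropicThinkingBudget(tokenLimit: TokenLimit, effortRatio: number): number {
  // effortRatio = 0 yields min, effortRatio = 1 yields max.
  return Math.floor((tokenLimit.max - tokenLimit.min) * effortRatio + tokenLimit.min)
}
// The rejected variant, tokenLimit.max * effortRatio, under-allocated at low
// effort (2609 -> 1638 tokens in the LOW example above).
```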
George·Dong
0f1b3afa72
feat: add Volcengine Doubao-Seed-1.8 model support (#11972)
- Add model definition: doubao-seed-1-8-251215
- Support thinking mode: reasoning_effort (minimal/low/medium/high)
- Support function calling
- Support image understanding (vision)
- Update the regex to match seed-1.8 variants
- Add full test coverage

Modified files:
- src/renderer/src/config/models/default.ts
- src/renderer/src/config/models/reasoning.ts
- src/renderer/src/aiCore/utils/reasoning.ts
- src/renderer/src/config/models/vision.ts
- src/renderer/src/config/models/tooluse.ts
- src/renderer/src/config/models/__tests__/reasoning.test.ts
2025-12-18 13:30:23 +08:00
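
A hypothetical pattern illustrating the "seed-1.8 variant" regex update mentioned above; the repo's actual expression is not shown in this view:

```ts
const DOUBAO_SEED_18 = /doubao-seed-1[-.]8(-\d+)?/i

DOUBAO_SEED_18.test('doubao-seed-1-8-251215') // true
DOUBAO_SEED_18.test('doubao-seed-1.8') // true
```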
Phantom
0cf0072b51
feat: add default reasoning effort option to resolve confusion between undefined and none (#11942)
* feat(reasoning): add default reasoning effort option and update i18n

Add 'default' reasoning effort option to all reasoning models to represent no additional configuration. Update translations for new option and modify reasoning logic to handle default case. Also update store version and migration for new reasoning_effort field.

Update test cases and reasoning configuration to include default option. Add new lightbulb question icon for default reasoning state.

* fix(ThinkingButton): correct isThinkingEnabled condition to exclude 'default'

The condition now properly disables thinking when effort is 'default' to match intended behavior. Click thinking button will not switch reasoning effort to 'none'.

* refactor(types): improve reasoning_effort_cache documentation

Update comments to clarify the purpose and future direction of reasoning_effort_cache
Remove TODO and replace with FIXME suggesting external cache service

* feat(i18n): add reasoning effort descriptions and update thinking button logic

add descriptions for reasoning effort options in multiple languages
move reasoning effort label maps to component for better maintainability

* fix(aiCore): handle default reasoning_effort value consistently across providers

Ensure consistent behavior when reasoning_effort is 'default' or undefined by returning empty object

* test(reasoning): fix failing tests after 'default' option introduction

Fixed two test cases that were failing after the introduction of the 'default'
reasoning effort option:

1. getAnthropicReasoningParams test: Updated to explicitly set reasoning_effort
   to 'none' instead of empty settings, as undefined/empty now represents
   'default' behavior (no configuration override)

2. getGeminiReasoningParams test: Similarly updated to set reasoning_effort
   to 'none' for the disabled thinking test case

This aligns with the new semantic where:
- undefined/'default' = use model's default behavior (returns {})
- 'none' = explicitly disable reasoning (returns disabled config)
2025-12-18 13:00:23 +08:00
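
A minimal sketch of the semantics spelled out at the end of this message; the return shapes are illustrative, not the repo's exact provider params:

```ts
type ReasoningEffort = 'default' | 'none' | 'low' | 'medium' | 'high'

function reasoningParams(effort?: ReasoningEffort): Record<string, unknown> {
  // undefined / 'default': no override, keep the model's default behavior.
  if (effort === undefined || effort === 'default') return {}
  // 'none': explicitly disable reasoning.
  if (effort === 'none') return { thinking: { type: 'disabled' } }
  return { reasoning_effort: effort }
}
```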
beyondkmp
150bb3e3a0
fix: auto-discover and persist Git Bash path on Windows for scoop (#11921)
* feat: auto-discover and persist Git Bash path on Windows

- Add autoDiscoverGitBash function to find and cache Git Bash path when needed
- Modify System_CheckGitBash IPC handler to auto-discover and persist path
- Update Claude Code service with fallback auto-discovery mechanism
- Git Bash path is now cached after first discovery, improving UX for Windows users

* update

* fix: remove redundant validation of auto-discovered Git Bash path

The autoDiscoverGitBash function already returns a validated path, so calling validateGitBashPath again is unnecessary.

Co-Authored-By: Claude <noreply@anthropic.com>

* update

* test: add unit tests for autoDiscoverGitBash function

Add comprehensive test coverage for autoDiscoverGitBash including:
- Discovery with no existing config path
- Validation of existing config paths
- Handling of invalid existing paths
- Config persistence verification
- Real-world scenarios (standard Git, portable Git, user-configured paths)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: remove unnecessary async keyword from System_CheckGitBash handler

The handler doesn't use await since autoDiscoverGitBash is synchronous.
Removes async for consistency with other IPC handlers.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: rename misleading test to match actual behavior

Renamed "should not call configManager.set multiple times on single discovery"
to "should persist on each discovery when config remains undefined" to
accurately describe that each call to autoDiscoverGitBash persists when
the config mock returns undefined.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* refactor: use generic type parameter instead of type assertion

Replace `as string | undefined` with `get<string | undefined>()` for
better type safety when retrieving GitBashPath from config.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* refactor: simplify Git Bash path resolution in Claude Code service

Remove redundant validateGitBashPath call since autoDiscoverGitBash
already handles validation of configured paths before attempting
discovery. Also remove unused ConfigKeys and configManager imports.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: attempt auto-discovery when configured Git Bash path is invalid

Previously, if a user had an invalid configured path (e.g., Git was
moved or uninstalled), autoDiscoverGitBash would return null without
attempting to find a valid installation. Now it logs a warning and
attempts auto-discovery, providing a better user experience by
automatically fixing invalid configurations.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: ensure CLAUDE_CODE_GIT_BASH_PATH env var takes precedence over config

Previously, if a valid config path existed, the environment variable
CLAUDE_CODE_GIT_BASH_PATH was never checked. Now the precedence order is:

1. CLAUDE_CODE_GIT_BASH_PATH env var (highest - runtime override)
2. Configured path from settings
3. Auto-discovery via findGitBash

This allows users to temporarily override the configured path without
modifying their persistent settings.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* refactor: improve code quality and test robustness

- Remove duplicate logging in Claude Code service (autoDiscoverGitBash logs internally)
- Simplify Git Bash path initialization with ternary expression
- Add afterEach cleanup to restore original env vars in tests
- Extract mockExistingPaths helper to reduce test code duplication

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* feat: track Git Bash path source to distinguish manual vs auto-discovered

- Add GitBashPathSource type and GitBashPathInfo interface to shared constants
- Add GitBashPathSource config key to persist path origin ('manual' | 'auto')
- Update autoDiscoverGitBash to mark discovered paths as 'auto'
- Update setGitBashPath IPC to mark user-set paths as 'manual'
- Add getGitBashPathInfo API to retrieve path with source info
- Update AgentModal UI to show different text based on source:
  - Manual: "Using custom path" with clear button
  - Auto: "Auto-discovered" without clear button

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor: simplify Git Bash config UI as form field

- Replace large Alert components with compact form field
- Use static isWin constant instead of async platform detection
- Show Git Bash field only on Windows with auto-fill support
- Disable save button when Git Bash path is missing on Windows
- Add "Auto-discovered" hint for auto-detected paths
- Remove hasGitBash state, simplify checkGitBash logic

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* ui: add explicit select button for Git Bash path

Replace click-on-input interaction with a dedicated "Select" button
for clearer UX

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor: simplify Git Bash UI by removing clear button

- Remove handleClearGitBash function (no longer needed)
- Remove clear button from UI (auto-discover fills value, user can re-select)
- Remove auto-discovered hint (SourceHint)
- Remove unused SourceHint styled component

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* feat: add reset button to restore auto-discovered Git Bash path

- Add handleResetGitBash to clear manual setting and re-run auto-discovery
- Show "Reset" button only when source is 'manual'
- Show "Auto-discovered" hint when path was found automatically
- User can re-select if auto-discovered path is not suitable

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: re-run auto-discovery when resetting Git Bash path

When setGitBashPath(null) is called (reset), now automatically
re-runs autoDiscoverGitBash() to restore the auto-discovered path.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* feat(i18n): add Git Bash config translations

Add translations for:
- autoDiscoveredHint: hint text for auto-discovered paths
- placeholder: input placeholder for bash.exe selection
- tooltip: help tooltip text
- error.required: validation error message

Supported languages: en-US, zh-CN, zh-TW

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* update i18n

* fix: auto-discover Git Bash when getting path info

When getGitBashPathInfo() is called and no path is configured,
automatically trigger autoDiscoverGitBash() first. This handles
the upgrade scenario from old versions that don't have Git Bash
path configured.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-18 09:57:23 +08:00
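
A sketch of the precedence order described in this PR (env var, then configured path, then auto-discovery); the function parameters stand in for the repo's validateGitBashPath / findGitBash helpers:

```ts
function resolveGitBashPath(
  configuredPath: string | undefined,
  isValid: (p: string) => boolean,
  findGitBash: () => string | null
): string | null {
  // 1. CLAUDE_CODE_GIT_BASH_PATH: runtime override, highest precedence.
  const override = process.env.CLAUDE_CODE_GIT_BASH_PATH
  if (override && isValid(override)) return override
  // 2. Persisted setting, if it still points at a valid bash.exe.
  if (configuredPath && isValid(configuredPath)) return configuredPath
  // 3. Auto-discovery, which also repairs stale configured paths.
  return findGitBash()
}
```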
kangfenmao
739096deca
chore(release): v1.7.5
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-17 23:13:51 +08:00
LiuVaayne
1d5dafa325
refactor: rewrite filesystem MCP server with improved tool set (#11937)
* refactor: rewrite filesystem MCP server with new tool set

- Replace existing filesystem MCP with modular architecture
- Implement 6 new tools: glob, ls, grep, read, write, delete
- Add comprehensive TypeScript types and Zod schemas
- Maintain security with path validation and allowed directories
- Improve error handling and user feedback
- Add result limits for performance (100 files/matches max)
- Format output with clear, helpful messages
- Keep backward compatibility with existing import patterns

BREAKING CHANGE: Tools renamed from snake_case to lowercase
- read_file → read
- write_file → write
- list_directory → ls
- search_files → glob
- New tools: grep, delete
- Removed: edit_file, create_directory, directory_tree, move_file, get_file_info

* 🐛 fix: remove filesystem allowed directories restriction

* 🐛 fix: relax binary detection for text files

* feat: add edit tool with fuzzy matching to filesystem MCP server

- Add edit tool with 9 fallback replacers from opencode for robust
  string replacement (SimpleReplacer, LineTrimmedReplacer,
  BlockAnchorReplacer, WhitespaceNormalizedReplacer, etc.)
- Add Levenshtein distance algorithm for similarity matching
- Improve descriptions for all tools (read, write, glob, grep, ls, delete)
  following opencode patterns for better LLM guidance
- Register edit tool in server and export from tools index

* ♻️ refactor: replace allowedDirectories with baseDir in filesystem MCP server

- Change server to use single baseDir (from WORKSPACE_ROOT env or userData/workspace default)
- Remove list_allowed_directories tool as restriction mechanism is removed
- Add ripgrep integration for faster grep searches with JS fallback
- Simplify validatePath() by removing allowlist checks
- Display paths relative to baseDir in tool outputs

* 📝 docs: standardize filesystem MCP server tool descriptions

- Unify description format to bullet-point style across all tools
- Add absolute path requirement to ls, glob, grep schemas and descriptions
- Update glob and grep to output absolute paths instead of relative paths
- Add missing error case documentation for edit tool (old_string === new_string)
- Standardize optional path parameter descriptions

* ♻️ refactor: use ripgrep for glob tool and extract shared utilities

- Extract shared ripgrep utilities (runRipgrep, getRipgrepAddonPath) to types.ts
- Rewrite glob tool to use `rg --files --glob` for reliable file matching
- Update grep tool to import shared ripgrep utilities

* 🐛 fix: handle ripgrep exit code 2 with valid results in glob tool

- Process ripgrep stdout when content exists, regardless of exit code
- Exit code 2 can indicate partial errors while still returning valid results
- Remove fallback directory listing (had buggy regex for root-level files)
- Update tool description to clarify patterns without "/" match at any depth

* 🔥 chore: remove filesystem.ts.backup file

Remove unnecessary backup file from mcpServers directory

* 🐛 fix: use correct default workspace path in filesystem MCP server

Change default baseDir from userData/workspace to userData/Data/Workspace
to match the app's data storage convention (Data/Files, Data/Notes, etc.)

Addresses PR #11937 review feedback.

* 🐛 fix: pass WORKSPACE_ROOT to FileSystemServer constructor

The envs object passed to createInMemoryMCPServer was not being used
for the filesystem server. Now WORKSPACE_ROOT is passed as a constructor
parameter, following the same pattern as other MCP servers.

* feat: add link to documentation for MCP server configuration requirement

Wrap the configuration requirement tag in a link to the documentation for better user guidance on MCP server settings.

---------

Co-authored-by: kangfenmao <kangfenmao@qq.com>
2025-12-17 23:08:42 +08:00
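
The edit tool's similarity matching rests on Levenshtein distance, mentioned in the commit above; a generic dynamic-programming implementation (not the repo's code):

```ts
function levenshtein(a: string, b: string): number {
  // dp[i][j] = edit distance between a[0..i) and b[0..j)
  const dp: number[][] = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  )
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      )
    }
  }
  return dp[a.length][b.length]
}
```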
Phantom
bdfda7afb1
fix: correct typo in Gemini 3 Pro Image Preview model name (#11969) 2025-12-17 22:27:17 +08:00
kangfenmao
ef25eef0eb
feat(knowledge): use prompt injection for forced knowledge base search
Change the default knowledge base retrieval behavior from tool call to prompt injection mode.
This provides faster response times when knowledge base search is forced.
Intent recognition mode (tool call) is still available as an opt-in option.

- Remove toolChoiceMiddleware for forced knowledge base search
- Add prompt injection for knowledge base references in KnowledgeService
- Move transformMessagesAndFetch to ApiService, delete OrchestrateService
- Export getMessageContent from searchOrchestrationPlugin
- Add setCitationBlockId callback to citationCallbacks
- Default knowledgeRecognition to 'off' (prompt mode)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-17 22:14:20 +08:00
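
A minimal sketch of the prompt-injection mode: retrieved references are concatenated into the prompt up front instead of arriving via a tool call. Names are illustrative, not KnowledgeService's actual API:

```ts
function injectKnowledgeReferences(prompt: string, references: string[]): string {
  if (references.length === 0) return prompt
  const block = references.map((ref, i) => `[${i + 1}] ${ref}`).join('\n')
  return `Relevant knowledge base entries:\n${block}\n\n${prompt}`
}
```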
亢奋猫
c676a93595
fix(installer): auto-install VC++ Redistributable without user prompt (#11927) 2025-12-17 19:23:56 +08:00
96 changed files with 6161 additions and 1378 deletions

View File

@@ -1,5 +1,5 @@
diff --git a/dist/index.js b/dist/index.js
-index 51ce7e423934fb717cb90245cdfcdb3dae6780e6..0f7f7009e2f41a79a8669d38c8a44867bbff5e1f 100644
+index d004b415c5841a1969705823614f395265ea5a8a..6b1e0dad4610b0424393ecc12e9114723bbe316b 100644
--- a/dist/index.js
+++ b/dist/index.js
@@ -474,7 +474,7 @@ function convertToGoogleGenerativeAIMessages(prompt, options) {
@@ -12,7 +12,7 @@ index 51ce7e423934fb717cb90245cdfcdb3dae6780e6..0f7f7009e2f41a79a8669d38c8a44867
// src/google-generative-ai-options.ts
diff --git a/dist/index.mjs b/dist/index.mjs
-index f4b77e35c0cbfece85a3ef0d4f4e67aa6dde6271..8d2fecf8155a226006a0bde72b00b6036d4014b6 100644
+index 1780dd2391b7f42224a0b8048c723d2f81222c44..1f12ed14399d6902107ce9b435d7d8e6cc61e06b 100644
--- a/dist/index.mjs
+++ b/dist/index.mjs
@@ -480,7 +480,7 @@ function convertToGoogleGenerativeAIMessages(prompt, options) {
@@ -24,3 +24,14 @@ index f4b77e35c0cbfece85a3ef0d4f4e67aa6dde6271..8d2fecf8155a226006a0bde72b00b603
}
// src/google-generative-ai-options.ts
@@ -1909,8 +1909,7 @@ function createGoogleGenerativeAI(options = {}) {
}
var google = createGoogleGenerativeAI();
export {
- VERSION,
createGoogleGenerativeAI,
- google
+ google, VERSION
};
//# sourceMappingURL=index.mjs.map
\ No newline at end of file

View File

@@ -12,8 +12,13 @@
; https://github.com/electron-userland/electron-builder/issues/1122
!ifndef BUILD_UNINSTALLER
; Check VC++ Redistributable based on architecture stored in $1
Function checkVCRedist
ReadRegDWORD $0 HKLM "SOFTWARE\Microsoft\VisualStudio\14.0\VC\Runtimes\x64" "Installed"
${If} $1 == "arm64"
ReadRegDWORD $0 HKLM "SOFTWARE\Microsoft\VisualStudio\14.0\VC\Runtimes\ARM64" "Installed"
${Else}
ReadRegDWORD $0 HKLM "SOFTWARE\Microsoft\VisualStudio\14.0\VC\Runtimes\x64" "Installed"
${EndIf}
FunctionEnd
Function checkArchitectureCompatibility
@@ -97,29 +102,47 @@
Call checkVCRedist
${If} $0 != "1"
MessageBox MB_YESNO "\
NOTE: ${PRODUCT_NAME} requires $\r$\n\
'Microsoft Visual C++ Redistributable'$\r$\n\
to function properly.$\r$\n$\r$\n\
Download and install now?" /SD IDYES IDYES InstallVCRedist IDNO DontInstall
InstallVCRedist:
inetc::get /CAPTION " " /BANNER "Downloading Microsoft Visual C++ Redistributable..." "https://aka.ms/vs/17/release/vc_redist.x64.exe" "$TEMP\vc_redist.x64.exe"
ExecWait "$TEMP\vc_redist.x64.exe /install /norestart"
;IfErrors InstallError ContinueInstall ; vc_redist exit code is unreliable :(
Call checkVCRedist
${If} $0 == "1"
Goto ContinueInstall
${EndIf}
; VC++ is required - install automatically since declining would abort anyway
; Select download URL based on system architecture (stored in $1)
${If} $1 == "arm64"
StrCpy $2 "https://aka.ms/vs/17/release/vc_redist.arm64.exe"
StrCpy $3 "$TEMP\vc_redist.arm64.exe"
${Else}
StrCpy $2 "https://aka.ms/vs/17/release/vc_redist.x64.exe"
StrCpy $3 "$TEMP\vc_redist.x64.exe"
${EndIf}
;InstallError:
MessageBox MB_ICONSTOP "\
There was an unexpected error installing$\r$\n\
Microsoft Visual C++ Redistributable.$\r$\n\
The installation of ${PRODUCT_NAME} cannot continue."
DontInstall:
inetc::get /CAPTION " " /BANNER "Downloading Microsoft Visual C++ Redistributable..." \
$2 $3 /END
Pop $0 ; Get download status from inetc::get
${If} $0 != "OK"
MessageBox MB_ICONSTOP|MB_YESNO "\
Failed to download Microsoft Visual C++ Redistributable.$\r$\n$\r$\n\
Error: $0$\r$\n$\r$\n\
Would you like to open the download page in your browser?$\r$\n\
$2" IDYES openDownloadUrl IDNO skipDownloadUrl
openDownloadUrl:
ExecShell "open" $2
skipDownloadUrl:
Abort
${EndIf}
ExecWait "$3 /install /quiet /norestart"
; Note: vc_redist exit code is unreliable, verify via registry check instead
Call checkVCRedist
${If} $0 != "1"
MessageBox MB_ICONSTOP|MB_YESNO "\
Microsoft Visual C++ Redistributable installation failed.$\r$\n$\r$\n\
Would you like to open the download page in your browser?$\r$\n\
$2$\r$\n$\r$\n\
The installation of ${PRODUCT_NAME} cannot continue." IDYES openInstallUrl IDNO skipInstallUrl
openInstallUrl:
ExecShell "open" $2
skipInstallUrl:
Abort
${EndIf}
${EndIf}
ContinueInstall:
Pop $4
Pop $3
Pop $2

View File

@@ -134,54 +134,38 @@ artifactBuildCompleted: scripts/artifact-build-completed.js
releaseInfo:
releaseNotes: |
<!--LANG:en-->
Cherry Studio 1.7.4 - New Browser MCP & Model Updates
Cherry Studio 1.7.5 - Filesystem MCP Overhaul & Topic Management
This release adds a powerful browser automation MCP server, new web search provider, and model support updates.
This release features a completely rewritten filesystem MCP server, new batch topic management, and improved assistant management.
✨ New Features
- [MCP] Add @cherry/browser CDP MCP server with session management for browser automation
- [Web Search] Add ExaMCP free web search provider (no API key required)
- [Model] Support GPT 5.2 series models
- [Model] Add capabilities support for Doubao Seed Code models (tool calling, reasoning, vision)
🔧 Improvements
- [Translate] Add reasoning effort option to translate service
- [i18n] Improve zh-TW Traditional Chinese locale
- [Settings] Update MCP Settings layout and styling
- [MCP] Rewrite filesystem MCP server with improved tool set (glob, ls, grep, read, write, edit, delete)
- [Topics] Add topic manage mode for batch delete and move operations with search functionality
- [Assistants] Merge import/subscribe popups and add export to assistant management
- [Knowledge] Use prompt injection for forced knowledge base search (faster response times)
- [Settings] Add tool use mode setting (prompt/function) to default assistant settings
🐛 Bug Fixes
- [Chat] Fix line numbers being wrongly copied from code blocks
- [Translate] Fix default to first supported reasoning effort when translating
- [Chat] Fix preserve thinking block in assistant messages
- [Web Search] Fix max search result limit
- [Embedding] Fix embedding dimensions retrieval for ModernAiProvider
- [Chat] Fix token calculation in prompt tool use plugin
- [Model] Fix Ollama provider options for Qwen model support
- [UI] Fix Chat component marginRight calculation for improved layout
- [Model] Correct typo in Gemini 3 Pro Image Preview model name
- [Installer] Auto-install VC++ Redistributable without user prompt
- [Notes] Fix notes directory validation and default path reset for cross-platform restore
- [OAuth] Bind OAuth callback server to localhost (127.0.0.1) for security
<!--LANG:zh-CN-->
Cherry Studio 1.7.4 - 新增浏览器 MCP 与模型更新
Cherry Studio 1.7.5 - 文件系统 MCP 重构与话题管理
本次更新新增强大的浏览器自动化 MCP 服务器、新的网页搜索提供商以及模型支持更新
本次更新完全重写了文件系统 MCP 服务器,新增批量话题管理功能,并改进了助手管理
✨ 新功能
- [MCP] 新增 @cherry/browser CDP MCP 服务器,支持会话管理的浏览器自动化
- [网页搜索] 新增 ExaMCP 免费网页搜索提供商(无需 API 密钥)
- [模型] 支持 GPT 5.2 系列模型
- [模型] 为豆包 Seed Code 模型添加能力支持(工具调用、推理、视觉)
🔧 功能改进
- [翻译] 为翻译服务添加推理强度选项
- [国际化] 改进繁体中文zh-TW本地化
- [设置] 优化 MCP 设置布局和样式
- [MCP] 重写文件系统 MCP 服务器提供改进的工具集glob、ls、grep、read、write、edit、delete
- [话题] 新增话题管理模式,支持批量删除和移动操作,带搜索功能
- [助手] 合并导入/订阅弹窗,并在助手管理中添加导出功能
- [知识库] 使用提示词注入进行强制知识库搜索(响应更快)
- [设置] 在默认助手设置中添加工具使用模式设置prompt/function
🐛 问题修复
- [聊天] 修复代码块中行号被错误复制的问题
- [翻译] 修复翻译时默认使用第一个支持的推理强度
- [聊天] 修复助手消息中思考块的保留问题
- [网页搜索] 修复最大搜索结果数限制
- [嵌入] 修复 ModernAiProvider 嵌入维度获取问题
- [聊天] 修复提示词工具使用插件的 token 计算问题
- [模型] 修复 Ollama 提供商对 Qwen 模型的支持选项
- [界面] 修复聊天组件右边距计算以改善布局
- [模型] 修正 Gemini 3 Pro Image Preview 模型名称的拼写错误
- [安装程序] 自动安装 VC++ 运行库,无需用户确认
- [笔记] 修复跨平台恢复场景下的笔记目录验证和默认路径重置逻辑
- [OAuth] 将 OAuth 回调服务器绑定到 localhost (127.0.0.1) 以提高安全性
<!--LANG:END-->

View File

@@ -1,6 +1,6 @@
{
"name": "CherryStudio",
"version": "1.7.4",
"version": "1.7.5",
"private": true,
"description": "A powerful AI assistant for producer.",
"main": "./out/main/index.js",
@@ -114,8 +114,8 @@
"@ai-sdk/anthropic": "^2.0.49",
"@ai-sdk/cerebras": "^1.0.31",
"@ai-sdk/gateway": "^2.0.15",
"@ai-sdk/google": "patch:@ai-sdk/google@npm%3A2.0.43#~/.yarn/patches/@ai-sdk-google-npm-2.0.43-689ed559b3.patch",
"@ai-sdk/google-vertex": "^3.0.79",
"@ai-sdk/google": "patch:@ai-sdk/google@npm%3A2.0.49#~/.yarn/patches/@ai-sdk-google-npm-2.0.49-84720f41bd.patch",
"@ai-sdk/google-vertex": "^3.0.94",
"@ai-sdk/huggingface": "^0.0.10",
"@ai-sdk/mistral": "^2.0.24",
"@ai-sdk/openai": "patch:@ai-sdk/openai@npm%3A2.0.85#~/.yarn/patches/@ai-sdk-openai-npm-2.0.85-27483d1d6a.patch",
@@ -416,7 +416,8 @@
"@langchain/openai@npm:>=0.2.0 <0.7.0": "patch:@langchain/openai@npm%3A1.0.0#~/.yarn/patches/@langchain-openai-npm-1.0.0-474d0ad9d4.patch",
"@ai-sdk/openai@npm:^2.0.42": "patch:@ai-sdk/openai@npm%3A2.0.85#~/.yarn/patches/@ai-sdk-openai-npm-2.0.85-27483d1d6a.patch",
"@ai-sdk/google@npm:^2.0.40": "patch:@ai-sdk/google@npm%3A2.0.40#~/.yarn/patches/@ai-sdk-google-npm-2.0.40-47e0eeee83.patch",
"@ai-sdk/openai-compatible@npm:^1.0.27": "patch:@ai-sdk/openai-compatible@npm%3A1.0.27#~/.yarn/patches/@ai-sdk-openai-compatible-npm-1.0.27-06f74278cf.patch"
"@ai-sdk/openai-compatible@npm:^1.0.27": "patch:@ai-sdk/openai-compatible@npm%3A1.0.27#~/.yarn/patches/@ai-sdk-openai-compatible-npm-1.0.27-06f74278cf.patch",
"@ai-sdk/google@npm:2.0.49": "patch:@ai-sdk/google@npm%3A2.0.49#~/.yarn/patches/@ai-sdk-google-npm-2.0.49-84720f41bd.patch"
},
"packageManager": "yarn@4.9.1",
"lint-staged": {

View File

@@ -244,6 +244,7 @@ export enum IpcChannel {
System_GetCpuName = 'system:getCpuName',
System_CheckGitBash = 'system:checkGitBash',
System_GetGitBashPath = 'system:getGitBashPath',
System_GetGitBashPathInfo = 'system:getGitBashPathInfo',
System_SetGitBashPath = 'system:setGitBashPath',
// DevTools

View File

@@ -488,3 +488,11 @@ export const MACOS_TERMINALS_WITH_COMMANDS: TerminalConfigWithCommand[] = [
// resources/scripts should be maintained manually
export const HOME_CHERRY_DIR = '.cherrystudio'
// Git Bash path configuration types
export type GitBashPathSource = 'manual' | 'auto'
export interface GitBashPathInfo {
path: string | null
source: GitBashPathSource | null
}

View File

@@ -14,7 +14,7 @@ import {
isWithTrailingSharp,
routeToEndpoint,
withoutTrailingSlash
} from '../api'
} from '../utils/url'
import {
isAnthropicProvider,
isAzureOpenAIProvider,

View File

@@ -9,8 +9,8 @@ import { formatPrivateKey, hasProviderConfig, ProviderConfigFactory } from '@che
import { defaultAppHeaders } from '@shared/utils'
import { isEmpty } from 'lodash'
import { routeToEndpoint } from '../api'
import { isOllamaProvider } from './detection'
import { routeToEndpoint } from '../utils/url'
import { isAzureOpenAIProvider, isOllamaProvider } from './detection'
import { getAiSdkProviderId } from './mapping'
import type { MinimalProvider } from './types'
import { SystemProviderIds } from './types'
@@ -210,6 +210,16 @@ export function providerToAiSdkConfig(
extraOptions.mode = 'chat'
}
if (isAzureOpenAIProvider(provider)) {
const apiVersion = provider.apiVersion?.trim()
if (apiVersion) {
extraOptions.apiVersion = apiVersion
if (!['preview', 'v1'].includes(apiVersion)) {
extraOptions.useDeploymentBasedUrls = true
}
}
}
// Handle AWS Bedrock
if (aiSdkProviderId === 'bedrock') {
const bedrockConfig = context.getAwsBedrockConfig?.()

View File

@@ -100,7 +100,8 @@ export const SystemProviderIdSchema = z.enum([
'huggingface',
'sophnet',
'gateway',
'cerebras'
'cerebras',
'mimo'
])
export type SystemProviderId = z.infer<typeof SystemProviderIdSchema>
@@ -169,7 +169,8 @@ export const SystemProviderIds = {
longcat: 'longcat',
huggingface: 'huggingface',
gateway: 'gateway',
cerebras: 'cerebras'
cerebras: 'cerebras',
mimo: 'mimo'
} as const satisfies Record<SystemProviderId, SystemProviderId>
export type SystemProviderIdTypeMap = typeof SystemProviderIds

View File

@@ -1 +1,3 @@
export { defaultAppHeaders } from './headers'
export { getBaseModelName, getLowerBaseModelName } from './naming'
export * from './url'

View File

@@ -38,7 +38,7 @@ import type {
import { loggerService } from '@logger'
import { type FinishReason, type LanguageModelUsage, type TextStreamPart, type ToolSet } from 'ai'
import { googleReasoningCache, openRouterReasoningCache } from '../../services/CacheService'
import { googleReasoningCache, openRouterReasoningCache } from '../services/reasoning-cache'
const logger = loggerService.withContext('AiSdkToAnthropicSSE')

View File

@@ -0,0 +1,536 @@
import type { RawMessageStreamEvent } from '@anthropic-ai/sdk/resources/messages'
import type { FinishReason, LanguageModelUsage, TextStreamPart, ToolSet } from 'ai'
import { describe, expect, it, vi } from 'vitest'
import { AiSdkToAnthropicSSE, formatSSEDone, formatSSEEvent } from '../AiSdkToAnthropicSSE'
const createTextDelta = (text: string, id = 'text_0'): TextStreamPart<ToolSet> => ({
type: 'text-delta',
id,
text
})
const createTextStart = (id = 'text_0'): TextStreamPart<ToolSet> => ({
type: 'text-start',
id
})
const createTextEnd = (id = 'text_0'): TextStreamPart<ToolSet> => ({
type: 'text-end',
id
})
const createFinish = (
finishReason: FinishReason | undefined = 'stop',
totalUsage?: Partial<LanguageModelUsage>
): TextStreamPart<ToolSet> => {
const defaultUsage: LanguageModelUsage = {
inputTokens: 0,
outputTokens: 0,
totalTokens: 0
}
const event: TextStreamPart<ToolSet> = {
type: 'finish',
finishReason: finishReason || 'stop',
totalUsage: { ...defaultUsage, ...totalUsage }
}
return event
}
// Helper to create stream
function createMockStream(events: readonly TextStreamPart<ToolSet>[]) {
return new ReadableStream<TextStreamPart<ToolSet>>({
start(controller) {
for (const event of events) {
controller.enqueue(event)
}
controller.close()
}
})
}
describe('AiSdkToAnthropicSSE', () => {
describe('Text Processing', () => {
it('should emit message_start and process text-delta events', async () => {
const events: RawMessageStreamEvent[] = []
const adapter = new AiSdkToAnthropicSSE({
model: 'test:model',
onEvent: (event) => events.push(event)
})
// Create a mock stream with text events
const stream = createMockStream([createTextDelta('Hello'), createTextDelta(' world'), createFinish('stop')])
await adapter.processStream(stream)
// Verify message_start
expect(events[0]).toMatchObject({
type: 'message_start',
message: {
role: 'assistant',
model: 'test:model'
}
})
// Verify content_block_start for text
expect(events[1]).toMatchObject({
type: 'content_block_start',
content_block: { type: 'text' }
})
// Verify text deltas
expect(events[2]).toMatchObject({
type: 'content_block_delta',
delta: { type: 'text_delta', text: 'Hello' }
})
expect(events[3]).toMatchObject({
type: 'content_block_delta',
delta: { type: 'text_delta', text: ' world' }
})
// Verify content_block_stop
expect(events[4]).toMatchObject({
type: 'content_block_stop'
})
// Verify message_delta with stop_reason
expect(events[5]).toMatchObject({
type: 'message_delta',
delta: { stop_reason: 'end_turn' }
})
// Verify message_stop
expect(events[6]).toMatchObject({
type: 'message_stop'
})
})
it('should handle text-start and text-end events', async () => {
const events: RawMessageStreamEvent[] = []
const adapter = new AiSdkToAnthropicSSE({
model: 'test:model',
onEvent: (event) => events.push(event)
})
const stream = createMockStream([
createTextStart(),
createTextDelta('Test'),
createTextEnd(),
createFinish('stop')
])
await adapter.processStream(stream)
// Should have content_block_start, delta, and content_block_stop
const blockEvents = events.filter((e) => e.type.startsWith('content_block'))
expect(blockEvents.length).toBeGreaterThanOrEqual(3)
})
it('should auto-start text block if not explicitly started', async () => {
const events: RawMessageStreamEvent[] = []
const adapter = new AiSdkToAnthropicSSE({
model: 'test:model',
onEvent: (event) => events.push(event)
})
const stream = createMockStream([createTextDelta('Auto-started'), createFinish('stop')])
await adapter.processStream(stream)
// Should automatically emit content_block_start
expect(events.some((e) => e.type === 'content_block_start')).toBe(true)
})
})
describe('Tool Call Processing', () => {
it('should emit tool_use block for tool-call events', async () => {
const events: RawMessageStreamEvent[] = []
const adapter = new AiSdkToAnthropicSSE({
model: 'test:model',
onEvent: (event) => events.push(event)
})
const stream = createMockStream([
{
type: 'tool-call',
toolCallId: 'call_123',
toolName: 'get_weather',
input: { location: 'SF' }
},
createFinish('tool-calls')
])
await adapter.processStream(stream)
// Find tool_use block events
const blockStart = events.find((e) => {
if (e.type === 'content_block_start') {
return e.content_block.type === 'tool_use'
}
return false
})
expect(blockStart).toBeDefined()
if (blockStart && blockStart.type === 'content_block_start') {
expect(blockStart.content_block).toMatchObject({
type: 'tool_use',
id: 'call_123',
name: 'get_weather'
})
}
// Should emit input_json_delta
const delta = events.find((e) => {
if (e.type === 'content_block_delta') {
return e.delta.type === 'input_json_delta'
}
return false
})
expect(delta).toBeDefined()
// Should have stop_reason as tool_use
const messageDelta = events.find((e) => e.type === 'message_delta')
if (messageDelta && messageDelta.type === 'message_delta') {
expect(messageDelta.delta.stop_reason).toBe('tool_use')
}
})
it('should not create duplicate tool blocks', async () => {
const events: RawMessageStreamEvent[] = []
const adapter = new AiSdkToAnthropicSSE({
model: 'test:model',
onEvent: (event) => events.push(event)
})
const toolCallEvent: TextStreamPart<ToolSet> = {
type: 'tool-call',
toolCallId: 'call_123',
toolName: 'test_tool',
input: {}
}
const stream = createMockStream([toolCallEvent, toolCallEvent, createFinish()])
await adapter.processStream(stream)
// Should only have one tool_use block
const toolBlocks = events.filter((e) => {
if (e.type === 'content_block_start') {
return e.content_block.type === 'tool_use'
}
return false
})
expect(toolBlocks.length).toBe(1)
})
})
describe('Reasoning/Thinking Processing', () => {
it('should emit thinking block for reasoning events', async () => {
const events: RawMessageStreamEvent[] = []
const adapter = new AiSdkToAnthropicSSE({
model: 'test:model',
onEvent: (event) => events.push(event)
})
const stream = createMockStream([
{ type: 'reasoning-start', id: 'reason_1' },
{ type: 'reasoning-delta', id: 'reason_1', text: 'Thinking...' },
{ type: 'reasoning-end', id: 'reason_1' },
createFinish()
])
await adapter.processStream(stream)
// Find thinking block events
const blockStart = events.find((e) => {
if (e.type === 'content_block_start') {
return e.content_block.type === 'thinking'
}
return false
})
expect(blockStart).toBeDefined()
// Should emit thinking_delta
const delta = events.find((e) => {
if (e.type === 'content_block_delta') {
return e.delta.type === 'thinking_delta'
}
return false
})
expect(delta).toBeDefined()
if (delta && delta.type === 'content_block_delta' && delta.delta.type === 'thinking_delta') {
expect(delta.delta.thinking).toBe('Thinking...')
}
})
it('should handle multiple thinking blocks', async () => {
const events: RawMessageStreamEvent[] = []
const adapter = new AiSdkToAnthropicSSE({
model: 'test:model',
onEvent: (event) => events.push(event)
})
const stream = createMockStream([
{ type: 'reasoning-start', id: 'reason_1' },
{ type: 'reasoning-delta', id: 'reason_1', text: 'First thought' },
{ type: 'reasoning-start', id: 'reason_2' },
{ type: 'reasoning-delta', id: 'reason_2', text: 'Second thought' },
{ type: 'reasoning-end', id: 'reason_1' },
{ type: 'reasoning-end', id: 'reason_2' },
createFinish()
])
await adapter.processStream(stream)
// Should have two thinking blocks
const thinkingBlocks = events.filter((e) => {
if (e.type === 'content_block_start') {
return e.content_block.type === 'thinking'
}
return false
})
expect(thinkingBlocks.length).toBe(2)
})
})
describe('Finish Reasons', () => {
it('should map finish reasons correctly', async () => {
const testCases: Array<{
aiSdkReason: FinishReason
expectedReason: string
}> = [
{ aiSdkReason: 'stop', expectedReason: 'end_turn' },
{ aiSdkReason: 'length', expectedReason: 'max_tokens' },
{ aiSdkReason: 'tool-calls', expectedReason: 'tool_use' },
{ aiSdkReason: 'content-filter', expectedReason: 'refusal' }
]
for (const { aiSdkReason, expectedReason } of testCases) {
const events: RawMessageStreamEvent[] = []
const adapter = new AiSdkToAnthropicSSE({
model: 'test:model',
onEvent: (event) => events.push(event)
})
const stream = createMockStream([createFinish(aiSdkReason)])
await adapter.processStream(stream)
const messageDelta = events.find((e) => e.type === 'message_delta')
if (messageDelta && messageDelta.type === 'message_delta') {
expect(messageDelta.delta.stop_reason).toBe(expectedReason)
}
}
})
})
describe('Usage Tracking', () => {
it('should track token usage', async () => {
const events: RawMessageStreamEvent[] = []
const adapter = new AiSdkToAnthropicSSE({
model: 'test:model',
inputTokens: 100,
onEvent: (event) => events.push(event)
})
const stream = createMockStream([
createTextDelta('Hello'),
createFinish('stop', {
inputTokens: 100,
outputTokens: 50,
cachedInputTokens: 20
})
])
await adapter.processStream(stream)
const messageDelta = events.find((e) => e.type === 'message_delta')
if (messageDelta && messageDelta.type === 'message_delta') {
expect(messageDelta.usage).toMatchObject({
input_tokens: 100,
output_tokens: 50,
cache_creation_input_tokens: 20
})
}
})
})
describe('Non-Streaming Response', () => {
it('should build complete message for non-streaming', async () => {
const adapter = new AiSdkToAnthropicSSE({
model: 'test:model',
onEvent: vi.fn()
})
const stream = createMockStream([
createTextDelta('Hello world'),
{
type: 'tool-call',
toolCallId: 'call_1',
toolName: 'test',
input: { arg: 'value' }
},
createFinish('tool-calls', { inputTokens: 10, outputTokens: 20 })
])
await adapter.processStream(stream)
const response = adapter.buildNonStreamingResponse()
expect(response).toMatchObject({
type: 'message',
role: 'assistant',
model: 'test:model',
stop_reason: 'tool_use'
})
expect(response.content).toHaveLength(2)
expect(response.content[0]).toMatchObject({
type: 'text',
text: 'Hello world'
})
expect(response.content[1]).toMatchObject({
type: 'tool_use',
id: 'call_1',
name: 'test',
input: { arg: 'value' }
})
expect(response.usage).toMatchObject({
input_tokens: 10,
output_tokens: 20
})
})
})
describe('Error Handling', () => {
it('should throw on error events', async () => {
const adapter = new AiSdkToAnthropicSSE({
model: 'test:model',
onEvent: vi.fn()
})
const testError = new Error('Test error')
const stream = createMockStream([{ type: 'error', error: testError }])
await expect(adapter.processStream(stream)).rejects.toThrow('Test error')
})
})
describe('Edge Cases', () => {
it('should handle empty stream', async () => {
const events: RawMessageStreamEvent[] = []
const adapter = new AiSdkToAnthropicSSE({
model: 'test:model',
onEvent: (event) => events.push(event)
})
const stream = new ReadableStream<TextStreamPart<ToolSet>>({
start(controller) {
controller.close()
}
})
await adapter.processStream(stream)
// Should still emit message_start, message_delta, and message_stop
expect(events.some((e) => e.type === 'message_start')).toBe(true)
expect(events.some((e) => e.type === 'message_delta')).toBe(true)
expect(events.some((e) => e.type === 'message_stop')).toBe(true)
})
it('should handle empty text deltas', async () => {
const events: RawMessageStreamEvent[] = []
const adapter = new AiSdkToAnthropicSSE({
model: 'test:model',
onEvent: (event) => events.push(event)
})
const stream = createMockStream([createTextDelta(''), createTextDelta(''), createFinish()])
await adapter.processStream(stream)
// Should not emit deltas for empty text
const deltas = events.filter((e) => e.type === 'content_block_delta')
expect(deltas.length).toBe(0)
})
})
describe('Utility Functions', () => {
it('should format SSE events correctly', () => {
const event: RawMessageStreamEvent = {
type: 'message_start',
message: {
id: 'msg_123',
type: 'message',
role: 'assistant',
content: [],
model: 'test',
stop_reason: null,
stop_sequence: null,
usage: {
input_tokens: 10,
output_tokens: 0,
cache_creation_input_tokens: 0,
cache_read_input_tokens: 0,
server_tool_use: null
}
}
}
const formatted = formatSSEEvent(event)
expect(formatted).toContain('event: message_start')
expect(formatted).toContain('data: ')
expect(formatted).toContain('"type":"message_start"')
expect(formatted.endsWith('\n\n')).toBe(true)
})
it('should format SSE done marker correctly', () => {
const done = formatSSEDone()
expect(done).toBe('data: [DONE]\n\n')
})
})
describe('Message ID', () => {
it('should use provided message ID', () => {
const adapter = new AiSdkToAnthropicSSE({
model: 'test:model',
messageId: 'custom_msg_123',
onEvent: vi.fn()
})
expect(adapter.getMessageId()).toBe('custom_msg_123')
})
it('should generate message ID if not provided', () => {
const adapter = new AiSdkToAnthropicSSE({
model: 'test:model',
onEvent: vi.fn()
})
const messageId = adapter.getMessageId()
expect(messageId).toMatch(/^msg_/)
})
})
describe('Input Tokens', () => {
it('should allow setting input tokens', () => {
const events: RawMessageStreamEvent[] = []
const adapter = new AiSdkToAnthropicSSE({
model: 'test:model',
onEvent: (event) => events.push(event)
})
adapter.setInputTokens(500)
const stream = createMockStream([createFinish()])
return adapter.processStream(stream).then(() => {
const messageStart = events.find((e) => e.type === 'message_start')
if (messageStart && messageStart.type === 'message_start') {
expect(messageStart.message.usage.input_tokens).toBe(500)
}
})
})
})
})

View File

@@ -0,0 +1,393 @@
import { describe, expect, it } from 'vitest'
import { estimateTokenCount } from '../messages'
describe('estimateTokenCount', () => {
describe('Text Content', () => {
it('should estimate tokens for simple string content', () => {
const input = {
messages: [
{
role: 'user' as const,
content: 'Hello, world!'
}
]
}
const tokens = estimateTokenCount(input)
// Should include text tokens + role overhead (3)
expect(tokens).toBeGreaterThan(3)
expect(tokens).toBeLessThan(20)
})
it('should estimate tokens for multiple messages', () => {
const input = {
messages: [
{ role: 'user' as const, content: 'First message' },
{ role: 'assistant' as const, content: 'Second message' },
{ role: 'user' as const, content: 'Third message' }
]
}
const tokens = estimateTokenCount(input)
// Should include text tokens + role overhead (3 per message = 9)
expect(tokens).toBeGreaterThan(9)
})
it('should estimate tokens for text content blocks', () => {
const input = {
messages: [
{
role: 'user' as const,
content: [
{ type: 'text' as const, text: 'Hello' },
{ type: 'text' as const, text: 'World' }
]
}
]
}
const tokens = estimateTokenCount(input)
expect(tokens).toBeGreaterThan(3)
})
it('should handle empty messages array', () => {
const input = {
messages: []
}
const tokens = estimateTokenCount(input)
expect(tokens).toBe(0)
})
it('should handle messages with empty content', () => {
const input = {
messages: [{ role: 'user' as const, content: '' }]
}
const tokens = estimateTokenCount(input)
// Should only have role overhead (3)
expect(tokens).toBe(3)
})
})
describe('System Messages', () => {
it('should estimate tokens for string system message', () => {
const input = {
messages: [{ role: 'user' as const, content: 'Hello' }],
system: 'You are a helpful assistant.'
}
const tokens = estimateTokenCount(input)
// Should include system tokens + message tokens + role overhead
expect(tokens).toBeGreaterThan(3)
})
it('should estimate tokens for system content blocks', () => {
const input = {
messages: [{ role: 'user' as const, content: 'Hello' }],
system: [
{ type: 'text' as const, text: 'System instruction 1' },
{ type: 'text' as const, text: 'System instruction 2' }
]
}
const tokens = estimateTokenCount(input)
expect(tokens).toBeGreaterThan(3)
})
})
describe('Image Content', () => {
it('should estimate tokens for base64 images', () => {
// Create a fake base64 string (400 characters = ~300 bytes when decoded)
const fakeBase64 = 'A'.repeat(400)
const input = {
messages: [
{
role: 'user' as const,
content: [
{
type: 'image' as const,
source: {
type: 'base64' as const,
media_type: 'image/png' as const,
data: fakeBase64
}
}
]
}
]
}
const tokens = estimateTokenCount(input)
// Should estimate based on data size: 400 * 0.75 / 100 = 3 tokens + role overhead (3)
expect(tokens).toBeGreaterThan(3)
expect(tokens).toBeLessThan(10)
})
it('should estimate tokens for URL images', () => {
const input = {
messages: [
{
role: 'user' as const,
content: [
{
type: 'image' as const,
source: {
type: 'url' as const,
url: 'https://example.com/image.png'
}
}
]
}
]
}
const tokens = estimateTokenCount(input)
// Should use default estimate: 1000 + role overhead (3)
expect(tokens).toBe(1003)
})
it('should estimate tokens for mixed text and image content', () => {
const input = {
messages: [
{
role: 'user' as const,
content: [
{ type: 'text' as const, text: 'What is in this image?' },
{
type: 'image' as const,
source: {
type: 'url' as const,
url: 'https://example.com/image.png'
}
}
]
}
]
}
const tokens = estimateTokenCount(input)
// Should include text tokens + 1000 (image) + role overhead (3)
expect(tokens).toBeGreaterThan(1003)
})
})
describe('Tool Content', () => {
it('should estimate tokens for tool_use blocks', () => {
const input = {
messages: [
{
role: 'assistant' as const,
content: [
{
type: 'tool_use' as const,
id: 'tool_123',
name: 'get_weather',
input: { location: 'San Francisco', unit: 'celsius' }
}
]
}
]
}
const tokens = estimateTokenCount(input)
// Should include: tool name tokens + input JSON tokens + 10 (overhead) + 3 (role)
expect(tokens).toBeGreaterThan(13)
})
it('should estimate tokens for tool_result blocks with string content', () => {
const input = {
messages: [
{
role: 'user' as const,
content: [
{
type: 'tool_result' as const,
tool_use_id: 'tool_123',
content: 'The weather in San Francisco is 18°C and sunny.'
}
]
}
]
}
const tokens = estimateTokenCount(input)
// Should include: content tokens + 10 (overhead) + 3 (role)
expect(tokens).toBeGreaterThan(13)
})
it('should estimate tokens for tool_result blocks with array content', () => {
const input = {
messages: [
{
role: 'user' as const,
content: [
{
type: 'tool_result' as const,
tool_use_id: 'tool_123',
content: [
{ type: 'text' as const, text: 'Result 1' },
{ type: 'text' as const, text: 'Result 2' }
]
}
]
}
]
}
const tokens = estimateTokenCount(input)
// Should include: text tokens + 10 (overhead) + 3 (role)
expect(tokens).toBeGreaterThan(13)
})
it('should handle tool_use without input', () => {
const input = {
messages: [
{
role: 'assistant' as const,
content: [
{
type: 'tool_use' as const,
id: 'tool_123',
name: 'no_input_tool',
input: {}
}
]
}
]
}
const tokens = estimateTokenCount(input)
// Should include: tool name tokens + 10 (overhead) + 3 (role)
expect(tokens).toBeGreaterThan(13)
})
})
describe('Complex Scenarios', () => {
it('should estimate tokens for multi-turn conversation with various content types', () => {
const input = {
messages: [
{
role: 'user' as const,
content: [
{ type: 'text' as const, text: 'Analyze this image' },
{
type: 'image' as const,
source: {
type: 'url' as const,
url: 'https://example.com/chart.png'
}
}
]
},
{
role: 'assistant' as const,
content: [
{
type: 'tool_use' as const,
id: 'tool_1',
name: 'analyze_image',
input: { url: 'https://example.com/chart.png' }
}
]
},
{
role: 'user' as const,
content: [
{
type: 'tool_result' as const,
tool_use_id: 'tool_1',
content: 'The chart shows sales data for Q4 2024.'
}
]
},
{
role: 'assistant' as const,
content: 'Based on the analysis, the sales trend is positive.'
}
],
system: 'You are a data analyst assistant.'
}
const tokens = estimateTokenCount(input)
// Should include:
// - System message tokens
// - Message 1: text + image (1000) + 3
// - Message 2: tool_use + 10 + 3
// - Message 3: tool_result + 10 + 3
// - Message 4: text + 3
expect(tokens).toBeGreaterThan(1032) // At least 1000 (image) + 32 (overhead)
})
it('should handle very long text content', () => {
const longText = 'word '.repeat(1000) // ~5000 characters
const input = {
messages: [{ role: 'user' as const, content: longText }]
}
const tokens = estimateTokenCount(input)
// Should estimate based on text length using tokenx
expect(tokens).toBeGreaterThan(1000)
})
it('should handle multiple images in single message', () => {
const input = {
messages: [
{
role: 'user' as const,
content: [
{
type: 'image' as const,
source: { type: 'url' as const, url: 'https://example.com/1.png' }
},
{
type: 'image' as const,
source: { type: 'url' as const, url: 'https://example.com/2.png' }
},
{
type: 'image' as const,
source: { type: 'url' as const, url: 'https://example.com/3.png' }
}
]
}
]
}
const tokens = estimateTokenCount(input)
// Should estimate: 3 * 1000 (images) + 3 (role)
expect(tokens).toBe(3003)
})
})
describe('Edge Cases', () => {
it('should handle undefined system message', () => {
const input = {
messages: [{ role: 'user' as const, content: 'Hello' }],
system: undefined
}
const tokens = estimateTokenCount(input)
expect(tokens).toBeGreaterThan(0)
})
it('should handle empty system message', () => {
const input = {
messages: [{ role: 'user' as const, content: 'Hello' }],
system: ''
}
const tokens = estimateTokenCount(input)
expect(tokens).toBeGreaterThan(0)
})
it('should handle content blocks with missing text', () => {
const input = {
messages: [
{
role: 'user' as const,
content: [{ type: 'text' as const, text: undefined as any }]
}
]
}
const tokens = estimateTokenCount(input)
// Should only have role overhead
expect(tokens).toBe(3)
})
it('should handle empty content array', () => {
const input = {
messages: [
{
role: 'user' as const,
content: []
}
]
}
const tokens = estimateTokenCount(input)
// Should only have role overhead
expect(tokens).toBe(3)
})
})
})

View File

@ -1,10 +1,11 @@
import type { MessageCreateParams } from '@anthropic-ai/sdk/resources'
import { loggerService } from '@logger'
import { buildSharedMiddlewares, type SharedMiddlewareConfig } from '@shared/middleware'
import { buildSharedMiddlewares, type SharedMiddlewareConfig } from '@shared/ai-sdk-middlewares'
import { getAiSdkProviderId } from '@shared/provider'
import type { Provider } from '@types'
import type { Request, Response } from 'express'
import express from 'express'
import { approximateTokenSize } from 'tokenx'
import { messagesService } from '../services/messages'
import { generateUnifiedMessage, streamUnifiedMessages } from '../services/unified-messages'
@ -45,25 +46,25 @@ const providerRouter = express.Router({ mergeParams: true })
/**
* Estimate token count from messages
* Simple approximation: ~4 characters per token for English text
* Uses the tokenx library for accurate token estimation, with support for images and tools
*/
interface CountTokensInput {
messages: Array<{ role: string; content: string | Array<{ type: string; text?: string }> }>
system?: string | Array<{ type: string; text?: string }>
export interface CountTokensInput {
messages: MessageCreateParams['messages']
system?: MessageCreateParams['system']
}
function estimateTokenCount(input: CountTokensInput): number {
export function estimateTokenCount(input: CountTokensInput): number {
const { messages, system } = input
let totalChars = 0
let totalTokens = 0
// Count system message tokens
// Count system message tokens using tokenx
if (system) {
if (typeof system === 'string') {
totalChars += system.length
totalTokens += approximateTokenSize(system)
} else if (Array.isArray(system)) {
for (const block of system) {
if (block.type === 'text' && block.text) {
totalChars += block.text.length
totalTokens += approximateTokenSize(block.text)
}
}
}
@ -72,20 +73,55 @@ function estimateTokenCount(input: CountTokensInput): number {
// Count message tokens
for (const msg of messages) {
if (typeof msg.content === 'string') {
totalChars += msg.content.length
totalTokens += approximateTokenSize(msg.content)
} else if (Array.isArray(msg.content)) {
for (const block of msg.content) {
if (block.type === 'text' && block.text) {
totalChars += block.text.length
totalTokens += approximateTokenSize(block.text)
} else if (block.type === 'image') {
// Image token estimation (consistent with TokenService)
if (block.source.type === 'base64') {
// Base64 images: estimate from the decoded byte size
// (e.g. 40,000 base64 chars ≈ 30,000 bytes ≈ 300 tokens)
const dataSize = block.source.data.length * 0.75 // base64 to bytes
totalTokens += Math.floor(dataSize / 100)
} else {
// URL images: use default estimate
totalTokens += 1000
}
} else if (block.type === 'tool_use') {
// Tool use token estimation: name + input JSON
if (block.name) {
totalTokens += approximateTokenSize(block.name)
}
if (block.input) {
const inputJson = JSON.stringify(block.input)
totalTokens += approximateTokenSize(inputJson)
}
// Add overhead for tool use structure
totalTokens += 10
} else if (block.type === 'tool_result') {
// Tool result token estimation
if (typeof block.content === 'string') {
totalTokens += approximateTokenSize(block.content)
} else if (Array.isArray(block.content)) {
for (const item of block.content) {
if (typeof item === 'string') {
totalTokens += approximateTokenSize(item)
} else if (item.type === 'text' && item.text) {
totalTokens += approximateTokenSize(item.text)
}
}
}
// Add overhead for tool result structure
totalTokens += 10
}
}
}
// Add overhead for role
totalChars += 10
// Add role overhead
totalTokens += 3
}
// Estimate tokens (~4 chars per token, with some overhead)
return Math.ceil(totalChars / 4) + messages.length * 3
return totalTokens
}
// Helper function for basic request validation
@ -108,6 +144,70 @@ async function validateRequestBody(req: Request): Promise<{ valid: boolean; erro
return { valid: true }
}
/**
* Shared handler for count_tokens endpoint
* Validates the request and returns an estimated token count
*/
async function handleCountTokens(
req: Request,
res: Response,
options: {
requireModel?: boolean
logContext?: Record<string, any>
} = {}
): Promise<Response> {
try {
const { model, messages, system } = req.body
const { requireModel = false, logContext = {} } = options
// Validate model parameter if required
if (requireModel && !model) {
return res.status(400).json({
type: 'error',
error: {
type: 'invalid_request_error',
message: 'model parameter is required'
}
})
}
// Validate messages parameter
if (!messages || !Array.isArray(messages)) {
return res.status(400).json({
type: 'error',
error: {
type: 'invalid_request_error',
message: 'messages parameter is required'
}
})
}
// Estimate token count
const estimatedTokens = estimateTokenCount({ messages, system })
// Log with context
logger.debug('Token count estimated', {
model,
messageCount: messages.length,
estimatedTokens,
...logContext
})
return res.json({
input_tokens: estimatedTokens
})
} catch (error: any) {
logger.error('Token counting error', { error })
return res.status(500).json({
type: 'error',
error: {
type: 'api_error',
message: error.message || 'Internal server error'
}
})
}
}
interface HandleMessageProcessingOptions {
res: Response
provider: Provider
@ -631,91 +731,17 @@ providerRouter.post('/', async (req: Request, res: Response) => {
* description: Bad request
*/
router.post('/count_tokens', async (req: Request, res: Response) => {
try {
const { model, messages, system } = req.body
if (!model) {
return res.status(400).json({
type: 'error',
error: {
type: 'invalid_request_error',
message: 'model parameter is required'
}
})
}
if (!messages || !Array.isArray(messages)) {
return res.status(400).json({
type: 'error',
error: {
type: 'invalid_request_error',
message: 'messages parameter is required'
}
})
}
const estimatedTokens = estimateTokenCount({ messages, system })
logger.debug('Token count estimated', {
model,
messageCount: messages.length,
estimatedTokens
})
return res.json({
input_tokens: estimatedTokens
})
} catch (error: any) {
logger.error('Token counting error', { error })
return res.status(500).json({
type: 'error',
error: {
type: 'api_error',
message: error.message || 'Internal server error'
}
})
}
return handleCountTokens(req, res, { requireModel: true })
})
/**
* Provider-specific count_tokens endpoint
*/
providerRouter.post('/count_tokens', async (req: Request, res: Response) => {
try {
const { model, messages, system } = req.body
if (!messages || !Array.isArray(messages)) {
return res.status(400).json({
type: 'error',
error: {
type: 'invalid_request_error',
message: 'messages parameter is required'
}
})
}
const estimatedTokens = estimateTokenCount({ messages, system })
logger.debug('Token count estimated (provider route)', {
providerId: req.params.provider,
model,
messageCount: messages.length,
estimatedTokens
})
return res.json({
input_tokens: estimatedTokens
})
} catch (error: any) {
logger.error('Token counting error', { error })
return res.status(500).json({
type: 'error',
error: {
type: 'api_error',
message: error.message || 'Internal server error'
}
})
}
return handleCountTokens(req, res, {
requireModel: false,
logContext: { providerId: req.params.provider }
})
})
export { providerRouter as messagesProviderRoutes, router as messagesRoutes }
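For orientation, a minimal client-side sketch of the refactored endpoint. The mount point of `router` is not shown in this diff, so the base URL below is a placeholder; only the /count_tokens path, the request body fields, and the { input_tokens } / error response shapes come from the handlers above.

```ts
// Hedged sketch — BASE_URL is a placeholder, not taken from this diff.
const BASE_URL = 'http://127.0.0.1:3000/v1/messages'

const res = await fetch(`${BASE_URL}/count_tokens`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'claude-3-5-sonnet-20241022', // required here because requireModel is true
    system: 'You are a helpful assistant.',
    messages: [{ role: 'user', content: 'Hello' }]
  })
})

if (res.ok) {
  const { input_tokens } = await res.json()
  console.log(input_tokens) // a local estimate, not a provider-billed count
} else {
  const { error } = await res.json() // { type: 'invalid_request_error' | 'api_error', message: string }
  console.error(error.message)
}
```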

View File

@ -0,0 +1,340 @@
import { describe, expect, it } from 'vitest'
import * as z from 'zod'
import { type JsonSchemaLike, jsonSchemaToZod } from '../unified-messages'
describe('jsonSchemaToZod', () => {
describe('Basic Types', () => {
it('should convert string type', () => {
const schema: JsonSchemaLike = { type: 'string' }
const result = jsonSchemaToZod(schema)
expect(result).toBeInstanceOf(z.ZodString)
expect(result.safeParse('hello').success).toBe(true)
expect(result.safeParse(123).success).toBe(false)
})
it('should convert string with minLength', () => {
const schema: JsonSchemaLike = { type: 'string', minLength: 3 }
const result = jsonSchemaToZod(schema)
expect(result.safeParse('ab').success).toBe(false)
expect(result.safeParse('abc').success).toBe(true)
})
it('should convert string with maxLength', () => {
const schema: JsonSchemaLike = { type: 'string', maxLength: 5 }
const result = jsonSchemaToZod(schema)
expect(result.safeParse('hello').success).toBe(true)
expect(result.safeParse('hello world').success).toBe(false)
})
it('should convert string with pattern', () => {
const schema: JsonSchemaLike = { type: 'string', pattern: '^[0-9]+$' }
const result = jsonSchemaToZod(schema)
expect(result.safeParse('123').success).toBe(true)
expect(result.safeParse('abc').success).toBe(false)
})
it('should convert number type', () => {
const schema: JsonSchemaLike = { type: 'number' }
const result = jsonSchemaToZod(schema)
expect(result).toBeInstanceOf(z.ZodNumber)
expect(result.safeParse(42).success).toBe(true)
expect(result.safeParse(3.14).success).toBe(true)
expect(result.safeParse('42').success).toBe(false)
})
it('should convert integer type', () => {
const schema: JsonSchemaLike = { type: 'integer' }
const result = jsonSchemaToZod(schema)
expect(result.safeParse(42).success).toBe(true)
expect(result.safeParse(3.14).success).toBe(false)
})
it('should convert number with minimum', () => {
const schema: JsonSchemaLike = { type: 'number', minimum: 10 }
const result = jsonSchemaToZod(schema)
expect(result.safeParse(5).success).toBe(false)
expect(result.safeParse(10).success).toBe(true)
expect(result.safeParse(15).success).toBe(true)
})
it('should convert number with maximum', () => {
const schema: JsonSchemaLike = { type: 'number', maximum: 100 }
const result = jsonSchemaToZod(schema)
expect(result.safeParse(50).success).toBe(true)
expect(result.safeParse(100).success).toBe(true)
expect(result.safeParse(150).success).toBe(false)
})
it('should convert boolean type', () => {
const schema: JsonSchemaLike = { type: 'boolean' }
const result = jsonSchemaToZod(schema)
expect(result).toBeInstanceOf(z.ZodBoolean)
expect(result.safeParse(true).success).toBe(true)
expect(result.safeParse(false).success).toBe(true)
expect(result.safeParse('true').success).toBe(false)
})
it('should convert null type', () => {
const schema: JsonSchemaLike = { type: 'null' }
const result = jsonSchemaToZod(schema)
expect(result).toBeInstanceOf(z.ZodNull)
expect(result.safeParse(null).success).toBe(true)
expect(result.safeParse(undefined).success).toBe(false)
})
})
describe('Enum Types', () => {
it('should convert string enum', () => {
const schema: JsonSchemaLike = { enum: ['red', 'green', 'blue'] }
const result = jsonSchemaToZod(schema)
expect(result.safeParse('red').success).toBe(true)
expect(result.safeParse('green').success).toBe(true)
expect(result.safeParse('yellow').success).toBe(false)
})
it('should convert non-string enum with literals', () => {
const schema: JsonSchemaLike = { enum: [1, 2, 3] }
const result = jsonSchemaToZod(schema)
expect(result.safeParse(1).success).toBe(true)
expect(result.safeParse(2).success).toBe(true)
expect(result.safeParse(4).success).toBe(false)
})
it('should convert single value enum', () => {
const schema: JsonSchemaLike = { enum: ['only'] }
const result = jsonSchemaToZod(schema)
expect(result.safeParse('only').success).toBe(true)
expect(result.safeParse('other').success).toBe(false)
})
it('should convert mixed enum', () => {
const schema: JsonSchemaLike = { enum: ['text', 1, true] }
const result = jsonSchemaToZod(schema)
expect(result.safeParse('text').success).toBe(true)
expect(result.safeParse(1).success).toBe(true)
expect(result.safeParse(true).success).toBe(true)
expect(result.safeParse(false).success).toBe(false)
})
})
describe('Array Types', () => {
it('should convert array of strings', () => {
const schema: JsonSchemaLike = {
type: 'array',
items: { type: 'string' }
}
const result = jsonSchemaToZod(schema)
expect(result.safeParse(['a', 'b']).success).toBe(true)
expect(result.safeParse([1, 2]).success).toBe(false)
})
it('should convert array without items (unknown)', () => {
const schema: JsonSchemaLike = { type: 'array' }
const result = jsonSchemaToZod(schema)
expect(result.safeParse([]).success).toBe(true)
expect(result.safeParse(['a', 1, true]).success).toBe(true)
})
it('should convert array with minItems', () => {
const schema: JsonSchemaLike = {
type: 'array',
items: { type: 'number' },
minItems: 2
}
const result = jsonSchemaToZod(schema)
expect(result.safeParse([1]).success).toBe(false)
expect(result.safeParse([1, 2]).success).toBe(true)
})
it('should convert array with maxItems', () => {
const schema: JsonSchemaLike = {
type: 'array',
items: { type: 'number' },
maxItems: 3
}
const result = jsonSchemaToZod(schema)
expect(result.safeParse([1, 2, 3]).success).toBe(true)
expect(result.safeParse([1, 2, 3, 4]).success).toBe(false)
})
})
describe('Object Types', () => {
it('should convert simple object', () => {
const schema: JsonSchemaLike = {
type: 'object',
properties: {
name: { type: 'string' },
age: { type: 'number' }
}
}
const result = jsonSchemaToZod(schema)
expect(result.safeParse({ name: 'John', age: 30 }).success).toBe(true)
expect(result.safeParse({ name: 'John', age: '30' }).success).toBe(false)
})
it('should handle required fields', () => {
const schema: JsonSchemaLike = {
type: 'object',
properties: {
name: { type: 'string' },
age: { type: 'number' }
},
required: ['name']
}
const result = jsonSchemaToZod(schema)
expect(result.safeParse({ name: 'John', age: 30 }).success).toBe(true)
expect(result.safeParse({ age: 30 }).success).toBe(false)
expect(result.safeParse({ name: 'John' }).success).toBe(true)
})
it('should convert empty object', () => {
const schema: JsonSchemaLike = { type: 'object' }
const result = jsonSchemaToZod(schema)
expect(result.safeParse({}).success).toBe(true)
})
it('should convert nested objects', () => {
const schema: JsonSchemaLike = {
type: 'object',
properties: {
user: {
type: 'object',
properties: {
name: { type: 'string' },
email: { type: 'string' }
}
}
}
}
const result = jsonSchemaToZod(schema)
expect(result.safeParse({ user: { name: 'John', email: 'john@example.com' } }).success).toBe(true)
expect(result.safeParse({ user: { name: 'John' } }).success).toBe(true)
})
})
describe('Union Types', () => {
it('should convert union type (type array)', () => {
const schema: JsonSchemaLike = { type: ['string', 'null'] }
const result = jsonSchemaToZod(schema)
expect(result.safeParse('hello').success).toBe(true)
expect(result.safeParse(null).success).toBe(true)
expect(result.safeParse(123).success).toBe(false)
})
it('should convert single type array', () => {
const schema: JsonSchemaLike = { type: ['string'] }
const result = jsonSchemaToZod(schema)
expect(result.safeParse('hello').success).toBe(true)
expect(result.safeParse(123).success).toBe(false)
})
it('should convert multiple union types', () => {
const schema: JsonSchemaLike = { type: ['string', 'number', 'boolean'] }
const result = jsonSchemaToZod(schema)
expect(result.safeParse('text').success).toBe(true)
expect(result.safeParse(42).success).toBe(true)
expect(result.safeParse(true).success).toBe(true)
expect(result.safeParse(null).success).toBe(false)
})
})
describe('Description Handling', () => {
it('should preserve description for string', () => {
const schema: JsonSchemaLike = {
type: 'string',
description: 'A user name'
}
const result = jsonSchemaToZod(schema)
expect(result.description).toBe('A user name')
})
it('should preserve description for enum', () => {
const schema: JsonSchemaLike = {
enum: ['red', 'green', 'blue'],
description: 'Available colors'
}
const result = jsonSchemaToZod(schema)
expect(result.description).toBe('Available colors')
})
it('should preserve description for object', () => {
const schema: JsonSchemaLike = {
type: 'object',
description: 'User object',
properties: {
name: { type: 'string' }
}
}
const result = jsonSchemaToZod(schema)
expect(result.description).toBe('User object')
})
})
describe('Edge Cases', () => {
it('should handle unknown type', () => {
const schema: JsonSchemaLike = { type: 'unknown-type' as any }
const result = jsonSchemaToZod(schema)
expect(result).toBeInstanceOf(z.ZodType)
expect(result.safeParse(anything).success).toBe(true)
})
it('should handle schema without type', () => {
const schema: JsonSchemaLike = {}
const result = jsonSchemaToZod(schema)
expect(result).toBeInstanceOf(z.ZodType)
expect(result.safeParse(anything).success).toBe(true)
})
it('should handle complex nested schema', () => {
const schema: JsonSchemaLike = {
type: 'object',
properties: {
items: {
type: 'array',
items: {
type: 'object',
properties: {
id: { type: 'integer' },
name: { type: 'string' },
tags: {
type: 'array',
items: { type: 'string' }
}
},
required: ['id']
}
}
}
}
const result = jsonSchemaToZod(schema)
const validData = {
items: [
{ id: 1, name: 'Item 1', tags: ['tag1', 'tag2'] },
{ id: 2, tags: [] }
]
}
expect(result.safeParse(validData).success).toBe(true)
const invalidData = {
items: [{ name: 'No ID' }]
}
expect(result.safeParse(invalidData).success).toBe(false)
})
})
describe('OpenRouter Model IDs', () => {
it('should handle model identifier format with colons', () => {
const schema: JsonSchemaLike = {
type: 'string',
enum: ['openrouter:anthropic/claude-3.5-sonnet:free', 'openrouter:gpt-4:paid']
}
const result = jsonSchemaToZod(schema)
expect(result.safeParse('openrouter:anthropic/claude-3.5-sonnet:free').success).toBe(true)
expect(result.safeParse('openrouter:gpt-4:paid').success).toBe(true)
expect(result.safeParse('other').success).toBe(false)
})
})
})
// Arbitrary sample value for the "unknown type" tests above; a permissive schema should accept all branches
const anything = Math.random() > 0.5 ? 'string' : Math.random() > 0.5 ? 123 : { a: true }

View File

@ -0,0 +1,795 @@
import type { MessageCreateParams } from '@anthropic-ai/sdk/resources/messages'
import { describe, expect, it } from 'vitest'
import { convertAnthropicToAiMessages, convertAnthropicToolsToAiSdk } from '../unified-messages'
describe('unified-messages', () => {
describe('convertAnthropicToolsToAiSdk', () => {
it('should return undefined for empty tools array', () => {
const result = convertAnthropicToolsToAiSdk([])
expect(result).toBeUndefined()
})
it('should return undefined for undefined tools', () => {
const result = convertAnthropicToolsToAiSdk(undefined)
expect(result).toBeUndefined()
})
it('should convert simple tool with string schema', () => {
const anthropicTools: MessageCreateParams['tools'] = [
{
type: 'custom',
name: 'get_weather',
description: 'Get current weather',
input_schema: {
type: 'object',
properties: {
location: { type: 'string' }
},
required: ['location']
}
}
]
const result = convertAnthropicToolsToAiSdk(anthropicTools)
expect(result).toBeDefined()
expect(result).toHaveProperty('get_weather')
expect(result!.get_weather).toHaveProperty('description', 'Get current weather')
})
it('should convert multiple tools', () => {
const anthropicTools: MessageCreateParams['tools'] = [
{
type: 'custom',
name: 'tool1',
description: 'First tool',
input_schema: {
type: 'object',
properties: {}
}
},
{
type: 'custom',
name: 'tool2',
description: 'Second tool',
input_schema: {
type: 'object',
properties: {}
}
}
]
const result = convertAnthropicToolsToAiSdk(anthropicTools)
expect(result).toBeDefined()
expect(Object.keys(result!)).toHaveLength(2)
expect(result).toHaveProperty('tool1')
expect(result).toHaveProperty('tool2')
})
it('should convert tool with complex schema', () => {
const anthropicTools: MessageCreateParams['tools'] = [
{
type: 'custom',
name: 'search',
description: 'Search for information',
input_schema: {
type: 'object',
properties: {
query: { type: 'string', minLength: 1 },
limit: { type: 'integer', minimum: 1, maximum: 100 },
filters: {
type: 'array',
items: { type: 'string' }
}
},
required: ['query']
}
}
]
const result = convertAnthropicToolsToAiSdk(anthropicTools)
expect(result).toBeDefined()
expect(result).toHaveProperty('search')
})
it('should skip bash_20250124 tool type', () => {
const anthropicTools: MessageCreateParams['tools'] = [
{
type: 'bash_20250124',
name: 'bash'
},
{
type: 'custom',
name: 'regular_tool',
description: 'A regular tool',
input_schema: {
type: 'object',
properties: {}
}
}
]
const result = convertAnthropicToolsToAiSdk(anthropicTools)
expect(result).toBeDefined()
expect(Object.keys(result!)).toHaveLength(1)
expect(result).toHaveProperty('regular_tool')
expect(result).not.toHaveProperty('bash')
})
it('should handle tool with no description', () => {
const anthropicTools: MessageCreateParams['tools'] = [
{
type: 'custom',
name: 'no_desc_tool',
input_schema: {
type: 'object',
properties: {}
}
}
]
const result = convertAnthropicToolsToAiSdk(anthropicTools)
expect(result).toBeDefined()
expect(result).toHaveProperty('no_desc_tool')
expect(result!.no_desc_tool).toHaveProperty('description', '')
})
})
describe('convertAnthropicToAiMessages', () => {
describe('System Messages', () => {
it('should convert string system message', () => {
const params: MessageCreateParams = {
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
system: 'You are a helpful assistant.',
messages: [
{
role: 'user',
content: 'Hello'
}
]
}
const result = convertAnthropicToAiMessages(params)
expect(result).toHaveLength(2)
expect(result[0]).toEqual({
role: 'system',
content: 'You are a helpful assistant.'
})
})
it('should convert array system message', () => {
const params: MessageCreateParams = {
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
system: [
{ type: 'text', text: 'Instruction 1' },
{ type: 'text', text: 'Instruction 2' }
],
messages: [
{
role: 'user',
content: 'Hello'
}
]
}
const result = convertAnthropicToAiMessages(params)
expect(result[0]).toEqual({
role: 'system',
content: 'Instruction 1\nInstruction 2'
})
})
it('should handle no system message', () => {
const params: MessageCreateParams = {
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
messages: [
{
role: 'user',
content: 'Hello'
}
]
}
const result = convertAnthropicToAiMessages(params)
expect(result[0].role).toBe('user')
})
})
describe('Text Messages', () => {
it('should convert simple string message', () => {
const params: MessageCreateParams = {
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
messages: [
{
role: 'user',
content: 'Hello, world!'
}
]
}
const result = convertAnthropicToAiMessages(params)
expect(result).toHaveLength(1)
expect(result[0]).toEqual({
role: 'user',
content: 'Hello, world!'
})
})
it('should convert text block array', () => {
const params: MessageCreateParams = {
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
messages: [
{
role: 'user',
content: [
{ type: 'text', text: 'First part' },
{ type: 'text', text: 'Second part' }
]
}
]
}
const result = convertAnthropicToAiMessages(params)
expect(result).toHaveLength(1)
expect(result[0].role).toBe('user')
expect(Array.isArray(result[0].content)).toBe(true)
if (Array.isArray(result[0].content)) {
expect(result[0].content).toHaveLength(2)
expect(result[0].content[0]).toEqual({ type: 'text', text: 'First part' })
expect(result[0].content[1]).toEqual({ type: 'text', text: 'Second part' })
}
})
it('should convert assistant message', () => {
const params: MessageCreateParams = {
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
messages: [
{
role: 'user',
content: 'Hello'
},
{
role: 'assistant',
content: 'Hi there!'
}
]
}
const result = convertAnthropicToAiMessages(params)
expect(result).toHaveLength(2)
expect(result[1]).toEqual({
role: 'assistant',
content: 'Hi there!'
})
})
})
describe('Image Messages', () => {
it('should convert base64 image', () => {
const params: MessageCreateParams = {
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
messages: [
{
role: 'user',
content: [
{
type: 'image',
source: {
type: 'base64',
media_type: 'image/png',
data: 'iVBORw0KGgo='
}
}
]
}
]
}
const result = convertAnthropicToAiMessages(params)
expect(result).toHaveLength(1)
expect(Array.isArray(result[0].content)).toBe(true)
if (Array.isArray(result[0].content)) {
expect(result[0].content).toHaveLength(1)
const imagePart = result[0].content[0]
if (imagePart.type === 'image') {
expect(imagePart.image).toBe('data:image/png;base64,iVBORw0KGgo=')
}
}
})
it('should convert URL image', () => {
const params: MessageCreateParams = {
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
messages: [
{
role: 'user',
content: [
{
type: 'image',
source: {
type: 'url',
url: 'https://example.com/image.png'
}
}
]
}
]
}
const result = convertAnthropicToAiMessages(params)
if (Array.isArray(result[0].content)) {
const imagePart = result[0].content[0]
if (imagePart.type === 'image') {
expect(imagePart.image).toBe('https://example.com/image.png')
}
}
})
it('should convert mixed text and image content', () => {
const params: MessageCreateParams = {
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
messages: [
{
role: 'user',
content: [
{ type: 'text', text: 'Look at this:' },
{
type: 'image',
source: {
type: 'url',
url: 'https://example.com/pic.jpg'
}
}
]
}
]
}
const result = convertAnthropicToAiMessages(params)
if (Array.isArray(result[0].content)) {
expect(result[0].content).toHaveLength(2)
expect(result[0].content[0].type).toBe('text')
expect(result[0].content[1].type).toBe('image')
}
})
})
describe('Tool Messages', () => {
it('should convert tool_use block', () => {
const params: MessageCreateParams = {
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
messages: [
{
role: 'user',
content: 'What is the weather?'
},
{
role: 'assistant',
content: [
{
type: 'tool_use',
id: 'call_123',
name: 'get_weather',
input: { location: 'San Francisco' }
}
]
}
]
}
const result = convertAnthropicToAiMessages(params)
expect(result).toHaveLength(2)
const assistantMsg = result[1]
expect(assistantMsg.role).toBe('assistant')
if (Array.isArray(assistantMsg.content)) {
expect(assistantMsg.content).toHaveLength(1)
const toolCall = assistantMsg.content[0]
if (toolCall.type === 'tool-call') {
expect(toolCall.toolName).toBe('get_weather')
expect(toolCall.toolCallId).toBe('call_123')
expect(toolCall.input).toEqual({ location: 'San Francisco' })
}
}
})
it('should convert tool_result with string content', () => {
const params: MessageCreateParams = {
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
messages: [
{
role: 'assistant',
content: [
{
type: 'tool_use',
id: 'call_123',
name: 'get_weather',
input: {}
}
]
},
{
role: 'user',
content: [
{
type: 'tool_result',
tool_use_id: 'call_123',
content: 'Temperature is 72°F'
}
]
}
]
}
const result = convertAnthropicToAiMessages(params)
const toolMsg = result[1]
expect(toolMsg.role).toBe('tool')
if (Array.isArray(toolMsg.content)) {
expect(toolMsg.content).toHaveLength(1)
const toolResult = toolMsg.content[0]
if (toolResult.type === 'tool-result') {
expect(toolResult.toolCallId).toBe('call_123')
expect(toolResult.toolName).toBe('get_weather')
if (toolResult.output.type === 'text') {
expect(toolResult.output.value).toBe('Temperature is 72°F')
}
}
}
})
it('should convert tool_result with array content', () => {
const params: MessageCreateParams = {
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
messages: [
{
role: 'assistant',
content: [
{
type: 'tool_use',
id: 'call_456',
name: 'analyze',
input: {}
}
]
},
{
role: 'user',
content: [
{
type: 'tool_result',
tool_use_id: 'call_456',
content: [
{ type: 'text', text: 'Result part 1' },
{ type: 'text', text: 'Result part 2' }
]
}
]
}
]
}
const result = convertAnthropicToAiMessages(params)
const toolMsg = result[1]
if (Array.isArray(toolMsg.content)) {
const toolResult = toolMsg.content[0]
if (toolResult.type === 'tool-result' && toolResult.output.type === 'content') {
expect(toolResult.output.value).toHaveLength(2)
expect(toolResult.output.value[0]).toEqual({ type: 'text', text: 'Result part 1' })
expect(toolResult.output.value[1]).toEqual({ type: 'text', text: 'Result part 2' })
}
}
})
it('should convert tool_result with image content', () => {
const params: MessageCreateParams = {
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
messages: [
{
role: 'assistant',
content: [
{
type: 'tool_use',
id: 'call_789',
name: 'screenshot',
input: {}
}
]
},
{
role: 'user',
content: [
{
type: 'tool_result',
tool_use_id: 'call_789',
content: [
{
type: 'image',
source: {
type: 'base64',
media_type: 'image/png',
data: 'abc123'
}
}
]
}
]
}
]
}
const result = convertAnthropicToAiMessages(params)
const toolMsg = result[1]
if (Array.isArray(toolMsg.content)) {
const toolResult = toolMsg.content[0]
if (toolResult.type === 'tool-result' && toolResult.output.type === 'content') {
expect(toolResult.output.value).toHaveLength(1)
const media = toolResult.output.value[0]
if (media.type === 'media') {
expect(media.data).toBe('abc123')
expect(media.mediaType).toBe('image/png')
}
}
}
})
it('should handle multiple tool calls', () => {
const params: MessageCreateParams = {
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
messages: [
{
role: 'assistant',
content: [
{
type: 'tool_use',
id: 'call_1',
name: 'tool1',
input: {}
},
{
type: 'tool_use',
id: 'call_2',
name: 'tool2',
input: {}
}
]
}
]
}
const result = convertAnthropicToAiMessages(params)
if (Array.isArray(result[0].content)) {
expect(result[0].content).toHaveLength(2)
expect(result[0].content[0].type).toBe('tool-call')
expect(result[0].content[1].type).toBe('tool-call')
}
})
})
describe('Thinking Content', () => {
it('should convert thinking block to reasoning', () => {
const params: MessageCreateParams = {
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
messages: [
{
role: 'assistant',
content: [
{
type: 'thinking',
thinking: 'Let me analyze this...',
signature: 'sig123'
},
{
type: 'text',
text: 'Here is my answer'
}
]
}
]
}
const result = convertAnthropicToAiMessages(params)
if (Array.isArray(result[0].content)) {
expect(result[0].content).toHaveLength(2)
const reasoning = result[0].content[0]
if (reasoning.type === 'reasoning') {
expect(reasoning.text).toBe('Let me analyze this...')
}
const text = result[0].content[1]
if (text.type === 'text') {
expect(text.text).toBe('Here is my answer')
}
}
})
it('should convert redacted_thinking to reasoning', () => {
const params: MessageCreateParams = {
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
messages: [
{
role: 'assistant',
content: [
{
type: 'redacted_thinking',
data: '[Redacted]'
}
]
}
]
}
const result = convertAnthropicToAiMessages(params)
if (Array.isArray(result[0].content)) {
expect(result[0].content).toHaveLength(1)
const reasoning = result[0].content[0]
if (reasoning.type === 'reasoning') {
expect(reasoning.text).toBe('[Redacted]')
}
}
})
})
describe('Multi-turn Conversations', () => {
it('should handle complete conversation flow', () => {
const params: MessageCreateParams = {
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
system: 'You are a helpful assistant.',
messages: [
{
role: 'user',
content: 'What is the weather in SF?'
},
{
role: 'assistant',
content: [
{
type: 'tool_use',
id: 'weather_call',
name: 'get_weather',
input: { location: 'SF' }
}
]
},
{
role: 'user',
content: [
{
type: 'tool_result',
tool_use_id: 'weather_call',
content: '72°F and sunny'
}
]
},
{
role: 'assistant',
content: 'The weather in San Francisco is 72°F and sunny.'
}
]
}
const result = convertAnthropicToAiMessages(params)
expect(result).toHaveLength(5)
expect(result[0].role).toBe('system')
expect(result[1].role).toBe('user')
expect(result[2].role).toBe('assistant')
expect(result[3].role).toBe('tool')
expect(result[4].role).toBe('assistant')
})
})
describe('Edge Cases', () => {
it('should handle empty content array for user', () => {
const params: MessageCreateParams = {
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
messages: [
{
role: 'user',
content: []
}
]
}
const result = convertAnthropicToAiMessages(params)
expect(result).toHaveLength(0)
})
it('should handle empty content array for assistant', () => {
const params: MessageCreateParams = {
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
messages: [
{
role: 'assistant',
content: []
}
]
}
const result = convertAnthropicToAiMessages(params)
expect(result).toHaveLength(0)
})
it('should handle tool_result without matching tool_use', () => {
const params: MessageCreateParams = {
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
messages: [
{
role: 'user',
content: [
{
type: 'tool_result',
tool_use_id: 'unknown_call',
content: 'Some result'
}
]
}
]
}
const result = convertAnthropicToAiMessages(params)
expect(result).toHaveLength(1)
if (Array.isArray(result[0].content)) {
const toolResult = result[0].content[0]
if (toolResult.type === 'tool-result') {
expect(toolResult.toolName).toBe('unknown')
}
}
})
it('should handle tool_result with empty content', () => {
const params: MessageCreateParams = {
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
messages: [
{
role: 'assistant',
content: [
{
type: 'tool_use',
id: 'call_empty',
name: 'empty_tool',
input: {}
}
]
},
{
role: 'user',
content: [
{
type: 'tool_result',
tool_use_id: 'call_empty'
}
]
}
]
}
const result = convertAnthropicToAiMessages(params)
const toolMsg = result[1]
if (Array.isArray(toolMsg.content)) {
const toolResult = toolMsg.content[0]
if (toolResult.type === 'tool-result' && toolResult.output.type === 'text') {
expect(toolResult.output.value).toBe('')
}
}
})
})
})
})

View File

@ -0,0 +1,45 @@
/**
* Reasoning Cache Service
*
* Manages reasoning-related caching for AI providers that support thinking/reasoning modes.
* This includes Google Gemini's thought signatures and OpenRouter's reasoning details.
*/
import type { ReasoningDetailUnion } from '@main/apiServer/adapters/openrouter'
import { CacheService } from '@main/services/CacheService'
/**
* Interface for reasoning cache
*/
export interface IReasoningCache<T> {
set(key: string, value: T): void
get(key: string): T | undefined
}
/**
* Cache duration: 30 minutes
* Reasoning data is typically only needed within a short conversation context
*/
const REASONING_CACHE_DURATION = 30 * 60 * 1000
/**
* Google Gemini reasoning cache
*
* Stores thought signatures for Gemini 3 models to handle multi-turn conversations
* where the model needs to maintain thinking context across tool calls.
*/
export const googleReasoningCache: IReasoningCache<string> = {
set: (key, value) => CacheService.set(`google-reasoning:${key}`, value, REASONING_CACHE_DURATION),
get: (key) => CacheService.get(`google-reasoning:${key}`) || undefined
}
/**
* OpenRouter reasoning cache
*
* Stores reasoning details from OpenRouter responses to preserve thinking tokens
* and reasoning metadata across the conversation flow.
*/
export const openRouterReasoningCache: IReasoningCache<ReasoningDetailUnion[]> = {
set: (key, value) => CacheService.set(`openrouter-reasoning:${key}`, value, REASONING_CACHE_DURATION),
get: (key) => CacheService.get(`openrouter-reasoning:${key}`) || undefined
}
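A minimal usage sketch for this module; the key format below is an illustrative assumption — only set/get and the 30-minute TTL are defined above.

```ts
import { googleReasoningCache, openRouterReasoningCache } from './reasoning-cache'

// After a Gemini 3 turn returns a thought signature (hypothetical key format):
googleReasoningCache.set('conversation-42:tool_call_1', 'thought-signature-token')

// On the follow-up turn, re-attach it if the entry has not expired (30 min TTL):
const signature = googleReasoningCache.get('conversation-42:tool_call_1')
if (signature) {
  // forward the signature with the next request so the model keeps its thinking context
}

// OpenRouter follows the same pattern with structured reasoning details:
const details = openRouterReasoningCache.get('conversation-42') // ReasoningDetailUnion[] | undefined
```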

View File

@ -1,7 +1,7 @@
import type { AnthropicProviderOptions } from '@ai-sdk/anthropic'
import type { GoogleGenerativeAIProviderOptions } from '@ai-sdk/google'
import type { OpenAIResponsesProviderOptions } from '@ai-sdk/openai'
import type { LanguageModelV2Middleware, LanguageModelV2ToolResultOutput } from '@ai-sdk/provider'
import type { JSONSchema7, LanguageModelV2Middleware, LanguageModelV2ToolResultOutput } from '@ai-sdk/provider'
import type { ProviderOptions, ReasoningPart, ToolCallPart, ToolResultPart } from '@ai-sdk/provider-utils'
import type {
ImageBlockParam,
@ -18,7 +18,7 @@ import anthropicService from '@main/services/AnthropicService'
import copilotService from '@main/services/CopilotService'
import { reduxService } from '@main/services/ReduxService'
import type { OpenRouterProviderOptions } from '@openrouter/ai-sdk-provider'
import { isGemini3ModelId } from '@shared/middleware'
import { isGemini3ModelId } from '@shared/ai-sdk-middlewares'
import {
type AiSdkConfig,
type AiSdkConfigContext,
@ -42,7 +42,7 @@ import { net } from 'electron'
import type { Response } from 'express'
import * as z from 'zod'
import { googleReasoningCache, openRouterReasoningCache } from '../../services/CacheService'
import { googleReasoningCache, openRouterReasoningCache } from './reasoning-cache'
const logger = loggerService.withContext('UnifiedMessagesService')
@ -143,18 +143,20 @@ function convertAnthropicToolResultToAiSdk(
return { type: 'content', value: values }
}
// Type alias for JSON Schema (compatible with recursive calls)
type JsonSchemaLike = AnthropicTool.InputSchema | Record<string, unknown>
/**
* JSON Schema type for tool input schemas
*/
export type JsonSchemaLike = JSONSchema7
/**
* Convert JSON Schema to Zod schema
* Converting through Zod strips non-standard fields like input_examples that the stable Anthropic API doesn't support
* TODO: the Anthropic beta API supports input_examples
*/
function jsonSchemaToZod(schema: JsonSchemaLike): z.ZodTypeAny {
const s = schema as Record<string, unknown>
const schemaType = s.type as string | string[] | undefined
const enumValues = s.enum as unknown[] | undefined
const description = s.description as string | undefined
export function jsonSchemaToZod(schema: JsonSchemaLike): z.ZodTypeAny {
const schemaType = schema.type
const enumValues = schema.enum
const description = schema.description
// Handle enum first
if (enumValues && Array.isArray(enumValues) && enumValues.length > 0) {
@ -173,7 +175,13 @@ function jsonSchemaToZod(schema: JsonSchemaLike): z.ZodTypeAny {
// Handle union types (type: ["string", "null"])
if (Array.isArray(schemaType)) {
const schemas = schemaType.map((t) => jsonSchemaToZod({ ...s, type: t, enum: undefined }))
const schemas = schemaType.map((t) =>
jsonSchemaToZod({
...schema,
type: t,
enum: undefined
})
)
if (schemas.length === 1) {
return schemas[0]
}
@ -184,17 +192,17 @@ function jsonSchemaToZod(schema: JsonSchemaLike): z.ZodTypeAny {
switch (schemaType) {
case 'string': {
let zodString = z.string()
if (typeof s.minLength === 'number') zodString = zodString.min(s.minLength)
if (typeof s.maxLength === 'number') zodString = zodString.max(s.maxLength)
if (typeof s.pattern === 'string') zodString = zodString.regex(new RegExp(s.pattern))
if (typeof schema.minLength === 'number') zodString = zodString.min(schema.minLength)
if (typeof schema.maxLength === 'number') zodString = zodString.max(schema.maxLength)
if (typeof schema.pattern === 'string') zodString = zodString.regex(new RegExp(schema.pattern))
return description ? zodString.describe(description) : zodString
}
case 'number':
case 'integer': {
let zodNumber = schemaType === 'integer' ? z.number().int() : z.number()
if (typeof s.minimum === 'number') zodNumber = zodNumber.min(s.minimum)
if (typeof s.maximum === 'number') zodNumber = zodNumber.max(s.maximum)
if (typeof schema.minimum === 'number') zodNumber = zodNumber.min(schema.minimum)
if (typeof schema.maximum === 'number') zodNumber = zodNumber.max(schema.maximum)
return description ? zodNumber.describe(description) : zodNumber
}
@ -207,24 +215,33 @@ function jsonSchemaToZod(schema: JsonSchemaLike): z.ZodTypeAny {
return z.null()
case 'array': {
const items = s.items as Record<string, unknown> | undefined
let zodArray = items ? z.array(jsonSchemaToZod(items)) : z.array(z.unknown())
if (typeof s.minItems === 'number') zodArray = zodArray.min(s.minItems)
if (typeof s.maxItems === 'number') zodArray = zodArray.max(s.maxItems)
const items = schema.items
let zodArray: z.ZodArray<z.ZodTypeAny>
if (items && typeof items === 'object' && !Array.isArray(items)) {
zodArray = z.array(jsonSchemaToZod(items as JsonSchemaLike))
} else {
zodArray = z.array(z.unknown())
}
if (typeof schema.minItems === 'number') zodArray = zodArray.min(schema.minItems)
if (typeof schema.maxItems === 'number') zodArray = zodArray.max(schema.maxItems)
return description ? zodArray.describe(description) : zodArray
}
case 'object': {
const properties = s.properties as Record<string, Record<string, unknown>> | undefined
const required = (s.required as string[]) || []
const properties = schema.properties
const required = schema.required || []
// Always use z.object() to ensure "properties" field is present in output schema
// OpenAI requires explicit properties field even for empty objects
const shape: Record<string, z.ZodTypeAny> = {}
if (properties) {
if (properties && typeof properties === 'object') {
for (const [key, propSchema] of Object.entries(properties)) {
const zodProp = jsonSchemaToZod(propSchema)
shape[key] = required.includes(key) ? zodProp : zodProp.optional()
if (typeof propSchema === 'boolean') {
shape[key] = propSchema ? z.unknown() : z.never()
} else {
const zodProp = jsonSchemaToZod(propSchema as JsonSchemaLike)
shape[key] = required.includes(key) ? zodProp : zodProp.optional()
}
}
}
@ -238,7 +255,9 @@ function jsonSchemaToZod(schema: JsonSchemaLike): z.ZodTypeAny {
}
}
function convertAnthropicToolsToAiSdk(tools: MessageCreateParams['tools']): Record<string, AiSdkTool> | undefined {
export function convertAnthropicToolsToAiSdk(
tools: MessageCreateParams['tools']
): Record<string, AiSdkTool> | undefined {
if (!tools || tools.length === 0) return undefined
const aiSdkTools: Record<string, AiSdkTool> = {}
@ -246,7 +265,8 @@ function convertAnthropicToolsToAiSdk(tools: MessageCreateParams['tools']): Reco
if (anthropicTool.type === 'bash_20250124') continue
const toolDef = anthropicTool as AnthropicTool
const rawSchema = toolDef.input_schema
const schema = jsonSchemaToZod(rawSchema)
// Anthropic's InputSchema is structurally JSONSchema7-compatible, so cast it for jsonSchemaToZod
const schema = jsonSchemaToZod(rawSchema as JsonSchemaLike)
// Use tool() with inputSchema (AI SDK v5 API)
const aiTool = tool({
@ -259,7 +279,7 @@ function convertAnthropicToolsToAiSdk(tools: MessageCreateParams['tools']): Reco
return Object.keys(aiSdkTools).length > 0 ? aiSdkTools : undefined
}
function convertAnthropicToAiMessages(params: MessageCreateParams): ModelMessage[] {
export function convertAnthropicToAiMessages(params: MessageCreateParams): ModelMessage[] {
const messages: ModelMessage[] = []
// System message

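Putting the newly exported pieces together, a short sketch that mirrors the unit tests added above (no APIs beyond those shown in this diff):

```ts
import type { MessageCreateParams } from '@anthropic-ai/sdk/resources/messages'
import { convertAnthropicToAiMessages, convertAnthropicToolsToAiSdk, jsonSchemaToZod } from './unified-messages'

const params: MessageCreateParams = {
  model: 'claude-3-5-sonnet-20241022',
  max_tokens: 1024,
  system: 'You are a helpful assistant.',
  messages: [{ role: 'user', content: 'What is the weather in SF?' }],
  tools: [
    {
      type: 'custom',
      name: 'get_weather',
      description: 'Get current weather',
      input_schema: {
        type: 'object',
        properties: { location: { type: 'string' } },
        required: ['location']
      }
    }
  ]
}

// [{ role: 'system', ... }, { role: 'user', ... }]
const modelMessages = convertAnthropicToAiMessages(params)

// { get_weather: AiSdkTool } — the input_schema is run through jsonSchemaToZod internally
const aiSdkTools = convertAnthropicToolsToAiSdk(params.tools)

// jsonSchemaToZod can also be used directly:
const LocationSchema = jsonSchemaToZod({ type: 'string', minLength: 1 })
LocationSchema.safeParse('SF') // { success: true, data: 'SF' }
```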
View File

@ -6,7 +6,14 @@ import { loggerService } from '@logger'
import { isLinux, isMac, isPortable, isWin } from '@main/constant'
import { generateSignature } from '@main/integration/cherryai'
import anthropicService from '@main/services/AnthropicService'
import { findGitBash, getBinaryPath, isBinaryExists, runInstallScript, validateGitBashPath } from '@main/utils/process'
import {
autoDiscoverGitBash,
getBinaryPath,
getGitBashPathInfo,
isBinaryExists,
runInstallScript,
validateGitBashPath
} from '@main/utils/process'
import { handleZoomFactor } from '@main/utils/zoom'
import type { SpanEntity, TokenUsage } from '@mcp-trace/trace-core'
import type { UpgradeChannel } from '@shared/config/constant'
@ -499,9 +506,8 @@ export function registerIpc(mainWindow: BrowserWindow, app: Electron.App) {
}
try {
const customPath = configManager.get(ConfigKeys.GitBashPath) as string | undefined
const bashPath = findGitBash(customPath)
// Use autoDiscoverGitBash to handle auto-discovery and persistence
const bashPath = autoDiscoverGitBash()
if (bashPath) {
logger.info('Git Bash is available', { path: bashPath })
return true
@ -524,13 +530,22 @@ export function registerIpc(mainWindow: BrowserWindow, app: Electron.App) {
return customPath ?? null
})
// Returns { path, source } where source is 'manual' | 'auto' | null
ipcMain.handle(IpcChannel.System_GetGitBashPathInfo, () => {
return getGitBashPathInfo()
})
ipcMain.handle(IpcChannel.System_SetGitBashPath, (_, newPath: string | null) => {
if (!isWin) {
return false
}
if (!newPath) {
// Clear manual setting and re-run auto-discovery
configManager.set(ConfigKeys.GitBashPath, null)
configManager.set(ConfigKeys.GitBashPathSource, null)
// Re-run auto-discovery to restore auto-discovered path if available
autoDiscoverGitBash()
return true
}
@ -539,7 +554,9 @@ export function registerIpc(mainWindow: BrowserWindow, app: Electron.App) {
return false
}
// Set path with 'manual' source
configManager.set(ConfigKeys.GitBashPath, validated)
configManager.set(ConfigKeys.GitBashPathSource, 'manual')
return true
})
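A hedged renderer-side sketch of the handlers above; `ipcRenderer` stands in for whatever bridge the preload script exposes (an assumption), while the channels and the { path, source } shape come from this diff.

```ts
// Imports omitted; IpcChannel is the enum referenced by the handlers above.
const info = await ipcRenderer.invoke(IpcChannel.System_GetGitBashPathInfo)
// info: { path: string | null; source: 'manual' | 'auto' | null }

// Setting an explicit path stores it with source 'manual' (after validation):
await ipcRenderer.invoke(IpcChannel.System_SetGitBashPath, 'C:\\Program Files\\Git\\bin\\bash.exe')

// Passing null clears the manual override and re-runs auto-discovery:
await ipcRenderer.invoke(IpcChannel.System_SetGitBashPath, null)
```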

View File

@ -36,7 +36,7 @@ export function createInMemoryMCPServer(
return new FetchServer().server
}
case BuiltinMCPServerNames.filesystem: {
return new FileSystemServer(args).server
return new FileSystemServer(envs.WORKSPACE_ROOT).server
}
case BuiltinMCPServerNames.difyKnowledge: {
const difyKey = envs.DIFY_KEY

View File

@ -1,652 +0,0 @@
// port https://github.com/modelcontextprotocol/servers/blob/main/src/filesystem/index.ts
import { loggerService } from '@logger'
import { Server } from '@modelcontextprotocol/sdk/server/index.js'
import { CallToolRequestSchema, ListToolsRequestSchema } from '@modelcontextprotocol/sdk/types.js'
import { createTwoFilesPatch } from 'diff'
import fs from 'fs/promises'
import { minimatch } from 'minimatch'
import os from 'os'
import path from 'path'
import * as z from 'zod'
const logger = loggerService.withContext('MCP:FileSystemServer')
// Normalize all paths consistently
function normalizePath(p: string): string {
return path.normalize(p)
}
function expandHome(filepath: string): string {
if (filepath.startsWith('~/') || filepath === '~') {
return path.join(os.homedir(), filepath.slice(1))
}
return filepath
}
// Security utilities
async function validatePath(allowedDirectories: string[], requestedPath: string): Promise<string> {
const expandedPath = expandHome(requestedPath)
const absolute = path.isAbsolute(expandedPath)
? path.resolve(expandedPath)
: path.resolve(process.cwd(), expandedPath)
const normalizedRequested = normalizePath(absolute)
// Check if path is within allowed directories
const isAllowed = allowedDirectories.some((dir) => normalizedRequested.startsWith(dir))
if (!isAllowed) {
throw new Error(
`Access denied - path outside allowed directories: ${absolute} not in ${allowedDirectories.join(', ')}`
)
}
// Handle symlinks by checking their real path
try {
const realPath = await fs.realpath(absolute)
const normalizedReal = normalizePath(realPath)
const isRealPathAllowed = allowedDirectories.some((dir) => normalizedReal.startsWith(dir))
if (!isRealPathAllowed) {
throw new Error('Access denied - symlink target outside allowed directories')
}
return realPath
} catch (error) {
// For new files that don't exist yet, verify parent directory
const parentDir = path.dirname(absolute)
try {
const realParentPath = await fs.realpath(parentDir)
const normalizedParent = normalizePath(realParentPath)
const isParentAllowed = allowedDirectories.some((dir) => normalizedParent.startsWith(dir))
if (!isParentAllowed) {
throw new Error('Access denied - parent directory outside allowed directories')
}
return absolute
} catch {
throw new Error(`Parent directory does not exist: ${parentDir}`)
}
}
}
// Schema definitions
const ReadFileArgsSchema = z.object({
path: z.string()
})
const ReadMultipleFilesArgsSchema = z.object({
paths: z.array(z.string())
})
const WriteFileArgsSchema = z.object({
path: z.string(),
content: z.string()
})
const EditOperation = z.object({
oldText: z.string().describe('Text to search for - must match exactly'),
newText: z.string().describe('Text to replace with')
})
const EditFileArgsSchema = z.object({
path: z.string(),
edits: z.array(EditOperation),
dryRun: z.boolean().default(false).describe('Preview changes using git-style diff format')
})
const CreateDirectoryArgsSchema = z.object({
path: z.string()
})
const ListDirectoryArgsSchema = z.object({
path: z.string()
})
const DirectoryTreeArgsSchema = z.object({
path: z.string()
})
const MoveFileArgsSchema = z.object({
source: z.string(),
destination: z.string()
})
const SearchFilesArgsSchema = z.object({
path: z.string(),
pattern: z.string(),
excludePatterns: z.array(z.string()).optional().default([])
})
const GetFileInfoArgsSchema = z.object({
path: z.string()
})
interface FileInfo {
size: number
created: Date
modified: Date
accessed: Date
isDirectory: boolean
isFile: boolean
permissions: string
}
// Tool implementations
async function getFileStats(filePath: string): Promise<FileInfo> {
const stats = await fs.stat(filePath)
return {
size: stats.size,
created: stats.birthtime,
modified: stats.mtime,
accessed: stats.atime,
isDirectory: stats.isDirectory(),
isFile: stats.isFile(),
permissions: stats.mode.toString(8).slice(-3)
}
}
async function searchFiles(
allowedDirectories: string[],
rootPath: string,
pattern: string,
excludePatterns: string[] = []
): Promise<string[]> {
const results: string[] = []
async function search(currentPath: string) {
const entries = await fs.readdir(currentPath, { withFileTypes: true })
for (const entry of entries) {
const fullPath = path.join(currentPath, entry.name)
try {
// Validate each path before processing
await validatePath(allowedDirectories, fullPath)
// Check if path matches any exclude pattern
const relativePath = path.relative(rootPath, fullPath)
const shouldExclude = excludePatterns.some((pattern) => {
const globPattern = pattern.includes('*') ? pattern : `**/${pattern}/**`
return minimatch(relativePath, globPattern, { dot: true })
})
if (shouldExclude) {
continue
}
if (entry.name.toLowerCase().includes(pattern.toLowerCase())) {
results.push(fullPath)
}
if (entry.isDirectory()) {
await search(fullPath)
}
} catch (error) {
// Skip invalid paths during search
}
}
}
await search(rootPath)
return results
}
// file editing and diffing utilities
function normalizeLineEndings(text: string): string {
return text.replace(/\r\n/g, '\n')
}
function createUnifiedDiff(originalContent: string, newContent: string, filepath: string = 'file'): string {
// Ensure consistent line endings for diff
const normalizedOriginal = normalizeLineEndings(originalContent)
const normalizedNew = normalizeLineEndings(newContent)
return createTwoFilesPatch(filepath, filepath, normalizedOriginal, normalizedNew, 'original', 'modified')
}
async function applyFileEdits(
filePath: string,
edits: Array<{ oldText: string; newText: string }>,
dryRun = false
): Promise<string> {
// Read file content and normalize line endings
const content = normalizeLineEndings(await fs.readFile(filePath, 'utf-8'))
// Apply edits sequentially
let modifiedContent = content
for (const edit of edits) {
const normalizedOld = normalizeLineEndings(edit.oldText)
const normalizedNew = normalizeLineEndings(edit.newText)
// If exact match exists, use it
if (modifiedContent.includes(normalizedOld)) {
modifiedContent = modifiedContent.replace(normalizedOld, normalizedNew)
continue
}
// Otherwise, try line-by-line matching with flexibility for whitespace
const oldLines = normalizedOld.split('\n')
const contentLines = modifiedContent.split('\n')
let matchFound = false
for (let i = 0; i <= contentLines.length - oldLines.length; i++) {
const potentialMatch = contentLines.slice(i, i + oldLines.length)
// Compare lines with normalized whitespace
const isMatch = oldLines.every((oldLine, j) => {
const contentLine = potentialMatch[j]
return oldLine.trim() === contentLine.trim()
})
if (isMatch) {
// Preserve original indentation of first line
const originalIndent = contentLines[i].match(/^\s*/)?.[0] || ''
const newLines = normalizedNew.split('\n').map((line, j) => {
if (j === 0) return originalIndent + line.trimStart()
// For subsequent lines, try to preserve relative indentation
const oldIndent = oldLines[j]?.match(/^\s*/)?.[0] || ''
const newIndent = line.match(/^\s*/)?.[0] || ''
if (oldIndent && newIndent) {
const relativeIndent = newIndent.length - oldIndent.length
return originalIndent + ' '.repeat(Math.max(0, relativeIndent)) + line.trimStart()
}
return line
})
contentLines.splice(i, oldLines.length, ...newLines)
modifiedContent = contentLines.join('\n')
matchFound = true
break
}
}
if (!matchFound) {
throw new Error(`Could not find exact match for edit:\n${edit.oldText}`)
}
}
// Create unified diff
const diff = createUnifiedDiff(content, modifiedContent, filePath)
// Format diff with appropriate number of backticks
let numBackticks = 3
while (diff.includes('`'.repeat(numBackticks))) {
numBackticks++
}
const formattedDiff = `${'`'.repeat(numBackticks)}diff\n${diff}${'`'.repeat(numBackticks)}\n\n`
if (!dryRun) {
await fs.writeFile(filePath, modifiedContent, 'utf-8')
}
return formattedDiff
}
class FileSystemServer {
public server: Server
private allowedDirectories: string[]
constructor(allowedDirs: string[]) {
if (!Array.isArray(allowedDirs) || allowedDirs.length === 0) {
throw new Error('No allowed directories provided, please specify at least one directory in args')
}
this.allowedDirectories = allowedDirs.map((dir) => normalizePath(path.resolve(expandHome(dir))))
// Validate that all directories exist and are accessible
this.validateDirs().catch((error) => {
logger.error('Error validating allowed directories:', error)
throw new Error(`Error validating allowed directories: ${error}`)
})
this.server = new Server(
{
name: 'secure-filesystem-server',
version: '0.2.0'
},
{
capabilities: {
tools: {}
}
}
)
this.initialize()
}
async validateDirs() {
// Validate that all directories exist and are accessible
await Promise.all(
this.allowedDirectories.map(async (dir) => {
try {
const stats = await fs.stat(expandHome(dir))
if (!stats.isDirectory()) {
logger.error(`Error: ${dir} is not a directory`)
throw new Error(`Error: ${dir} is not a directory`)
}
} catch (error: any) {
logger.error(`Error accessing directory ${dir}:`, error)
throw new Error(`Error accessing directory ${dir}:`, error)
}
})
)
}
initialize() {
// Tool handlers
this.server.setRequestHandler(ListToolsRequestSchema, async () => {
return {
tools: [
{
name: 'read_file',
description:
'Read the complete contents of a file from the file system. ' +
'Handles various text encodings and provides detailed error messages ' +
'if the file cannot be read. Use this tool when you need to examine ' +
'the contents of a single file. Only works within allowed directories.',
inputSchema: z.toJSONSchema(ReadFileArgsSchema)
},
{
name: 'read_multiple_files',
description:
'Read the contents of multiple files simultaneously. This is more ' +
'efficient than reading files one by one when you need to analyze ' +
"or compare multiple files. Each file's content is returned with its " +
"path as a reference. Failed reads for individual files won't stop " +
'the entire operation. Only works within allowed directories.',
inputSchema: z.toJSONSchema(ReadMultipleFilesArgsSchema)
},
{
name: 'write_file',
description:
'Create a new file or completely overwrite an existing file with new content. ' +
'Use with caution as it will overwrite existing files without warning. ' +
'Handles text content with proper encoding. Only works within allowed directories.',
inputSchema: z.toJSONSchema(WriteFileArgsSchema)
},
{
name: 'edit_file',
description:
'Make line-based edits to a text file. Each edit replaces exact line sequences ' +
'with new content. Returns a git-style diff showing the changes made. ' +
'Only works within allowed directories.',
inputSchema: z.toJSONSchema(EditFileArgsSchema)
},
{
name: 'create_directory',
description:
'Create a new directory or ensure a directory exists. Can create multiple ' +
'nested directories in one operation. If the directory already exists, ' +
'this operation will succeed silently. Perfect for setting up directory ' +
'structures for projects or ensuring required paths exist. Only works within allowed directories.',
inputSchema: z.toJSONSchema(CreateDirectoryArgsSchema)
},
{
name: 'list_directory',
description:
'Get a detailed listing of all files and directories in a specified path. ' +
'Results clearly distinguish between files and directories with [FILE] and [DIR] ' +
'prefixes. This tool is essential for understanding directory structure and ' +
'finding specific files within a directory. Only works within allowed directories.',
inputSchema: z.toJSONSchema(ListDirectoryArgsSchema)
},
{
name: 'directory_tree',
description:
'Get a recursive tree view of files and directories as a JSON structure. ' +
"Each entry includes 'name', 'type' (file/directory), and 'children' for directories. " +
'Files have no children array, while directories always have a children array (which may be empty). ' +
'The output is formatted with 2-space indentation for readability. Only works within allowed directories.',
inputSchema: z.toJSONSchema(DirectoryTreeArgsSchema)
},
{
name: 'move_file',
description:
'Move or rename files and directories. Can move files between directories ' +
'and rename them in a single operation. If the destination exists, the ' +
'operation will fail. Works across different directories and can be used ' +
'for simple renaming within the same directory. Both source and destination must be within allowed directories.',
inputSchema: z.toJSONSchema(MoveFileArgsSchema)
},
{
name: 'search_files',
description:
'Recursively search for files and directories matching a pattern. ' +
'Searches through all subdirectories from the starting path. The search ' +
'is case-insensitive and matches partial names. Returns full paths to all ' +
"matching items. Great for finding files when you don't know their exact location. " +
'Only searches within allowed directories.',
inputSchema: z.toJSONSchema(SearchFilesArgsSchema)
},
{
name: 'get_file_info',
description:
'Retrieve detailed metadata about a file or directory. Returns comprehensive ' +
'information including size, creation time, last modified time, permissions, ' +
'and type. This tool is perfect for understanding file characteristics ' +
'without reading the actual content. Only works within allowed directories.',
inputSchema: z.toJSONSchema(GetFileInfoArgsSchema)
},
{
name: 'list_allowed_directories',
description:
'Returns the list of directories that this server is allowed to access. ' +
'Use this to understand which directories are available before trying to access files.',
inputSchema: {
type: 'object',
properties: {},
required: []
}
}
]
}
})
this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
try {
const { name, arguments: args } = request.params
switch (name) {
case 'read_file': {
const parsed = ReadFileArgsSchema.safeParse(args)
if (!parsed.success) {
throw new Error(`Invalid arguments for read_file: ${parsed.error}`)
}
const validPath = await validatePath(this.allowedDirectories, parsed.data.path)
const content = await fs.readFile(validPath, 'utf-8')
return {
content: [{ type: 'text', text: content }]
}
}
case 'read_multiple_files': {
const parsed = ReadMultipleFilesArgsSchema.safeParse(args)
if (!parsed.success) {
throw new Error(`Invalid arguments for read_multiple_files: ${parsed.error}`)
}
const results = await Promise.all(
parsed.data.paths.map(async (filePath: string) => {
try {
const validPath = await validatePath(this.allowedDirectories, filePath)
const content = await fs.readFile(validPath, 'utf-8')
return `${filePath}:\n${content}\n`
} catch (error) {
const errorMessage = error instanceof Error ? error.message : String(error)
return `${filePath}: Error - ${errorMessage}`
}
})
)
return {
content: [{ type: 'text', text: results.join('\n---\n') }]
}
}
case 'write_file': {
const parsed = WriteFileArgsSchema.safeParse(args)
if (!parsed.success) {
throw new Error(`Invalid arguments for write_file: ${parsed.error}`)
}
const validPath = await validatePath(this.allowedDirectories, parsed.data.path)
await fs.writeFile(validPath, parsed.data.content, 'utf-8')
return {
content: [{ type: 'text', text: `Successfully wrote to ${parsed.data.path}` }]
}
}
case 'edit_file': {
const parsed = EditFileArgsSchema.safeParse(args)
if (!parsed.success) {
throw new Error(`Invalid arguments for edit_file: ${parsed.error}`)
}
const validPath = await validatePath(this.allowedDirectories, parsed.data.path)
const result = await applyFileEdits(validPath, parsed.data.edits, parsed.data.dryRun)
return {
content: [{ type: 'text', text: result }]
}
}
case 'create_directory': {
const parsed = CreateDirectoryArgsSchema.safeParse(args)
if (!parsed.success) {
throw new Error(`Invalid arguments for create_directory: ${parsed.error}`)
}
const validPath = await validatePath(this.allowedDirectories, parsed.data.path)
await fs.mkdir(validPath, { recursive: true })
return {
content: [{ type: 'text', text: `Successfully created directory ${parsed.data.path}` }]
}
}
case 'list_directory': {
const parsed = ListDirectoryArgsSchema.safeParse(args)
if (!parsed.success) {
throw new Error(`Invalid arguments for list_directory: ${parsed.error}`)
}
const validPath = await validatePath(this.allowedDirectories, parsed.data.path)
const entries = await fs.readdir(validPath, { withFileTypes: true })
const formatted = entries
.map((entry) => `${entry.isDirectory() ? '[DIR]' : '[FILE]'} ${entry.name}`)
.join('\n')
return {
content: [{ type: 'text', text: formatted }]
}
}
case 'directory_tree': {
const parsed = DirectoryTreeArgsSchema.safeParse(args)
if (!parsed.success) {
throw new Error(`Invalid arguments for directory_tree: ${parsed.error}`)
}
interface TreeEntry {
name: string
type: 'file' | 'directory'
children?: TreeEntry[]
}
async function buildTree(allowedDirectories: string[], currentPath: string): Promise<TreeEntry[]> {
const validPath = await validatePath(allowedDirectories, currentPath)
const entries = await fs.readdir(validPath, { withFileTypes: true })
const result: TreeEntry[] = []
for (const entry of entries) {
const entryData: TreeEntry = {
name: entry.name,
type: entry.isDirectory() ? 'directory' : 'file'
}
if (entry.isDirectory()) {
const subPath = path.join(currentPath, entry.name)
entryData.children = await buildTree(allowedDirectories, subPath)
}
result.push(entryData)
}
return result
}
const treeData = await buildTree(this.allowedDirectories, parsed.data.path)
return {
content: [
{
type: 'text',
text: JSON.stringify(treeData, null, 2)
}
]
}
}
case 'move_file': {
const parsed = MoveFileArgsSchema.safeParse(args)
if (!parsed.success) {
throw new Error(`Invalid arguments for move_file: ${parsed.error}`)
}
const validSourcePath = await validatePath(this.allowedDirectories, parsed.data.source)
const validDestPath = await validatePath(this.allowedDirectories, parsed.data.destination)
await fs.rename(validSourcePath, validDestPath)
return {
content: [
{ type: 'text', text: `Successfully moved ${parsed.data.source} to ${parsed.data.destination}` }
]
}
}
case 'search_files': {
const parsed = SearchFilesArgsSchema.safeParse(args)
if (!parsed.success) {
throw new Error(`Invalid arguments for search_files: ${parsed.error}`)
}
const validPath = await validatePath(this.allowedDirectories, parsed.data.path)
const results = await searchFiles(
this.allowedDirectories,
validPath,
parsed.data.pattern,
parsed.data.excludePatterns
)
return {
content: [{ type: 'text', text: results.length > 0 ? results.join('\n') : 'No matches found' }]
}
}
case 'get_file_info': {
const parsed = GetFileInfoArgsSchema.safeParse(args)
if (!parsed.success) {
throw new Error(`Invalid arguments for get_file_info: ${parsed.error}`)
}
const validPath = await validatePath(this.allowedDirectories, parsed.data.path)
const info = await getFileStats(validPath)
return {
content: [
{
type: 'text',
text: Object.entries(info)
.map(([key, value]) => `${key}: ${value}`)
.join('\n')
}
]
}
}
case 'list_allowed_directories': {
return {
content: [
{
type: 'text',
text: `Allowed directories:\n${this.allowedDirectories.join('\n')}`
}
]
}
}
default:
throw new Error(`Unknown tool: ${name}`)
}
} catch (error) {
const errorMessage = error instanceof Error ? error.message : String(error)
return {
content: [{ type: 'text', text: `Error: ${errorMessage}` }],
isError: true
}
}
})
}
}
export default FileSystemServer

View File

@ -0,0 +1,2 @@
// Re-export FileSystemServer to maintain existing import pattern
export { default, FileSystemServer } from './server'

View File

@ -0,0 +1,118 @@
import { Server } from '@modelcontextprotocol/sdk/server/index.js'
import { CallToolRequestSchema, ListToolsRequestSchema } from '@modelcontextprotocol/sdk/types.js'
import { app } from 'electron'
import fs from 'fs/promises'
import path from 'path'
import {
deleteToolDefinition,
editToolDefinition,
globToolDefinition,
grepToolDefinition,
handleDeleteTool,
handleEditTool,
handleGlobTool,
handleGrepTool,
handleLsTool,
handleReadTool,
handleWriteTool,
lsToolDefinition,
readToolDefinition,
writeToolDefinition
} from './tools'
import { logger } from './types'
export class FileSystemServer {
public server: Server
private baseDir: string
constructor(baseDir?: string) {
if (baseDir && path.isAbsolute(baseDir)) {
this.baseDir = baseDir
logger.info(`Using provided baseDir for filesystem MCP: ${baseDir}`)
} else {
const userData = app.getPath('userData')
this.baseDir = path.join(userData, 'Data', 'Workspace')
logger.info(`Using default workspace for filesystem MCP baseDir: ${this.baseDir}`)
}
this.server = new Server(
{
name: 'filesystem-server',
version: '2.0.0'
},
{
capabilities: {
tools: {}
}
}
)
this.initialize()
}
async initialize() {
try {
await fs.mkdir(this.baseDir, { recursive: true })
} catch (error) {
logger.error('Failed to create filesystem MCP baseDir', { error, baseDir: this.baseDir })
}
// Register tool list handler
this.server.setRequestHandler(ListToolsRequestSchema, async () => {
return {
tools: [
globToolDefinition,
lsToolDefinition,
grepToolDefinition,
readToolDefinition,
editToolDefinition,
writeToolDefinition,
deleteToolDefinition
]
}
})
// Register tool call handler
this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
try {
const { name, arguments: args } = request.params
switch (name) {
case 'glob':
return await handleGlobTool(args, this.baseDir)
case 'ls':
return await handleLsTool(args, this.baseDir)
case 'grep':
return await handleGrepTool(args, this.baseDir)
case 'read':
return await handleReadTool(args, this.baseDir)
case 'edit':
return await handleEditTool(args, this.baseDir)
case 'write':
return await handleWriteTool(args, this.baseDir)
case 'delete':
return await handleDeleteTool(args, this.baseDir)
default:
throw new Error(`Unknown tool: ${name}`)
}
} catch (error) {
const errorMessage = error instanceof Error ? error.message : String(error)
logger.error(`Tool execution error for ${request.params.name}:`, { error })
return {
content: [{ type: 'text', text: `Error: ${errorMessage}` }],
isError: true
}
}
})
}
}
export default FileSystemServer

View File

@ -0,0 +1,93 @@
import fs from 'fs/promises'
import path from 'path'
import * as z from 'zod'
import { logger, validatePath } from '../types'
// Schema definition
export const DeleteToolSchema = z.object({
path: z.string().describe('The path to the file or directory to delete'),
recursive: z.boolean().optional().describe('For directories, whether to delete recursively (default: false)')
})
// Tool definition with detailed description
export const deleteToolDefinition = {
name: 'delete',
description: `Deletes a file or directory from the filesystem.
CAUTION: This operation cannot be undone!
- For files: simply provide the path
- For empty directories: provide the path
- For non-empty directories: set recursive=true
- The path must be an absolute path, not a relative path
- Always verify the path before deleting to avoid data loss`,
inputSchema: z.toJSONSchema(DeleteToolSchema)
}
// Handler implementation
export async function handleDeleteTool(args: unknown, baseDir: string) {
const parsed = DeleteToolSchema.safeParse(args)
if (!parsed.success) {
throw new Error(`Invalid arguments for delete: ${parsed.error}`)
}
const targetPath = parsed.data.path
const validPath = await validatePath(targetPath, baseDir)
const recursive = parsed.data.recursive || false
// Check if path exists and get stats
let stats
try {
stats = await fs.stat(validPath)
} catch (error: any) {
if (error.code === 'ENOENT') {
throw new Error(`Path not found: ${targetPath}`)
}
throw error
}
const isDirectory = stats.isDirectory()
const relativePath = path.relative(baseDir, validPath)
// Perform deletion
try {
if (isDirectory) {
if (recursive) {
// Delete directory recursively
await fs.rm(validPath, { recursive: true, force: true })
} else {
// Try to delete empty directory
await fs.rmdir(validPath)
}
} else {
// Delete file
await fs.unlink(validPath)
}
} catch (error: any) {
if (error.code === 'ENOTEMPTY') {
throw new Error(`Directory not empty: ${targetPath}. Use recursive=true to delete non-empty directories.`)
}
throw new Error(`Failed to delete: ${error.message}`)
}
// Log the operation
logger.info('Path deleted', {
path: validPath,
type: isDirectory ? 'directory' : 'file',
recursive: isDirectory ? recursive : undefined
})
// Format output
const itemType = isDirectory ? 'Directory' : 'File'
const recursiveNote = isDirectory && recursive ? ' (recursive)' : ''
return {
content: [
{
type: 'text',
text: `${itemType} deleted${recursiveNote}: ${relativePath}`
}
]
}
}
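// Illustrative usage (a sketch; paths are hypothetical, assuming the directory exists):
//   await handleDeleteTool({ path: '/workspace/tmp-cache', recursive: true }, '/workspace')
//   // -> { content: [{ type: 'text', text: 'Directory deleted (recursive): tmp-cache' }] }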

View File

@ -0,0 +1,130 @@
import fs from 'fs/promises'
import path from 'path'
import * as z from 'zod'
import { logger, replaceWithFuzzyMatch, validatePath } from '../types'
// Schema definition
export const EditToolSchema = z.object({
file_path: z.string().describe('The path to the file to modify'),
old_string: z.string().describe('The text to replace'),
new_string: z.string().describe('The text to replace it with'),
replace_all: z.boolean().optional().default(false).describe('Replace all occurrences of old_string (default false)')
})
// Tool definition with detailed description
export const editToolDefinition = {
name: 'edit',
description: `Performs exact string replacements in files.
- You must use the 'read' tool at least once before editing
- The file_path must be an absolute path, not a relative path
- Preserve exact indentation from read output (after the line number prefix)
- Never include line number prefixes in old_string or new_string
- ALWAYS prefer editing existing files over creating new ones
- The edit will FAIL if old_string is not found in the file
- The edit will FAIL if old_string appears multiple times (provide more context or use replace_all)
- The edit will FAIL if old_string equals new_string
- Use replace_all to rename variables or replace all occurrences`,
inputSchema: z.toJSONSchema(EditToolSchema)
}
// Handler implementation
export async function handleEditTool(args: unknown, baseDir: string) {
const parsed = EditToolSchema.safeParse(args)
if (!parsed.success) {
throw new Error(`Invalid arguments for edit: ${parsed.error}`)
}
const { file_path: filePath, old_string: oldString, new_string: newString, replace_all: replaceAll } = parsed.data
// Validate path
const validPath = await validatePath(filePath, baseDir)
// Check if file exists
try {
const stats = await fs.stat(validPath)
if (!stats.isFile()) {
throw new Error(`Path is not a file: ${filePath}`)
}
} catch (error: any) {
if (error.code === 'ENOENT') {
// If old_string is empty, this is a create new file operation
if (oldString === '') {
// Create parent directory if needed
const parentDir = path.dirname(validPath)
await fs.mkdir(parentDir, { recursive: true })
// Write the new content
await fs.writeFile(validPath, newString, 'utf-8')
logger.info('File created', { path: validPath })
const relativePath = path.relative(baseDir, validPath)
return {
content: [
{
type: 'text',
text: `Created new file: ${relativePath}\nLines: ${newString.split('\n').length}`
}
]
}
}
throw new Error(`File not found: ${filePath}`)
}
throw error
}
// Read current content
const content = await fs.readFile(validPath, 'utf-8')
// Handle special case: old_string is empty (create file with content)
if (oldString === '') {
await fs.writeFile(validPath, newString, 'utf-8')
logger.info('File overwritten', { path: validPath })
const relativePath = path.relative(baseDir, validPath)
return {
content: [
{
type: 'text',
text: `Overwrote file: ${relativePath}\nLines: ${newString.split('\n').length}`
}
]
}
}
// Perform the replacement with fuzzy matching
const newContent = replaceWithFuzzyMatch(content, oldString, newString, replaceAll)
// Write the modified content
await fs.writeFile(validPath, newContent, 'utf-8')
logger.info('File edited', {
path: validPath,
replaceAll
})
// Generate a simple diff summary
const oldLines = content.split('\n').length
const newLines = newContent.split('\n').length
const lineDiff = newLines - oldLines
const relativePath = path.relative(baseDir, validPath)
let diffSummary = `Edited: ${relativePath}`
if (lineDiff > 0) {
diffSummary += `\n+${lineDiff} lines`
} else if (lineDiff < 0) {
diffSummary += `\n${lineDiff} lines`
}
return {
content: [
{
type: 'text',
text: diffSummary
}
]
}
}
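// Illustrative usage (a sketch; file and strings are hypothetical, assuming the file exists
// and old_string occurs exactly once):
//   await handleEditTool(
//     { file_path: '/workspace/src/app.ts', old_string: 'const port = 3000', new_string: 'const port = 8080' },
//     '/workspace'
//   )
//   // -> { content: [{ type: 'text', text: 'Edited: src/app.ts' }] }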

View File

@ -0,0 +1,149 @@
import fs from 'fs/promises'
import path from 'path'
import * as z from 'zod'
import type { FileInfo } from '../types'
import { logger, MAX_FILES_LIMIT, runRipgrep, validatePath } from '../types'
// Schema definition
export const GlobToolSchema = z.object({
pattern: z.string().describe('The glob pattern to match files against'),
path: z
.string()
.optional()
.describe('The directory to search in (must be absolute path). Defaults to the base directory')
})
// Tool definition with detailed description
export const globToolDefinition = {
name: 'glob',
description: `Fast file pattern matching tool that works with any codebase size.
- Supports glob patterns like "**/*.js" or "src/**/*.ts"
- Returns matching absolute file paths sorted by modification time (newest first)
- Use this when you need to find files by name patterns
- Patterns without "/" (e.g., "*.txt") match files at ANY depth in the directory tree
- Patterns with "/" (e.g., "src/*.ts") match relative to the search path
- Pattern syntax: * (any chars), ** (any path), {a,b} (alternatives), ? (single char)
- Results are limited to 100 files
- The path parameter must be an absolute path if specified
- If path is not specified, defaults to the base directory
- IMPORTANT: Omit the path field for the default directory (don't use "undefined" or "null")`,
inputSchema: z.toJSONSchema(GlobToolSchema)
}
// Handler implementation
export async function handleGlobTool(args: unknown, baseDir: string) {
const parsed = GlobToolSchema.safeParse(args)
if (!parsed.success) {
throw new Error(`Invalid arguments for glob: ${parsed.error}`)
}
const searchPath = parsed.data.path || baseDir
const validPath = await validatePath(searchPath, baseDir)
// Verify the search directory exists
try {
const stats = await fs.stat(validPath)
if (!stats.isDirectory()) {
throw new Error(`Path is not a directory: ${validPath}`)
}
} catch (error: unknown) {
if (error && typeof error === 'object' && 'code' in error && error.code === 'ENOENT') {
throw new Error(`Directory not found: ${validPath}`)
}
throw error
}
// Validate pattern
const pattern = parsed.data.pattern.trim()
if (!pattern) {
throw new Error('Pattern cannot be empty')
}
const files: FileInfo[] = []
let truncated = false
// Build ripgrep arguments for file listing using --glob=pattern format
const rgArgs: string[] = [
'--files',
'--follow',
'--hidden',
`--glob=${pattern}`,
'--glob=!.git/*',
'--glob=!node_modules/*',
'--glob=!dist/*',
'--glob=!build/*',
'--glob=!__pycache__/*',
validPath
]
// Use ripgrep for file listing
logger.debug('Running ripgrep with args', { rgArgs })
const rgResult = await runRipgrep(rgArgs)
logger.debug('Ripgrep result', {
ok: rgResult.ok,
exitCode: rgResult.exitCode,
stdoutLength: rgResult.stdout.length,
stdoutPreview: rgResult.stdout.slice(0, 500)
})
// Process results if we have stdout content
// Exit code 2 can indicate partial errors (e.g., permission denied on some dirs) while still producing valid results
if (rgResult.ok && rgResult.stdout.length > 0) {
const lines = rgResult.stdout.split('\n').filter(Boolean)
logger.debug('Parsed lines from ripgrep', { lineCount: lines.length, lines })
for (const line of lines) {
if (files.length >= MAX_FILES_LIMIT) {
truncated = true
break
}
const filePath = line.trim()
if (!filePath) continue
const absolutePath = path.isAbsolute(filePath) ? filePath : path.resolve(validPath, filePath)
try {
const stats = await fs.stat(absolutePath)
files.push({
path: absolutePath,
type: 'file', // ripgrep --files only returns files
size: stats.size,
modified: stats.mtime
})
} catch (error) {
logger.debug('Failed to stat file from ripgrep output, skipping', { file: absolutePath, error })
}
}
}
// Sort by modification time (newest first)
files.sort((a, b) => {
const aTime = a.modified ? a.modified.getTime() : 0
const bTime = b.modified ? b.modified.getTime() : 0
return bTime - aTime
})
// Format output - always use absolute paths
const output: string[] = []
if (files.length === 0) {
output.push(`No files found matching pattern "${parsed.data.pattern}" in ${validPath}`)
} else {
output.push(...files.map((f) => f.path))
if (truncated) {
output.push('')
output.push(`(Results truncated to ${MAX_FILES_LIMIT} files. Consider using a more specific pattern.)`)
}
}
return {
content: [
{
type: 'text',
text: output.join('\n')
}
]
}
}
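// Illustrative usage (a sketch; pattern and directory are hypothetical):
//   await handleGlobTool({ pattern: '**/*.ts', path: '/workspace/src' }, '/workspace')
//   // -> newline-separated absolute file paths, newest first, capped at MAX_FILES_LIMIT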

View File

@ -0,0 +1,266 @@
import fs from 'fs/promises'
import path from 'path'
import * as z from 'zod'
import type { GrepMatch } from '../types'
import { isBinaryFile, MAX_GREP_MATCHES, MAX_LINE_LENGTH, runRipgrep, validatePath } from '../types'
// Schema definition
export const GrepToolSchema = z.object({
pattern: z.string().describe('The regex pattern to search for in file contents'),
path: z
.string()
.optional()
.describe('The directory to search in (must be absolute path). Defaults to the base directory'),
include: z.string().optional().describe('File pattern to include in the search (e.g. "*.js", "*.{ts,tsx}")')
})
// Tool definition with detailed description
export const grepToolDefinition = {
name: 'grep',
description: `Fast content search tool that works with any codebase size.
- Searches file contents using regular expressions
- Supports full regex syntax (e.g., "log.*Error", "function\\s+\\w+")
- Filter files by pattern with include (e.g., "*.js", "*.{ts,tsx}")
- Returns absolute file paths and line numbers with matching content
- Results are limited to 100 matches
- Binary files are automatically skipped
- Common directories (node_modules, .git, dist) are excluded
- The path parameter must be an absolute path if specified
- If path is not specified, defaults to the base directory`,
inputSchema: z.toJSONSchema(GrepToolSchema)
}
// Handler implementation
export async function handleGrepTool(args: unknown, baseDir: string) {
const parsed = GrepToolSchema.safeParse(args)
if (!parsed.success) {
throw new Error(`Invalid arguments for grep: ${parsed.error}`)
}
const data = parsed.data
if (!data.pattern) {
throw new Error('Pattern is required for grep')
}
const searchPath = data.path || baseDir
const validPath = await validatePath(searchPath, baseDir)
const matches: GrepMatch[] = []
let truncated = false
let regex: RegExp
// Build ripgrep arguments
const rgArgs: string[] = [
'--no-heading',
'--line-number',
'--color',
'never',
'--ignore-case',
'--glob',
'!.git/**',
'--glob',
'!node_modules/**',
'--glob',
'!dist/**',
'--glob',
'!build/**',
'--glob',
'!__pycache__/**'
]
if (data.include) {
for (const pat of data.include
.split(',')
.map((p) => p.trim())
.filter(Boolean)) {
rgArgs.push('--glob', pat)
}
}
rgArgs.push(data.pattern)
rgArgs.push(validPath)
try {
// Case-insensitive to mirror ripgrep's --ignore-case. Note: no 'g' flag, since a global
// regex carries lastIndex state across .test() calls and would silently skip matches.
regex = new RegExp(data.pattern, 'i')
} catch {
throw new Error(`Invalid regex pattern: ${data.pattern}`)
}
async function searchFile(filePath: string): Promise<void> {
if (matches.length >= MAX_GREP_MATCHES) {
truncated = true
return
}
try {
// Skip binary files
if (await isBinaryFile(filePath)) {
return
}
const content = await fs.readFile(filePath, 'utf-8')
const lines = content.split('\n')
for (const [index, line] of lines.entries()) {
if (matches.length >= MAX_GREP_MATCHES) {
truncated = true
break
}
if (regex.test(line)) {
// Truncate long lines
const truncatedLine = line.length > MAX_LINE_LENGTH ? line.substring(0, MAX_LINE_LENGTH) + '...' : line
matches.push({
file: filePath,
line: index + 1,
content: truncatedLine.trim()
})
}
}
} catch (error) {
// Skip files we can't read
}
}
async function searchDirectory(dir: string): Promise<void> {
if (matches.length >= MAX_GREP_MATCHES) {
truncated = true
return
}
try {
const entries = await fs.readdir(dir, { withFileTypes: true })
for (const entry of entries) {
if (matches.length >= MAX_GREP_MATCHES) {
truncated = true
break
}
const fullPath = path.join(dir, entry.name)
// Skip common ignore patterns
if (entry.name.startsWith('.') && entry.name !== '.env.example') {
continue
}
if (['node_modules', 'dist', 'build', '__pycache__', '.git'].includes(entry.name)) {
continue
}
if (entry.isFile()) {
// Check if file matches include pattern
if (data.include) {
const includePatterns = data.include.split(',').map((p) => p.trim())
const fileName = path.basename(fullPath)
const matchesInclude = includePatterns.some((pattern) => {
// Simple glob pattern matching
const regexPattern = pattern
.replace(/\*/g, '.*')
.replace(/\?/g, '.')
.replace(/\{([^}]+)\}/g, (_, group) => `(${group.split(',').join('|')})`)
return new RegExp(`^${regexPattern}$`).test(fileName)
})
if (!matchesInclude) {
continue
}
}
await searchFile(fullPath)
} else if (entry.isDirectory()) {
await searchDirectory(fullPath)
}
}
} catch (error) {
// Skip directories we can't read
}
}
// Perform the search
let usedRipgrep = false
try {
const rgResult = await runRipgrep(rgArgs)
if (rgResult.ok && rgResult.exitCode !== null && rgResult.exitCode !== 2) {
usedRipgrep = true
const lines = rgResult.stdout.split('\n').filter(Boolean)
for (const line of lines) {
if (matches.length >= MAX_GREP_MATCHES) {
truncated = true
break
}
const firstColon = line.indexOf(':')
const secondColon = line.indexOf(':', firstColon + 1)
if (firstColon === -1 || secondColon === -1) continue
const filePart = line.slice(0, firstColon)
const linePart = line.slice(firstColon + 1, secondColon)
const contentPart = line.slice(secondColon + 1)
const lineNum = Number.parseInt(linePart, 10)
if (!Number.isFinite(lineNum)) continue
const absoluteFilePath = path.isAbsolute(filePart) ? filePart : path.resolve(baseDir, filePart)
const truncatedLine =
contentPart.length > MAX_LINE_LENGTH ? contentPart.substring(0, MAX_LINE_LENGTH) + '...' : contentPart
matches.push({
file: absoluteFilePath,
line: lineNum,
content: truncatedLine.trim()
})
}
}
} catch {
usedRipgrep = false
}
if (!usedRipgrep) {
const stats = await fs.stat(validPath)
if (stats.isFile()) {
await searchFile(validPath)
} else {
await searchDirectory(validPath)
}
}
// Format output
const output: string[] = []
if (matches.length === 0) {
output.push('No matches found')
} else {
// Group matches by file
const fileGroups = new Map<string, GrepMatch[]>()
matches.forEach((match) => {
if (!fileGroups.has(match.file)) {
fileGroups.set(match.file, [])
}
fileGroups.get(match.file)!.push(match)
})
// Format grouped matches - always use absolute paths
fileGroups.forEach((fileMatches, filePath) => {
output.push(`\n${filePath}:`)
fileMatches.forEach((match) => {
output.push(` ${match.line}: ${match.content}`)
})
})
if (truncated) {
output.push('')
output.push(`(Results truncated to ${MAX_GREP_MATCHES} matches. Consider using a more specific pattern or path.)`)
}
}
return {
content: [
{
type: 'text',
text: output.join('\n')
}
]
}
}
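// Illustrative usage (a sketch; values are hypothetical):
//   await handleGrepTool({ pattern: 'TODO', include: '*.ts,*.tsx', path: '/workspace/src' }, '/workspace')
//   // -> matches grouped per file, each rendered as '  <line>: <content>', capped at MAX_GREP_MATCHES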

View File

@ -0,0 +1,8 @@
// Export all tool definitions and handlers
export { deleteToolDefinition, handleDeleteTool } from './delete'
export { editToolDefinition, handleEditTool } from './edit'
export { globToolDefinition, handleGlobTool } from './glob'
export { grepToolDefinition, handleGrepTool } from './grep'
export { handleLsTool, lsToolDefinition } from './ls'
export { handleReadTool, readToolDefinition } from './read'
export { handleWriteTool, writeToolDefinition } from './write'

View File

@ -0,0 +1,150 @@
import fs from 'fs/promises'
import path from 'path'
import * as z from 'zod'
import { MAX_FILES_LIMIT, validatePath } from '../types'
// Schema definition
export const LsToolSchema = z.object({
path: z.string().optional().describe('The directory to list (must be absolute path). Defaults to the base directory'),
recursive: z.boolean().optional().describe('Whether to list directories recursively (default: false)')
})
// Tool definition with detailed description
export const lsToolDefinition = {
name: 'ls',
description: `Lists files and directories in a specified path.
- Returns a tree-like structure with icons (📁 directories, 📄 files)
- Shows the absolute directory path in the header
- Entries are sorted alphabetically with directories first
- Can list recursively with recursive=true (up to 5 levels deep)
- Common directories (node_modules, dist, .git) are excluded
- Hidden files (starting with .) are excluded except .env.example
- Results are limited to 100 entries
- The path parameter must be an absolute path if specified
- If path is not specified, defaults to the base directory`,
inputSchema: z.toJSONSchema(LsToolSchema)
}
// Handler implementation
export async function handleLsTool(args: unknown, baseDir: string) {
const parsed = LsToolSchema.safeParse(args)
if (!parsed.success) {
throw new Error(`Invalid arguments for ls: ${parsed.error}`)
}
const targetPath = parsed.data.path || baseDir
const validPath = await validatePath(targetPath, baseDir)
const recursive = parsed.data.recursive || false
interface TreeNode {
name: string
type: 'file' | 'directory'
children?: TreeNode[]
}
let fileCount = 0
let truncated = false
async function buildTree(dirPath: string, depth: number = 0): Promise<TreeNode[]> {
if (fileCount >= MAX_FILES_LIMIT) {
truncated = true
return []
}
try {
const entries = await fs.readdir(dirPath, { withFileTypes: true })
const nodes: TreeNode[] = []
// Sort entries: directories first, then files, alphabetically
entries.sort((a, b) => {
if (a.isDirectory() && !b.isDirectory()) return -1
if (!a.isDirectory() && b.isDirectory()) return 1
return a.name.localeCompare(b.name)
})
for (const entry of entries) {
if (fileCount >= MAX_FILES_LIMIT) {
truncated = true
break
}
// Skip hidden files and common ignore patterns
if (entry.name.startsWith('.') && entry.name !== '.env.example') {
continue
}
if (['node_modules', 'dist', 'build', '__pycache__'].includes(entry.name)) {
continue
}
fileCount++
const node: TreeNode = {
name: entry.name,
type: entry.isDirectory() ? 'directory' : 'file'
}
if (entry.isDirectory() && recursive && depth < 5) {
// Limit depth to prevent infinite recursion
const childPath = path.join(dirPath, entry.name)
node.children = await buildTree(childPath, depth + 1)
}
nodes.push(node)
}
return nodes
} catch (error) {
return []
}
}
// Build the tree
const tree = await buildTree(validPath)
// Format as text output
function formatTree(nodes: TreeNode[], prefix: string = ''): string[] {
const lines: string[] = []
nodes.forEach((node, index) => {
const isLastNode = index === nodes.length - 1
const connector = isLastNode ? '└── ' : '├── '
const icon = node.type === 'directory' ? '📁 ' : '📄 '
lines.push(prefix + connector + icon + node.name)
if (node.children && node.children.length > 0) {
const childPrefix = prefix + (isLastNode ? ' ' : '│ ')
lines.push(...formatTree(node.children, childPrefix))
}
})
return lines
}
// Generate output
const output: string[] = []
output.push(`Directory: ${validPath}`)
output.push('')
if (tree.length === 0) {
output.push('(empty directory)')
} else {
const treeLines = formatTree(tree, '')
output.push(...treeLines)
if (truncated) {
output.push('')
output.push(`(Results truncated to ${MAX_FILES_LIMIT} files. Consider listing a more specific directory.)`)
}
}
return {
content: [
{
type: 'text',
text: output.join('\n')
}
]
}
}
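// Illustrative usage (a sketch; the directory is hypothetical):
//   await handleLsTool({ path: '/workspace/src', recursive: true }, '/workspace')
//   // -> 'Directory: /workspace/src' followed by a 📁/📄 tree, at most 5 levels deep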

View File

@ -0,0 +1,101 @@
import fs from 'fs/promises'
import path from 'path'
import * as z from 'zod'
import { DEFAULT_READ_LIMIT, isBinaryFile, MAX_LINE_LENGTH, validatePath } from '../types'
// Schema definition
export const ReadToolSchema = z.object({
file_path: z.string().describe('The path to the file to read'),
offset: z.number().optional().describe('The line number to start reading from (1-based)'),
limit: z.number().optional().describe('The number of lines to read (defaults to 2000)')
})
// Tool definition with detailed description
export const readToolDefinition = {
name: 'read',
description: `Reads a file from the local filesystem.
- Assumes this tool can read all files on the machine
- The file_path parameter must be an absolute path, not a relative path
- By default, reads up to 2000 lines starting from the beginning
- You can optionally specify a line offset and limit for long files
- Any lines longer than 2000 characters will be truncated
- Results are returned with line numbers starting at 1
- Binary files are detected and rejected with an error
- Empty files return a warning`,
inputSchema: z.toJSONSchema(ReadToolSchema)
}
// Handler implementation
export async function handleReadTool(args: unknown, baseDir: string) {
const parsed = ReadToolSchema.safeParse(args)
if (!parsed.success) {
throw new Error(`Invalid arguments for read: ${parsed.error}`)
}
const filePath = parsed.data.file_path
const validPath = await validatePath(filePath, baseDir)
// Check if file exists
try {
const stats = await fs.stat(validPath)
if (!stats.isFile()) {
throw new Error(`Path is not a file: ${filePath}`)
}
} catch (error: any) {
if (error.code === 'ENOENT') {
throw new Error(`File not found: ${filePath}`)
}
throw error
}
// Check if file is binary
if (await isBinaryFile(validPath)) {
throw new Error(`Cannot read binary file: ${filePath}`)
}
// Read file content
const content = await fs.readFile(validPath, 'utf-8')
const lines = content.split('\n')
// Apply offset and limit
const offset = (parsed.data.offset || 1) - 1 // Convert to 0-based
const limit = parsed.data.limit || DEFAULT_READ_LIMIT
if (offset < 0 || offset >= lines.length) {
throw new Error(`Invalid offset: ${offset + 1}. File has ${lines.length} lines.`)
}
const selectedLines = lines.slice(offset, offset + limit)
// Format output with line numbers and truncate long lines
const output: string[] = []
const relativePath = path.relative(baseDir, validPath)
output.push(`File: ${relativePath}`)
if (offset > 0 || limit < lines.length) {
output.push(`Lines ${offset + 1} to ${Math.min(offset + limit, lines.length)} of ${lines.length}`)
}
output.push('')
selectedLines.forEach((line, index) => {
const lineNumber = offset + index + 1
const truncatedLine = line.length > MAX_LINE_LENGTH ? line.substring(0, MAX_LINE_LENGTH) + '...' : line
output.push(`${lineNumber.toString().padStart(6)}\t${truncatedLine}`)
})
if (offset + limit < lines.length) {
output.push('')
output.push(`(${lines.length - (offset + limit)} more lines not shown)`)
}
return {
content: [
{
type: 'text',
text: output.join('\n')
}
]
}
}
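// Illustrative usage (a sketch; values are hypothetical, assuming a sufficiently long file):
//   await handleReadTool({ file_path: '/workspace/README.md', offset: 10, limit: 5 }, '/workspace')
//   // -> 'File: README.md', a 'Lines 10 to 14 of N' header, then five numbered lines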

View File

@ -0,0 +1,83 @@
import fs from 'fs/promises'
import path from 'path'
import * as z from 'zod'
import { logger, validatePath } from '../types'
// Schema definition
export const WriteToolSchema = z.object({
file_path: z.string().describe('The path to the file to write'),
content: z.string().describe('The content to write to the file')
})
// Tool definition with detailed description
export const writeToolDefinition = {
name: 'write',
description: `Writes a file to the local filesystem.
- This tool will overwrite the existing file if one exists at the path
- You MUST use the read tool first to understand what you're overwriting
- ALWAYS prefer using the 'edit' tool for existing files
- NEVER proactively create documentation files unless explicitly requested
- Parent directories will be created automatically if they don't exist
- The file_path must be an absolute path, not a relative path`,
inputSchema: z.toJSONSchema(WriteToolSchema)
}
// Handler implementation
export async function handleWriteTool(args: unknown, baseDir: string) {
const parsed = WriteToolSchema.safeParse(args)
if (!parsed.success) {
throw new Error(`Invalid arguments for write: ${parsed.error}`)
}
const filePath = parsed.data.file_path
const validPath = await validatePath(filePath, baseDir)
// Create parent directory if it doesn't exist
const parentDir = path.dirname(validPath)
try {
await fs.mkdir(parentDir, { recursive: true })
} catch (error: any) {
if (error.code !== 'EEXIST') {
throw new Error(`Failed to create parent directory: ${error.message}`)
}
}
// Check if file exists (for logging)
let isOverwrite = false
try {
await fs.stat(validPath)
isOverwrite = true
} catch {
// File doesn't exist, that's fine
}
// Write the file
try {
await fs.writeFile(validPath, parsed.data.content, 'utf-8')
} catch (error: any) {
throw new Error(`Failed to write file: ${error.message}`)
}
// Log the operation
logger.info('File written', {
path: validPath,
overwrite: isOverwrite,
size: parsed.data.content.length
})
// Format output
const relativePath = path.relative(baseDir, validPath)
const action = isOverwrite ? 'Updated' : 'Created'
const lines = parsed.data.content.split('\n').length
return {
content: [
{
type: 'text',
text: `${action} file: ${relativePath}\n` + `Size: ${parsed.data.content.length} bytes\n` + `Lines: ${lines}`
}
]
}
}
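// Illustrative usage (a sketch; values are hypothetical, assuming no file exists at the path):
//   await handleWriteTool({ file_path: '/workspace/notes/todo.md', content: '- ship it\n' }, '/workspace')
//   // -> 'Created file: notes/todo.md\nSize: 10 bytes\nLines: 2'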

View File

@ -0,0 +1,627 @@
import { loggerService } from '@logger'
import { isMac, isWin } from '@main/constant'
import { spawn } from 'child_process'
import fs from 'fs/promises'
import os from 'os'
import path from 'path'
export const logger = loggerService.withContext('MCP:FileSystemServer')
// Constants
export const MAX_LINE_LENGTH = 2000
export const DEFAULT_READ_LIMIT = 2000
export const MAX_FILES_LIMIT = 100
export const MAX_GREP_MATCHES = 100
// Common types
export interface FileInfo {
path: string
type: 'file' | 'directory'
size?: number
modified?: Date
}
export interface GrepMatch {
file: string
line: number
content: string
}
// Utility functions for path handling
export function normalizePath(p: string): string {
return path.normalize(p)
}
export function expandHome(filepath: string): string {
if (filepath.startsWith('~/') || filepath === '~') {
return path.join(os.homedir(), filepath.slice(1))
}
return filepath
}
// Path resolution: expand ~, resolve relative paths against baseDir, and follow symlinks
export async function validatePath(requestedPath: string, baseDir?: string): Promise<string> {
const expandedPath = expandHome(requestedPath)
const root = baseDir ?? process.cwd()
const absolute = path.isAbsolute(expandedPath) ? path.resolve(expandedPath) : path.resolve(root, expandedPath)
// Handle symlinks by checking their real path
try {
const realPath = await fs.realpath(absolute)
return normalizePath(realPath)
} catch {
// For new files that don't exist yet, fall back to the resolved absolute path
return normalizePath(absolute)
}
}
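// Illustrative behavior (a sketch; paths are hypothetical):
//   await validatePath('~/notes.txt', '/base')    // -> <homedir>/notes.txt via expandHome
//   await validatePath('sub/file.txt', '/base')   // -> /base/sub/file.txt (resolved against baseDir)
//   await validatePath('/base/link.txt', '/base') // -> the symlink's real target when the path exists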
// ============================================================================
// Edit Tool Utilities - Fuzzy matching replacers from opencode
// ============================================================================
export type Replacer = (content: string, find: string) => Generator<string, void, unknown>
// Similarity thresholds for block anchor fallback matching
const SINGLE_CANDIDATE_SIMILARITY_THRESHOLD = 0.0
const MULTIPLE_CANDIDATES_SIMILARITY_THRESHOLD = 0.3
/**
* Levenshtein distance algorithm implementation
*/
function levenshtein(a: string, b: string): number {
if (a === '' || b === '') {
return Math.max(a.length, b.length)
}
const matrix = Array.from({ length: a.length + 1 }, (_, i) =>
Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
)
for (let i = 1; i <= a.length; i++) {
for (let j = 1; j <= b.length; j++) {
const cost = a[i - 1] === b[j - 1] ? 0 : 1
matrix[i][j] = Math.min(matrix[i - 1][j] + 1, matrix[i][j - 1] + 1, matrix[i - 1][j - 1] + cost)
}
}
return matrix[a.length][b.length]
}
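// Worked example: levenshtein('kitten', 'sitting') === 3
// (substitute k->s, substitute e->i, append g)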
export const SimpleReplacer: Replacer = function* (_content, find) {
yield find
}
export const LineTrimmedReplacer: Replacer = function* (content, find) {
const originalLines = content.split('\n')
const searchLines = find.split('\n')
if (searchLines[searchLines.length - 1] === '') {
searchLines.pop()
}
for (let i = 0; i <= originalLines.length - searchLines.length; i++) {
let matches = true
for (let j = 0; j < searchLines.length; j++) {
const originalTrimmed = originalLines[i + j].trim()
const searchTrimmed = searchLines[j].trim()
if (originalTrimmed !== searchTrimmed) {
matches = false
break
}
}
if (matches) {
let matchStartIndex = 0
for (let k = 0; k < i; k++) {
matchStartIndex += originalLines[k].length + 1
}
let matchEndIndex = matchStartIndex
for (let k = 0; k < searchLines.length; k++) {
matchEndIndex += originalLines[i + k].length
if (k < searchLines.length - 1) {
matchEndIndex += 1
}
}
yield content.substring(matchStartIndex, matchEndIndex)
}
}
}
export const BlockAnchorReplacer: Replacer = function* (content, find) {
const originalLines = content.split('\n')
const searchLines = find.split('\n')
if (searchLines.length < 3) {
return
}
if (searchLines[searchLines.length - 1] === '') {
searchLines.pop()
}
const firstLineSearch = searchLines[0].trim()
const lastLineSearch = searchLines[searchLines.length - 1].trim()
const searchBlockSize = searchLines.length
const candidates: Array<{ startLine: number; endLine: number }> = []
for (let i = 0; i < originalLines.length; i++) {
if (originalLines[i].trim() !== firstLineSearch) {
continue
}
for (let j = i + 2; j < originalLines.length; j++) {
if (originalLines[j].trim() === lastLineSearch) {
candidates.push({ startLine: i, endLine: j })
break
}
}
}
if (candidates.length === 0) {
return
}
if (candidates.length === 1) {
const { startLine, endLine } = candidates[0]
const actualBlockSize = endLine - startLine + 1
let similarity = 0
const linesToCheck = Math.min(searchBlockSize - 2, actualBlockSize - 2)
if (linesToCheck > 0) {
for (let j = 1; j < searchBlockSize - 1 && j < actualBlockSize - 1; j++) {
const originalLine = originalLines[startLine + j].trim()
const searchLine = searchLines[j].trim()
const maxLen = Math.max(originalLine.length, searchLine.length)
if (maxLen === 0) {
continue
}
const distance = levenshtein(originalLine, searchLine)
similarity += (1 - distance / maxLen) / linesToCheck
if (similarity >= SINGLE_CANDIDATE_SIMILARITY_THRESHOLD) {
break
}
}
} else {
similarity = 1.0
}
if (similarity >= SINGLE_CANDIDATE_SIMILARITY_THRESHOLD) {
let matchStartIndex = 0
for (let k = 0; k < startLine; k++) {
matchStartIndex += originalLines[k].length + 1
}
let matchEndIndex = matchStartIndex
for (let k = startLine; k <= endLine; k++) {
matchEndIndex += originalLines[k].length
if (k < endLine) {
matchEndIndex += 1
}
}
yield content.substring(matchStartIndex, matchEndIndex)
}
return
}
let bestMatch: { startLine: number; endLine: number } | null = null
let maxSimilarity = -1
for (const candidate of candidates) {
const { startLine, endLine } = candidate
const actualBlockSize = endLine - startLine + 1
let similarity = 0
const linesToCheck = Math.min(searchBlockSize - 2, actualBlockSize - 2)
if (linesToCheck > 0) {
for (let j = 1; j < searchBlockSize - 1 && j < actualBlockSize - 1; j++) {
const originalLine = originalLines[startLine + j].trim()
const searchLine = searchLines[j].trim()
const maxLen = Math.max(originalLine.length, searchLine.length)
if (maxLen === 0) {
continue
}
const distance = levenshtein(originalLine, searchLine)
similarity += 1 - distance / maxLen
}
similarity /= linesToCheck
} else {
similarity = 1.0
}
if (similarity > maxSimilarity) {
maxSimilarity = similarity
bestMatch = candidate
}
}
if (maxSimilarity >= MULTIPLE_CANDIDATES_SIMILARITY_THRESHOLD && bestMatch) {
const { startLine, endLine } = bestMatch
let matchStartIndex = 0
for (let k = 0; k < startLine; k++) {
matchStartIndex += originalLines[k].length + 1
}
let matchEndIndex = matchStartIndex
for (let k = startLine; k <= endLine; k++) {
matchEndIndex += originalLines[k].length
if (k < endLine) {
matchEndIndex += 1
}
}
yield content.substring(matchStartIndex, matchEndIndex)
}
}
export const WhitespaceNormalizedReplacer: Replacer = function* (content, find) {
const normalizeWhitespace = (text: string) => text.replace(/\s+/g, ' ').trim()
const normalizedFind = normalizeWhitespace(find)
const lines = content.split('\n')
for (let i = 0; i < lines.length; i++) {
const line = lines[i]
if (normalizeWhitespace(line) === normalizedFind) {
yield line
} else {
const normalizedLine = normalizeWhitespace(line)
if (normalizedLine.includes(normalizedFind)) {
const words = find.trim().split(/\s+/)
if (words.length > 0) {
const pattern = words.map((word) => word.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')).join('\\s+')
try {
const regex = new RegExp(pattern)
const match = line.match(regex)
if (match) {
yield match[0]
}
} catch {
// Invalid regex pattern, skip
}
}
}
}
}
const findLines = find.split('\n')
if (findLines.length > 1) {
for (let i = 0; i <= lines.length - findLines.length; i++) {
const block = lines.slice(i, i + findLines.length)
if (normalizeWhitespace(block.join('\n')) === normalizedFind) {
yield block.join('\n')
}
}
}
}
export const IndentationFlexibleReplacer: Replacer = function* (content, find) {
const removeIndentation = (text: string) => {
const lines = text.split('\n')
const nonEmptyLines = lines.filter((line) => line.trim().length > 0)
if (nonEmptyLines.length === 0) return text
const minIndent = Math.min(
...nonEmptyLines.map((line) => {
const match = line.match(/^(\s*)/)
return match ? match[1].length : 0
})
)
return lines.map((line) => (line.trim().length === 0 ? line : line.slice(minIndent))).join('\n')
}
const normalizedFind = removeIndentation(find)
const contentLines = content.split('\n')
const findLines = find.split('\n')
for (let i = 0; i <= contentLines.length - findLines.length; i++) {
const block = contentLines.slice(i, i + findLines.length).join('\n')
if (removeIndentation(block) === normalizedFind) {
yield block
}
}
}
export const EscapeNormalizedReplacer: Replacer = function* (content, find) {
const unescapeString = (str: string): string => {
return str.replace(/\\(n|t|r|'|"|`|\\|\n|\$)/g, (match, capturedChar) => {
switch (capturedChar) {
case 'n':
return '\n'
case 't':
return '\t'
case 'r':
return '\r'
case "'":
return "'"
case '"':
return '"'
case '`':
return '`'
case '\\':
return '\\'
case '\n':
return '\n'
case '$':
return '$'
default:
return match
}
})
}
const unescapedFind = unescapeString(find)
if (content.includes(unescapedFind)) {
yield unescapedFind
}
const lines = content.split('\n')
const findLines = unescapedFind.split('\n')
for (let i = 0; i <= lines.length - findLines.length; i++) {
const block = lines.slice(i, i + findLines.length).join('\n')
const unescapedBlock = unescapeString(block)
if (unescapedBlock === unescapedFind) {
yield block
}
}
}
export const TrimmedBoundaryReplacer: Replacer = function* (content, find) {
const trimmedFind = find.trim()
if (trimmedFind === find) {
return
}
if (content.includes(trimmedFind)) {
yield trimmedFind
}
const lines = content.split('\n')
const findLines = find.split('\n')
for (let i = 0; i <= lines.length - findLines.length; i++) {
const block = lines.slice(i, i + findLines.length).join('\n')
if (block.trim() === trimmedFind) {
yield block
}
}
}
export const ContextAwareReplacer: Replacer = function* (content, find) {
const findLines = find.split('\n')
if (findLines.length < 3) {
return
}
if (findLines[findLines.length - 1] === '') {
findLines.pop()
}
const contentLines = content.split('\n')
const firstLine = findLines[0].trim()
const lastLine = findLines[findLines.length - 1].trim()
for (let i = 0; i < contentLines.length; i++) {
if (contentLines[i].trim() !== firstLine) continue
for (let j = i + 2; j < contentLines.length; j++) {
if (contentLines[j].trim() === lastLine) {
const blockLines = contentLines.slice(i, j + 1)
const block = blockLines.join('\n')
if (blockLines.length === findLines.length) {
let matchingLines = 0
let totalNonEmptyLines = 0
for (let k = 1; k < blockLines.length - 1; k++) {
const blockLine = blockLines[k].trim()
const findLine = findLines[k].trim()
if (blockLine.length > 0 || findLine.length > 0) {
totalNonEmptyLines++
if (blockLine === findLine) {
matchingLines++
}
}
}
if (totalNonEmptyLines === 0 || matchingLines / totalNonEmptyLines >= 0.5) {
yield block
break
}
}
break
}
}
}
}
export const MultiOccurrenceReplacer: Replacer = function* (content, find) {
let startIndex = 0
while (true) {
const index = content.indexOf(find, startIndex)
if (index === -1) break
yield find
startIndex = index + find.length
}
}
/**
* All replacers in order of specificity
*/
export const ALL_REPLACERS: Replacer[] = [
SimpleReplacer,
LineTrimmedReplacer,
BlockAnchorReplacer,
WhitespaceNormalizedReplacer,
IndentationFlexibleReplacer,
EscapeNormalizedReplacer,
TrimmedBoundaryReplacer,
ContextAwareReplacer,
MultiOccurrenceReplacer
]
/**
* Replace oldString with newString in content using fuzzy matching
*/
export function replaceWithFuzzyMatch(
content: string,
oldString: string,
newString: string,
replaceAll = false
): string {
if (oldString === newString) {
throw new Error('old_string and new_string must be different')
}
let notFound = true
for (const replacer of ALL_REPLACERS) {
for (const search of replacer(content, oldString)) {
const index = content.indexOf(search)
if (index === -1) continue
notFound = false
if (replaceAll) {
return content.replaceAll(search, newString)
}
const lastIndex = content.lastIndexOf(search)
if (index !== lastIndex) continue
return content.substring(0, index) + newString + content.substring(index + search.length)
}
}
if (notFound) {
throw new Error('old_string not found in content')
}
throw new Error(
'Found multiple matches for old_string. Provide more surrounding lines in old_string to identify the correct match.'
)
}
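// Illustrative usage (a sketch): LineTrimmedReplacer lets an edit succeed even when
// old_string carries different indentation than the file.
//   replaceWithFuzzyMatch('const x = 1\n', '    const x = 1', 'const x = 2')
//   // -> 'const x = 2\n'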
// ============================================================================
// Binary File Detection
// ============================================================================
// Check if a file is likely binary
export async function isBinaryFile(filePath: string): Promise<boolean> {
try {
const buffer = Buffer.alloc(4096)
const fd = await fs.open(filePath, 'r')
const { bytesRead } = await fd.read(buffer, 0, buffer.length, 0)
await fd.close()
if (bytesRead === 0) return false
const view = buffer.subarray(0, bytesRead)
let zeroBytes = 0
let evenZeros = 0
let oddZeros = 0
let nonPrintable = 0
for (let i = 0; i < view.length; i++) {
const b = view[i]
if (b === 0) {
zeroBytes++
if (i % 2 === 0) evenZeros++
else oddZeros++
continue
}
// treat common whitespace as printable
if (b === 9 || b === 10 || b === 13) continue
// basic ASCII printable range
if (b >= 32 && b <= 126) continue
// bytes >= 128 are likely part of UTF-8 sequences; count as printable
if (b >= 128) continue
nonPrintable++
}
// If there are lots of null bytes, it's probably binary unless it looks like UTF-16 text.
if (zeroBytes > 0) {
const evenSlots = Math.ceil(view.length / 2)
const oddSlots = Math.floor(view.length / 2)
const evenZeroRatio = evenSlots > 0 ? evenZeros / evenSlots : 0
const oddZeroRatio = oddSlots > 0 ? oddZeros / oddSlots : 0
// UTF-16LE/BE tends to have zeros on every other byte.
if (evenZeroRatio > 0.7 || oddZeroRatio > 0.7) return false
if (zeroBytes / view.length > 0.05) return true
}
// Heuristic: too many non-printable bytes => binary.
return nonPrintable / view.length > 0.3
} catch {
return false
}
}
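// Illustrative arithmetic (a sketch): in a 4096-byte sample with 300 null bytes spread
// evenly across even and odd offsets (ratios ≈ 0.07 each, well under 0.7), 300/4096 ≈ 7.3%
// exceeds the 5% threshold, so the file is classified as binary; UTF-16 text instead
// concentrates zeros on alternating bytes and is let through.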
// ============================================================================
// Ripgrep Utilities
// ============================================================================
export interface RipgrepResult {
ok: boolean
stdout: string
exitCode: number | null
}
export function getRipgrepAddonPath(): string {
const pkgJsonPath = require.resolve('@anthropic-ai/claude-agent-sdk/package.json')
const pkgRoot = path.dirname(pkgJsonPath)
const platform = isMac ? 'darwin' : isWin ? 'win32' : 'linux'
const arch = process.arch === 'arm64' ? 'arm64' : 'x64'
return path.join(pkgRoot, 'vendor', 'ripgrep', `${arch}-${platform}`, 'ripgrep.node')
}
export async function runRipgrep(args: string[]): Promise<RipgrepResult> {
const addonPath = getRipgrepAddonPath()
const childScript = `const { ripgrepMain } = require(process.env.RIPGREP_ADDON_PATH); process.exit(ripgrepMain(process.argv.slice(1)));`
return new Promise((resolve) => {
const child = spawn(process.execPath, ['--eval', childScript, 'rg', ...args], {
cwd: process.cwd(),
env: {
...process.env,
ELECTRON_RUN_AS_NODE: '1',
RIPGREP_ADDON_PATH: addonPath
},
stdio: ['ignore', 'pipe', 'pipe']
})
let stdout = ''
child.stdout?.on('data', (chunk) => {
stdout += chunk.toString('utf-8')
})
child.on('error', () => {
resolve({ ok: false, stdout: '', exitCode: null })
})
child.on('close', (code) => {
resolve({ ok: true, stdout, exitCode: code })
})
})
}
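// Illustrative usage (a sketch; the directory is hypothetical):
//   const { ok, stdout, exitCode } = await runRipgrep(['--files', '--glob=*.ts', '/workspace/src'])
//   // ok is false only when the helper process fails to spawn; rg's own status arrives via exitCode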

View File

@ -1,19 +1,9 @@
import type { ReasoningDetailUnion } from '@main/apiServer/adapters/openrouter'
interface CacheItem<T> {
data: T
timestamp: number
duration: number
}
/**
* Interface for reasoning cache
*/
export interface IReasoningCache<T> {
set(key: string, value: T): void
get(key: string): T | undefined
}
export class CacheService {
private static cache: Map<string, CacheItem<any>> = new Map()
@ -82,14 +72,3 @@ export class CacheService {
return true
}
}
// Singleton cache instances using CacheService
export const googleReasoningCache: IReasoningCache<string> = {
set: (key, value) => CacheService.set(`google-reasoning:${key}`, value, 30 * 60 * 1000),
get: (key) => CacheService.get(`google-reasoning:${key}`) || undefined
}
export const openRouterReasoningCache: IReasoningCache<ReasoningDetailUnion[]> = {
set: (key, value) => CacheService.set(`openrouter-reasoning:${key}`, value, 30 * 60 * 1000),
get: (key) => CacheService.get(`openrouter-reasoning:${key}`) || undefined
}

View File

@ -32,7 +32,8 @@ export enum ConfigKeys {
Proxy = 'proxy',
EnableDeveloperMode = 'enableDeveloperMode',
ClientId = 'clientId',
GitBashPath = 'gitBashPath'
GitBashPath = 'gitBashPath',
GitBashPathSource = 'gitBashPathSource' // 'manual' | 'auto' | null
}
export class ConfigManager {

View File

@ -249,6 +249,26 @@ class McpService {
StdioClientTransport | SSEClientTransport | InMemoryTransport | StreamableHTTPClientTransport
> => {
// Create appropriate transport based on configuration
// Special case for nowledgeMem - uses HTTP transport instead of in-memory
if (isBuiltinMCPServer(server) && server.name === BuiltinMCPServerNames.nowledgeMem) {
const nowledgeMemUrl = 'http://127.0.0.1:14242/mcp'
const options: StreamableHTTPClientTransportOptions = {
fetch: async (url, init) => {
return net.fetch(typeof url === 'string' ? url : url.toString(), init)
},
requestInit: {
headers: {
...defaultAppHeaders(),
APP: 'Cherry Studio'
}
},
authProvider
}
getServerLogger(server).debug(`Using StreamableHTTPClientTransport for ${server.name}`)
return new StreamableHTTPClientTransport(new URL(nowledgeMemUrl), options)
}
if (isBuiltinMCPServer(server) && server.name !== BuiltinMCPServerNames.mcpAutoInstall) {
getServerLogger(server).debug(`Using in-memory transport`)
const [clientTransport, serverTransport] = InMemoryTransport.createLinkedPair()

View File

@ -15,8 +15,8 @@ import { query } from '@anthropic-ai/claude-agent-sdk'
import { loggerService } from '@logger'
import { config as apiConfigService } from '@main/apiServer/config'
import { validateModelId } from '@main/apiServer/utils'
import { ConfigKeys, configManager } from '@main/services/ConfigManager'
import { validateGitBashPath } from '@main/utils/process'
import { isWin } from '@main/constant'
import { autoDiscoverGitBash } from '@main/utils/process'
import getLoginShellEnvironment from '@main/utils/shell-env'
import { app } from 'electron'
@ -105,7 +105,8 @@ class ClaudeCodeService implements AgentServiceInterface {
Object.entries(loginShellEnv).filter(([key]) => !key.toLowerCase().endsWith('_proxy'))
) as Record<string, string>
const customGitBashPath = validateGitBashPath(configManager.get(ConfigKeys.GitBashPath) as string | undefined)
// Auto-discover Git Bash path on Windows (already logs internally)
const customGitBashPath = isWin ? autoDiscoverGitBash() : null
// Route through local API Server which handles format conversion via unified adapter
// This enables Claude Code Agent to work with any provider (OpenAI, Gemini, etc.)

View File

@ -1,9 +1,21 @@
import { configManager } from '@main/services/ConfigManager'
import { execFileSync } from 'child_process'
import fs from 'fs'
import path from 'path'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
import { findExecutable, findGitBash, validateGitBashPath } from '../process'
import { autoDiscoverGitBash, findExecutable, findGitBash, validateGitBashPath } from '../process'
// Mock configManager
vi.mock('@main/services/ConfigManager', () => ({
ConfigKeys: {
GitBashPath: 'gitBashPath'
},
configManager: {
get: vi.fn(),
set: vi.fn()
}
}))
// Mock dependencies
vi.mock('child_process')
@ -695,4 +707,284 @@ describe.skipIf(process.platform !== 'win32')('process utilities', () => {
})
})
})
describe('autoDiscoverGitBash', () => {
const originalEnvVar = process.env.CLAUDE_CODE_GIT_BASH_PATH
beforeEach(() => {
vi.mocked(configManager.get).mockReset()
vi.mocked(configManager.set).mockReset()
delete process.env.CLAUDE_CODE_GIT_BASH_PATH
})
afterEach(() => {
// Restore original environment variable
if (originalEnvVar !== undefined) {
process.env.CLAUDE_CODE_GIT_BASH_PATH = originalEnvVar
} else {
delete process.env.CLAUDE_CODE_GIT_BASH_PATH
}
})
/**
* Helper to mock fs.existsSync with a set of valid paths
*/
const mockExistingPaths = (...validPaths: string[]) => {
vi.mocked(fs.existsSync).mockImplementation((p) => validPaths.includes(p as string))
}
describe('with no existing config path', () => {
it('should discover and persist Git Bash path when not configured', () => {
const bashPath = 'C:\\Program Files\\Git\\bin\\bash.exe'
const gitPath = 'C:\\Program Files\\Git\\cmd\\git.exe'
vi.mocked(configManager.get).mockReturnValue(undefined)
process.env.ProgramFiles = 'C:\\Program Files'
mockExistingPaths(gitPath, bashPath)
const result = autoDiscoverGitBash()
expect(result).toBe(bashPath)
expect(configManager.set).toHaveBeenCalledWith('gitBashPath', bashPath)
})
it('should return null and not persist when Git Bash is not found', () => {
vi.mocked(configManager.get).mockReturnValue(undefined)
vi.mocked(fs.existsSync).mockReturnValue(false)
vi.mocked(execFileSync).mockImplementation(() => {
throw new Error('Not found')
})
const result = autoDiscoverGitBash()
expect(result).toBeNull()
expect(configManager.set).not.toHaveBeenCalled()
})
})
describe('environment variable precedence', () => {
it('should use env var over valid config path', () => {
const envPath = 'C:\\EnvGit\\bin\\bash.exe'
const configPath = 'C:\\ConfigGit\\bin\\bash.exe'
process.env.CLAUDE_CODE_GIT_BASH_PATH = envPath
vi.mocked(configManager.get).mockReturnValue(configPath)
mockExistingPaths(envPath, configPath)
const result = autoDiscoverGitBash()
// Env var should take precedence
expect(result).toBe(envPath)
// Should not persist env var path (it's a runtime override)
expect(configManager.set).not.toHaveBeenCalled()
})
it('should fall back to config path when env var is invalid', () => {
const envPath = 'C:\\Invalid\\bash.exe'
const configPath = 'C:\\ConfigGit\\bin\\bash.exe'
process.env.CLAUDE_CODE_GIT_BASH_PATH = envPath
vi.mocked(configManager.get).mockReturnValue(configPath)
// Env path is invalid (doesn't exist), only config path exists
mockExistingPaths(configPath)
const result = autoDiscoverGitBash()
// Should fall back to config path
expect(result).toBe(configPath)
expect(configManager.set).not.toHaveBeenCalled()
})
it('should fall back to auto-discovery when both env var and config are invalid', () => {
const envPath = 'C:\\InvalidEnv\\bash.exe'
const configPath = 'C:\\InvalidConfig\\bash.exe'
const discoveredPath = 'C:\\Program Files\\Git\\bin\\bash.exe'
const gitPath = 'C:\\Program Files\\Git\\cmd\\git.exe'
process.env.CLAUDE_CODE_GIT_BASH_PATH = envPath
process.env.ProgramFiles = 'C:\\Program Files'
vi.mocked(configManager.get).mockReturnValue(configPath)
// Both env and config paths are invalid, only standard Git exists
mockExistingPaths(gitPath, discoveredPath)
const result = autoDiscoverGitBash()
expect(result).toBe(discoveredPath)
expect(configManager.set).toHaveBeenCalledWith('gitBashPath', discoveredPath)
})
})
describe('with valid existing config path', () => {
it('should validate and return existing path without re-discovering', () => {
const existingPath = 'C:\\CustomGit\\bin\\bash.exe'
vi.mocked(configManager.get).mockReturnValue(existingPath)
mockExistingPaths(existingPath)
const result = autoDiscoverGitBash()
expect(result).toBe(existingPath)
// Should not call findGitBash or persist again
expect(configManager.set).not.toHaveBeenCalled()
// Should not call execFileSync (which findGitBash would use for discovery)
expect(execFileSync).not.toHaveBeenCalled()
})
it('should not override existing valid config with auto-discovery', () => {
const existingPath = 'C:\\CustomGit\\bin\\bash.exe'
const discoveredPath = 'C:\\Program Files\\Git\\bin\\bash.exe'
vi.mocked(configManager.get).mockReturnValue(existingPath)
mockExistingPaths(existingPath, discoveredPath)
const result = autoDiscoverGitBash()
expect(result).toBe(existingPath)
expect(configManager.set).not.toHaveBeenCalled()
})
})
describe('with invalid existing config path', () => {
it('should attempt auto-discovery when existing path does not exist', () => {
const existingPath = 'C:\\NonExistent\\bin\\bash.exe'
const discoveredPath = 'C:\\Program Files\\Git\\bin\\bash.exe'
const gitPath = 'C:\\Program Files\\Git\\cmd\\git.exe'
vi.mocked(configManager.get).mockReturnValue(existingPath)
process.env.ProgramFiles = 'C:\\Program Files'
// Invalid path doesn't exist, but Git is installed at standard location
mockExistingPaths(gitPath, discoveredPath)
const result = autoDiscoverGitBash()
// Should discover and return the new path
expect(result).toBe(discoveredPath)
// Should persist the discovered path (overwrites invalid)
expect(configManager.set).toHaveBeenCalledWith('gitBashPath', discoveredPath)
})
it('should attempt auto-discovery when existing path is not bash.exe', () => {
const existingPath = 'C:\\CustomGit\\bin\\git.exe'
const discoveredPath = 'C:\\Program Files\\Git\\bin\\bash.exe'
const gitPath = 'C:\\Program Files\\Git\\cmd\\git.exe'
vi.mocked(configManager.get).mockReturnValue(existingPath)
process.env.ProgramFiles = 'C:\\Program Files'
// Invalid path exists but is not bash.exe (validation will fail)
// Git is installed at standard location
mockExistingPaths(existingPath, gitPath, discoveredPath)
const result = autoDiscoverGitBash()
// Should discover and return the new path
expect(result).toBe(discoveredPath)
// Should persist the discovered path (overwrites invalid)
expect(configManager.set).toHaveBeenCalledWith('gitBashPath', discoveredPath)
})
it('should return null when existing path is invalid and discovery fails', () => {
const existingPath = 'C:\\NonExistent\\bin\\bash.exe'
vi.mocked(configManager.get).mockReturnValue(existingPath)
vi.mocked(fs.existsSync).mockReturnValue(false)
vi.mocked(execFileSync).mockImplementation(() => {
throw new Error('Not found')
})
const result = autoDiscoverGitBash()
// Both validation and discovery failed
expect(result).toBeNull()
// Should not persist when discovery fails
expect(configManager.set).not.toHaveBeenCalled()
})
})
describe('config persistence verification', () => {
it('should persist discovered path with correct config key', () => {
const bashPath = 'C:\\Program Files\\Git\\bin\\bash.exe'
const gitPath = 'C:\\Program Files\\Git\\cmd\\git.exe'
vi.mocked(configManager.get).mockReturnValue(undefined)
process.env.ProgramFiles = 'C:\\Program Files'
mockExistingPaths(gitPath, bashPath)
autoDiscoverGitBash()
// Verify the exact call to configManager.set
expect(configManager.set).toHaveBeenCalledTimes(1)
expect(configManager.set).toHaveBeenCalledWith('gitBashPath', bashPath)
})
it('should persist on each discovery when config remains undefined', () => {
const bashPath = 'C:\\Program Files\\Git\\bin\\bash.exe'
const gitPath = 'C:\\Program Files\\Git\\cmd\\git.exe'
vi.mocked(configManager.get).mockReturnValue(undefined)
process.env.ProgramFiles = 'C:\\Program Files'
mockExistingPaths(gitPath, bashPath)
autoDiscoverGitBash()
autoDiscoverGitBash()
// Each call discovers and persists since config remains undefined (mocked)
expect(configManager.set).toHaveBeenCalledTimes(2)
})
})
describe('real-world scenarios', () => {
it('should discover and persist standard Git for Windows installation', () => {
const gitPath = 'C:\\Program Files\\Git\\cmd\\git.exe'
const bashPath = 'C:\\Program Files\\Git\\bin\\bash.exe'
vi.mocked(configManager.get).mockReturnValue(undefined)
process.env.ProgramFiles = 'C:\\Program Files'
mockExistingPaths(gitPath, bashPath)
const result = autoDiscoverGitBash()
expect(result).toBe(bashPath)
expect(configManager.set).toHaveBeenCalledWith('gitBashPath', bashPath)
})
it('should discover portable Git via where.exe and persist', () => {
const gitPath = 'D:\\PortableApps\\Git\\bin\\git.exe'
const bashPath = 'D:\\PortableApps\\Git\\bin\\bash.exe'
vi.mocked(configManager.get).mockReturnValue(undefined)
vi.mocked(fs.existsSync).mockImplementation((p) => {
const pathStr = p?.toString() || ''
// Common git paths don't exist
if (pathStr.includes('Program Files\\Git\\cmd\\git.exe')) return false
if (pathStr.includes('Program Files (x86)\\Git\\cmd\\git.exe')) return false
// Portable bash path exists
if (pathStr === bashPath) return true
return false
})
vi.mocked(execFileSync).mockReturnValue(gitPath)
const result = autoDiscoverGitBash()
expect(result).toBe(bashPath)
expect(configManager.set).toHaveBeenCalledWith('gitBashPath', bashPath)
})
it('should respect user-configured path over auto-discovery', () => {
const userConfiguredPath = 'D:\\MyGit\\bin\\bash.exe'
const systemPath = 'C:\\Program Files\\Git\\bin\\bash.exe'
vi.mocked(configManager.get).mockReturnValue(userConfiguredPath)
mockExistingPaths(userConfiguredPath, systemPath)
const result = autoDiscoverGitBash()
expect(result).toBe(userConfiguredPath)
expect(configManager.set).not.toHaveBeenCalled()
// Verify findGitBash was not called for discovery
expect(execFileSync).not.toHaveBeenCalled()
})
})
})
})

View File

@ -1,4 +1,5 @@
import { loggerService } from '@logger'
import type { GitBashPathInfo, GitBashPathSource } from '@shared/config/constant'
import { HOME_CHERRY_DIR } from '@shared/config/constant'
import { execFileSync, spawn } from 'child_process'
import fs from 'fs'
@ -6,6 +7,7 @@ import os from 'os'
import path from 'path'
import { isWin } from '../constant'
import { ConfigKeys, configManager } from '../services/ConfigManager'
import { getResourcePath } from '.'
const logger = loggerService.withContext('Utils:Process')
@ -59,7 +61,7 @@ export async function getBinaryPath(name?: string): Promise<string> {
export async function isBinaryExists(name: string): Promise<boolean> {
const cmd = await getBinaryPath(name)
return await fs.existsSync(cmd)
return fs.existsSync(cmd)
}
/**
@ -225,3 +227,77 @@ export function validateGitBashPath(customPath?: string | null): string | null {
logger.debug('Validated custom Git Bash path', { path: resolved })
return resolved
}
/**
* Auto-discover and persist Git Bash path if not already configured
* Only called when Git Bash is actually needed
*
* Precedence order:
* 1. CLAUDE_CODE_GIT_BASH_PATH environment variable (highest - runtime override)
* 2. Configured path from settings (manual or auto)
* 3. Auto-discovery via findGitBash (only if no valid config exists)
*/
export function autoDiscoverGitBash(): string | null {
if (!isWin) {
return null
}
// 1. Check environment variable override first (highest priority)
const envOverride = process.env.CLAUDE_CODE_GIT_BASH_PATH
if (envOverride) {
const validated = validateGitBashPath(envOverride)
if (validated) {
logger.debug('Using CLAUDE_CODE_GIT_BASH_PATH override', { path: validated })
return validated
}
logger.warn('CLAUDE_CODE_GIT_BASH_PATH provided but path is invalid', { path: envOverride })
}
// 2. Check if a path is already configured
const existingPath = configManager.get<string | undefined>(ConfigKeys.GitBashPath)
const existingSource = configManager.get<GitBashPathSource | undefined>(ConfigKeys.GitBashPathSource)
if (existingPath) {
const validated = validateGitBashPath(existingPath)
if (validated) {
return validated
}
// Existing path is invalid, try to auto-discover
logger.warn('Existing Git Bash path is invalid, attempting auto-discovery', {
path: existingPath,
source: existingSource
})
}
// 3. Try to find Git Bash via auto-discovery
const discoveredPath = findGitBash()
if (discoveredPath) {
// Persist the discovered path with 'auto' source
configManager.set(ConfigKeys.GitBashPath, discoveredPath)
configManager.set(ConfigKeys.GitBashPathSource, 'auto')
logger.info('Auto-discovered Git Bash path', { path: discoveredPath })
}
return discoveredPath
}
/**
* Get Git Bash path info including source
* If no path is configured, triggers auto-discovery first
*/
export function getGitBashPathInfo(): GitBashPathInfo {
if (!isWin) {
return { path: null, source: null }
}
let path = configManager.get<string | null>(ConfigKeys.GitBashPath) ?? null
let source = configManager.get<GitBashPathSource | null>(ConfigKeys.GitBashPathSource) ?? null
// If no path configured, trigger auto-discovery (handles upgrade from old versions)
if (!path) {
path = autoDiscoverGitBash()
source = path ? 'auto' : null
}
return { path, source }
}
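
For orientation, a minimal caller-side sketch of the precedence above, reusing the hypothetical Windows paths from the tests earlier in this diff (not part of the change itself):

// 1. CLAUDE_CODE_GIT_BASH_PATH wins when valid, and is never persisted:
process.env.CLAUDE_CODE_GIT_BASH_PATH = 'C:\\EnvGit\\bin\\bash.exe'
// 2. otherwise the configured ConfigKeys.GitBashPath is validated and returned as-is;
// 3. otherwise findGitBash() discovers a path and persists it with source 'auto'.
const bash: string | null = autoDiscoverGitBash() // null only if all three fail (or on non-Windows)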

View File

@ -2,7 +2,7 @@ import type { PermissionUpdate } from '@anthropic-ai/claude-agent-sdk'
import { electronAPI } from '@electron-toolkit/preload'
import type { SpanEntity, TokenUsage } from '@mcp-trace/trace-core'
import type { SpanContext } from '@opentelemetry/api'
import type { TerminalConfig, UpgradeChannel } from '@shared/config/constant'
import type { GitBashPathInfo, TerminalConfig, UpgradeChannel } from '@shared/config/constant'
import type { LogLevel, LogSourceWithContext } from '@shared/config/logger'
import type { FileChangeEvent, WebviewKeyEvent } from '@shared/config/types'
import type { MCPServerLogEntry } from '@shared/config/types'
@ -126,6 +126,7 @@ const api = {
getCpuName: () => ipcRenderer.invoke(IpcChannel.System_GetCpuName),
checkGitBash: (): Promise<boolean> => ipcRenderer.invoke(IpcChannel.System_CheckGitBash),
getGitBashPath: (): Promise<string | null> => ipcRenderer.invoke(IpcChannel.System_GetGitBashPath),
getGitBashPathInfo: (): Promise<GitBashPathInfo> => ipcRenderer.invoke(IpcChannel.System_GetGitBashPathInfo),
setGitBashPath: (newPath: string | null): Promise<boolean> =>
ipcRenderer.invoke(IpcChannel.System_SetGitBashPath, newPath)
},
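
On the renderer side, the new bridge method resolves to the same GitBashPathInfo shape returned by getGitBashPathInfo in the main process; a small usage sketch (values hypothetical):

async function loadGitBashInfo() {
  const info = await window.api.system.getGitBashPathInfo()
  // e.g. { path: 'C:\\Program Files\\Git\\bin\\bash.exe', source: 'auto' }
  // or   { path: null, source: null } when nothing is configured or discovered
  return info
}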

View File

@ -142,6 +142,10 @@ export class OpenAIAPIClient extends OpenAIBaseClient<
return { thinking: { type: reasoningEffort ? 'enabled' : 'disabled' } }
}
if (reasoningEffort === 'default') {
return {}
}
if (!reasoningEffort) {
// DeepSeek hybrid inference models, v3.1 and maybe more in the future
// Different providers control thinking in different ways; unify the handling here
@ -303,7 +307,7 @@ export class OpenAIAPIClient extends OpenAIBaseClient<
// Grok models/Perplexity models/OpenAI models
if (isSupportedReasoningEffortModel(model)) {
// Check whether the model supports the selected option
const supportedOptions = getModelSupportedReasoningEffortOptions(model)
const supportedOptions = getModelSupportedReasoningEffortOptions(model)?.filter((option) => option !== 'default')
if (supportedOptions?.includes(reasoningEffort)) {
return {
reasoning_effort: reasoningEffort

View File

@ -69,7 +69,7 @@ export abstract class OpenAIBaseClient<
const sdk = await this.getSdkInstance()
const response = (await sdk.request({
method: 'post',
path: '/images/generations',
path: '/v1/images/generations',
signal,
body: {
model,

View File

@ -5,17 +5,15 @@ import type { MCPTool } from '@renderer/types'
import { type Assistant, type Message, type Model, type Provider, SystemProviderIds } from '@renderer/types'
import type { Chunk } from '@renderer/types/chunk'
import { isOllamaProvider, isSupportEnableThinkingProvider } from '@renderer/utils/provider'
import { openrouterReasoningMiddleware, skipGeminiThoughtSignatureMiddleware } from '@shared/middleware'
import { openrouterReasoningMiddleware, skipGeminiThoughtSignatureMiddleware } from '@shared/ai-sdk-middlewares'
import type { LanguageModelMiddleware } from 'ai'
import { extractReasoningMiddleware, simulateStreamingMiddleware } from 'ai'
import { isEmpty } from 'lodash'
import { getAiSdkProviderId } from '../provider/factory'
import { isOpenRouterGeminiGenerateImageModel } from '../utils/image'
import { noThinkMiddleware } from './noThinkMiddleware'
import { openrouterGenerateImageMiddleware } from './openrouterGenerateImageMiddleware'
import { qwenThinkingMiddleware } from './qwenThinkingMiddleware'
import { toolChoiceMiddleware } from './toolChoiceMiddleware'
const logger = loggerService.withContext('AiSdkMiddlewareBuilder')
@ -135,15 +133,6 @@ export class AiSdkMiddlewareBuilder {
export function buildAiSdkMiddlewares(config: AiSdkMiddlewareConfig): LanguageModelMiddleware[] {
const builder = new AiSdkMiddlewareBuilder()
// 0. Forced knowledge-base tool-call middleware (must come first to force a knowledge base call on the first round)
if (!isEmpty(config.assistant?.knowledge_bases?.map((base) => base.id)) && config.knowledgeRecognition !== 'on') {
builder.add({
name: 'force-knowledge-first',
middleware: toolChoiceMiddleware('builtin_knowledge_search')
})
logger.debug('Added toolChoice middleware to force knowledge base search on first round')
}
// 1. Add provider-specific middlewares
if (config.provider) {
addProviderSpecificMiddlewares(builder, config)

View File

@ -31,7 +31,7 @@ import { webSearchToolWithPreExtractedKeywords } from '../tools/WebSearchTool'
const logger = loggerService.withContext('SearchOrchestrationPlugin')
const getMessageContent = (message: ModelMessage) => {
export const getMessageContent = (message: ModelMessage) => {
if (typeof message.content === 'string') return message.content
return message.content.reduce((acc, part) => {
if (part.type === 'text') {
@ -266,14 +266,14 @@ export const searchOrchestrationPlugin = (assistant: Assistant, topicId: string)
// Determine which kinds of search are needed
const knowledgeBaseIds = assistant.knowledge_bases?.map((base) => base.id)
const hasKnowledgeBase = !isEmpty(knowledgeBaseIds)
const knowledgeRecognition = assistant.knowledgeRecognition || 'on'
const knowledgeRecognition = assistant.knowledgeRecognition || 'off'
const globalMemoryEnabled = selectGlobalMemoryEnabled(store.getState())
const shouldWebSearch = !!assistant.webSearchProviderId
const shouldKnowledgeSearch = hasKnowledgeBase && knowledgeRecognition === 'on'
const shouldMemorySearch = globalMemoryEnabled && assistant.enableMemory
// Run intent analysis
if (shouldWebSearch || hasKnowledgeBase) {
if (shouldWebSearch || shouldKnowledgeSearch) {
const analysisResult = await analyzeSearchIntent(lastUserMessage, assistant, {
shouldWebSearch,
shouldKnowledgeSearch,
@ -330,41 +330,25 @@ export const searchOrchestrationPlugin = (assistant: Assistant, topicId: string)
// 📚 Knowledge base search tool configuration
const knowledgeBaseIds = assistant.knowledge_bases?.map((base) => base.id)
const hasKnowledgeBase = !isEmpty(knowledgeBaseIds)
const knowledgeRecognition = assistant.knowledgeRecognition || 'on'
const knowledgeRecognition = assistant.knowledgeRecognition || 'off'
const shouldKnowledgeSearch = hasKnowledgeBase && knowledgeRecognition === 'on'
if (hasKnowledgeBase) {
if (knowledgeRecognition === 'off') {
// off mode: add the knowledge search tool directly, using the user message as search keywords
if (shouldKnowledgeSearch) {
// on mode: decide whether to add the tool based on the intent analysis result
const needsKnowledgeSearch =
analysisResult?.knowledge &&
analysisResult.knowledge.question &&
analysisResult.knowledge.question[0] !== 'not_needed'
if (needsKnowledgeSearch && analysisResult.knowledge) {
// logger.info('📚 Adding knowledge search tool (intent-based)')
const userMessage = userMessages[context.requestId]
const fallbackKeywords = {
question: [getMessageContent(userMessage) || 'search'],
rewrite: getMessageContent(userMessage) || 'search'
}
// logger.info('📚 Adding knowledge search tool (force mode)')
params.tools['builtin_knowledge_search'] = knowledgeSearchTool(
assistant,
fallbackKeywords,
analysisResult.knowledge,
getMessageContent(userMessage),
topicId
)
// params.toolChoice = { type: 'tool', toolName: 'builtin_knowledge_search' }
} else {
// on mode: decide whether to add the tool based on the intent analysis result
const needsKnowledgeSearch =
analysisResult?.knowledge &&
analysisResult.knowledge.question &&
analysisResult.knowledge.question[0] !== 'not_needed'
if (needsKnowledgeSearch && analysisResult.knowledge) {
// logger.info('📚 Adding knowledge search tool (intent-based)')
const userMessage = userMessages[context.requestId]
params.tools['builtin_knowledge_search'] = knowledgeSearchTool(
assistant,
analysisResult.knowledge,
getMessageContent(userMessage),
topicId
)
}
}
}

View File

@ -18,7 +18,7 @@ vi.mock('@renderer/services/AssistantService', () => ({
toolUseMode: assistant.settings?.toolUseMode ?? 'prompt',
defaultModel: assistant.defaultModel,
customParameters: assistant.settings?.customParameters ?? [],
reasoning_effort: assistant.settings?.reasoning_effort,
reasoning_effort: assistant.settings?.reasoning_effort ?? 'default',
reasoning_effort_cache: assistant.settings?.reasoning_effort_cache,
qwenThinkMode: assistant.settings?.qwenThinkMode
})

View File

@ -46,8 +46,8 @@ vi.mock('@renderer/utils/api', () => ({
isWithTrailingSharp: vi.fn((host) => host?.endsWith('#') || false)
}))
// Also mock @shared/api since formatProviderApiHost uses it directly
vi.mock('@shared/api', async (importOriginal) => {
// Also mock @shared/utils/url since formatProviderApiHost uses it directly
vi.mock('@shared/utils/url', async (importOriginal) => {
const actual = (await importOriginal()) as any
return {
...actual,
@ -92,8 +92,8 @@ vi.mock('@renderer/services/AssistantService', () => ({
import { getProviderByModel } from '@renderer/services/AssistantService'
import type { Model, Provider } from '@renderer/types'
import { isCherryAIProvider, isPerplexityProvider } from '@renderer/utils/provider'
import { formatApiHost } from '@shared/api'
import { isAzureOpenAIProvider, isCherryAIProvider, isPerplexityProvider } from '@renderer/utils/provider'
import { formatApiHost } from '@shared/utils/url'
import { COPILOT_DEFAULT_HEADERS, COPILOT_EDITOR_VERSION, isCopilotResponsesModel } from '../constants'
import { getActualProvider, providerToAiSdkConfig } from '../providerConfig'
@ -172,6 +172,17 @@ const createPerplexityProvider = (): Provider => ({
isSystem: false
})
const createAzureProvider = (apiVersion: string): Provider => ({
id: 'azure-openai',
type: 'azure-openai',
name: 'Azure OpenAI',
apiKey: 'test-key',
apiHost: 'https://example.openai.azure.com/openai',
apiVersion,
models: [],
isSystem: true
})
describe('Copilot responses routing', () => {
beforeEach(() => {
;(globalThis as any).window = {
@ -454,3 +465,46 @@ describe('Stream options includeUsage configuration', () => {
expect(config.providerId).toBe('github-copilot-openai-compatible')
})
})
describe('Azure OpenAI traditional API routing', () => {
beforeEach(() => {
;(globalThis as any).window = {
...(globalThis as any).window,
keyv: createWindowKeyv()
}
mockGetState.mockReturnValue({
settings: {
openAI: {
streamOptions: {
includeUsage: undefined
}
}
}
})
vi.mocked(isAzureOpenAIProvider).mockImplementation((provider) => provider.type === 'azure-openai')
})
it('uses deployment-based URLs when apiVersion is a date version', () => {
const provider = createAzureProvider('2024-02-15-preview')
const config = providerToAiSdkConfig(provider, createModel('gpt-4o', 'GPT-4o', provider.id))
expect(config.providerId).toBe('azure')
expect(config.options.apiVersion).toBe('2024-02-15-preview')
expect(config.options.useDeploymentBasedUrls).toBe(true)
})
it('does not force deployment-based URLs for apiVersion v1/preview', () => {
const v1Provider = createAzureProvider('v1')
const v1Config = providerToAiSdkConfig(v1Provider, createModel('gpt-4o', 'GPT-4o', v1Provider.id))
expect(v1Config.providerId).toBe('azure-responses')
expect(v1Config.options.apiVersion).toBe('v1')
expect(v1Config.options.useDeploymentBasedUrls).toBeUndefined()
const previewProvider = createAzureProvider('preview')
const previewConfig = providerToAiSdkConfig(previewProvider, createModel('gpt-4o', 'GPT-4o', previewProvider.id))
expect(previewConfig.providerId).toBe('azure-responses')
expect(previewConfig.options.apiVersion).toBe('preview')
expect(previewConfig.options.useDeploymentBasedUrls).toBeUndefined()
})
})
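
For context, the two Azure URL shapes these tests distinguish (resource and deployment names hypothetical; exact paths depend on the AI SDK provider):

// deployment-based, forced when apiVersion is a date version:
//   https://example.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-02-15-preview
// v1-style, used for apiVersion 'v1' or 'preview' via the azure-responses provider:
//   https://example.openai.azure.com/openai/v1/responses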

View File

@ -11,6 +11,7 @@ import { beforeEach, describe, expect, it, vi } from 'vitest'
import {
getAnthropicReasoningParams,
getAnthropicThinkingBudget,
getBedrockReasoningParams,
getCustomParameters,
getGeminiReasoningParams,
@ -89,7 +90,8 @@ vi.mock('@renderer/config/models', async (importOriginal) => {
isQwenAlwaysThinkModel: vi.fn(() => false),
isSupportedThinkingTokenHunyuanModel: vi.fn(() => false),
isSupportedThinkingTokenModel: vi.fn(() => false),
isGPT51SeriesModel: vi.fn(() => false)
isGPT51SeriesModel: vi.fn(() => false),
findTokenLimit: vi.fn(actual.findTokenLimit)
}
})
@ -596,7 +598,7 @@ describe('reasoning utils', () => {
expect(result).toEqual({})
})
it('should return disabled thinking when no reasoning effort', async () => {
it('should return disabled thinking when reasoning effort is none', async () => {
const { isReasoningModel, isSupportedThinkingTokenClaudeModel } = await import('@renderer/config/models')
vi.mocked(isReasoningModel).mockReturnValue(true)
@ -611,7 +613,9 @@ describe('reasoning utils', () => {
const assistant: Assistant = {
id: 'test',
name: 'Test',
settings: {}
settings: {
reasoning_effort: 'none'
}
} as Assistant
const result = getAnthropicReasoningParams(assistant, model)
@ -647,7 +651,7 @@ describe('reasoning utils', () => {
expect(result).toEqual({
thinking: {
type: 'enabled',
budgetTokens: 2048
budgetTokens: 4096
}
})
})
@ -675,7 +679,7 @@ describe('reasoning utils', () => {
expect(result).toEqual({})
})
it('should disable thinking for Flash models without reasoning effort', async () => {
it('should disable thinking for Flash models when reasoning effort is none', async () => {
const { isReasoningModel, isSupportedThinkingTokenGeminiModel } = await import('@renderer/config/models')
vi.mocked(isReasoningModel).mockReturnValue(true)
@ -690,7 +694,9 @@ describe('reasoning utils', () => {
const assistant: Assistant = {
id: 'test',
name: 'Test',
settings: {}
settings: {
reasoning_effort: 'none'
}
} as Assistant
const result = getGeminiReasoningParams(assistant, model)
@ -725,7 +731,7 @@ describe('reasoning utils', () => {
const result = getGeminiReasoningParams(assistant, model)
expect(result).toEqual({
thinkingConfig: {
thinkingBudget: 16448,
thinkingBudget: expect.any(Number),
includeThoughts: true
}
})
@ -889,7 +895,7 @@ describe('reasoning utils', () => {
expect(result).toEqual({
reasoningConfig: {
type: 'enabled',
budgetTokens: 2048
budgetTokens: 4096
}
})
})
@ -990,4 +996,89 @@ describe('reasoning utils', () => {
})
})
})
describe('getAnthropicThinkingBudget', () => {
it('should return undefined when reasoningEffort is undefined', async () => {
const result = getAnthropicThinkingBudget(4096, undefined, 'claude-3-7-sonnet')
expect(result).toBeUndefined()
})
it('should return undefined when reasoningEffort is none', async () => {
const result = getAnthropicThinkingBudget(4096, 'none', 'claude-3-7-sonnet')
expect(result).toBeUndefined()
})
it('should return undefined when tokenLimit is not found', async () => {
const { findTokenLimit } = await import('@renderer/config/models')
vi.mocked(findTokenLimit).mockReturnValue(undefined)
const result = getAnthropicThinkingBudget(4096, 'medium', 'unknown-model')
expect(result).toBeUndefined()
})
it('should calculate budget correctly when maxTokens is provided', async () => {
const { findTokenLimit } = await import('@renderer/config/models')
vi.mocked(findTokenLimit).mockReturnValue({ min: 1024, max: 32768 })
const result = getAnthropicThinkingBudget(4096, 'medium', 'claude-3-7-sonnet')
// EFFORT_RATIO['medium'] = 0.5
// budget = Math.floor((32768 - 1024) * 0.5 + 1024)
// = Math.floor(31744 * 0.5 + 1024) = Math.floor(15872 + 1024) = 16896
// budgetTokens = Math.min(16896, 4096) = 4096
// result = Math.max(1024, 4096) = 4096
expect(result).toBe(4096)
})
it('should use tokenLimit.max when maxTokens is undefined', async () => {
const { findTokenLimit } = await import('@renderer/config/models')
vi.mocked(findTokenLimit).mockReturnValue({ min: 1024, max: 32768 })
const result = getAnthropicThinkingBudget(undefined, 'medium', 'claude-3-7-sonnet')
// When maxTokens is undefined, budget is not constrained by maxTokens
// EFFORT_RATIO['medium'] = 0.5
// budget = Math.floor((32768 - 1024) * 0.5 + 1024)
// = Math.floor(31744 * 0.5 + 1024) = Math.floor(15872 + 1024) = 16896
// result = Math.max(1024, 16896) = 16896
expect(result).toBe(16896)
})
it('should enforce minimum budget of 1024', async () => {
const { findTokenLimit } = await import('@renderer/config/models')
vi.mocked(findTokenLimit).mockReturnValue({ min: 100, max: 1000 })
const result = getAnthropicThinkingBudget(500, 'low', 'claude-3-7-sonnet')
// EFFORT_RATIO['low'] = 0.05
// budget = Math.floor((1000 - 100) * 0.05 + 100)
// = Math.floor(900 * 0.05 + 100) = Math.floor(45 + 100) = 145
// budgetTokens = Math.min(145, 500) = 145
// result = Math.max(1024, 145) = 1024
expect(result).toBe(1024)
})
it('should respect effort ratio for high reasoning effort', async () => {
const { findTokenLimit } = await import('@renderer/config/models')
vi.mocked(findTokenLimit).mockReturnValue({ min: 1024, max: 32768 })
const result = getAnthropicThinkingBudget(8192, 'high', 'claude-3-7-sonnet')
// EFFORT_RATIO['high'] = 0.8
// budget = Math.floor((32768 - 1024) * 0.8 + 1024)
// = Math.floor(31744 * 0.8 + 1024) = Math.floor(25395.2 + 1024) = 26419
// budgetTokens = Math.min(26419, 8192) = 8192
// result = Math.max(1024, 8192) = 8192
expect(result).toBe(8192)
})
it('should use full token limit when maxTokens is undefined and reasoning effort is high', async () => {
const { findTokenLimit } = await import('@renderer/config/models')
vi.mocked(findTokenLimit).mockReturnValue({ min: 1024, max: 32768 })
const result = getAnthropicThinkingBudget(undefined, 'high', 'claude-3-7-sonnet')
// When maxTokens is undefined, budget is not constrained by maxTokens
// EFFORT_RATIO['high'] = 0.8
// budget = Math.floor((32768 - 1024) * 0.8 + 1024)
// = Math.floor(31744 * 0.8 + 1024) = Math.floor(25395.2 + 1024) = 26419
// result = Math.max(1024, 26419) = 26419
expect(result).toBe(26419)
})
})
})

View File

@ -10,6 +10,7 @@ import {
GEMINI_FLASH_MODEL_REGEX,
getModelSupportedReasoningEffortOptions,
isDeepSeekHybridInferenceModel,
isDoubaoSeed18Model,
isDoubaoSeedAfter251015,
isDoubaoThinkingAutoModel,
isGemini3ThinkingTokenModel,
@ -28,13 +29,14 @@ import {
isSupportedThinkingTokenDoubaoModel,
isSupportedThinkingTokenGeminiModel,
isSupportedThinkingTokenHunyuanModel,
isSupportedThinkingTokenMiMoModel,
isSupportedThinkingTokenModel,
isSupportedThinkingTokenQwenModel,
isSupportedThinkingTokenZhipuModel
} from '@renderer/config/models'
import { getStoreSetting } from '@renderer/hooks/useSettings'
import { getAssistantSettings, getProviderByModel } from '@renderer/services/AssistantService'
import type { Assistant, Model } from '@renderer/types'
import type { Assistant, Model, ReasoningEffortOption } from '@renderer/types'
import { EFFORT_RATIO, isSystemProvider, SystemProviderIds } from '@renderer/types'
import type { OpenAIReasoningSummary } from '@renderer/types/aiCoreTypes'
import type { ReasoningEffortOptionalParams } from '@renderer/types/sdk'
@ -64,7 +66,7 @@ export function getReasoningEffort(assistant: Assistant, model: Model): Reasonin
// reasoningEffort is not set, so no extra reasoning settings are applied.
// Generally, models that support reasoning control always have a reasoning effort set;
// this branch covers reasoning models without such control, e.g. deepseek reasoner.
if (!reasoningEffort) {
if (!reasoningEffort || reasoningEffort === 'default') {
return {}
}
@ -329,7 +331,7 @@ export function getReasoningEffort(assistant: Assistant, model: Model): Reasonin
// Grok models/Perplexity models/OpenAI models, use reasoning_effort
if (isSupportedReasoningEffortModel(model)) {
// Check whether the model supports the selected option
const supportedOptions = getModelSupportedReasoningEffortOptions(model)
const supportedOptions = getModelSupportedReasoningEffortOptions(model)?.filter((option) => option !== 'default')
if (supportedOptions?.includes(reasoningEffort)) {
return {
reasoningEffort
@ -389,7 +391,7 @@ export function getReasoningEffort(assistant: Assistant, model: Model): Reasonin
// Use thinking, doubao, zhipu, etc.
if (isSupportedThinkingTokenDoubaoModel(model)) {
if (isDoubaoSeedAfter251015(model)) {
if (isDoubaoSeedAfter251015(model) || isDoubaoSeed18Model(model)) {
return { reasoningEffort }
}
if (reasoningEffort === 'high') {
@ -408,6 +410,12 @@ export function getReasoningEffort(assistant: Assistant, model: Model): Reasonin
return { thinking: { type: 'enabled' } }
}
if (isSupportedThinkingTokenMiMoModel(model)) {
return {
thinking: { type: 'enabled' }
}
}
// Default case: no special thinking settings
return {}
}
@ -427,7 +435,7 @@ export function getOpenAIReasoningParams(
let reasoningEffort = assistant?.settings?.reasoning_effort
if (!reasoningEffort) {
if (!reasoningEffort || reasoningEffort === 'default') {
return {}
}
@ -479,16 +487,14 @@ export function getAnthropicThinkingBudget(
return undefined
}
const budgetTokens = Math.max(
1024,
Math.floor(
Math.min(
(tokenLimit.max - tokenLimit.min) * effortRatio + tokenLimit.min,
(maxTokens || DEFAULT_MAX_TOKENS) * effortRatio
)
)
)
return budgetTokens
const budget = Math.floor((tokenLimit.max - tokenLimit.min) * effortRatio + tokenLimit.min)
let budgetTokens = budget
if (maxTokens !== undefined) {
budgetTokens = Math.min(budget, maxTokens)
}
return Math.max(1024, budgetTokens)
}
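
A worked example of the reshaped budget math, using the token limits and EFFORT_RATIO values exercised by the tests above:

// tokenLimit = { min: 1024, max: 32768 }, EFFORT_RATIO.medium = 0.5
// budget = Math.floor((32768 - 1024) * 0.5 + 1024) = 16896
// maxTokens = 4096      -> Math.max(1024, Math.min(16896, 4096)) = 4096
// maxTokens = undefined -> Math.max(1024, 16896) = 16896 (no longer scaled down by effortRatio)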
/**
@ -505,7 +511,11 @@ export function getAnthropicReasoningParams(
const reasoningEffort = assistant?.settings?.reasoning_effort
if (reasoningEffort === undefined || reasoningEffort === 'none') {
if (!reasoningEffort || reasoningEffort === 'default') {
return {}
}
if (reasoningEffort === 'none') {
return {
thinking: {
type: 'disabled'
@ -529,20 +539,25 @@ export function getAnthropicReasoningParams(
return {}
}
// type GoogleThinkingLevel = NonNullable<GoogleGenerativeAIProviderOptions['thinkingConfig']>['thinkingLevel']
type GoogleThinkingLevel = NonNullable<GoogleGenerativeAIProviderOptions['thinkingConfig']>['thinkingLevel']
// function mapToGeminiThinkingLevel(reasoningEffort: ReasoningEffortOption): GoogelThinkingLevel {
// switch (reasoningEffort) {
// case 'low':
// return 'low'
// case 'medium':
// return 'medium'
// case 'high':
// return 'high'
// default:
// return 'medium'
// }
// }
function mapToGeminiThinkingLevel(reasoningEffort: ReasoningEffortOption): GoogleThinkingLevel {
switch (reasoningEffort) {
case 'default':
return undefined
case 'minimal':
return 'minimal'
case 'low':
return 'low'
case 'medium':
return 'medium'
case 'high':
return 'high'
default:
logger.warn('Unknown thinking level for Gemini. Falling back to medium.', { reasoningEffort })
return 'medium'
}
}
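
Downstream in getGeminiReasoningParams, this mapping feeds thinkingConfig; an illustrative result for a Gemini 3 model with reasoning_effort 'high':

// getGeminiReasoningParams(assistant, gemini3Model)
// => { thinkingConfig: { includeThoughts: true, thinkingLevel: 'high' } }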
/**
* Gemini
@ -560,6 +575,10 @@ export function getGeminiReasoningParams(
const reasoningEffort = assistant?.settings?.reasoning_effort
if (!reasoningEffort || reasoningEffort === 'default') {
return {}
}
// Gemini reasoning parameters
if (isSupportedThinkingTokenGeminiModel(model)) {
if (reasoningEffort === undefined || reasoningEffort === 'none') {
@ -571,15 +590,15 @@ export function getGeminiReasoningParams(
}
}
// TODO: many relay/proxy providers don't support this yet
// https://ai.google.dev/gemini-api/docs/gemini-3?thinking=high#new_api_features_in_gemini_3
// if (isGemini3ThinkingTokenModel(model)) {
// return {
// thinkingConfig: {
// thinkingLevel: mapToGeminiThinkingLevel(reasoningEffort)
// }
// }
// }
if (isGemini3ThinkingTokenModel(model)) {
return {
thinkingConfig: {
includeThoughts: true,
thinkingLevel: mapToGeminiThinkingLevel(reasoningEffort)
}
}
}
const effortRatio = EFFORT_RATIO[reasoningEffort]
@ -620,10 +639,6 @@ export function getXAIReasoningParams(assistant: Assistant, model: Model): Pick<
const { reasoning_effort: reasoningEffort } = getAssistantSettings(assistant)
if (!reasoningEffort || reasoningEffort === 'none') {
return {}
}
switch (reasoningEffort) {
case 'auto':
case 'minimal':
@ -634,6 +649,10 @@ export function getXAIReasoningParams(assistant: Assistant, model: Model): Pick<
return { reasoningEffort }
case 'xhigh':
return { reasoningEffort: 'high' }
case 'default':
case 'none':
default:
return {}
}
}
@ -650,7 +669,7 @@ export function getBedrockReasoningParams(
const reasoningEffort = assistant?.settings?.reasoning_effort
if (reasoningEffort === undefined) {
if (reasoningEffort === undefined || reasoningEffort === 'default') {
return {}
}

View File

@ -0,0 +1,17 @@
<svg width="100" height="100" viewBox="0 0 100 100" fill="none" xmlns="http://www.w3.org/2000/svg">
<g transform="translate(10, 42) scale(1.35)">
<!-- m -->
<path d="M1.2683 15.9987C0.9317 15.998 0.6091 15.8638 0.3713 15.6256C0.1335 15.3873 0 15.0644 0 14.7278V7.165C0.0148 6.83757 0.1554 6.52848 0.3924 6.30203C0.6293 6.07559 0.9445 5.94922 1.2722 5.94922C1.6 5.94922 1.9152 6.07559 2.1521 6.30203C2.3891 6.52848 2.5296 6.83757 2.5445 7.165V14.7278C2.5442 14.895 2.5109 15.0606 2.4466 15.215C2.3822 15.3693 2.2881 15.5095 2.1696 15.6276C2.0511 15.7456 1.9105 15.8391 1.7559 15.9028C1.6012 15.9665 1.4356 15.9991 1.2683 15.9987Z" fill="currentColor"/>
<path d="M14.8841 15.9993C14.5468 15.9993 14.2232 15.8655 13.9845 15.6272C13.7457 15.389 13.6112 15.0657 13.6105 14.7284V4.67881L8.9888 9.45281C8.7538 9.69657 8.4315 9.83697 8.0929 9.84312C7.7544 9.84928 7.4272 9.72069 7.1835 9.48563C6.9397 9.25058 6.7993 8.92832 6.7931 8.58976C6.7901 8.42211 6.8201 8.25551 6.8814 8.09947C6.9428 7.94342 7.0342 7.80098 7.1506 7.68028L13.9703 0.661082C14.1463 0.478921 14.3728 0.35354 14.6207 0.301033C14.8685 0.248526 15.1264 0.271291 15.3612 0.366403C15.5961 0.461516 15.7971 0.624637 15.9385 0.834827C16.08 1.04502 16.1554 1.29268 16.1551 1.54603V14.7284C16.1551 15.0655 16.0212 15.3887 15.7828 15.6271C15.5444 15.8654 15.2212 15.9993 14.8841 15.9993Z" fill="currentColor"/>
<path d="M8.0748 9.82621C7.9058 9.82749 7.7383 9.79518 7.5818 9.73117C7.4254 9.66716 7.2833 9.57272 7.1636 9.45332L0.3571 2.4315C0.1224 2.18948 -0.0065 1.86414 -0.0014 1.52705C0.0038 1.18996 0.1427 0.868726 0.3847 0.634023C0.6267 0.399319 0.9521 0.270369 1.2892 0.27554C1.6262 0.280711 1.9475 0.419579 2.1822 0.661595L8.9887 7.66767C9.1623 7.84735 9.2792 8.07413 9.3249 8.31977C9.3706 8.56541 9.343 8.81906 9.2456 9.04914C9.1482 9.27922 8.9852 9.47557 8.7771 9.61374C8.5689 9.75191 8.3247 9.8258 8.0748 9.82621Z" fill="currentColor"/>
<!-- i -->
<path d="M20.3539 15.9997C20.0169 15.9997 19.6936 15.8658 19.4552 15.6274C19.2169 15.3891 19.083 15.0658 19.083 14.7287V1.54636C19.083 1.20928 19.2169 0.886001 19.4552 0.647648C19.6936 0.409296 20.0169 0.275391 20.3539 0.275391C20.691 0.275391 21.0143 0.409296 21.2526 0.647648C21.491 0.886001 21.6249 1.20928 21.6249 1.54636V14.7287C21.6249 14.8956 21.592 15.0609 21.5282 15.2151C21.4643 15.3693 21.3707 15.5094 21.2526 15.6274C21.1346 15.7454 20.9945 15.839 20.8403 15.9029C20.6861 15.9668 20.5208 15.9997 20.3539 15.9997Z" fill="currentColor"/>
<!-- m -->
<path d="M25.8263 15.9992C25.4893 15.9992 25.166 15.8653 24.9276 15.627C24.6893 15.3886 24.5554 15.0654 24.5554 14.7283V7.1655C24.5554 6.82842 24.6893 6.50514 24.9276 6.26679C25.166 6.02844 25.4893 5.89453 25.8263 5.89453C26.1634 5.89453 26.4867 6.02844 26.7251 6.26679C26.9634 6.50514 27.0973 6.82842 27.0973 7.1655V14.7283C27.0973 15.0654 26.9634 15.3886 26.7251 15.627C26.4867 15.8653 26.1634 15.9992 25.8263 15.9992Z" fill="currentColor"/>
<path d="M39.4394 16.0004C39.1023 16.0004 38.779 15.8664 38.5406 15.6281C38.3023 15.3897 38.1684 15.0665 38.1684 14.7294V4.67982L33.5467 9.45382C33.3117 9.69584 32.9901 9.83457 32.6523 9.83949C32.3156 9.84442 31.9894 9.71513 31.7474 9.48008C31.5054 9.24503 31.3674 8.92346 31.3623 8.58613C31.3573 8.24879 31.4863 7.92331 31.7214 7.6813L38.5284 0.662093C38.7044 0.483575 38.9304 0.361405 39.1767 0.311007C39.4233 0.260609 39.6787 0.284243 39.9114 0.378925C40.1437 0.473608 40.3427 0.635093 40.4837 0.842994C40.6247 1.05089 40.7007 1.29589 40.7027 1.54704V14.7294C40.7017 15.0649 40.5687 15.3866 40.3327 15.6246C40.0957 15.8625 39.7747 15.9976 39.4394 16.0004Z" fill="currentColor"/>
<path d="M32.6324 9.82618C32.4634 9.82746 32.2964 9.79516 32.1394 9.73115C31.9834 9.66713 31.8414 9.57269 31.7214 9.45329L24.9151 2.43147C24.7921 2.31326 24.6942 2.1715 24.6271 2.01463C24.5601 1.85777 24.5253 1.68901 24.5249 1.51842C24.5244 1.34783 24.5583 1.1789 24.6246 1.02169C24.6908 0.864476 24.788 0.722207 24.9104 0.603357C25.0327 0.484507 25.1778 0.391509 25.3369 0.329905C25.4959 0.268302 25.6658 0.239353 25.8363 0.244785C26.0068 0.250217 26.1745 0.289918 26.3293 0.361522C26.4841 0.433126 26.623 0.535168 26.7375 0.661566L33.5467 7.66764C33.7204 7.84732 33.8374 8.0741 33.8824 8.31974C33.9284 8.56538 33.9014 8.81903 33.8034 9.04911C33.7064 9.27919 33.5434 9.47554 33.3354 9.61371C33.1267 9.75189 32.8824 9.82577 32.6324 9.82618Z" fill="currentColor"/>
<!-- o -->
<path d="M50.9434 15.9814C49.5534 15.9865 48.1864 15.6287 46.9774 14.9433C45.7674 14.2579 44.7584 13.2687 44.0484 12.0735C43.3384 10.8783 42.9534 9.5185 42.9304 8.12863C42.9074 6.73875 43.2474 5.36692 43.9164 4.1488C44.0844 3.86356 44.3564 3.65487 44.6754 3.56707C44.9944 3.47927 45.3344 3.51928 45.6244 3.67859C45.9144 3.8379 46.1314 4.10397 46.2274 4.42026C46.3244 4.73656 46.2944 5.07816 46.1434 5.3725C45.5764 6.40664 45.3594 7.59693 45.5264 8.76468C45.6924 9.93243 46.2334 11.0147 47.0674 11.8489C47.9014 12.6831 48.9834 13.2244 50.1514 13.3914C51.3184 13.5584 52.5094 13.3421 53.5434 12.7751C53.8384 12.6125 54.1864 12.5738 54.5104 12.6676C54.8344 12.7614 55.1074 12.98 55.2704 13.2753C55.4324 13.5706 55.4714 13.9184 55.3774 14.2422C55.2834 14.566 55.0654 14.8393 54.7694 15.0019C53.5974 15.6455 52.2814 15.9824 50.9434 15.9814Z" fill="currentColor"/>
<path d="M56.8104 12.5052C56.5944 12.5044 56.3834 12.4484 56.1954 12.3424C55.9014 12.1795 55.6824 11.9066 55.5894 11.5833C55.4954 11.26 55.5324 10.9126 55.6944 10.6171C56.2614 9.58297 56.4784 8.39268 56.3114 7.22493C56.1454 6.05718 55.6044 4.97496 54.7704 4.14073C53.9364 3.30649 52.8544 2.76525 51.6864 2.59825C50.5194 2.43125 49.3284 2.64749 48.2944 3.21452C48.1474 3.30059 47.9854 3.3564 47.8164 3.37863C47.6484 3.40087 47.4774 3.38908 47.3134 3.34397C47.1494 3.29886 46.9964 3.22134 46.8624 3.116C46.7294 3.01066 46.6184 2.87964 46.5364 2.73069C46.4544 2.58174 46.4034 2.41788 46.3864 2.24882C46.3684 2.07975 46.3854 1.90891 46.4354 1.7464C46.4854 1.58389 46.5674 1.43301 46.6764 1.3027C46.7854 1.17238 46.9194 1.06527 47.0704 0.987704C48.5874 0.155491 50.3324 -0.162266 52.0454 0.0821474C53.7574 0.326561 55.3454 1.11995 56.5684 2.34319C57.7914 3.56642 58.5844 5.15347 58.8294 6.86604C59.0734 8.5786 58.7554 10.3242 57.9234 11.8408C57.8144 12.0411 57.6534 12.2084 57.4574 12.3253C57.2624 12.4422 57.0384 12.5043 56.8104 12.5052Z" fill="currentColor"/>
</g>
</svg>


View File

@ -0,0 +1,17 @@
<svg width="100" height="100" viewBox="0 0 100 100" fill="none" xmlns="http://www.w3.org/2000/svg">
<g transform="translate(10, 42) scale(1.35)">
<!-- m -->
<path d="M1.2683 15.9987C0.9317 15.998 0.6091 15.8638 0.3713 15.6256C0.1335 15.3873 0 15.0644 0 14.7278V7.165C0.0148 6.83757 0.1554 6.52848 0.3924 6.30203C0.6293 6.07559 0.9445 5.94922 1.2722 5.94922C1.6 5.94922 1.9152 6.07559 2.1521 6.30203C2.3891 6.52848 2.5296 6.83757 2.5445 7.165V14.7278C2.5442 14.895 2.5109 15.0606 2.4466 15.215C2.3822 15.3693 2.2881 15.5095 2.1696 15.6276C2.0511 15.7456 1.9105 15.8391 1.7559 15.9028C1.6012 15.9665 1.4356 15.9991 1.2683 15.9987Z" fill="currentColor"/>
<path d="M14.8841 15.9993C14.5468 15.9993 14.2232 15.8655 13.9845 15.6272C13.7457 15.389 13.6112 15.0657 13.6105 14.7284V4.67881L8.9888 9.45281C8.7538 9.69657 8.4315 9.83697 8.0929 9.84312C7.7544 9.84928 7.4272 9.72069 7.1835 9.48563C6.9397 9.25058 6.7993 8.92832 6.7931 8.58976C6.7901 8.42211 6.8201 8.25551 6.8814 8.09947C6.9428 7.94342 7.0342 7.80098 7.1506 7.68028L13.9703 0.661082C14.1463 0.478921 14.3728 0.35354 14.6207 0.301033C14.8685 0.248526 15.1264 0.271291 15.3612 0.366403C15.5961 0.461516 15.7971 0.624637 15.9385 0.834827C16.08 1.04502 16.1554 1.29268 16.1551 1.54603V14.7284C16.1551 15.0655 16.0212 15.3887 15.7828 15.6271C15.5444 15.8654 15.2212 15.9993 14.8841 15.9993Z" fill="currentColor"/>
<path d="M8.0748 9.82621C7.9058 9.82749 7.7383 9.79518 7.5818 9.73117C7.4254 9.66716 7.2833 9.57272 7.1636 9.45332L0.3571 2.4315C0.1224 2.18948 -0.0065 1.86414 -0.0014 1.52705C0.0038 1.18996 0.1427 0.868726 0.3847 0.634023C0.6267 0.399319 0.9521 0.270369 1.2892 0.27554C1.6262 0.280711 1.9475 0.419579 2.1822 0.661595L8.9887 7.66767C9.1623 7.84735 9.2792 8.07413 9.3249 8.31977C9.3706 8.56541 9.343 8.81906 9.2456 9.04914C9.1482 9.27922 8.9852 9.47557 8.7771 9.61374C8.5689 9.75191 8.3247 9.8258 8.0748 9.82621Z" fill="currentColor"/>
<!-- i -->
<path d="M20.3539 15.9997C20.0169 15.9997 19.6936 15.8658 19.4552 15.6274C19.2169 15.3891 19.083 15.0658 19.083 14.7287V1.54636C19.083 1.20928 19.2169 0.886001 19.4552 0.647648C19.6936 0.409296 20.0169 0.275391 20.3539 0.275391C20.691 0.275391 21.0143 0.409296 21.2526 0.647648C21.491 0.886001 21.6249 1.20928 21.6249 1.54636V14.7287C21.6249 14.8956 21.592 15.0609 21.5282 15.2151C21.4643 15.3693 21.3707 15.5094 21.2526 15.6274C21.1346 15.7454 20.9945 15.839 20.8403 15.9029C20.6861 15.9668 20.5208 15.9997 20.3539 15.9997Z" fill="currentColor"/>
<!-- m -->
<path d="M25.8263 15.9992C25.4893 15.9992 25.166 15.8653 24.9276 15.627C24.6893 15.3886 24.5554 15.0654 24.5554 14.7283V7.1655C24.5554 6.82842 24.6893 6.50514 24.9276 6.26679C25.166 6.02844 25.4893 5.89453 25.8263 5.89453C26.1634 5.89453 26.4867 6.02844 26.7251 6.26679C26.9634 6.50514 27.0973 6.82842 27.0973 7.1655V14.7283C27.0973 15.0654 26.9634 15.3886 26.7251 15.627C26.4867 15.8653 26.1634 15.9992 25.8263 15.9992Z" fill="currentColor"/>
<path d="M39.4394 16.0004C39.1023 16.0004 38.779 15.8664 38.5406 15.6281C38.3023 15.3897 38.1684 15.0665 38.1684 14.7294V4.67982L33.5467 9.45382C33.3117 9.69584 32.9901 9.83457 32.6523 9.83949C32.3156 9.84442 31.9894 9.71513 31.7474 9.48008C31.5054 9.24503 31.3674 8.92346 31.3623 8.58613C31.3573 8.24879 31.4863 7.92331 31.7214 7.6813L38.5284 0.662093C38.7044 0.483575 38.9304 0.361405 39.1767 0.311007C39.4233 0.260609 39.6787 0.284243 39.9114 0.378925C40.1437 0.473608 40.3427 0.635093 40.4837 0.842994C40.6247 1.05089 40.7007 1.29589 40.7027 1.54704V14.7294C40.7017 15.0649 40.5687 15.3866 40.3327 15.6246C40.0957 15.8625 39.7747 15.9976 39.4394 16.0004Z" fill="currentColor"/>
<path d="M32.6324 9.82618C32.4634 9.82746 32.2964 9.79516 32.1394 9.73115C31.9834 9.66713 31.8414 9.57269 31.7214 9.45329L24.9151 2.43147C24.7921 2.31326 24.6942 2.1715 24.6271 2.01463C24.5601 1.85777 24.5253 1.68901 24.5249 1.51842C24.5244 1.34783 24.5583 1.1789 24.6246 1.02169C24.6908 0.864476 24.788 0.722207 24.9104 0.603357C25.0327 0.484507 25.1778 0.391509 25.3369 0.329905C25.4959 0.268302 25.6658 0.239353 25.8363 0.244785C26.0068 0.250217 26.1745 0.289918 26.3293 0.361522C26.4841 0.433126 26.623 0.535168 26.7375 0.661566L33.5467 7.66764C33.7204 7.84732 33.8374 8.0741 33.8824 8.31974C33.9284 8.56538 33.9014 8.81903 33.8034 9.04911C33.7064 9.27919 33.5434 9.47554 33.3354 9.61371C33.1267 9.75189 32.8824 9.82577 32.6324 9.82618Z" fill="currentColor"/>
<!-- o -->
<path d="M50.9434 15.9814C49.5534 15.9865 48.1864 15.6287 46.9774 14.9433C45.7674 14.2579 44.7584 13.2687 44.0484 12.0735C43.3384 10.8783 42.9534 9.5185 42.9304 8.12863C42.9074 6.73875 43.2474 5.36692 43.9164 4.1488C44.0844 3.86356 44.3564 3.65487 44.6754 3.56707C44.9944 3.47927 45.3344 3.51928 45.6244 3.67859C45.9144 3.8379 46.1314 4.10397 46.2274 4.42026C46.3244 4.73656 46.2944 5.07816 46.1434 5.3725C45.5764 6.40664 45.3594 7.59693 45.5264 8.76468C45.6924 9.93243 46.2334 11.0147 47.0674 11.8489C47.9014 12.6831 48.9834 13.2244 50.1514 13.3914C51.3184 13.5584 52.5094 13.3421 53.5434 12.7751C53.8384 12.6125 54.1864 12.5738 54.5104 12.6676C54.8344 12.7614 55.1074 12.98 55.2704 13.2753C55.4324 13.5706 55.4714 13.9184 55.3774 14.2422C55.2834 14.566 55.0654 14.8393 54.7694 15.0019C53.5974 15.6455 52.2814 15.9824 50.9434 15.9814Z" fill="currentColor"/>
<path d="M56.8104 12.5052C56.5944 12.5044 56.3834 12.4484 56.1954 12.3424C55.9014 12.1795 55.6824 11.9066 55.5894 11.5833C55.4954 11.26 55.5324 10.9126 55.6944 10.6171C56.2614 9.58297 56.4784 8.39268 56.3114 7.22493C56.1454 6.05718 55.6044 4.97496 54.7704 4.14073C53.9364 3.30649 52.8544 2.76525 51.6864 2.59825C50.5194 2.43125 49.3284 2.64749 48.2944 3.21452C48.1474 3.30059 47.9854 3.3564 47.8164 3.37863C47.6484 3.40087 47.4774 3.38908 47.3134 3.34397C47.1494 3.29886 46.9964 3.22134 46.8624 3.116C46.7294 3.01066 46.6184 2.87964 46.5364 2.73069C46.4544 2.58174 46.4034 2.41788 46.3864 2.24882C46.3684 2.07975 46.3854 1.90891 46.4354 1.7464C46.4854 1.58389 46.5674 1.43301 46.6764 1.3027C46.7854 1.17238 46.9194 1.06527 47.0704 0.987704C48.5874 0.155491 50.3324 -0.162266 52.0454 0.0821474C53.7574 0.326561 55.3454 1.11995 56.5684 2.34319C57.7914 3.56642 58.5844 5.15347 58.8294 6.86604C59.0734 8.5786 58.7554 10.3242 57.9234 11.8408C57.8144 12.0411 57.6534 12.2084 57.4574 12.3253C57.2624 12.4422 57.0384 12.5043 56.8104 12.5052Z" fill="currentColor"/>
</g>
</svg>


View File

@ -113,6 +113,18 @@ export function MdiLightbulbOn(props: SVGProps<SVGSVGElement>) {
)
}
export function MdiLightbulbQuestion(props: SVGProps<SVGSVGElement>) {
// {/* Icon from Material Design Icons by Pictogrammers - https://github.com/Templarian/MaterialDesign/blob/master/LICENSE */}
return (
<svg xmlns="http://www.w3.org/2000/svg" width="1em" height="1em" viewBox="0 0 24 24" {...props}>
<path
fill="currentColor"
d="M8 2C11.9 2 15 5.1 15 9C15 11.4 13.8 13.5 12 14.7V17C12 17.6 11.6 18 11 18H5C4.4 18 4 17.6 4 17V14.7C2.2 13.5 1 11.4 1 9C1 5.1 4.1 2 8 2M5 21V20H11V21C11 21.6 10.6 22 10 22H6C5.4 22 5 21.6 5 21M8 4C5.2 4 3 6.2 3 9C3 11.1 4.2 12.8 6 13.6V16H10V13.6C11.8 12.8 13 11.1 13 9C13 6.2 10.8 4 8 4M20.5 14.5V16H19V14.5H20.5M18.5 9.5H17V9C17 7.3 18.3 6 20 6S23 7.3 23 9C23 10 22.5 10.9 21.7 11.4L21.4 11.6C20.8 12 20.5 12.6 20.5 13.3V13.5H19V13.3C19 12.1 19.6 11 20.6 10.4L20.9 10.2C21.3 9.9 21.5 9.5 21.5 9C21.5 8.2 20.8 7.5 20 7.5S18.5 8.2 18.5 9V9.5Z"
/>
</svg>
)
}
export function BingLogo(props: SVGProps<SVGSVGElement>) {
return (
<svg

View File

@ -3,6 +3,7 @@ import { ErrorBoundary } from '@renderer/components/ErrorBoundary'
import { HelpTooltip } from '@renderer/components/TooltipIcons'
import { TopView } from '@renderer/components/TopView'
import { permissionModeCards } from '@renderer/config/agent'
import { isWin } from '@renderer/config/constant'
import { useAgents } from '@renderer/hooks/agents/useAgents'
import { useUpdateAgent } from '@renderer/hooks/agents/useUpdateAgent'
import SelectAgentBaseModelButton from '@renderer/pages/home/components/SelectAgentBaseModelButton'
@ -16,7 +17,8 @@ import type {
UpdateAgentForm
} from '@renderer/types'
import { AgentConfigurationSchema, isAgentType } from '@renderer/types'
import { Alert, Button, Input, Modal, Select } from 'antd'
import type { GitBashPathInfo } from '@shared/config/constant'
import { Button, Input, Modal, Select } from 'antd'
import { AlertTriangleIcon } from 'lucide-react'
import type { ChangeEvent, FormEvent } from 'react'
import { useCallback, useEffect, useMemo, useRef, useState } from 'react'
@ -59,8 +61,7 @@ const PopupContainer: React.FC<Props> = ({ agent, afterSubmit, resolve }) => {
const isEditing = (agent?: AgentWithTools) => agent !== undefined
const [form, setForm] = useState<BaseAgentForm>(() => buildAgentForm(agent))
const [hasGitBash, setHasGitBash] = useState<boolean>(true)
const [customGitBashPath, setCustomGitBashPath] = useState<string>('')
const [gitBashPathInfo, setGitBashPathInfo] = useState<GitBashPathInfo>({ path: null, source: null })
useEffect(() => {
if (open) {
@ -68,29 +69,15 @@ const PopupContainer: React.FC<Props> = ({ agent, afterSubmit, resolve }) => {
}
}, [agent, open])
const checkGitBash = useCallback(
async (showToast = false) => {
try {
const [gitBashInstalled, savedPath] = await Promise.all([
window.api.system.checkGitBash(),
window.api.system.getGitBashPath().catch(() => null)
])
setCustomGitBashPath(savedPath ?? '')
setHasGitBash(gitBashInstalled)
if (showToast) {
if (gitBashInstalled) {
window.toast.success(t('agent.gitBash.success', 'Git Bash detected successfully!'))
} else {
window.toast.error(t('agent.gitBash.notFound', 'Git Bash not found. Please install it first.'))
}
}
} catch (error) {
logger.error('Failed to check Git Bash:', error as Error)
setHasGitBash(true) // Default to true on error to avoid false warnings
}
},
[t]
)
const checkGitBash = useCallback(async () => {
if (!isWin) return
try {
const pathInfo = await window.api.system.getGitBashPathInfo()
setGitBashPathInfo(pathInfo)
} catch (error) {
logger.error('Failed to check Git Bash:', error as Error)
}
}, [])
useEffect(() => {
checkGitBash()
@ -119,24 +106,22 @@ const PopupContainer: React.FC<Props> = ({ agent, afterSubmit, resolve }) => {
return
}
setCustomGitBashPath(pickedPath)
await checkGitBash(true)
await checkGitBash()
} catch (error) {
logger.error('Failed to pick Git Bash path', error as Error)
window.toast.error(t('agent.gitBash.pick.failed', 'Failed to set Git Bash path'))
}
}, [checkGitBash, t])
const handleClearGitBash = useCallback(async () => {
const handleResetGitBash = useCallback(async () => {
try {
// Clear manual setting and re-run auto-discovery
await window.api.system.setGitBashPath(null)
setCustomGitBashPath('')
await checkGitBash(true)
await checkGitBash()
} catch (error) {
logger.error('Failed to clear Git Bash path', error as Error)
window.toast.error(t('agent.gitBash.pick.failed', 'Failed to set Git Bash path'))
logger.error('Failed to reset Git Bash path', error as Error)
}
}, [checkGitBash, t])
}, [checkGitBash])
const onPermissionModeChange = useCallback((value: PermissionMode) => {
setForm((prev) => {
@ -268,6 +253,12 @@ const PopupContainer: React.FC<Props> = ({ agent, afterSubmit, resolve }) => {
return
}
if (isWin && !gitBashPathInfo.path) {
window.toast.error(t('agent.gitBash.error.required', 'Git Bash path is required on Windows'))
loadingRef.current = false
return
}
if (isEditing(agent)) {
if (!agent) {
loadingRef.current = false
@ -327,7 +318,8 @@ const PopupContainer: React.FC<Props> = ({ agent, afterSubmit, resolve }) => {
t,
updateAgent,
afterSubmit,
addAgent
addAgent,
gitBashPathInfo.path
]
)
@ -346,66 +338,6 @@ const PopupContainer: React.FC<Props> = ({ agent, afterSubmit, resolve }) => {
footer={null}>
<StyledForm onSubmit={onSubmit}>
<FormContent>
{!hasGitBash && (
<Alert
message={t('agent.gitBash.error.title', 'Git Bash Required')}
description={
<div>
<div style={{ marginBottom: 8 }}>
{t(
'agent.gitBash.error.description',
'Git Bash is required to run agents on Windows. The agent cannot function without it. Please install Git for Windows from'
)}{' '}
<a
href="https://git-scm.com/download/win"
onClick={(e) => {
e.preventDefault()
window.api.openWebsite('https://git-scm.com/download/win')
}}
style={{ textDecoration: 'underline' }}>
git-scm.com
</a>
</div>
<Button size="small" onClick={() => checkGitBash(true)}>
{t('agent.gitBash.error.recheck', 'Recheck Git Bash Installation')}
</Button>
<Button size="small" style={{ marginLeft: 8 }} onClick={handlePickGitBash}>
{t('agent.gitBash.pick.button', 'Select Git Bash Path')}
</Button>
</div>
}
type="error"
showIcon
style={{ marginBottom: 16 }}
/>
)}
{hasGitBash && customGitBashPath && (
<Alert
message={t('agent.gitBash.found.title', 'Git Bash configured')}
description={
<div style={{ display: 'flex', flexDirection: 'column', gap: 8 }}>
<div>
{t('agent.gitBash.customPath', {
defaultValue: 'Using custom path: {{path}}',
path: customGitBashPath
})}
</div>
<div style={{ display: 'flex', gap: 8 }}>
<Button size="small" onClick={handlePickGitBash}>
{t('agent.gitBash.pick.button', 'Select Git Bash Path')}
</Button>
<Button size="small" onClick={handleClearGitBash}>
{t('agent.gitBash.clear.button', 'Clear custom path')}
</Button>
</div>
</div>
}
type="success"
showIcon
style={{ marginBottom: 16 }}
/>
)}
<FormRow>
<FormItem style={{ flex: 1 }}>
<Label>
@ -439,6 +371,40 @@ const PopupContainer: React.FC<Props> = ({ agent, afterSubmit, resolve }) => {
/>
</FormItem>
{isWin && (
<FormItem>
<div className="flex items-center gap-2">
<Label>
Git Bash <RequiredMark>*</RequiredMark>
</Label>
<HelpTooltip
title={t(
'agent.gitBash.tooltip',
'Git Bash is required to run agents on Windows. Install from git-scm.com if not available.'
)}
/>
</div>
<GitBashInputWrapper>
<Input
value={gitBashPathInfo.path ?? ''}
readOnly
placeholder={t('agent.gitBash.placeholder', 'Select bash.exe path')}
/>
<Button size="small" onClick={handlePickGitBash}>
{t('common.select', 'Select')}
</Button>
{gitBashPathInfo.source === 'manual' && (
<Button size="small" onClick={handleResetGitBash}>
{t('common.reset', 'Reset')}
</Button>
)}
</GitBashInputWrapper>
{gitBashPathInfo.path && gitBashPathInfo.source === 'auto' && (
<SourceHint>{t('agent.gitBash.autoDiscoveredHint', 'Auto-discovered')}</SourceHint>
)}
</FormItem>
)}
<FormItem>
<Label>
{t('agent.settings.tooling.permissionMode.title', 'Permission mode')} <RequiredMark>*</RequiredMark>
@ -511,7 +477,11 @@ const PopupContainer: React.FC<Props> = ({ agent, afterSubmit, resolve }) => {
<FormFooter>
<Button onClick={onCancel}>{t('common.close')}</Button>
<Button type="primary" htmlType="submit" loading={loadingRef.current} disabled={!hasGitBash}>
<Button
type="primary"
htmlType="submit"
loading={loadingRef.current}
disabled={isWin && !gitBashPathInfo.path}>
{isEditing(agent) ? t('common.confirm') : t('common.add')}
</Button>
</FormFooter>
@ -582,6 +552,21 @@ const FormItem = styled.div`
gap: 8px;
`
const GitBashInputWrapper = styled.div`
display: flex;
gap: 8px;
align-items: center;
input {
flex: 1;
}
`
const SourceHint = styled.span`
font-size: 12px;
color: var(--color-text-3);
`
const Label = styled.label`
font-size: 14px;
color: var(--color-text-1);

View File

@ -631,7 +631,7 @@ describe('Reasoning option configuration', () => {
it('restricts GPT-5 Pro reasoning to high effort only', () => {
expect(MODEL_SUPPORTED_REASONING_EFFORT.gpt5pro).toEqual(['high'])
expect(MODEL_SUPPORTED_OPTIONS.gpt5pro).toEqual(['high'])
expect(MODEL_SUPPORTED_OPTIONS.gpt5pro).toEqual(['default', 'high'])
})
})
@ -695,15 +695,20 @@ describe('getThinkModelType - Comprehensive Coverage', () => {
})
describe('Gemini models', () => {
it('should return gemini for Flash models', () => {
expect(getThinkModelType(createModel({ id: 'gemini-2.5-flash-latest' }))).toBe('gemini')
expect(getThinkModelType(createModel({ id: 'gemini-flash-latest' }))).toBe('gemini')
expect(getThinkModelType(createModel({ id: 'gemini-flash-lite-latest' }))).toBe('gemini')
it('should return gemini2_flash for Flash models', () => {
expect(getThinkModelType(createModel({ id: 'gemini-2.5-flash-latest' }))).toBe('gemini2_flash')
})
it('should return gemini3_flash for Gemini 3 Flash models', () => {
expect(getThinkModelType(createModel({ id: 'gemini-3-flash-preview' }))).toBe('gemini3_flash')
expect(getThinkModelType(createModel({ id: 'gemini-flash-latest' }))).toBe('gemini3_flash')
})
it('should return gemini_pro for Pro models', () => {
expect(getThinkModelType(createModel({ id: 'gemini-2.5-pro-latest' }))).toBe('gemini_pro')
expect(getThinkModelType(createModel({ id: 'gemini-pro-latest' }))).toBe('gemini_pro')
it('should return gemini2_pro for Gemini 2.5 Pro models', () => {
expect(getThinkModelType(createModel({ id: 'gemini-2.5-pro-latest' }))).toBe('gemini2_pro')
})
it('should return gemini3_pro for Gemini 3 Pro models', () => {
expect(getThinkModelType(createModel({ id: 'gemini-3-pro-preview' }))).toBe('gemini3_pro')
expect(getThinkModelType(createModel({ id: 'gemini-pro-latest' }))).toBe('gemini3_pro')
})
})
@ -733,6 +738,11 @@ describe('getThinkModelType - Comprehensive Coverage', () => {
expect(getThinkModelType(createModel({ id: 'doubao-seed-1-6-lite-251015' }))).toBe('doubao_after_251015')
})
it('should return doubao_after_251015 for Doubao-Seed-1.8 models', () => {
expect(getThinkModelType(createModel({ id: 'doubao-seed-1-8-251215' }))).toBe('doubao_after_251015')
expect(getThinkModelType(createModel({ id: 'doubao-seed-1.8' }))).toBe('doubao_after_251015')
})
it('should return doubao_no_auto for other Doubao thinking models', () => {
expect(getThinkModelType(createModel({ id: 'doubao-1.5-thinking-vision-pro' }))).toBe('doubao_no_auto')
})
@ -805,7 +815,7 @@ describe('getThinkModelType - Comprehensive Coverage', () => {
name: 'gemini-2.5-flash-latest'
})
)
).toBe('gemini')
).toBe('gemini2_flash')
})
it('should use id result when id matches', () => {
@ -830,7 +840,7 @@ describe('getThinkModelType - Comprehensive Coverage', () => {
it('should handle case insensitivity correctly', () => {
expect(getThinkModelType(createModel({ id: 'GPT-5.1' }))).toBe('gpt5_1')
expect(getThinkModelType(createModel({ id: 'Gemini-2.5-Flash-Latest' }))).toBe('gemini')
expect(getThinkModelType(createModel({ id: 'Gemini-2.5-Flash-Latest' }))).toBe('gemini2_flash')
expect(getThinkModelType(createModel({ id: 'DeepSeek-V3.1' }))).toBe('deepseek_hybrid')
})
@ -850,7 +860,7 @@ describe('getThinkModelType - Comprehensive Coverage', () => {
it('should handle models with version suffixes', () => {
expect(getThinkModelType(createModel({ id: 'gpt-5-preview-2024' }))).toBe('gpt5')
expect(getThinkModelType(createModel({ id: 'o3-mini-2024' }))).toBe('o')
expect(getThinkModelType(createModel({ id: 'gemini-2.5-flash-latest-001' }))).toBe('gemini')
expect(getThinkModelType(createModel({ id: 'gemini-2.5-flash-latest-001' }))).toBe('gemini2_flash')
})
it('should prioritize GPT-5.1 over GPT-5 detection', () => {
@ -863,6 +873,7 @@ describe('getThinkModelType - Comprehensive Coverage', () => {
// auto > after_251015 > no_auto
expect(getThinkModelType(createModel({ id: 'doubao-seed-1.6' }))).toBe('doubao')
expect(getThinkModelType(createModel({ id: 'doubao-seed-1-6-251015' }))).toBe('doubao_after_251015')
expect(getThinkModelType(createModel({ id: 'doubao-seed-1-8-251215' }))).toBe('doubao_after_251015')
expect(getThinkModelType(createModel({ id: 'doubao-1.5-thinking-vision-pro' }))).toBe('doubao_no_auto')
})
@ -949,6 +960,14 @@ describe('Gemini Models', () => {
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-3-flash-preview',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'google/gemini-3-pro-preview',
@ -990,6 +1009,31 @@ describe('Gemini Models', () => {
group: ''
})
).toBe(true)
// Version with date suffixes
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-3-flash-preview-09-2025',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-3-pro-preview-09-2025',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-3-flash-exp-1234',
name: '',
provider: '',
group: ''
})
).toBe(true)
// Version with decimals
expect(
isSupportedThinkingTokenGeminiModel({
@ -1009,7 +1053,8 @@ describe('Gemini Models', () => {
).toBe(true)
})
it('should return true for gemini-3 image models', () => {
it('should return true for gemini-3-pro-image models only', () => {
// Only gemini-3-pro-image models should return true
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-3-pro-image-preview',
@ -1018,6 +1063,17 @@ describe('Gemini Models', () => {
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-3-pro-image',
name: '',
provider: '',
group: ''
})
).toBe(true)
})
it('should return false for other gemini-3 image models', () => {
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-3.0-flash-image-preview',
@ -1080,6 +1136,22 @@ describe('Gemini Models', () => {
group: ''
})
).toBe(false)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-3-flash-preview-tts',
name: '',
provider: '',
group: ''
})
).toBe(false)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-3-pro-tts',
name: '',
provider: '',
group: ''
})
).toBe(false)
})
it('should return false for older gemini models', () => {
@ -1672,10 +1744,26 @@ describe('getModelSupportedReasoningEffortOptions', () => {
describe('OpenAI models', () => {
it('should return correct options for o-series models', () => {
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'o3' }))).toEqual(['low', 'medium', 'high'])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'o3-mini' }))).toEqual(['low', 'medium', 'high'])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'o4' }))).toEqual(['low', 'medium', 'high'])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'o3' }))).toEqual([
'default',
'low',
'medium',
'high'
])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'o3-mini' }))).toEqual([
'default',
'low',
'medium',
'high'
])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'o4' }))).toEqual([
'default',
'low',
'medium',
'high'
])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gpt-oss-reasoning' }))).toEqual([
'default',
'low',
'medium',
'high'
@ -1685,17 +1773,22 @@ describe('getModelSupportedReasoningEffortOptions', () => {
it('should return correct options for deep research models', () => {
// Note: Deep research models need to be actual OpenAI reasoning models to be detected
// 'sonar-deep-research' from Perplexity is the primary deep research model
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'sonar-deep-research' }))).toEqual(['medium'])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'sonar-deep-research' }))).toEqual([
'default',
'medium'
])
})
it('should return correct options for GPT-5 models', () => {
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gpt-5' }))).toEqual([
'default',
'minimal',
'low',
'medium',
'high'
])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gpt-5-preview' }))).toEqual([
'default',
'minimal',
'low',
'medium',
@ -1704,17 +1797,22 @@ describe('getModelSupportedReasoningEffortOptions', () => {
})
it('should return correct options for GPT-5 Pro models', () => {
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gpt-5-pro' }))).toEqual(['high'])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gpt-5-pro-preview' }))).toEqual(['high'])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gpt-5-pro' }))).toEqual(['default', 'high'])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gpt-5-pro-preview' }))).toEqual([
'default',
'high'
])
})
it('should return correct options for GPT-5 Codex models', () => {
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gpt-5-codex' }))).toEqual([
'default',
'low',
'medium',
'high'
])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gpt-5-codex-mini' }))).toEqual([
'default',
'low',
'medium',
'high'
@ -1723,18 +1821,21 @@ describe('getModelSupportedReasoningEffortOptions', () => {
it('should return correct options for GPT-5.1 models', () => {
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gpt-5.1' }))).toEqual([
'default',
'none',
'low',
'medium',
'high'
])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gpt-5.1-preview' }))).toEqual([
'default',
'none',
'low',
'medium',
'high'
])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gpt-5.1-mini' }))).toEqual([
'default',
'none',
'low',
'medium',
@ -1744,11 +1845,13 @@ describe('getModelSupportedReasoningEffortOptions', () => {
it('should return correct options for GPT-5.1 Codex models', () => {
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gpt-5.1-codex' }))).toEqual([
'default',
'none',
'medium',
'high'
])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gpt-5.1-codex-mini' }))).toEqual([
'default',
'none',
'medium',
'high'
@ -1758,58 +1861,77 @@ describe('getModelSupportedReasoningEffortOptions', () => {
describe('Grok models', () => {
it('should return correct options for Grok 3 mini', () => {
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'grok-3-mini' }))).toEqual(['low', 'high'])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'grok-3-mini' }))).toEqual([
'default',
'low',
'high'
])
})
it('should return correct options for Grok 4 Fast', () => {
expect(
getModelSupportedReasoningEffortOptions(createModel({ id: 'grok-4-fast', provider: 'openrouter' }))
).toEqual(['none', 'auto'])
).toEqual(['default', 'none', 'auto'])
})
})
describe('Gemini models', () => {
it('should return correct options for Gemini Flash models', () => {
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gemini-2.5-flash-latest' }))).toEqual([
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gemini-2.5-flash' }))).toEqual([
'default',
'none',
'low',
'medium',
'high',
'auto'
])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gemini-flash-latest' }))).toEqual([
'none',
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gemini-3-flash-preview' }))).toEqual([
'default',
'minimal',
'low',
'medium',
'high',
'auto'
'high'
])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gemini-flash-latest' }))).toEqual([
'default',
'minimal',
'low',
'medium',
'high'
])
})
it('should return correct options for Gemini Pro models', () => {
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gemini-2.5-pro-latest' }))).toEqual([
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gemini-2.5-pro' }))).toEqual([
'default',
'low',
'medium',
'high',
'auto'
])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gemini-pro-latest' }))).toEqual([
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gemini-3-pro-preview' }))).toEqual([
'default',
'low',
'medium',
'high',
'auto'
'high'
])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gemini-pro-latest' }))).toEqual([
'default',
'low',
'high'
])
})
it('should return correct options for Gemini 3 models', () => {
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gemini-3-flash' }))).toEqual([
'default',
'minimal',
'low',
'medium',
'high'
])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'gemini-3-pro-preview' }))).toEqual([
'default',
'low',
'medium',
'high'
])
})
@ -1818,24 +1940,28 @@ describe('getModelSupportedReasoningEffortOptions', () => {
describe('Qwen models', () => {
it('should return correct options for controllable Qwen models', () => {
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'qwen-plus' }))).toEqual([
'default',
'none',
'low',
'medium',
'high'
])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'qwen-turbo' }))).toEqual([
'default',
'none',
'low',
'medium',
'high'
])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'qwen-flash' }))).toEqual([
'default',
'none',
'low',
'medium',
'high'
])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'qwen3-8b' }))).toEqual([
'default',
'none',
'low',
'medium',
@ -1853,11 +1979,13 @@ describe('getModelSupportedReasoningEffortOptions', () => {
describe('Doubao models', () => {
it('should return correct options for auto-thinking Doubao models', () => {
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'doubao-seed-1.6' }))).toEqual([
'default',
'none',
'auto',
'high'
])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'doubao-1-5-thinking-pro-m' }))).toEqual([
'default',
'none',
'auto',
'high'
@ -1866,12 +1994,14 @@ describe('getModelSupportedReasoningEffortOptions', () => {
it('should return correct options for Doubao models after 251015', () => {
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'doubao-seed-1-6-251015' }))).toEqual([
'default',
'minimal',
'low',
'medium',
'high'
])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'doubao-seed-1-6-lite-251015' }))).toEqual([
'default',
'minimal',
'low',
'medium',
@ -1881,6 +2011,7 @@ describe('getModelSupportedReasoningEffortOptions', () => {
it('should return correct options for other Doubao thinking models', () => {
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'doubao-1.5-thinking-vision-pro' }))).toEqual([
'default',
'none',
'high'
])
@ -1889,28 +2020,43 @@ describe('getModelSupportedReasoningEffortOptions', () => {
describe('Other providers', () => {
it('should return correct options for Hunyuan models', () => {
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'hunyuan-a13b' }))).toEqual(['none', 'auto'])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'hunyuan-a13b' }))).toEqual([
'default',
'none',
'auto'
])
})
it('should return correct options for Zhipu models', () => {
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'glm-4.5' }))).toEqual(['none', 'auto'])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'glm-4.6' }))).toEqual(['none', 'auto'])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'glm-4.5' }))).toEqual([
'default',
'none',
'auto'
])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'glm-4.6' }))).toEqual([
'default',
'none',
'auto'
])
})
it('should return correct options for Perplexity models', () => {
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'sonar-deep-research' }))).toEqual(['medium'])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'sonar-deep-research' }))).toEqual([
'default',
'medium'
])
})
it('should return correct options for DeepSeek hybrid models', () => {
expect(
getModelSupportedReasoningEffortOptions(createModel({ id: 'deepseek-v3.1', provider: 'deepseek' }))
).toEqual(['none', 'auto'])
).toEqual(['default', 'none', 'auto'])
expect(
getModelSupportedReasoningEffortOptions(createModel({ id: 'deepseek-v3.2', provider: 'openrouter' }))
).toEqual(['none', 'auto'])
).toEqual(['default', 'none', 'auto'])
expect(
getModelSupportedReasoningEffortOptions(createModel({ id: 'deepseek-chat', provider: 'deepseek' }))
).toEqual(['none', 'auto'])
).toEqual(['default', 'none', 'auto'])
})
})
@ -1925,7 +2071,7 @@ describe('getModelSupportedReasoningEffortOptions', () => {
provider: 'openrouter'
})
)
).toEqual(['none', 'auto'])
).toEqual(['default', 'none', 'auto'])
expect(
getModelSupportedReasoningEffortOptions(
@ -1934,7 +2080,7 @@ describe('getModelSupportedReasoningEffortOptions', () => {
name: 'gpt-5.1'
})
)
).toEqual(['none', 'low', 'medium', 'high'])
).toEqual(['default', 'none', 'low', 'medium', 'high'])
// Qwen models work well for name-based fallback
expect(
@ -1944,7 +2090,7 @@ describe('getModelSupportedReasoningEffortOptions', () => {
name: 'qwen-plus'
})
)
).toEqual(['none', 'low', 'medium', 'high'])
).toEqual(['default', 'none', 'low', 'medium', 'high'])
})
it('should use id result when id matches', () => {
@ -1955,7 +2101,7 @@ describe('getModelSupportedReasoningEffortOptions', () => {
name: 'Different Name'
})
)
).toEqual(['none', 'low', 'medium', 'high'])
).toEqual(['default', 'none', 'low', 'medium', 'high'])
expect(
getModelSupportedReasoningEffortOptions(
@ -1964,20 +2110,27 @@ describe('getModelSupportedReasoningEffortOptions', () => {
name: 'Some other name'
})
)
).toEqual(['low', 'medium', 'high'])
).toEqual(['default', 'low', 'medium', 'high'])
})
})
describe('Case sensitivity', () => {
it('should handle case insensitive model IDs', () => {
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'GPT-5.1' }))).toEqual([
'default',
'none',
'low',
'medium',
'high'
])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'O3-MINI' }))).toEqual(['low', 'medium', 'high'])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'O3-MINI' }))).toEqual([
'default',
'low',
'medium',
'high'
])
expect(getModelSupportedReasoningEffortOptions(createModel({ id: 'Gemini-2.5-Flash-Latest' }))).toEqual([
'default',
'none',
'low',
'medium',
@ -2000,7 +2153,7 @@ describe('getModelSupportedReasoningEffortOptions', () => {
const geminiModel = createModel({ id: 'gemini-2.5-flash-latest' })
const geminiResult = getModelSupportedReasoningEffortOptions(geminiModel)
expect(geminiResult).toEqual(MODEL_SUPPORTED_OPTIONS.gemini)
expect(geminiResult).toEqual(MODEL_SUPPORTED_OPTIONS.gemini2_flash)
})
})
})

View File

@ -21,6 +21,8 @@ import {
getModelSupportedVerbosity,
groupQwenModels,
isAnthropicModel,
isGemini3FlashModel,
isGemini3ProModel,
isGeminiModel,
isGemmaModel,
isGenerateImageModels,
@ -462,6 +464,101 @@ describe('model utils', () => {
})
})
describe('isGemini3FlashModel', () => {
it('detects gemini-3-flash model', () => {
expect(isGemini3FlashModel(createModel({ id: 'gemini-3-flash' }))).toBe(true)
})
it('detects gemini-3-flash-preview model', () => {
expect(isGemini3FlashModel(createModel({ id: 'gemini-3-flash-preview' }))).toBe(true)
})
it('detects gemini-3-flash with version suffixes', () => {
expect(isGemini3FlashModel(createModel({ id: 'gemini-3-flash-latest' }))).toBe(true)
expect(isGemini3FlashModel(createModel({ id: 'gemini-3-flash-preview-09-2025' }))).toBe(true)
expect(isGemini3FlashModel(createModel({ id: 'gemini-3-flash-exp-1234' }))).toBe(true)
})
it('detects gemini-flash-latest alias', () => {
expect(isGemini3FlashModel(createModel({ id: 'gemini-flash-latest' }))).toBe(true)
expect(isGemini3FlashModel(createModel({ id: 'Gemini-Flash-Latest' }))).toBe(true)
})
it('detects gemini-3-flash with uppercase', () => {
expect(isGemini3FlashModel(createModel({ id: 'Gemini-3-Flash' }))).toBe(true)
expect(isGemini3FlashModel(createModel({ id: 'GEMINI-3-FLASH-PREVIEW' }))).toBe(true)
})
it('excludes gemini-3-flash-image models', () => {
expect(isGemini3FlashModel(createModel({ id: 'gemini-3-flash-image-preview' }))).toBe(false)
expect(isGemini3FlashModel(createModel({ id: 'gemini-3-flash-image' }))).toBe(false)
})
it('returns false for non-flash gemini-3 models', () => {
expect(isGemini3FlashModel(createModel({ id: 'gemini-3-pro' }))).toBe(false)
expect(isGemini3FlashModel(createModel({ id: 'gemini-3-pro-preview' }))).toBe(false)
expect(isGemini3FlashModel(createModel({ id: 'gemini-3-pro-image-preview' }))).toBe(false)
})
it('returns false for other gemini models', () => {
expect(isGemini3FlashModel(createModel({ id: 'gemini-2-flash' }))).toBe(false)
expect(isGemini3FlashModel(createModel({ id: 'gemini-2-flash-preview' }))).toBe(false)
expect(isGemini3FlashModel(createModel({ id: 'gemini-2.5-flash-preview-09-2025' }))).toBe(false)
})
it('returns false for null/undefined models', () => {
expect(isGemini3FlashModel(null)).toBe(false)
expect(isGemini3FlashModel(undefined)).toBe(false)
})
})
describe('isGemini3ProModel', () => {
it('detects gemini-3-pro model', () => {
expect(isGemini3ProModel(createModel({ id: 'gemini-3-pro' }))).toBe(true)
})
it('detects gemini-3-pro-preview model', () => {
expect(isGemini3ProModel(createModel({ id: 'gemini-3-pro-preview' }))).toBe(true)
})
it('detects gemini-3-pro with version suffixes', () => {
expect(isGemini3ProModel(createModel({ id: 'gemini-3-pro-latest' }))).toBe(true)
expect(isGemini3ProModel(createModel({ id: 'gemini-3-pro-preview-09-2025' }))).toBe(true)
expect(isGemini3ProModel(createModel({ id: 'gemini-3-pro-exp-1234' }))).toBe(true)
})
it('detects gemini-pro-latest alias', () => {
expect(isGemini3ProModel(createModel({ id: 'gemini-pro-latest' }))).toBe(true)
expect(isGemini3ProModel(createModel({ id: 'Gemini-Pro-Latest' }))).toBe(true)
})
it('detects gemini-3-pro with uppercase', () => {
expect(isGemini3ProModel(createModel({ id: 'Gemini-3-Pro' }))).toBe(true)
expect(isGemini3ProModel(createModel({ id: 'GEMINI-3-PRO-PREVIEW' }))).toBe(true)
})
it('excludes gemini-3-pro-image models', () => {
expect(isGemini3ProModel(createModel({ id: 'gemini-3-pro-image-preview' }))).toBe(false)
expect(isGemini3ProModel(createModel({ id: 'gemini-3-pro-image' }))).toBe(false)
expect(isGemini3ProModel(createModel({ id: 'gemini-3-pro-image-latest' }))).toBe(false)
})
it('returns false for non-pro gemini-3 models', () => {
expect(isGemini3ProModel(createModel({ id: 'gemini-3-flash' }))).toBe(false)
expect(isGemini3ProModel(createModel({ id: 'gemini-3-flash-preview' }))).toBe(false)
})
it('returns false for other gemini models', () => {
expect(isGemini3ProModel(createModel({ id: 'gemini-2-pro' }))).toBe(false)
expect(isGemini3ProModel(createModel({ id: 'gemini-2.5-pro-preview-09-2025' }))).toBe(false)
})
it('returns false for null/undefined models', () => {
expect(isGemini3ProModel(null)).toBe(false)
expect(isGemini3ProModel(undefined)).toBe(false)
})
})
describe('isZhipuModel', () => {
it('detects Zhipu models by provider', () => {
expect(isZhipuModel(createModel({ provider: 'zhipu' }))).toBe(true)

View File

@ -362,7 +362,7 @@ export const SYSTEM_MODELS: Record<SystemProviderId | 'defaultModel', Model[]> =
{
id: 'gemini-3-pro-image-preview',
provider: 'gemini',
name: 'Gemini 3 Pro Image Privew',
name: 'Gemini 3 Pro Image Preview',
group: 'Gemini 3'
},
{
@ -746,6 +746,12 @@ export const SYSTEM_MODELS: Record<SystemProviderId | 'defaultModel', Model[]> =
}
],
doubao: [
{
id: 'doubao-seed-1-8-251215',
provider: 'doubao',
name: 'Doubao-Seed-1.8',
group: 'Doubao-Seed-1.8'
},
{
id: 'doubao-1-5-vision-pro-32k-250115',
provider: 'doubao',
@ -1785,5 +1791,13 @@ export const SYSTEM_MODELS: Record<SystemProviderId | 'defaultModel', Model[]> =
provider: 'cerebras',
group: 'qwen'
}
],
mimo: [
{
id: 'mimo-v2-flash',
name: 'Mimo V2 Flash',
provider: 'mimo',
group: 'Mimo'
}
]
}

View File

@ -103,6 +103,7 @@ import MicrosoftModelLogo from '@renderer/assets/images/models/microsoft.png'
import MicrosoftModelLogoDark from '@renderer/assets/images/models/microsoft_dark.png'
import MidjourneyModelLogo from '@renderer/assets/images/models/midjourney.png'
import MidjourneyModelLogoDark from '@renderer/assets/images/models/midjourney_dark.png'
import MiMoModelLogo from '@renderer/assets/images/models/mimo.svg'
import {
default as MinicpmModelLogo,
default as MinicpmModelLogoDark
@ -301,7 +302,8 @@ export function getModelLogoById(modelId: string): string | undefined {
bytedance: BytedanceModelLogo,
ling: LingModelLogo,
ring: LingModelLogo,
'(V_1|V_1_TURBO|V_2|V_2A|V_2_TURBO|DESCRIBE|UPSCALE)': IdeogramModelLogo
'(V_1|V_1_TURBO|V_2|V_2A|V_2_TURBO|DESCRIBE|UPSCALE)': IdeogramModelLogo,
mimo: MiMoModelLogo
} as const satisfies Record<string, string>
for (const key in logoMap) {

View File

@ -20,7 +20,7 @@ import {
isOpenAIReasoningModel,
isSupportedReasoningEffortOpenAIModel
} from './openai'
import { GEMINI_FLASH_MODEL_REGEX, isGemini3ThinkingTokenModel } from './utils'
import { GEMINI_FLASH_MODEL_REGEX, isGemini3FlashModel, isGemini3ProModel } from './utils'
import { isTextToImageModel } from './vision'
// Reasoning models
@ -43,15 +43,17 @@ export const MODEL_SUPPORTED_REASONING_EFFORT = {
gpt52pro: ['medium', 'high', 'xhigh'] as const,
grok: ['low', 'high'] as const,
grok4_fast: ['auto'] as const,
gemini: ['low', 'medium', 'high', 'auto'] as const,
gemini3: ['low', 'medium', 'high'] as const,
gemini_pro: ['low', 'medium', 'high', 'auto'] as const,
gemini2_flash: ['low', 'medium', 'high', 'auto'] as const,
gemini2_pro: ['low', 'medium', 'high', 'auto'] as const,
gemini3_flash: ['minimal', 'low', 'medium', 'high'] as const,
gemini3_pro: ['low', 'high'] as const,
qwen: ['low', 'medium', 'high'] as const,
qwen_thinking: ['low', 'medium', 'high'] as const,
doubao: ['auto', 'high'] as const,
doubao_no_auto: ['high'] as const,
doubao_after_251015: ['minimal', 'low', 'medium', 'high'] as const,
hunyuan: ['auto'] as const,
mimo: ['auto'] as const,
zhipu: ['auto'] as const,
perplexity: ['low', 'medium', 'high'] as const,
deepseek_hybrid: ['auto'] as const
@ -59,31 +61,33 @@ export const MODEL_SUPPORTED_REASONING_EFFORT = {
// Mapping from model type to supported thinking options
export const MODEL_SUPPORTED_OPTIONS: ThinkingOptionConfig = {
default: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.default] as const,
o: MODEL_SUPPORTED_REASONING_EFFORT.o,
openai_deep_research: MODEL_SUPPORTED_REASONING_EFFORT.openai_deep_research,
gpt5: [...MODEL_SUPPORTED_REASONING_EFFORT.gpt5] as const,
gpt5pro: MODEL_SUPPORTED_REASONING_EFFORT.gpt5pro,
gpt5_codex: MODEL_SUPPORTED_REASONING_EFFORT.gpt5_codex,
gpt5_1: MODEL_SUPPORTED_REASONING_EFFORT.gpt5_1,
gpt5_1_codex: MODEL_SUPPORTED_REASONING_EFFORT.gpt5_1_codex,
gpt5_2: MODEL_SUPPORTED_REASONING_EFFORT.gpt5_2,
gpt5_1_codex_max: MODEL_SUPPORTED_REASONING_EFFORT.gpt5_1_codex_max,
gpt52pro: MODEL_SUPPORTED_REASONING_EFFORT.gpt52pro,
grok: MODEL_SUPPORTED_REASONING_EFFORT.grok,
grok4_fast: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.grok4_fast] as const,
gemini: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.gemini] as const,
gemini_pro: MODEL_SUPPORTED_REASONING_EFFORT.gemini_pro,
gemini3: MODEL_SUPPORTED_REASONING_EFFORT.gemini3,
qwen: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.qwen] as const,
qwen_thinking: MODEL_SUPPORTED_REASONING_EFFORT.qwen_thinking,
doubao: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.doubao] as const,
doubao_no_auto: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.doubao_no_auto] as const,
doubao_after_251015: MODEL_SUPPORTED_REASONING_EFFORT.doubao_after_251015,
hunyuan: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.hunyuan] as const,
zhipu: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.zhipu] as const,
perplexity: MODEL_SUPPORTED_REASONING_EFFORT.perplexity,
deepseek_hybrid: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.deepseek_hybrid] as const
default: ['default', 'none', ...MODEL_SUPPORTED_REASONING_EFFORT.default] as const,
o: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.o] as const,
openai_deep_research: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.openai_deep_research] as const,
gpt5: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.gpt5] as const,
gpt5pro: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.gpt5pro] as const,
gpt5_codex: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.gpt5_codex] as const,
gpt5_1: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.gpt5_1] as const,
gpt5_1_codex: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.gpt5_1_codex] as const,
gpt5_2: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.gpt5_2] as const,
gpt5_1_codex_max: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.gpt5_1_codex_max] as const,
gpt52pro: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.gpt52pro] as const,
grok: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.grok] as const,
grok4_fast: ['default', 'none', ...MODEL_SUPPORTED_REASONING_EFFORT.grok4_fast] as const,
gemini2_flash: ['default', 'none', ...MODEL_SUPPORTED_REASONING_EFFORT.gemini2_flash] as const,
gemini2_pro: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.gemini2_pro] as const,
gemini3_flash: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.gemini3_flash] as const,
gemini3_pro: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.gemini3_pro] as const,
qwen: ['default', 'none', ...MODEL_SUPPORTED_REASONING_EFFORT.qwen] as const,
qwen_thinking: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.qwen_thinking] as const,
doubao: ['default', 'none', ...MODEL_SUPPORTED_REASONING_EFFORT.doubao] as const,
doubao_no_auto: ['default', 'none', ...MODEL_SUPPORTED_REASONING_EFFORT.doubao_no_auto] as const,
doubao_after_251015: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.doubao_after_251015] as const,
mimo: ['default', 'none', ...MODEL_SUPPORTED_REASONING_EFFORT.mimo] as const,
hunyuan: ['default', 'none', ...MODEL_SUPPORTED_REASONING_EFFORT.hunyuan] as const,
zhipu: ['default', 'none', ...MODEL_SUPPORTED_REASONING_EFFORT.zhipu] as const,
perplexity: ['default', ...MODEL_SUPPORTED_REASONING_EFFORT.perplexity] as const,
deepseek_hybrid: ['default', 'none', ...MODEL_SUPPORTED_REASONING_EFFORT.deepseek_hybrid] as const
} as const
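// Quick sanity check of the derived option lists (values follow directly from the
// two tables above; 'default' is always first and means no thinking configuration
// is sent to the provider):
//   MODEL_SUPPORTED_OPTIONS.gemini3_flash -> ['default', 'minimal', 'low', 'medium', 'high']
//   MODEL_SUPPORTED_OPTIONS.gemini3_pro   -> ['default', 'low', 'high']
//   MODEL_SUPPORTED_OPTIONS.mimo          -> ['default', 'none', 'auto']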
const withModelIdAndNameAsId = <T>(model: Model, fn: (model: Model) => T): { idResult: T; nameResult: T } => {
@ -100,8 +104,7 @@ const _getThinkModelType = (model: Model): ThinkingModelType => {
const modelId = getLowerBaseModelName(model.id)
if (isOpenAIDeepResearchModel(model)) {
return 'openai_deep_research'
}
if (isGPT51SeriesModel(model)) {
} else if (isGPT51SeriesModel(model)) {
if (modelId.includes('codex')) {
thinkingModelType = 'gpt5_1_codex'
if (isGPT51CodexMaxModel(model)) {
@ -129,16 +132,18 @@ const _getThinkModelType = (model: Model): ThinkingModelType => {
} else if (isGrok4FastReasoningModel(model)) {
thinkingModelType = 'grok4_fast'
} else if (isSupportedThinkingTokenGeminiModel(model)) {
if (GEMINI_FLASH_MODEL_REGEX.test(model.id)) {
thinkingModelType = 'gemini'
if (isGemini3FlashModel(model)) {
thinkingModelType = 'gemini3_flash'
} else if (isGemini3ProModel(model)) {
thinkingModelType = 'gemini3_pro'
} else if (GEMINI_FLASH_MODEL_REGEX.test(model.id)) {
thinkingModelType = 'gemini2_flash'
} else {
thinkingModelType = 'gemini_pro'
thinkingModelType = 'gemini2_pro'
}
if (isGemini3ThinkingTokenModel(model)) {
thinkingModelType = 'gemini3'
}
} else if (isSupportedReasoningEffortGrokModel(model)) thinkingModelType = 'grok'
else if (isSupportedThinkingTokenQwenModel(model)) {
} else if (isSupportedReasoningEffortGrokModel(model)) {
thinkingModelType = 'grok'
} else if (isSupportedThinkingTokenQwenModel(model)) {
if (isQwenAlwaysThinkModel(model)) {
thinkingModelType = 'qwen_thinking'
}
@ -146,15 +151,22 @@ const _getThinkModelType = (model: Model): ThinkingModelType => {
} else if (isSupportedThinkingTokenDoubaoModel(model)) {
if (isDoubaoThinkingAutoModel(model)) {
thinkingModelType = 'doubao'
} else if (isDoubaoSeedAfter251015(model)) {
} else if (isDoubaoSeedAfter251015(model) || isDoubaoSeed18Model(model)) {
thinkingModelType = 'doubao_after_251015'
} else {
thinkingModelType = 'doubao_no_auto'
}
} else if (isSupportedThinkingTokenHunyuanModel(model)) thinkingModelType = 'hunyuan'
else if (isSupportedReasoningEffortPerplexityModel(model)) thinkingModelType = 'perplexity'
else if (isSupportedThinkingTokenZhipuModel(model)) thinkingModelType = 'zhipu'
else if (isDeepSeekHybridInferenceModel(model)) thinkingModelType = 'deepseek_hybrid'
} else if (isSupportedThinkingTokenHunyuanModel(model)) {
thinkingModelType = 'hunyuan'
} else if (isSupportedReasoningEffortPerplexityModel(model)) {
thinkingModelType = 'perplexity'
} else if (isSupportedThinkingTokenZhipuModel(model)) {
thinkingModelType = 'zhipu'
} else if (isDeepSeekHybridInferenceModel(model)) {
thinkingModelType = 'deepseek_hybrid'
} else if (isSupportedThinkingTokenMiMoModel(model)) {
thinkingModelType = 'mimo'
}
return thinkingModelType
}
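// Note on ordering: the Gemini 3 Flash/Pro checks run before the generic
// GEMINI_FLASH_MODEL_REGEX fallback, so the moving aliases gemini-flash-latest and
// gemini-pro-latest resolve to gemini3_flash / gemini3_pro (see the unit tests above).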
@ -191,20 +203,28 @@ const _getModelSupportedReasoningEffortOptions = (model: Model): ReasoningEffort
* - The model is null/undefined
* - The model doesn't support reasoning effort or thinking tokens
*
* All reasoning models support the 'default' option (always the first element),
* which represents no additional configuration for thinking behavior.
*
* @example
* // OpenAI o-series models support low, medium, high
* // OpenAI o-series models support default, low, medium, high
* getModelSupportedReasoningEffortOptions({ id: 'o3-mini', ... })
* // Returns: ['low', 'medium', 'high']
* // Returns: ['default', 'low', 'medium', 'high']
* // 'default' = no additional configuration for thinking behavior
*
* @example
* // GPT-5.1 models support none, low, medium, high
* // GPT-5.1 models support default, none, low, medium, high
* getModelSupportedReasoningEffortOptions({ id: 'gpt-5.1', ... })
* // Returns: ['none', 'low', 'medium', 'high']
* // Returns: ['default', 'none', 'low', 'medium', 'high']
* // 'default' = no additional configuration
* // 'none' = explicitly disable reasoning
*
* @example
* // Gemini Flash models support none, low, medium, high, auto
* // Gemini Flash models support default, none, low, medium, high, auto
* getModelSupportedReasoningEffortOptions({ id: 'gemini-2.5-flash-latest', ... })
* // Returns: ['none', 'low', 'medium', 'high', 'auto']
* // Returns: ['default', 'none', 'low', 'medium', 'high', 'auto']
* // 'default' = no additional configuration
* // 'auto' = let the model automatically decide
*
* @example
* // Non-reasoning models return undefined
@ -214,7 +234,7 @@ const _getModelSupportedReasoningEffortOptions = (model: Model): ReasoningEffort
* @example
* // Name fallback when id doesn't match
* getModelSupportedReasoningEffortOptions({ id: 'custom-id', name: 'gpt-5.1', ... })
* // Returns: ['none', 'low', 'medium', 'high']
* // Returns: ['default', 'none', 'low', 'medium', 'high']
*/
export const getModelSupportedReasoningEffortOptions = (
model: Model | undefined | null
@ -255,7 +275,8 @@ function _isSupportedThinkingTokenModel(model: Model): boolean {
isSupportedThinkingTokenClaudeModel(model) ||
isSupportedThinkingTokenDoubaoModel(model) ||
isSupportedThinkingTokenHunyuanModel(model) ||
isSupportedThinkingTokenZhipuModel(model)
isSupportedThinkingTokenZhipuModel(model) ||
isSupportedThinkingTokenMiMoModel(model)
)
}
@ -449,7 +470,7 @@ export function isQwenAlwaysThinkModel(model?: Model): boolean {
// Regex for Doubao models that support thinking mode
export const DOUBAO_THINKING_MODEL_REGEX =
/doubao-(?:1[.-]5-thinking-vision-pro|1[.-]5-thinking-pro-m|seed-1[.-]6(?:-flash)?(?!-(?:thinking)(?:-|$))|seed-code(?:-preview)?(?:-\d+)?)(?:-[\w-]+)*/i
/doubao-(?:1[.-]5-thinking-vision-pro|1[.-]5-thinking-pro-m|seed-1[.-][68](?:-flash)?(?!-(?:thinking)(?:-|$))|seed-code(?:-preview)?(?:-\d+)?)(?:-[\w-]+)*/i
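// The seed-1[.-][68] branch accepts either separator, so doubao-seed-1.6,
// doubao-seed-1-6-flash and doubao-seed-1-8-251215 all match.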
// Doubao models that support auto thinking: doubao-seed-1.6-xxx, doubao-seed-1-6-xxx, doubao-1-5-thinking-pro-m-xxx
// Auto thinking is no longer supported after version 251015, see https://console.volcengine.com/ark/region:ark+cn-beijing/model/detail?Id=doubao-seed-1-6
@ -467,6 +488,11 @@ export function isDoubaoSeedAfter251015(model: Model): boolean {
return result
}
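// Matches Doubao-Seed-1.8 ids and names with either separator,
// e.g. doubao-seed-1.8 and doubao-seed-1-8-251215 (see the unit tests above).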
export function isDoubaoSeed18Model(model: Model): boolean {
const pattern = /doubao-seed-1[.-]8(?:-[\w-]+)?/i
return pattern.test(model.id) || pattern.test(model.name)
}
export function isSupportedThinkingTokenDoubaoModel(model?: Model): boolean {
if (!model) {
return false
@ -548,6 +574,11 @@ export const isSupportedThinkingTokenZhipuModel = (model: Model): boolean => {
return ['glm-4.5', 'glm-4.6'].some((id) => modelId.includes(id))
}
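// MiMo currently ships a single thinking-capable model, mimo-v2-flash
// (see SYSTEM_MODELS.mimo); the list form leaves room for future ids.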
export const isSupportedThinkingTokenMiMoModel = (model: Model): boolean => {
const modelId = getLowerBaseModelName(model.id, '/')
return ['mimo-v2-flash'].some((id) => modelId.includes(id))
}
export const isDeepSeekHybridInferenceModel = (model: Model) => {
const { idResult, nameResult } = withModelIdAndNameAsId(model, (model) => {
const modelId = getLowerBaseModelName(model.id)
@ -586,6 +617,8 @@ export const isZhipuReasoningModel = (model?: Model): boolean => {
return isSupportedThinkingTokenZhipuModel(model) || modelId.includes('glm-z1')
}
export const isMiMoReasoningModel = isSupportedThinkingTokenMiMoModel
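// For MiMo the two notions coincide: every thinking-token MiMo model is also a
// reasoning model, so a plain alias is enough.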
export const isStepReasoningModel = (model?: Model): boolean => {
if (!model) {
return false
@ -636,6 +669,7 @@ export function isReasoningModel(model?: Model): boolean {
isDeepSeekHybridInferenceModel(model) ||
isLingReasoningModel(model) ||
isMiniMaxReasoningModel(model) ||
isMiMoReasoningModel(model) ||
modelId.includes('magistral') ||
modelId.includes('pangu-pro-moe') ||
modelId.includes('seed-oss') ||

View File

@ -27,12 +27,13 @@ export const FUNCTION_CALLING_MODELS = [
'learnlm(?:-[\\w-]+)?',
'gemini(?:-[\\w-]+)?', // Gemini embedding models are excluded in advance
'grok-3(?:-[\\w-]+)?',
'doubao-seed-1[.-]6(?:-[\\w-]+)?',
'doubao-seed-1[.-][68](?:-[\\w-]+)?',
'doubao-seed-code(?:-[\\w-]+)?',
'kimi-k2(?:-[\\w-]+)?',
'ling-\\w+(?:-[\\w-]+)?',
'ring-\\w+(?:-[\\w-]+)?',
'minimax-m2'
'minimax-m2',
'mimo-v2-flash'
] as const
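// Entries are regex sources, so 'doubao-seed-1[.-][68](?:-[\\w-]+)?' covers both the
// seed-1.6 and seed-1.8 lines with dot or dash separators (assuming the list is
// compiled case-insensitively, as the other model regexes here appear to be).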
const FUNCTION_CALLING_EXCLUDED_MODELS = [

View File

@ -282,3 +282,43 @@ export const isGemini3ThinkingTokenModel = (model: Model) => {
const modelId = getLowerBaseModelName(model.id)
return isGemini3Model(model) && !modelId.includes('image')
}
/**
* Check if the model is a Gemini 3 Flash model
* Matches: gemini-3-flash, gemini-3-flash-preview, gemini-3-flash-preview-09-2025, gemini-flash-latest (alias)
* Excludes: gemini-3-flash-image-preview
* @param model - The model to check
* @returns true if the model is a Gemini 3 Flash model
*/
export const isGemini3FlashModel = (model: Model | undefined | null): boolean => {
if (!model) {
return false
}
const modelId = getLowerBaseModelName(model.id)
// Check for gemini-flash-latest alias (currently points to gemini-3-flash, may change in future)
if (modelId === 'gemini-flash-latest') {
return true
}
// Check for gemini-3-flash with optional suffixes, excluding image variants
return /gemini-3-flash(?!-image)(?:-[\w-]+)*$/i.test(modelId)
}
/**
* Check if the model is a Gemini 3 Pro model
* Matches: gemini-3-pro, gemini-3-pro-preview, gemini-3-pro-preview-09-2025, gemini-pro-latest (alias)
* Excludes: gemini-3-pro-image-preview
* @param model - The model to check
* @returns true if the model is a Gemini 3 Pro model
*/
export const isGemini3ProModel = (model: Model | undefined | null): boolean => {
if (!model) {
return false
}
const modelId = getLowerBaseModelName(model.id)
// Check for gemini-pro-latest alias (currently points to gemini-3-pro, may change in future)
if (modelId === 'gemini-pro-latest') {
return true
}
// Check for gemini-3-pro with optional suffixes, excluding image variants
return /gemini-3-pro(?!-image)(?:-[\w-]+)*$/i.test(modelId)
}
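// Examples, mirroring the unit tests in this change:
//   isGemini3FlashModel(createModel({ id: 'gemini-3-flash-preview' })) -> true
//   isGemini3FlashModel(createModel({ id: 'gemini-3-flash-image' }))   -> false (image variant)
//   isGemini3ProModel(createModel({ id: 'gemini-pro-latest' }))        -> true (alias)
//   isGemini3ProModel(createModel({ id: 'gemini-2.5-pro-latest' }))    -> false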

View File

@ -45,7 +45,7 @@ const visionAllowedModels = [
'deepseek-vl(?:[\\w-]+)?',
'kimi-latest',
'gemma-3(?:-[\\w-]+)',
'doubao-seed-1[.-]6(?:-[\\w-]+)?',
'doubao-seed-1[.-][68](?:-[\\w-]+)?',
'doubao-seed-code(?:-[\\w-]+)?',
'kimi-thinking-preview',
`gemma3(?:[-:\\w]+)?`,

View File

@ -31,6 +31,7 @@ import JinaProviderLogo from '@renderer/assets/images/providers/jina.png'
import LanyunProviderLogo from '@renderer/assets/images/providers/lanyun.png'
import LMStudioProviderLogo from '@renderer/assets/images/providers/lmstudio.png'
import LongCatProviderLogo from '@renderer/assets/images/providers/longcat.png'
import MiMoProviderLogo from '@renderer/assets/images/providers/mimo.svg'
import MinimaxProviderLogo from '@renderer/assets/images/providers/minimax.png'
import MistralProviderLogo from '@renderer/assets/images/providers/mistral.png'
import ModelScopeProviderLogo from '@renderer/assets/images/providers/modelscope.png'
@ -695,6 +696,17 @@ export const SYSTEM_PROVIDERS_CONFIG: Record<SystemProviderId, SystemProvider> =
models: SYSTEM_MODELS.cerebras,
isSystem: true,
enabled: false
},
mimo: {
id: 'mimo',
name: 'Xiaomi MiMo',
type: 'openai',
apiKey: '',
apiHost: 'https://api.xiaomimimo.com',
anthropicApiHost: 'https://api.xiaomimimo.com/anthropic',
models: SYSTEM_MODELS.mimo,
isSystem: true,
enabled: false
}
} as const
@ -763,7 +775,8 @@ export const PROVIDER_LOGO_MAP: AtLeast<SystemProviderId, string> = {
huggingface: HuggingfaceProviderLogo,
sophnet: SophnetProviderLogo,
gateway: AIGatewayProviderLogo,
cerebras: CerebrasProviderLogo
cerebras: CerebrasProviderLogo,
mimo: MiMoProviderLogo
} as const
export function getProviderLogo(providerId: string) {
@ -1434,5 +1447,16 @@ export const PROVIDER_URLS: Record<SystemProviderId, ProviderUrls> = {
docs: 'https://inference-docs.cerebras.ai/introduction',
models: 'https://inference-docs.cerebras.ai/models/overview'
}
},
mimo: {
api: {
url: 'https://api.xiaomimimo.com'
},
websites: {
official: 'https://platform.xiaomimimo.com/',
apiKey: 'https://platform.xiaomimimo.com/#/console/usage',
docs: 'https://platform.xiaomimimo.com/#/docs/welcome',
models: 'https://platform.xiaomimimo.com/'
}
}
}

View File

@ -5,7 +5,7 @@
*/
import { loggerService } from '@logger'
import type { AgentType, BuiltinMCPServerName, BuiltinOcrProviderId, ThinkingOption } from '@renderer/types'
import type { AgentType, BuiltinMCPServerName, BuiltinOcrProviderId } from '@renderer/types'
import { BuiltinMCPServerNames } from '@renderer/types'
import i18n from './index'
@ -88,7 +88,8 @@ const providerKeyMap = {
huggingface: 'provider.huggingface',
sophnet: 'provider.sophnet',
gateway: 'provider.ai-gateway',
cerebras: 'provider.cerebras'
cerebras: 'provider.cerebras',
mimo: 'provider.mimo'
} as const
/**
@ -310,20 +311,6 @@ export const getHttpMessageLabel = (key: string): string => {
return getLabel(httpMessageKeyMap, key)
}
const reasoningEffortOptionsKeyMap: Record<ThinkingOption, string> = {
none: 'assistants.settings.reasoning_effort.off',
minimal: 'assistants.settings.reasoning_effort.minimal',
high: 'assistants.settings.reasoning_effort.high',
low: 'assistants.settings.reasoning_effort.low',
medium: 'assistants.settings.reasoning_effort.medium',
auto: 'assistants.settings.reasoning_effort.default',
xhigh: 'assistants.settings.reasoning_effort.xhigh'
} as const
export const getReasoningEffortOptionsLabel = (key: string): string => {
return getLabel(reasoningEffortOptionsKeyMap, key)
}
const fileFieldKeyMap = {
created_at: 'files.created_at',
size: 'files.size',
@ -344,7 +331,8 @@ const builtInMcpDescriptionKeyMap: Record<BuiltinMCPServerName, string> = {
[BuiltinMCPServerNames.difyKnowledge]: 'settings.mcp.builtinServersDescriptions.dify_knowledge',
[BuiltinMCPServerNames.python]: 'settings.mcp.builtinServersDescriptions.python',
[BuiltinMCPServerNames.didiMCP]: 'settings.mcp.builtinServersDescriptions.didi_mcp',
[BuiltinMCPServerNames.browser]: 'settings.mcp.builtinServersDescriptions.browser'
[BuiltinMCPServerNames.browser]: 'settings.mcp.builtinServersDescriptions.browser',
[BuiltinMCPServerNames.nowledgeMem]: 'settings.mcp.builtinServersDescriptions.nowledge_mem'
} as const
export const getBuiltInMcpServerDescriptionLabel = (key: string): string => {

View File

@ -32,6 +32,7 @@
},
"gitBash": {
"autoDetected": "Using auto-detected Git Bash",
"autoDiscoveredHint": "Auto-discovered",
"clear": {
"button": "Clear custom path"
},
@ -39,6 +40,7 @@
"error": {
"description": "Git Bash is required to run agents on Windows. The agent cannot function without it. Please install Git for Windows from",
"recheck": "Recheck Git Bash Installation",
"required": "Git Bash path is required on Windows",
"title": "Git Bash Required"
},
"found": {
@ -51,7 +53,9 @@
"invalidPath": "Selected file is not a valid Git Bash executable (bash.exe).",
"title": "Select Git Bash executable"
},
"success": "Git Bash detected successfully!"
"placeholder": "Select bash.exe path",
"success": "Git Bash detected successfully!",
"tooltip": "Git Bash is required to run agents on Windows. Install from git-scm.com if not available."
},
"input": {
"placeholder": "Enter your message here, send with {{key}} - @ select path, / select command"
@ -544,14 +548,23 @@
"more": "Assistant Settings",
"prompt": "Prompt Settings",
"reasoning_effort": {
"auto": "Auto",
"auto_description": "Flexibly determine reasoning effort",
"default": "Default",
"default_description": "Depend on the model's default behavior, without any configuration.",
"high": "High",
"high_description": "High level reasoning",
"label": "Reasoning effort",
"low": "Low",
"low_description": "Low level reasoning",
"medium": "Medium",
"medium_description": "Medium level reasoning",
"minimal": "Minimal",
"minimal_description": "Minimal reasoning",
"off": "Off",
"xhigh": "Extra High"
"off_description": "Disable reasoning",
"xhigh": "Extra High",
"xhigh_description": "Extra high level reasoning"
},
"regular_phrases": {
"add": "Add Phrase",
@ -2630,6 +2643,7 @@
"lanyun": "LANYUN",
"lmstudio": "LM Studio",
"longcat": "LongCat AI",
"mimo": "Xiaomi MiMo",
"minimax": "MiniMax",
"mistral": "Mistral",
"modelscope": "ModelScope",
@ -3926,6 +3940,7 @@
"mcp_auto_install": "Automatically install MCP service (beta)",
"memory": "Persistent memory implementation based on a local knowledge graph. This enables the model to remember user-related information across different conversations. Requires configuring the MEMORY_FILE_PATH environment variable.",
"no": "No description",
"nowledge_mem": "Requires Nowledge Mem app running locally. Keeps AI chats, tools, notes, agents, and files in private memory on your computer. Download from https://mem.nowledge.co/",
"python": "Execute Python code in a secure sandbox environment. Run Python with Pyodide, supporting most standard libraries and scientific computing packages",
"sequentialthinking": "A MCP server implementation that provides tools for dynamic and reflective problem solving through structured thinking processes"
},

View File

@ -32,6 +32,7 @@
},
"gitBash": {
"autoDetected": "使用自动检测的 Git Bash",
"autoDiscoveredHint": "自动发现",
"clear": {
"button": "清除自定义路径"
},
@ -39,6 +40,7 @@
"error": {
"description": "在 Windows 上运行智能体需要 Git Bash。没有它智能体无法运行。请从以下地址安装 Git for Windows",
"recheck": "重新检测 Git Bash 安装",
"required": "在 Windows 上需要配置 Git Bash 路径",
"title": "需要 Git Bash"
},
"found": {
@ -51,7 +53,9 @@
"invalidPath": "选择的文件不是有效的 Git Bash 可执行文件bash.exe。",
"title": "选择 Git Bash 可执行文件"
},
"success": "成功检测到 Git Bash"
"placeholder": "选择 bash.exe 路径",
"success": "成功检测到 Git Bash",
"tooltip": "在 Windows 上运行智能体需要 Git Bash。如果未安装请从 git-scm.com 下载安装。"
},
"input": {
"placeholder": "在这里输入消息,按 {{key}} 发送 - @ 选择路径, / 选择命令"
@ -544,14 +548,23 @@
"more": "助手设置",
"prompt": "提示词设置",
"reasoning_effort": {
"auto": "自动",
"auto_description": "灵活决定推理力度",
"default": "默认",
"default_description": "依赖模型默认行为,不作任何配置",
"high": "沉思",
"high_description": "高强度推理",
"label": "思维链长度",
"low": "浮想",
"low_description": "低强度推理",
"medium": "斟酌",
"medium_description": "中强度推理",
"minimal": "微念",
"minimal_description": "最小程度的推理",
"off": "关闭",
"xhigh": "穷究"
"off_description": "禁用推理",
"xhigh": "穷究",
"xhigh_description": "超高强度推理"
},
"regular_phrases": {
"add": "添加短语",
@ -2630,6 +2643,7 @@
"lanyun": "蓝耘科技",
"lmstudio": "LM Studio",
"longcat": "龙猫",
"mimo": "Xiaomi MiMo",
"minimax": "MiniMax",
"mistral": "Mistral",
"modelscope": "ModelScope 魔搭",
@ -3926,6 +3940,7 @@
"mcp_auto_install": "自动安装 MCP 服务(测试版)",
"memory": "基于本地知识图谱的持久性记忆基础实现。这使得模型能够在不同对话间记住用户的相关信息。需要配置 MEMORY_FILE_PATH 环境变量。",
"no": "无描述",
"nowledge_mem": "需要本地运行 Nowledge Mem 应用。将 AI 对话、工具、笔记、智能体和文件保存在本地计算机的私有记忆中。请从 https://mem.nowledge.co/ 下载",
"python": "在安全的沙盒环境中执行 Python 代码。使用 Pyodide 运行 Python支持大多数标准库和科学计算包",
"sequentialthinking": "一个 MCP 服务器实现,提供了通过结构化思维过程进行动态和反思性问题解决的工具"
},

View File

@ -32,6 +32,7 @@
},
"gitBash": {
"autoDetected": "使用自動偵測的 Git Bash",
"autoDiscoveredHint": "自動發現",
"clear": {
"button": "清除自訂路徑"
},
@ -39,6 +40,7 @@
"error": {
"description": "在 Windows 上執行 Agent 需要 Git Bash。沒有它 Agent 無法運作。請從以下網址安裝 Git for Windows",
"recheck": "重新偵測 Git Bash 安裝",
"required": "在 Windows 上需要設定 Git Bash 路徑",
"title": "需要 Git Bash"
},
"found": {
@ -51,7 +53,9 @@
"invalidPath": "選擇的檔案不是有效的 Git Bash 可執行檔bash.exe。",
"title": "選擇 Git Bash 可執行檔"
},
"success": "成功偵測到 Git Bash"
"placeholder": "選擇 bash.exe 路徑",
"success": "成功偵測到 Git Bash",
"tooltip": "在 Windows 上執行 Agent 需要 Git Bash。如未安裝請從 git-scm.com 下載安裝。"
},
"input": {
"placeholder": "在這裡輸入您的訊息,使用 {{key}} 傳送 - @ 選擇路徑,/ 選擇命令"
@ -544,14 +548,23 @@
"more": "助手設定",
"prompt": "提示詞設定",
"reasoning_effort": {
"auto": "自動",
"auto_description": "彈性決定推理投入的心力",
"default": "預設",
"default_description": "依賴模型的預設行為,無需任何配置。",
"high": "盡力思考",
"high_description": "高級推理",
"label": "思維鏈長度",
"low": "稍微思考",
"low_description": "低階推理",
"medium": "正常思考",
"medium_description": "中等程度推理",
"minimal": "最少思考",
"minimal_description": "最少推理",
"off": "關閉",
"xhigh": "極力思考"
"off_description": "禁用推理",
"xhigh": "極力思考",
"xhigh_description": "超高階推理"
},
"regular_phrases": {
"add": "新增短語",
@ -2630,6 +2643,7 @@
"lanyun": "藍耘",
"lmstudio": "LM Studio",
"longcat": "龍貓",
"mimo": "[to be translated]:Xiaomi MiMo",
"minimax": "MiniMax",
"mistral": "Mistral",
"modelscope": "ModelScope 魔搭",
@ -3926,6 +3940,7 @@
"mcp_auto_install": "自動安裝 MCP 服務(測試版)",
"memory": "基於本機知識圖譜的持久性記憶基礎實做。這使得模型能夠在不同對話間記住使用者的相關資訊。需要設定 MEMORY_FILE_PATH 環境變數。",
"no": "無描述",
"nowledge_mem": "需要本機執行 Nowledge Mem 應用程式。將 AI 對話、工具、筆記、代理和檔案保存在電腦上的私人記憶體中。請從 https://mem.nowledge.co/ 下載",
"python": "在安全的沙盒環境中執行 Python 程式碼。使用 Pyodide 執行 Python支援大多數標準函式庫和科學計算套件",
"sequentialthinking": "一個 MCP 伺服器實做,提供了透過結構化思維過程進行動態和反思性問題解決的工具"
},

View File

@ -32,6 +32,7 @@
},
"gitBash": {
"autoDetected": "Automatisch ermitteltes Git Bash wird verwendet",
"autoDiscoveredHint": "[to be translated]:Auto-discovered",
"clear": {
"button": "Benutzerdefinierten Pfad löschen"
},
@ -39,6 +40,7 @@
"error": {
"description": "Git Bash ist erforderlich, um Agents unter Windows auszuführen. Der Agent kann ohne es nicht funktionieren. Bitte installieren Sie Git für Windows von",
"recheck": "Überprüfe die Git Bash-Installation erneut",
"required": "[to be translated]:Git Bash path is required on Windows",
"title": "Git Bash erforderlich"
},
"found": {
@ -51,7 +53,9 @@
"invalidPath": "Die ausgewählte Datei ist keine gültige Git Bash ausführbare Datei (bash.exe).",
"title": "Git Bash ausführbare Datei auswählen"
},
"success": "Git Bash erfolgreich erkannt!"
"placeholder": "[to be translated]:Select bash.exe path",
"success": "Git Bash erfolgreich erkannt!",
"tooltip": "[to be translated]:Git Bash is required to run agents on Windows. Install from git-scm.com if not available."
},
"input": {
"placeholder": "Gib hier deine Nachricht ein, senden mit {{key}} @ Pfad auswählen, / Befehl auswählen"
@ -544,14 +548,23 @@
"more": "Assistenteneinstellungen",
"prompt": "Prompt-Einstellungen",
"reasoning_effort": {
"auto": "Auto",
"auto_description": "Denkaufwand flexibel bestimmen",
"default": "Standard",
"default_description": "Vom Standardverhalten des Modells abhängen, ohne Konfiguration.",
"high": "Tiefes Nachdenken",
"high_description": "Ganzheitliches Denken",
"label": "Gedankenkettenlänge",
"low": "Spontan",
"low_description": "Geringfügige Argumentation",
"medium": "Überlegt",
"medium_description": "Denken auf mittlerem Niveau",
"minimal": "Minimal",
"minimal_description": "Minimales Denken",
"off": "Aus",
"xhigh": "Extra hoch"
"off_description": "Denken deaktivieren",
"xhigh": "Extra hoch",
"xhigh_description": "Extra hohes Denkvermögen"
},
"regular_phrases": {
"add": "Phrase hinzufügen",
@ -2630,6 +2643,7 @@
"lanyun": "Lanyun Technologie",
"lmstudio": "LM Studio",
"longcat": "Meißner Riesenhamster",
"mimo": "[to be translated]:Xiaomi MiMo",
"minimax": "MiniMax",
"mistral": "Mistral",
"modelscope": "ModelScope",
@ -3926,6 +3940,7 @@
"mcp_auto_install": "MCP-Service automatisch installieren (Beta-Version)",
"memory": "MCP-Server mit persistenter Erinnerungsbasis auf lokalem Wissensgraphen, der Informationen über verschiedene Dialoge hinweg speichert. MEMORY_FILE_PATH-Umgebungsvariable muss konfiguriert werden",
"no": "Keine Beschreibung",
"nowledge_mem": "Erfordert lokal laufende Nowledge Mem App. Speichert KI-Chats, Tools, Notizen, Agenten und Dateien in einem privaten Speicher auf Ihrem Computer. Download unter https://mem.nowledge.co/",
"python": "Python-Code in einem sicheren Sandbox-Umgebung ausführen. Verwendung von Pyodide für Python, Unterstützung für die meisten Standardbibliotheken und wissenschaftliche Pakete",
"sequentialthinking": "MCP-Server-Implementierung mit strukturiertem Denkprozess, der dynamische und reflektierende Problemlösungen ermöglicht"
},

View File

@ -32,6 +32,7 @@
},
"gitBash": {
"autoDetected": "Χρησιμοποιείται αυτόματα εντοπισμένο Git Bash",
"autoDiscoveredHint": "[to be translated]:Auto-discovered",
"clear": {
"button": "Διαγραφή προσαρμοσμένης διαδρομής"
},
@ -39,6 +40,7 @@
"error": {
"description": "Το Git Bash απαιτείται για την εκτέλεση πρακτόρων στα Windows. Ο πράκτορας δεν μπορεί να λειτουργήσει χωρίς αυτό. Παρακαλούμε εγκαταστήστε το Git για Windows από",
"recheck": "Επανέλεγχος Εγκατάστασης του Git Bash",
"required": "[to be translated]:Git Bash path is required on Windows",
"title": "Απαιτείται Git Bash"
},
"found": {
@ -51,7 +53,9 @@
"invalidPath": "Το επιλεγμένο αρχείο δεν είναι έγκυρο εκτελέσιμο Git Bash (bash.exe).",
"title": "Επιλογή εκτελέσιμου Git Bash"
},
"success": "Το Git Bash εντοπίστηκε με επιτυχία!"
"placeholder": "[to be translated]:Select bash.exe path",
"success": "Το Git Bash εντοπίστηκε με επιτυχία!",
"tooltip": "[to be translated]:Git Bash is required to run agents on Windows. Install from git-scm.com if not available."
},
"input": {
"placeholder": "Εισάγετε το μήνυμά σας εδώ, στείλτε με {{key}} - @ επιλέξτε διαδρομή, / επιλέξτε εντολή"
@ -544,14 +548,23 @@
"more": "Ρυθμίσεις Βοηθού",
"prompt": "Ρυθμίσεις προκαλύμματος",
"reasoning_effort": {
"auto": "Αυτοκίνητο",
"auto_description": "Ευέλικτος καθορισμός της προσπάθειας συλλογισμού",
"default": "Προεπιλογή",
"default_description": "Εξαρτηθείτε από την προεπιλεγμένη συμπεριφορά του μοντέλου, χωρίς καμία διαμόρφωση.",
"high": "Μεγάλο",
"high_description": "Υψηλού επιπέδου συλλογισμός",
"label": "Μήκος λογισμικού αλυσίδας",
"low": "Μικρό",
"low_description": "Χαμηλού επιπέδου συλλογιστική",
"medium": "Μεσαίο",
"medium_description": "Αιτιολόγηση μεσαίου επιπέδου",
"minimal": "ελάχιστος",
"minimal_description": "Ελάχιστος συλλογισμός",
"off": "Απενεργοποίηση",
"xhigh": "Εξαιρετικά Υψηλή"
"off_description": "Απενεργοποίηση λογικής",
"xhigh": "Εξαιρετικά Υψηλή",
"xhigh_description": "Εξαιρετικά υψηλού επιπέδου συλλογισμός"
},
"regular_phrases": {
"add": "Προσθήκη φράσης",
@ -2630,6 +2643,7 @@
"lanyun": "Λανιούν Τεχνολογία",
"lmstudio": "LM Studio",
"longcat": "Τσίρο",
"mimo": "[to be translated]:Xiaomi MiMo",
"minimax": "MiniMax",
"mistral": "Mistral",
"modelscope": "ModelScope Magpie",
@ -3926,6 +3940,7 @@
"mcp_auto_install": "Αυτόματη εγκατάσταση υπηρεσίας MCP (προβολή)",
"memory": "Βασική υλοποίηση μόνιμης μνήμης με βάση τοπικό γράφημα γνώσης. Αυτό επιτρέπει στο μοντέλο να θυμάται πληροφορίες σχετικές με τον χρήστη ανάμεσα σε διαφορετικές συνομιλίες. Απαιτείται η ρύθμιση της μεταβλητής περιβάλλοντος MEMORY_FILE_PATH.",
"no": "Χωρίς περιγραφή",
"nowledge_mem": "[to be translated]:Requires Nowledge Mem app running locally. Keeps AI chats, tools, notes, agents, and files in private memory on your computer. Download from https://mem.nowledge.co/",
"python": "Εκτελέστε κώδικα Python σε ένα ασφαλές περιβάλλον sandbox. Χρησιμοποιήστε το Pyodide για να εκτελέσετε Python, υποστηρίζοντας την πλειονότητα των βιβλιοθηκών της τυπικής βιβλιοθήκης και των πακέτων επιστημονικού υπολογισμού",
"sequentialthinking": "ένας εξυπηρετητής MCP που υλοποιείται, παρέχοντας εργαλεία για δυναμική και αναστοχαστική επίλυση προβλημάτων μέσω δομημένων διαδικασιών σκέψης"
},

View File

@ -32,6 +32,7 @@
},
"gitBash": {
"autoDetected": "Usando Git Bash detectado automáticamente",
"autoDiscoveredHint": "[to be translated]:Auto-discovered",
"clear": {
"button": "Borrar ruta personalizada"
},
@ -39,6 +40,7 @@
"error": {
"description": "Se requiere Git Bash para ejecutar agentes en Windows. El agente no puede funcionar sin él. Instale Git para Windows desde",
"recheck": "Volver a verificar la instalación de Git Bash",
"required": "[to be translated]:Git Bash path is required on Windows",
"title": "Git Bash Requerido"
},
"found": {
@ -51,7 +53,9 @@
"invalidPath": "El archivo seleccionado no es un ejecutable válido de Git Bash (bash.exe).",
"title": "Seleccionar ejecutable de Git Bash"
},
"success": "¡Git Bash detectado con éxito!"
"placeholder": "[to be translated]:Select bash.exe path",
"success": "¡Git Bash detectado con éxito!",
"tooltip": "[to be translated]:Git Bash is required to run agents on Windows. Install from git-scm.com if not available."
},
"input": {
"placeholder": "Introduce tu mensaje aquí, envía con {{key}} - @ seleccionar ruta, / seleccionar comando"
@ -544,14 +548,23 @@
"more": "Configuración del Asistente",
"prompt": "Configuración de Palabras Clave",
"reasoning_effort": {
"auto": "Automóvil",
"auto_description": "Determinar flexiblemente el esfuerzo de razonamiento",
"default": "Por defecto",
"default_description": "Depender del comportamiento predeterminado del modelo, sin ninguna configuración.",
"high": "Largo",
"high_description": "Razonamiento de alto nivel",
"label": "Longitud de Cadena de Razonamiento",
"low": "Corto",
"low_description": "Razonamiento de bajo nivel",
"medium": "Medio",
"medium_description": "Razonamiento de nivel medio",
"minimal": "minimal",
"minimal_description": "Razonamiento mínimo",
"off": "Apagado",
"xhigh": "Extra Alta"
"off_description": "Deshabilitar razonamiento",
"xhigh": "Extra Alta",
"xhigh_description": "Razonamiento de extra alto nivel"
},
"regular_phrases": {
"add": "Agregar frase",
@ -2630,6 +2643,7 @@
"lanyun": "Tecnología Lanyun",
"lmstudio": "Estudio LM",
"longcat": "Totoro",
"mimo": "[to be translated]:Xiaomi MiMo",
"minimax": "Minimax",
"mistral": "Mistral",
"modelscope": "ModelScope Módulo",
@ -3926,6 +3940,7 @@
"mcp_auto_install": "Instalación automática del servicio MCP (versión beta)",
"memory": "Implementación básica de memoria persistente basada en un grafo de conocimiento local. Esto permite que el modelo recuerde información relevante del usuario entre diferentes conversaciones. Es necesario configurar la variable de entorno MEMORY_FILE_PATH.",
"no": "sin descripción",
"nowledge_mem": "[to be translated]:Requires Nowledge Mem app running locally. Keeps AI chats, tools, notes, agents, and files in private memory on your computer. Download from https://mem.nowledge.co/",
"python": "Ejecuta código Python en un entorno sandbox seguro. Usa Pyodide para ejecutar Python, compatible con la mayoría de las bibliotecas estándar y paquetes de cálculo científico.",
"sequentialthinking": "Una implementación de servidor MCP que proporciona herramientas para la resolución dinámica y reflexiva de problemas mediante un proceso de pensamiento estructurado"
},

View File

@ -32,6 +32,7 @@
},
"gitBash": {
"autoDetected": "Utilisation de Git Bash détecté automatiquement",
"autoDiscoveredHint": "[to be translated]:Auto-discovered",
"clear": {
"button": "Effacer le chemin personnalisé"
},
@ -39,6 +40,7 @@
"error": {
"description": "Git Bash est requis pour exécuter des agents sur Windows. L'agent ne peut pas fonctionner sans. Veuillez installer Git pour Windows depuis",
"recheck": "Revérifier l'installation de Git Bash",
"required": "[to be translated]:Git Bash path is required on Windows",
"title": "Git Bash requis"
},
"found": {
@ -51,7 +53,9 @@
"invalidPath": "Le fichier sélectionné n'est pas un exécutable Git Bash valide (bash.exe).",
"title": "Sélectionner l'exécutable Git Bash"
},
"success": "Git Bash détecté avec succès !"
"placeholder": "[to be translated]:Select bash.exe path",
"success": "Git Bash détecté avec succès !",
"tooltip": "[to be translated]:Git Bash is required to run agents on Windows. Install from git-scm.com if not available."
},
"input": {
"placeholder": "Entrez votre message ici, envoyez avec {{key}} - @ sélectionner le chemin, / sélectionner la commande"
@ -544,14 +548,23 @@
"more": "Paramètres de l'assistant",
"prompt": "Paramètres de l'invite",
"reasoning_effort": {
"auto": "Auto",
"auto_description": "Déterminer de manière flexible l'effort de raisonnement",
"default": "Par défaut",
"default_description": "Dépendre du comportement par défaut du modèle, sans aucune configuration.",
"high": "Long",
"high_description": "Raisonnement de haut niveau",
"label": "Longueur de la chaîne de raisonnement",
"low": "Court",
"low_description": "Raisonnement de bas niveau",
"medium": "Moyen",
"medium_description": "Raisonnement de niveau moyen",
"minimal": "minimal",
"minimal_description": "Réflexion minimale",
"off": "Off",
"xhigh": "Très élevée"
"off_description": "Désactiver le raisonnement",
"xhigh": "Très élevée",
"xhigh_description": "Raisonnement de très haut niveau"
},
"regular_phrases": {
"add": "Добавить фразу",
@ -2630,6 +2643,7 @@
"lanyun": "Technologie Lan Yun",
"lmstudio": "Studio LM",
"longcat": "Mon voisin Totoro",
"mimo": "[to be translated]:Xiaomi MiMo",
"minimax": "MiniMax",
"mistral": "Mistral",
"modelscope": "ModelScope MoDa",
@ -3926,6 +3940,7 @@
"mcp_auto_install": "Installation automatique du service MCP (version bêta)",
"memory": "Implémentation de base de mémoire persistante basée sur un graphe de connaissances local. Cela permet au modèle de se souvenir des informations relatives à l'utilisateur entre différentes conversations. Nécessite la configuration de la variable d'environnement MEMORY_FILE_PATH.",
"no": "sans description",
"nowledge_mem": "[to be translated]:Requires Nowledge Mem app running locally. Keeps AI chats, tools, notes, agents, and files in private memory on your computer. Download from https://mem.nowledge.co/",
"python": "Exécutez du code Python dans un environnement bac à sable sécurisé. Utilisez Pyodide pour exécuter Python, prenant en charge la plupart des bibliothèques standard et des packages de calcul scientifique.",
"sequentialthinking": "Un serveur MCP qui fournit des outils permettant une résolution dynamique et réflexive des problèmes à travers un processus de pensée structuré"
},

View File

@ -32,6 +32,7 @@
},
"gitBash": {
"autoDetected": "自動検出されたGit Bashを使用中",
"autoDiscoveredHint": "[to be translated]:Auto-discovered",
"clear": {
"button": "カスタムパスをクリア"
},
@ -39,6 +40,7 @@
"error": {
"description": "Windowsでエージェントを実行するにはGit Bashが必要です。これがないとエージェントは動作しません。以下からGit for Windowsをインストールしてください。",
"recheck": "Git Bashのインストールを再確認してください",
"required": "[to be translated]:Git Bash path is required on Windows",
"title": "Git Bashが必要です"
},
"found": {
@ -51,7 +53,9 @@
"invalidPath": "選択されたファイルは有効なGit Bash実行ファイルbash.exeではありません。",
"title": "Git Bash実行ファイルを選択"
},
"success": "Git Bashが正常に検出されました"
"placeholder": "[to be translated]:Select bash.exe path",
"success": "Git Bashが正常に検出されました",
"tooltip": "[to be translated]:Git Bash is required to run agents on Windows. Install from git-scm.com if not available."
},
"input": {
"placeholder": "メッセージをここに入力し、{{key}}で送信 - @でパスを選択、/でコマンドを選択"
@ -544,14 +548,23 @@
"more": "アシスタント設定",
"prompt": "プロンプト設定",
"reasoning_effort": {
"auto": "自動",
"auto_description": "推論にかける労力を柔軟に調整する",
"default": "デフォルト",
"default_description": "設定なしで、モデルの既定の動作に依存する。",
"high": "最大限の思考",
"high_description": "高度な推論",
"label": "思考連鎖の長さ",
"low": "少しの思考",
"low_description": "低レベル推論",
"medium": "普通の思考",
"medium_description": "中レベル推論",
"minimal": "最小限の思考",
"minimal_description": "最小限の推論",
"off": "オフ",
"xhigh": "超高"
"off_description": "推論を無効にする",
"xhigh": "超高",
"xhigh_description": "超高度な推論"
},
"regular_phrases": {
"add": "プロンプトを追加",
@ -2630,6 +2643,7 @@
"lanyun": "LANYUN",
"lmstudio": "LM Studio",
"longcat": "トトロ",
"mimo": "[to be translated]:Xiaomi MiMo",
"minimax": "MiniMax",
"mistral": "Mistral",
"modelscope": "ModelScope",
@ -3926,6 +3940,7 @@
"mcp_auto_install": "MCPサービスの自動インストールベータ版",
"memory": "ローカルのナレッジグラフに基づく永続的なメモリの基本的な実装です。これにより、モデルは異なる会話間でユーザーの関連情報を記憶できるようになります。MEMORY_FILE_PATH 環境変数の設定が必要です。",
"no": "説明なし",
"nowledge_mem": "Nowledge Mem アプリをローカルで実行する必要があります。AI チャット、ツール、ート、エージェント、ファイルをコンピューター上のプライベートメモリに保存します。https://mem.nowledge.co/ からダウンロードしてください",
"python": "安全なサンドボックス環境でPythonコードを実行します。Pyodideを使用してPythonを実行し、ほとんどの標準ライブラリと科学計算パッケージをサポートしています。",
"sequentialthinking": "構造化された思考プロセスを通じて動的かつ反省的な問題解決を行うためのツールを提供するMCPサーバーの実装"
},

View File

@ -32,6 +32,7 @@
},
"gitBash": {
"autoDetected": "Usando Git Bash detectado automaticamente",
"autoDiscoveredHint": "[to be translated]:Auto-discovered",
"clear": {
"button": "Limpar caminho personalizado"
},
@ -39,6 +40,7 @@
"error": {
"description": "O Git Bash é necessário para executar agentes no Windows. O agente não pode funcionar sem ele. Por favor, instale o Git para Windows a partir de",
"recheck": "Reverificar a Instalação do Git Bash",
"required": "[to be translated]:Git Bash path is required on Windows",
"title": "Git Bash Necessário"
},
"found": {
@ -51,7 +53,9 @@
"invalidPath": "O arquivo selecionado não é um executável válido do Git Bash (bash.exe).",
"title": "Selecionar executável do Git Bash"
},
"success": "Git Bash detectado com sucesso!"
"placeholder": "[to be translated]:Select bash.exe path",
"success": "Git Bash detectado com sucesso!",
"tooltip": "[to be translated]:Git Bash is required to run agents on Windows. Install from git-scm.com if not available."
},
"input": {
"placeholder": "Digite sua mensagem aqui, envie com {{key}} - @ selecionar caminho, / selecionar comando"
@ -544,14 +548,23 @@
"more": "Configurações do Assistente",
"prompt": "Configurações de Prompt",
"reasoning_effort": {
"auto": "Automóvel",
"auto_description": "Determinar flexivelmente o esforço de raciocínio",
"default": "Padrão",
"default_description": "Depender do comportamento padrão do modelo, sem qualquer configuração.",
"high": "Longo",
"high_description": "Raciocínio de alto nível",
"label": "Comprimento da Cadeia de Raciocínio",
"low": "Curto",
"low_description": "Raciocínio de baixo nível",
"medium": "Médio",
"medium_description": "Raciocínio de nível médio",
"minimal": "mínimo",
"minimal_description": "Raciocínio mínimo",
"off": "Desligado",
"xhigh": "Extra Alta"
"off_description": "Desabilitar raciocínio",
"xhigh": "Extra Alta",
"xhigh_description": "Raciocínio de altíssimo nível"
},
"regular_phrases": {
"add": "Adicionar Frase",
@ -2630,6 +2643,7 @@
"lanyun": "Lanyun Tecnologia",
"lmstudio": "Estúdio LM",
"longcat": "Totoro",
"mimo": "[to be translated]:Xiaomi MiMo",
"minimax": "Minimax",
"mistral": "Mistral",
"modelscope": "ModelScope MôDá",
@ -3926,6 +3940,7 @@
"mcp_auto_install": "Instalação automática do serviço MCP (beta)",
"memory": "Implementação base de memória persistente baseada em grafos de conhecimento locais. Isso permite que o modelo lembre informações relevantes do utilizador entre diferentes conversas. É necessário configurar a variável de ambiente MEMORY_FILE_PATH.",
"no": "sem descrição",
"nowledge_mem": "Requer a aplicação Nowledge Mem em execução localmente. Mantém conversas de IA, ferramentas, notas, agentes e ficheiros numa memória privada no seu computador. Transfira de https://mem.nowledge.co/",
"python": "Executar código Python num ambiente sandbox seguro. Utilizar Pyodide para executar Python, suportando a maioria das bibliotecas padrão e pacotes de computação científica",
"sequentialthinking": "Uma implementação de servidor MCP que fornece ferramentas para resolução dinâmica e reflexiva de problemas através de um processo de pensamento estruturado"
},

View File

@ -32,6 +32,7 @@
},
"gitBash": {
"autoDetected": "Используется автоматически обнаруженный Git Bash",
"autoDiscoveredHint": "[to be translated]:Auto-discovered",
"clear": {
"button": "Очистить пользовательский путь"
},
@ -39,6 +40,7 @@
"error": {
"description": "Для запуска агентов в Windows требуется Git Bash. Без него агент не может работать. Пожалуйста, установите Git для Windows с",
"recheck": "Повторная проверка установки Git Bash",
"required": "[to be translated]:Git Bash path is required on Windows",
"title": "Требуется Git Bash"
},
"found": {
@ -51,7 +53,9 @@
"invalidPath": "Выбранный файл не является допустимым исполняемым файлом Git Bash (bash.exe).",
"title": "Выберите исполняемый файл Git Bash"
},
"success": "Git Bash успешно обнаружен!"
"placeholder": "[to be translated]:Select bash.exe path",
"success": "Git Bash успешно обнаружен!",
"tooltip": "[to be translated]:Git Bash is required to run agents on Windows. Install from git-scm.com if not available."
},
"input": {
"placeholder": "Введите ваше сообщение здесь, отправьте с помощью {{key}} — @ выбрать путь, / выбрать команду"
@ -544,14 +548,23 @@
"more": "Настройки ассистента",
"prompt": "Настройки промптов",
"reasoning_effort": {
"auto": "Авто",
"auto_description": "Гибко определяйте усилие на рассуждение",
"default": "По умолчанию",
"default_description": "Полагаться на поведение модели по умолчанию, без какой-либо конфигурации.",
"high": "Стараюсь думать",
"high_description": "Высокоуровневое рассуждение",
"label": "Настройки размышлений",
"low": "Меньше думать",
"low_description": "Низкоуровневое рассуждение",
"medium": "Среднее",
"medium_description": "Средний уровень рассуждения",
"minimal": "минимальный",
"minimal_description": "Минимальное рассуждение",
"off": "Выключить",
"xhigh": "Сверхвысокое"
"off_description": "Отключить рассуждение",
"xhigh": "Сверхвысокое",
"xhigh_description": "Высочайший уровень рассуждений"
},
"regular_phrases": {
"add": "Добавить подсказку",
@ -2630,6 +2643,7 @@
"lanyun": "LANYUN",
"lmstudio": "LM Studio",
"longcat": "Тоторо",
"mimo": "[to be translated]:Xiaomi MiMo",
"minimax": "MiniMax",
"mistral": "Mistral",
"modelscope": "ModelScope",
@ -3926,6 +3940,7 @@
"mcp_auto_install": "Автоматическая установка службы MCP (бета-версия)",
"memory": "реализация постоянной памяти на основе локального графа знаний. Это позволяет модели запоминать информацию о пользователе между различными диалогами. Требуется настроить переменную среды MEMORY_FILE_PATH.",
"no": "без описания",
"nowledge_mem": "Требуется запущенное локально приложение Nowledge Mem. Хранит чаты ИИ, инструменты, заметки, агентов и файлы в приватной памяти на вашем компьютере. Скачать можно на https://mem.nowledge.co/",
"python": "Выполняйте код Python в безопасной песочнице. Запускайте Python с помощью Pyodide, поддерживается большинство стандартных библиотек и пакетов для научных вычислений",
"sequentialthinking": "MCP серверная реализация, предоставляющая инструменты для динамического и рефлексивного решения проблем посредством структурированного мыслительного процесса"
},

View File

@ -23,6 +23,7 @@ import { abortCompletion } from '@renderer/utils/abortController'
import { buildAgentSessionTopicId } from '@renderer/utils/agentSession'
import { getSendMessageShortcutLabel } from '@renderer/utils/input'
import { createMainTextBlock, createMessage } from '@renderer/utils/messageUtils/create'
import { parseModelId } from '@renderer/utils/model'
import { documentExts, imageExts, textExts } from '@shared/config/constant'
import type { FC } from 'react'
import React, { useCallback, useEffect, useMemo, useRef } from 'react'
@ -67,8 +68,9 @@ const AgentSessionInputbar: FC<Props> = ({ agentId, sessionId }) => {
if (!session) return null
// Extract model info
const [providerId, actualModelId] = session.model?.split(':') ?? [undefined, undefined]
const actualModel = actualModelId ? getModel(actualModelId, providerId) : undefined
// Use parseModelId to handle model IDs with colons (e.g., "openrouter:anthropic/claude:free")
const parsed = parseModelId(session.model)
const actualModel = parsed ? getModel(parsed.modelId, parsed.providerId) : undefined
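// e.g. 'openrouter:anthropic/claude:free'.split(':') destructures to ['openrouter', 'anthropic/claude'],
// silently dropping the ':free' suffix; parseModelId instead splits only on the first colon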
const model: Model | undefined = actualModel
? {

View File

@ -6,7 +6,8 @@ import {
MdiLightbulbOn30,
MdiLightbulbOn50,
MdiLightbulbOn80,
MdiLightbulbOn90,
MdiLightbulbQuestion
} from '@renderer/components/Icons/SVGIcon'
import { QuickPanelReservedSymbol, useQuickPanel } from '@renderer/components/QuickPanel'
import {
@ -18,7 +19,6 @@ import {
MODEL_SUPPORTED_OPTIONS
} from '@renderer/config/models'
import { useAssistant } from '@renderer/hooks/useAssistant'
import { getReasoningEffortOptionsLabel } from '@renderer/i18n/label'
import type { ToolQuickPanelApi } from '@renderer/pages/home/Inputbar/types'
import type { Model, ThinkingOption } from '@renderer/types'
import { Tooltip } from 'antd'
@ -88,19 +88,48 @@ const ThinkingButton: FC<Props> = ({ quickPanel, model, assistantId }): ReactEle
[updateAssistantSettings, assistant.enableWebSearch, model, t]
)
const reasoningEffortOptionLabelMap = {
default: t('assistants.settings.reasoning_effort.default'),
none: t('assistants.settings.reasoning_effort.off'),
minimal: t('assistants.settings.reasoning_effort.minimal'),
high: t('assistants.settings.reasoning_effort.high'),
low: t('assistants.settings.reasoning_effort.low'),
medium: t('assistants.settings.reasoning_effort.medium'),
auto: t('assistants.settings.reasoning_effort.auto'),
xhigh: t('assistants.settings.reasoning_effort.xhigh')
} as const satisfies Record<ThinkingOption, string>
const reasoningEffortDescriptionMap = {
default: t('assistants.settings.reasoning_effort.default_description'),
none: t('assistants.settings.reasoning_effort.off_description'),
minimal: t('assistants.settings.reasoning_effort.minimal_description'),
low: t('assistants.settings.reasoning_effort.low_description'),
medium: t('assistants.settings.reasoning_effort.medium_description'),
high: t('assistants.settings.reasoning_effort.high_description'),
xhigh: t('assistants.settings.reasoning_effort.xhigh_description'),
auto: t('assistants.settings.reasoning_effort.auto_description')
} as const satisfies Record<ThinkingOption, string>
const panelItems = useMemo(() => {
// Create UI options from the options defined in the table
return supportedOptions.map((option) => ({
level: option,
label: getReasoningEffortOptionsLabel(option),
description: '',
label: reasoningEffortOptionLabelMap[option],
description: reasoningEffortDescriptionMap[option],
icon: ThinkingIcon({ option }),
isSelected: currentReasoningEffort === option,
action: () => onThinkingChange(option)
}))
}, [currentReasoningEffort, supportedOptions, onThinkingChange])
}, [
supportedOptions,
reasoningEffortOptionLabelMap,
reasoningEffortDescriptionMap,
currentReasoningEffort,
onThinkingChange
])
const isThinkingEnabled =
currentReasoningEffort !== undefined && currentReasoningEffort !== 'none' && currentReasoningEffort !== 'default'
const disableThinking = useCallback(() => {
onThinkingChange('none')
@ -197,8 +226,9 @@ const ThinkingIcon = (props: { option?: ThinkingOption; isFixedReasoning?: boole
case 'none':
IconComponent = MdiLightbulbOffOutline
break
case 'default':
default:
IconComponent = MdiLightbulbOffOutline
IconComponent = MdiLightbulbQuestion
break
}
}

View File

@ -61,9 +61,14 @@ const BuiltinMCPServerList: FC = () => {
{getMcpTypeLabel(server.type ?? 'stdio')}
</Tag>
{server?.shouldConfig && (
<Tag color="warning" style={{ borderRadius: 20, margin: 0, fontWeight: 500 }}>
{t('settings.mcp.requiresConfig')}
</Tag>
<a
href="https://docs.cherry-ai.com/advanced-basic/mcp/buildin"
target="_blank"
rel="noopener noreferrer">
<Tag color="warning" style={{ borderRadius: 20, margin: 0, fontWeight: 500 }}>
{t('settings.mcp.requiresConfig')}
</Tag>
</a>
)}
</ServerFooter>
</ServerCard>

View File

@ -81,6 +81,7 @@ const ANTHROPIC_COMPATIBLE_PROVIDER_IDS = [
SystemProviderIds.silicon,
SystemProviderIds.qiniu,
SystemProviderIds.dmxapi,
SystemProviderIds.mimo,
SystemProviderIds.ppio
] as const
type AnthropicCompatibleProviderId = (typeof ANTHROPIC_COMPATIBLE_PROVIDER_IDS)[number]

View File

@ -34,6 +34,10 @@ import {
getProviderByModel,
getQuickModel
} from './AssistantService'
import { ConversationService } from './ConversationService'
import { injectUserMessageWithKnowledgeSearchPrompt } from './KnowledgeService'
import type { BlockManager } from './messageStreaming'
import type { StreamProcessorCallbacks } from './StreamProcessingService'
// import { processKnowledgeSearch } from './KnowledgeService'
// import {
// filterContextMessages,
@ -79,6 +83,59 @@ export async function fetchMcpTools(assistant: Assistant) {
return mcpTools
}
/**
* Transform messages into a format the LLM can understand and send the request
* @param request -
* @param onChunkReceived -
*/
// Written as a plain function for now; switch back to a class later if one is needed
export async function transformMessagesAndFetch(
request: {
messages: Message[]
assistant: Assistant
blockManager: BlockManager
assistantMsgId: string
callbacks: StreamProcessorCallbacks
topicId?: string // topicId is included for tracing
options: {
signal?: AbortSignal
timeout?: number
headers?: Record<string, string>
}
},
onChunkReceived: (chunk: Chunk) => void
) {
const { messages, assistant } = request
try {
const { modelMessages, uiMessages } = await ConversationService.prepareMessagesForModel(messages, assistant)
// replace prompt variables
assistant.prompt = await replacePromptVariables(assistant.prompt, assistant.model?.name)
// inject knowledge search prompt into model messages
await injectUserMessageWithKnowledgeSearchPrompt({
modelMessages,
assistant,
assistantMsgId: request.assistantMsgId,
topicId: request.topicId,
blockManager: request.blockManager,
setCitationBlockId: request.callbacks.setCitationBlockId!
})
await fetchChatCompletion({
messages: modelMessages,
assistant: assistant,
topicId: request.topicId,
requestOptions: request.options,
uiMessages,
onChunkReceived
})
} catch (error: any) {
onChunkReceived({ type: ChunkType.ERROR, error })
}
}
export async function fetchChatCompletion({
messages,
prompt,

View File

@ -38,7 +38,8 @@ export const DEFAULT_ASSISTANT_SETTINGS = {
enableTopP: false,
// It would gracefully fallback to prompt if not supported by model.
toolUseMode: 'function',
customParameters: []
customParameters: [],
reasoning_effort: 'default'
} as const satisfies AssistantSettings
export function getDefaultAssistant(): Assistant {
@ -186,7 +187,7 @@ export const getAssistantSettings = (assistant: Assistant): AssistantSettings =>
streamOutput: assistant?.settings?.streamOutput ?? true,
toolUseMode: assistant?.settings?.toolUseMode ?? 'function',
defaultModel: assistant?.defaultModel ?? undefined,
reasoning_effort: assistant?.settings?.reasoning_effort ?? undefined,
reasoning_effort: assistant?.settings?.reasoning_effort ?? 'default',
customParameters: assistant?.settings?.customParameters ?? []
}
}

View File

@ -2,10 +2,13 @@ import { loggerService } from '@logger'
import type { Span } from '@opentelemetry/api'
import { ModernAiProvider } from '@renderer/aiCore'
import AiProvider from '@renderer/aiCore/legacy'
import { getMessageContent } from '@renderer/aiCore/plugins/searchOrchestrationPlugin'
import { DEFAULT_KNOWLEDGE_DOCUMENT_COUNT, DEFAULT_KNOWLEDGE_THRESHOLD } from '@renderer/config/constant'
import { getEmbeddingMaxContext } from '@renderer/config/embedings'
import { REFERENCE_PROMPT } from '@renderer/config/prompts'
import { addSpan, endSpan } from '@renderer/services/SpanManagerService'
import store from '@renderer/store'
import type { Assistant } from '@renderer/types'
import {
type FileMetadata,
type KnowledgeBase,
@ -16,13 +19,17 @@ import {
} from '@renderer/types'
import type { Chunk } from '@renderer/types/chunk'
import { ChunkType } from '@renderer/types/chunk'
import { MessageBlockStatus, MessageBlockType } from '@renderer/types/newMessage'
import { routeToEndpoint } from '@renderer/utils'
import type { ExtractResults } from '@renderer/utils/extract'
import { createCitationBlock } from '@renderer/utils/messageUtils/create'
import { isAzureOpenAIProvider, isGeminiProvider } from '@renderer/utils/provider'
import type { ModelMessage, UserModelMessage } from 'ai'
import { isEmpty } from 'lodash'
import { getProviderByModel } from './AssistantService'
import FileManager from './FileManager'
import type { BlockManager } from './messageStreaming'
const logger = loggerService.withContext('RendererKnowledgeService')
@ -338,3 +345,128 @@ export function processKnowledgeReferences(
}
}
}
export const injectUserMessageWithKnowledgeSearchPrompt = async ({
modelMessages,
assistant,
assistantMsgId,
topicId,
blockManager,
setCitationBlockId
}: {
modelMessages: ModelMessage[]
assistant: Assistant
assistantMsgId: string
topicId?: string
blockManager: BlockManager
setCitationBlockId: (blockId: string) => void
}) => {
if (assistant.knowledge_bases?.length && modelMessages.length > 0) {
const lastUserMessage = modelMessages[modelMessages.length - 1]
const isUserMessage = lastUserMessage.role === 'user'
if (!isUserMessage) {
return
}
const knowledgeReferences = await getKnowledgeReferences({
assistant,
lastUserMessage,
topicId: topicId
})
if (knowledgeReferences.length === 0) {
return
}
await createKnowledgeReferencesBlock({
assistantMsgId,
knowledgeReferences,
blockManager,
setCitationBlockId
})
const question = getMessageContent(lastUserMessage) || ''
const references = JSON.stringify(knowledgeReferences, null, 2)
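// REFERENCE_PROMPT is a template with {question} and {references} slots; filling them turns the
// user's question plus the serialized references into a single grounded prompt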
const knowledgeSearchPrompt = REFERENCE_PROMPT.replace('{question}', question).replace('{references}', references)
if (typeof lastUserMessage.content === 'string') {
lastUserMessage.content = knowledgeSearchPrompt
} else if (Array.isArray(lastUserMessage.content)) {
const textPart = lastUserMessage.content.find((part) => part.type === 'text')
if (textPart) {
textPart.text = knowledgeSearchPrompt
} else {
lastUserMessage.content.push({
type: 'text',
text: knowledgeSearchPrompt
})
}
}
}
}
export const getKnowledgeReferences = async ({
assistant,
lastUserMessage,
topicId
}: {
assistant: Assistant
lastUserMessage: UserModelMessage
topicId?: string
}) => {
// If the assistant has no knowledge bases, return an empty array
if (!assistant || isEmpty(assistant.knowledge_bases)) {
return []
}
// Collect the knowledge base IDs
const knowledgeBaseIds = assistant.knowledge_bases?.map((base) => base.id)
// Get the content of the user message
const question = getMessageContent(lastUserMessage) || ''
// Retrieve the knowledge base references
const knowledgeReferences = await processKnowledgeSearch(
{
knowledge: {
question: [question],
rewrite: ''
}
},
knowledgeBaseIds,
topicId!
)
// Return the references
return knowledgeReferences
}
export const createKnowledgeReferencesBlock = async ({
assistantMsgId,
knowledgeReferences,
blockManager,
setCitationBlockId
}: {
assistantMsgId: string
knowledgeReferences: KnowledgeReference[]
blockManager: BlockManager
setCitationBlockId: (blockId: string) => void
}) => {
// Create the citation block
const citationBlock = createCitationBlock(
assistantMsgId,
{ knowledge: knowledgeReferences },
{ status: MessageBlockStatus.SUCCESS }
)
// Process the citation block transition
blockManager.handleBlockTransition(citationBlock, MessageBlockType.CITATION)
// Set the citation block ID
setCitationBlockId(citationBlock.id)
// Return the citation block
return citationBlock
}

View File

@ -1,91 +0,0 @@
import type { Assistant, Message } from '@renderer/types'
import type { Chunk } from '@renderer/types/chunk'
import { ChunkType } from '@renderer/types/chunk'
import { replacePromptVariables } from '@renderer/utils/prompt'
import { fetchChatCompletion } from './ApiService'
import { ConversationService } from './ConversationService'
/**
* The request object for handling a user message.
*/
export interface OrchestrationRequest {
messages: Message[]
assistant: Assistant
options: {
signal?: AbortSignal
timeout?: number
headers?: Record<string, string>
}
topicId?: string // topicId is included for tracing
}
/**
* The OrchestrationService is responsible for orchestrating the different services
* to handle a user's message. It contains the core logic of the application.
*/
// NOTE: this class is not currently used
export class OrchestrationService {
constructor() {
// In the future, this could be a singleton, but for now, a new instance is fine.
// this.conversationService = new ConversationService()
}
/**
* This is the core method to handle user messages.
* It takes the message context and an events object for callbacks,
* and orchestrates the call to the LLM.
* The logic is moved from `messageThunk.ts`.
* @param request The orchestration request containing messages and assistant info.
* @param events A set of callbacks to report progress and results to the UI layer.
*/
async transformMessagesAndFetch(request: OrchestrationRequest, onChunkReceived: (chunk: Chunk) => void) {
const { messages, assistant } = request
try {
const { modelMessages, uiMessages } = await ConversationService.prepareMessagesForModel(messages, assistant)
await fetchChatCompletion({
messages: modelMessages,
assistant: assistant,
requestOptions: request.options,
onChunkReceived,
topicId: request.topicId,
uiMessages: uiMessages
})
} catch (error: any) {
onChunkReceived({ type: ChunkType.ERROR, error })
}
}
}
/**
* Transform messages into a format the LLM can understand and send the request
* @param request -
* @param onChunkReceived -
*/
// Written as a plain function for now; switch back to a class later if one is needed
export async function transformMessagesAndFetch(
request: OrchestrationRequest,
onChunkReceived: (chunk: Chunk) => void
) {
const { messages, assistant } = request
try {
const { modelMessages, uiMessages } = await ConversationService.prepareMessagesForModel(messages, assistant)
// replace prompt variables
assistant.prompt = await replacePromptVariables(assistant.prompt, assistant.model?.name)
await fetchChatCompletion({
messages: modelMessages,
assistant: assistant,
requestOptions: request.options,
onChunkReceived,
topicId: request.topicId,
uiMessages
})
} catch (error: any) {
onChunkReceived({ type: ChunkType.ERROR, error })
}
}

View File

@ -34,6 +34,10 @@ export interface StreamProcessorCallbacks {
onLLMWebSearchInProgress?: () => void
// LLM Web search complete
onLLMWebSearchComplete?: (llmWebSearchResult: WebSearchResponse) => void
// Get citation block ID
getCitationBlockId?: () => string | null
// Set citation block ID
setCitationBlockId?: (blockId: string) => void
// Image generation chunk received
onImageCreated?: () => void
onImageDelta?: (imageData: GenerateImageResponse) => void

View File

@ -121,6 +121,11 @@ export const createCitationCallbacks = (deps: CitationCallbacksDependencies) =>
},
// Method exposed externally so that textCallbacks can read citationBlockId
getCitationBlockId: () => citationBlockId,
// Method exposed externally so that KnowledgeService can set citationBlockId
setCitationBlockId: (blockId: string) => {
citationBlockId = blockId
}
}
}

View File

@ -67,7 +67,7 @@ const persistedReducer = persistReducer(
{
key: 'cherry-studio',
storage,
version: 186,
version: 187,
blacklist: ['runtime', 'messages', 'messageBlocks', 'tabs', 'toolPermissions'],
migrate
},

View File

@ -183,6 +183,16 @@ export const builtinMCPServers: BuiltinMCPServer[] = [
provider: 'CherryAI',
installSource: 'builtin',
isTrusted: true
},
{
id: nanoid(),
name: BuiltinMCPServerNames.nowledgeMem,
reference: 'https://mem.nowledge.co/',
type: 'inMemory',
isActive: false,
provider: 'Nowledge',
installSource: 'builtin',
isTrusted: true
}
] as const

View File

@ -3043,6 +3043,21 @@ const migrateConfig = {
logger.error('migrate 186 error', error as Error)
return state
}
},
'187': (state: RootState) => {
try {
state.assistants.assistants.forEach((assistant) => {
if (assistant.settings && assistant.settings.reasoning_effort === undefined) {
assistant.settings.reasoning_effort = 'default'
}
})
addProvider(state, 'mimo')
logger.info('migrate 187 success')
return state
} catch (error) {
logger.error('migrate 187 error', error as Error)
return state
}
}
}

View File

@ -2,12 +2,11 @@ import { loggerService } from '@logger'
import { AiSdkToChunkAdapter } from '@renderer/aiCore/chunk/AiSdkToChunkAdapter'
import { AgentApiClient } from '@renderer/api/agent'
import db from '@renderer/databases'
import { fetchMessagesSummary } from '@renderer/services/ApiService'
import { fetchMessagesSummary, transformMessagesAndFetch } from '@renderer/services/ApiService'
import { DbService } from '@renderer/services/db/DbService'
import FileManager from '@renderer/services/FileManager'
import { BlockManager } from '@renderer/services/messageStreaming/BlockManager'
import { createCallbacks } from '@renderer/services/messageStreaming/callbacks'
import { transformMessagesAndFetch } from '@renderer/services/OrchestrateService'
import { endSpan } from '@renderer/services/SpanManagerService'
import { createStreamProcessor, type StreamProcessorCallbacks } from '@renderer/services/StreamProcessingService'
import store from '@renderer/store'
@ -814,6 +813,9 @@ const fetchAndProcessAssistantResponseImpl = async (
messages: messagesForContext,
assistant,
topicId,
blockManager,
assistantMsgId,
callbacks,
options: {
signal: abortController.signal,
timeout: 30000,

View File

@ -95,21 +95,23 @@ const ThinkModelTypes = [
'gpt52pro',
'grok',
'grok4_fast',
'gemini',
'gemini_pro',
'gemini3',
'gemini2_flash',
'gemini2_pro',
'gemini3_flash',
'gemini3_pro',
'qwen',
'qwen_thinking',
'doubao',
'doubao_no_auto',
'doubao_after_251015',
'mimo',
'hunyuan',
'zhipu',
'perplexity',
'deepseek_hybrid'
] as const
export type ReasoningEffortOption = NonNullable<OpenAI.ReasoningEffort> | 'auto'
export type ReasoningEffortOption = NonNullable<OpenAI.ReasoningEffort> | 'auto' | 'default'
export type ThinkingOption = ReasoningEffortOption
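// Per the label/description maps in ThinkingButton (which satisfy Record<ThinkingOption, string>),
// the effective options are: none, minimal, low, medium, high, xhigh, auto and default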
export type ThinkingModelType = (typeof ThinkModelTypes)[number]
export type ThinkingOptionConfig = Record<ThinkingModelType, ThinkingOption[]>
@ -121,6 +123,8 @@ export function isThinkModelType(type: string): type is ThinkingModelType {
}
export const EFFORT_RATIO: EffortRatio = {
// 'default' is not expected to be used.
default: 0,
none: 0.01,
minimal: 0.05,
low: 0.05,
@ -141,12 +145,11 @@ export type AssistantSettings = {
streamOutput: boolean
defaultModel?: Model
customParameters?: AssistantSettingCustomParameters[]
reasoning_effort?: ReasoningEffortOption
reasoning_effort: ReasoningEffortOption
/**
* Preserve the effective reasoning effort (not 'default') from the last use of a thinking model which supports thinking control,
* and restore it when switching back from a non-thinking or fixed reasoning model.
* FIXME: It should be managed by external cache service instead of being stored in the assistant
*/
reasoning_effort_cache?: ReasoningEffortOption
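// Illustrative (assumed) flow: with reasoning_effort 'high' on a thinking model, switching to a
// fixed-reasoning model stores 'high' here; switching back restores 'high' instead of 'default'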
qwenThinkMode?: boolean
@ -751,7 +754,8 @@ export const BuiltinMCPServerNames = {
difyKnowledge: '@cherry/dify-knowledge',
python: '@cherry/python',
didiMCP: '@cherry/didi-mcp',
browser: '@cherry/browser'
browser: '@cherry/browser',
nowledgeMem: '@cherry/nowledge-mem'
} as const
export type BuiltinMCPServerName = (typeof BuiltinMCPServerNames)[keyof typeof BuiltinMCPServerNames]

View File

@ -1,7 +1,7 @@
import type { Model, ModelTag } from '@renderer/types'
import { describe, expect, it, vi } from 'vitest'
import { getModelTags, isFreeModel } from '../model'
import { getModelTags, isFreeModel, parseModelId } from '../model'
// Mock the model checking functions from @renderer/config/models
vi.mock('@renderer/config/models', () => ({
@ -92,4 +92,85 @@ describe('model', () => {
expect(getModelTags(models_2)).toStrictEqual(expected_2)
})
})
describe('parseModelId', () => {
it('should parse model identifiers with single colon', () => {
expect(parseModelId('anthropic:claude-3-sonnet')).toEqual({
providerId: 'anthropic',
modelId: 'claude-3-sonnet'
})
expect(parseModelId('openai:gpt-4')).toEqual({
providerId: 'openai',
modelId: 'gpt-4'
})
})
it('should parse model identifiers with multiple colons', () => {
expect(parseModelId('openrouter:anthropic/claude-3.5-sonnet:free')).toEqual({
providerId: 'openrouter',
modelId: 'anthropic/claude-3.5-sonnet:free'
})
expect(parseModelId('provider:model:suffix:extra')).toEqual({
providerId: 'provider',
modelId: 'model:suffix:extra'
})
})
it('should handle model identifiers without provider prefix', () => {
expect(parseModelId('claude-3-sonnet')).toEqual({
providerId: undefined,
modelId: 'claude-3-sonnet'
})
expect(parseModelId('gpt-4')).toEqual({
providerId: undefined,
modelId: 'gpt-4'
})
})
it('should return undefined for invalid inputs', () => {
expect(parseModelId(undefined)).toBeUndefined()
expect(parseModelId('')).toBeUndefined()
expect(parseModelId(' ')).toBeUndefined()
})
it('should handle edge cases with colons', () => {
// Colon at start - treat as modelId without provider
expect(parseModelId(':missing-provider')).toEqual({
providerId: undefined,
modelId: ':missing-provider'
})
// Colon at end - treat everything before as modelId
expect(parseModelId('missing-model:')).toEqual({
providerId: undefined,
modelId: 'missing-model'
})
// Only colon - treat as modelId without provider
expect(parseModelId(':')).toEqual({
providerId: undefined,
modelId: ':'
})
})
it('should handle edge cases', () => {
expect(parseModelId('a:b')).toEqual({
providerId: 'a',
modelId: 'b'
})
expect(parseModelId('provider:model-with-dashes')).toEqual({
providerId: 'provider',
modelId: 'model-with-dashes'
})
expect(parseModelId('provider:model/with/slashes')).toEqual({
providerId: 'provider',
modelId: 'model/with/slashes'
})
})
})
})

View File

@ -14,7 +14,7 @@ export {
withoutTrailingApiVersion,
withoutTrailingSharp,
withoutTrailingSlash
} from '@shared/api'
} from '@shared/utils/url'
/**
* API key

View File

@ -81,3 +81,57 @@ export const apiModelAdapter = (model: ApiModel): AdaptedApiModel => {
origin: model
}
}
/**
* Parse a model identifier in the format "provider:modelId"
* where modelId may contain additional colons (e.g., "openrouter:anthropic/claude-3.5-sonnet:free")
*
* @param modelIdentifier - The full model identifier string
* @returns Object with providerId and modelId. If no provider prefix found, providerId will be undefined
*
* @example
* parseModelId("openrouter:anthropic/claude-3.5-sonnet:free")
* // => { providerId: "openrouter", modelId: "anthropic/claude-3.5-sonnet:free" }
*
* @example
* parseModelId("anthropic:claude-3-sonnet")
* // => { providerId: "anthropic", modelId: "claude-3-sonnet" }
*
* @example
* parseModelId("claude-3-sonnet")
* // => { providerId: undefined, modelId: "claude-3-sonnet" }
*
* @example
* parseModelId("") // => undefined
*/
export function parseModelId(
modelIdentifier: string | undefined
): { providerId: string | undefined; modelId: string } | undefined {
if (!modelIdentifier || typeof modelIdentifier !== 'string' || modelIdentifier.trim() === '') {
return undefined
}
const colonIndex = modelIdentifier.indexOf(':')
// No colon found or colon at the start - treat entire string as modelId
if (colonIndex <= 0) {
return {
providerId: undefined,
modelId: modelIdentifier
}
}
// Colon at the end - treat everything before as modelId
if (colonIndex >= modelIdentifier.length - 1) {
return {
providerId: undefined,
modelId: modelIdentifier.substring(0, colonIndex)
}
}
// Standard format: "provider:modelId"
return {
providerId: modelIdentifier.substring(0, colonIndex),
modelId: modelIdentifier.substring(colonIndex + 1)
}
}

View File

@ -61,7 +61,19 @@ vi.mock('electron', () => ({
getPrimaryDisplay: vi.fn(),
getAllDisplays: vi.fn()
},
Notification: vi.fn(),
net: {
fetch: vi.fn(() =>
Promise.resolve({
ok: true,
status: 200,
statusText: 'OK',
json: vi.fn(() => Promise.resolve({})),
text: vi.fn(() => Promise.resolve('')),
headers: new Headers()
})
)
}
}))
// Mock Winston for LoggerService dependencies
@ -97,15 +109,40 @@ vi.mock('winston-daily-rotate-file', () => {
}))
})
// Mock Node.js modules
vi.mock('node:os', () => ({
platform: vi.fn(() => 'darwin'),
arch: vi.fn(() => 'x64'),
version: vi.fn(() => '20.0.0'),
cpus: vi.fn(() => [{ model: 'Mock CPU' }]),
totalmem: vi.fn(() => 8 * 1024 * 1024 * 1024) // 8GB
// Mock main process services
vi.mock('@main/services/AnthropicService', () => ({
default: {}
}))
vi.mock('@main/services/CopilotService', () => ({
default: {}
}))
vi.mock('@main/services/ReduxService', () => ({
reduxService: {
selectSync: vi.fn()
}
}))
vi.mock('@main/integration/cherryai', () => ({
generateSignature: vi.fn()
}))
// Mock Node.js modules
vi.mock('node:os', async () => {
const actual = await vi.importActual<typeof import('node:os')>('node:os')
return {
...actual,
default: actual,
platform: vi.fn(() => 'darwin'),
arch: vi.fn(() => 'x64'),
version: vi.fn(() => '20.0.0'),
cpus: vi.fn(() => [{ model: 'Mock CPU' }]),
totalmem: vi.fn(() => 8 * 1024 * 1024 * 1024), // 8GB
homedir: vi.fn(() => '/tmp')
}
})
vi.mock('node:path', async () => {
const actual = await vi.importActual('node:path')
return {

View File

@ -1,8 +1,15 @@
import '@testing-library/jest-dom/vitest'
import { createRequire } from 'node:module'
import { styleSheetSerializer } from 'jest-styled-components/serializer'
import { expect, vi } from 'vitest'
const require = createRequire(import.meta.url)
const bufferModule = require('buffer')
if (!bufferModule.SlowBuffer) {
bufferModule.SlowBuffer = bufferModule.Buffer
}
expect.addSnapshotSerializer(styleSheetSerializer)
// Mock LoggerService globally for renderer tests
@ -48,3 +55,29 @@ vi.stubGlobal('api', {
writeWithId: vi.fn().mockResolvedValue(undefined)
}
})
if (typeof globalThis.localStorage === 'undefined' || typeof (globalThis.localStorage as any).getItem !== 'function') {
let store = new Map<string, string>()
const localStorageMock = {
getItem: (key: string) => store.get(key) ?? null,
setItem: (key: string, value: string) => {
store.set(key, String(value))
},
removeItem: (key: string) => {
store.delete(key)
},
clear: () => {
store.clear()
},
key: (index: number) => Array.from(store.keys())[index] ?? null,
get length() {
return store.size
}
}
vi.stubGlobal('localStorage', localStorageMock)
if (typeof window !== 'undefined') {
Object.defineProperty(window, 'localStorage', { value: localStorageMock })
}
}

yarn.lock
View File

@ -102,6 +102,18 @@ __metadata:
languageName: node
linkType: hard
"@ai-sdk/anthropic@npm:2.0.56":
version: 2.0.56
resolution: "@ai-sdk/anthropic@npm:2.0.56"
dependencies:
"@ai-sdk/provider": "npm:2.0.0"
"@ai-sdk/provider-utils": "npm:3.0.19"
peerDependencies:
zod: ^3.25.76 || ^4.1.8
checksum: 10c0/f2b6029c92443f831a2d124420e805d057668003067b1f677a4292d02f27aa3ad533374ea996d77ede7746a42c46fb94a8f2d8c0e7758a4555ea18c8b532052c
languageName: node
linkType: hard
"@ai-sdk/azure@npm:^2.0.87":
version: 2.0.87
resolution: "@ai-sdk/azure@npm:2.0.87"
@ -166,42 +178,42 @@ __metadata:
languageName: node
linkType: hard
"@ai-sdk/google-vertex@npm:^3.0.79":
version: 3.0.79
resolution: "@ai-sdk/google-vertex@npm:3.0.79"
"@ai-sdk/google-vertex@npm:^3.0.94":
version: 3.0.94
resolution: "@ai-sdk/google-vertex@npm:3.0.94"
dependencies:
"@ai-sdk/anthropic": "npm:2.0.49"
"@ai-sdk/google": "npm:2.0.43"
"@ai-sdk/anthropic": "npm:2.0.56"
"@ai-sdk/google": "npm:2.0.49"
"@ai-sdk/provider": "npm:2.0.0"
"@ai-sdk/provider-utils": "npm:3.0.17"
google-auth-library: "npm:^9.15.0"
"@ai-sdk/provider-utils": "npm:3.0.19"
google-auth-library: "npm:^10.5.0"
peerDependencies:
zod: ^3.25.76 || ^4.1.8
checksum: 10c0/a86949b8d4a855409acdf7dc8d93ad9ea8ccf2bc3849acbe1ecbe4d6d66f06bcb5242f0df8eea24214e78732618b71ec8a019cbbeab16366f9ad3c860c5d8d30
checksum: 10c0/68e2ee9e6525a5e43f90304980e64bf2a4227fd3ce74a7bf17e5ace094ea1bca8f8f18a8cc332a492fee4b912568a768f7479a4eed8148b84e7de1adf4104ad0
languageName: node
linkType: hard
"@ai-sdk/google@npm:2.0.43":
version: 2.0.43
resolution: "@ai-sdk/google@npm:2.0.43"
"@ai-sdk/google@npm:2.0.49":
version: 2.0.49
resolution: "@ai-sdk/google@npm:2.0.49"
dependencies:
"@ai-sdk/provider": "npm:2.0.0"
"@ai-sdk/provider-utils": "npm:3.0.17"
"@ai-sdk/provider-utils": "npm:3.0.19"
peerDependencies:
zod: ^3.25.76 || ^4.1.8
checksum: 10c0/5a421a9746cf8cbdf3bb7fb49426453a4fe0e354ea55a0123e628afb7acf9bb19959d512c0f8e6d7dbefbfa7e1cef4502fc146149007258a8eeb57743ac5e9e5
checksum: 10c0/f3f8acfcd956edc7d807d22963d5eff0f765418f1f2c7d18615955ccdfcebb4d43cc26ce1f712c6a53572f1d8becc0773311b77b1f1bf1af87d675c5f017d5a4
languageName: node
linkType: hard
"@ai-sdk/google@patch:@ai-sdk/google@npm%3A2.0.43#~/.yarn/patches/@ai-sdk-google-npm-2.0.43-689ed559b3.patch":
version: 2.0.43
resolution: "@ai-sdk/google@patch:@ai-sdk/google@npm%3A2.0.43#~/.yarn/patches/@ai-sdk-google-npm-2.0.43-689ed559b3.patch::version=2.0.43&hash=4dde1e"
"@ai-sdk/google@patch:@ai-sdk/google@npm%3A2.0.49#~/.yarn/patches/@ai-sdk-google-npm-2.0.49-84720f41bd.patch":
version: 2.0.49
resolution: "@ai-sdk/google@patch:@ai-sdk/google@npm%3A2.0.49#~/.yarn/patches/@ai-sdk-google-npm-2.0.49-84720f41bd.patch::version=2.0.49&hash=406c25"
dependencies:
"@ai-sdk/provider": "npm:2.0.0"
"@ai-sdk/provider-utils": "npm:3.0.17"
"@ai-sdk/provider-utils": "npm:3.0.19"
peerDependencies:
zod: ^3.25.76 || ^4.1.8
checksum: 10c0/4cfd17e9c47f2b742d8a0b1ca3532b4dc48753088363b74b01a042f63652174fa9a3fbf655a23f823974c673121dffbd2d192bb0c1bf158da4e2bf498fc76527
checksum: 10c0/8d4d881583c2301dce8a4e3066af2ba7d99b30520b6219811f90271c93bf8a07dc23e752fa25ffd0e72c6ec56e97d40d32e04072a362accf7d01a745a2d2a352
languageName: node
linkType: hard
@ -10051,8 +10063,8 @@ __metadata:
"@ai-sdk/anthropic": "npm:^2.0.49"
"@ai-sdk/cerebras": "npm:^1.0.31"
"@ai-sdk/gateway": "npm:^2.0.15"
"@ai-sdk/google": "patch:@ai-sdk/google@npm%3A2.0.43#~/.yarn/patches/@ai-sdk-google-npm-2.0.43-689ed559b3.patch"
"@ai-sdk/google-vertex": "npm:^3.0.79"
"@ai-sdk/google": "patch:@ai-sdk/google@npm%3A2.0.49#~/.yarn/patches/@ai-sdk-google-npm-2.0.49-84720f41bd.patch"
"@ai-sdk/google-vertex": "npm:^3.0.94"
"@ai-sdk/huggingface": "npm:^0.0.10"
"@ai-sdk/mistral": "npm:^2.0.24"
"@ai-sdk/openai": "patch:@ai-sdk/openai@npm%3A2.0.85#~/.yarn/patches/@ai-sdk-openai-npm-2.0.85-27483d1d6a.patch"
@ -11246,7 +11258,7 @@ __metadata:
languageName: node
linkType: hard
"buffer-equal-constant-time@npm:1.0.1":
"buffer-equal-constant-time@npm:^1.0.1":
version: 1.0.1
resolution: "buffer-equal-constant-time@npm:1.0.1"
checksum: 10c0/fb2294e64d23c573d0dd1f1e7a466c3e978fe94a4e0f8183937912ca374619773bef8e2aceb854129d2efecbbc515bbd0cc78d2734a3e3031edb0888531bbc8e
@ -15499,6 +15511,18 @@ __metadata:
languageName: node
linkType: hard
"gaxios@npm:^7.0.0":
version: 7.1.3
resolution: "gaxios@npm:7.1.3"
dependencies:
extend: "npm:^3.0.2"
https-proxy-agent: "npm:^7.0.1"
node-fetch: "npm:^3.3.2"
rimraf: "npm:^5.0.1"
checksum: 10c0/a4a1cdf9a392c0c22e9734a40dca5a77a2903f505b939a50f1e68e312458b1289b7993d2f72d011426e89657cae77a3aa9fc62fb140e8ba90a1faa31fdbde4d2
languageName: node
linkType: hard
"gcp-metadata@npm:^6.1.0":
version: 6.1.1
resolution: "gcp-metadata@npm:6.1.1"
@ -15510,6 +15534,17 @@ __metadata:
languageName: node
linkType: hard
"gcp-metadata@npm:^8.0.0":
version: 8.1.2
resolution: "gcp-metadata@npm:8.1.2"
dependencies:
gaxios: "npm:^7.0.0"
google-logging-utils: "npm:^1.0.0"
json-bigint: "npm:^1.0.0"
checksum: 10c0/15a61231a9410dc11c2828d2c9fdc8b0a939f1af746195c44edc6f2ffea0acab52cef3a7b9828069a36fd5d68bda730f7328a415fe42a01258f6e249dfba6908
languageName: node
linkType: hard
"gensync@npm:^1.0.0-beta.2":
version: 1.0.0-beta.2
resolution: "gensync@npm:1.0.0-beta.2"
@ -15733,7 +15768,22 @@ __metadata:
languageName: node
linkType: hard
"google-auth-library@npm:^9.14.2, google-auth-library@npm:^9.15.0, google-auth-library@npm:^9.15.1, google-auth-library@npm:^9.4.2":
"google-auth-library@npm:^10.5.0":
version: 10.5.0
resolution: "google-auth-library@npm:10.5.0"
dependencies:
base64-js: "npm:^1.3.0"
ecdsa-sig-formatter: "npm:^1.0.11"
gaxios: "npm:^7.0.0"
gcp-metadata: "npm:^8.0.0"
google-logging-utils: "npm:^1.0.0"
gtoken: "npm:^8.0.0"
jws: "npm:^4.0.0"
checksum: 10c0/49d3931d20b1f4a4d075216bf5518e2b3396dcf441a8f1952611cf3b6080afb1261c3d32009609047ee4a1cc545269a74b4957e6bba9cce840581df309c4b145
languageName: node
linkType: hard
"google-auth-library@npm:^9.14.2, google-auth-library@npm:^9.15.1, google-auth-library@npm:^9.4.2":
version: 9.15.1
resolution: "google-auth-library@npm:9.15.1"
dependencies:
@ -15754,6 +15804,13 @@ __metadata:
languageName: node
linkType: hard
"google-logging-utils@npm:^1.0.0":
version: 1.1.3
resolution: "google-logging-utils@npm:1.1.3"
checksum: 10c0/e65201c7e96543bd1423b9324013736646b9eed60941e0bfa47b9bfd146d2f09cf3df1c99ca60b7d80a726075263ead049ee72de53372cb8458c3bc55c2c1e59
languageName: node
linkType: hard
"gopd@npm:^1.0.1, gopd@npm:^1.2.0":
version: 1.2.0
resolution: "gopd@npm:1.2.0"
@ -15842,6 +15899,16 @@ __metadata:
languageName: node
linkType: hard
"gtoken@npm:^8.0.0":
version: 8.0.0
resolution: "gtoken@npm:8.0.0"
dependencies:
gaxios: "npm:^7.0.0"
jws: "npm:^4.0.0"
checksum: 10c0/058538e5bbe081d30ada5f1fd34d3a8194357c2e6ecbf7c8a98daeefbf13f7e06c15649c7dace6a1d4cc3bc6dc5483bd484d6d7adc5852021896d7c05c439f37
languageName: node
linkType: hard
"hachure-fill@npm:^0.5.2":
version: 0.5.2
resolution: "hachure-fill@npm:0.5.2"
@ -17178,24 +17245,24 @@ __metadata:
languageName: node
linkType: hard
"jwa@npm:^2.0.0":
version: 2.0.0
resolution: "jwa@npm:2.0.0"
"jwa@npm:^2.0.1":
version: 2.0.1
resolution: "jwa@npm:2.0.1"
dependencies:
buffer-equal-constant-time: "npm:1.0.1"
buffer-equal-constant-time: "npm:^1.0.1"
ecdsa-sig-formatter: "npm:1.0.11"
safe-buffer: "npm:^5.0.1"
checksum: 10c0/6baab823b93c038ba1d2a9e531984dcadbc04e9eb98d171f4901b7a40d2be15961a359335de1671d78cb6d987f07cbe5d350d8143255977a889160c4d90fcc3c
checksum: 10c0/ab3ebc6598e10dc11419d4ed675c9ca714a387481466b10e8a6f3f65d8d9c9237e2826f2505280a739cf4cbcf511cb288eeec22b5c9c63286fc5a2e4f97e78cf
languageName: node
linkType: hard
"jws@npm:^4.0.0":
version: 4.0.0
resolution: "jws@npm:4.0.0"
version: 4.0.1
resolution: "jws@npm:4.0.1"
dependencies:
jwa: "npm:^2.0.0"
jwa: "npm:^2.0.1"
safe-buffer: "npm:^5.0.1"
checksum: 10c0/f1ca77ea5451e8dc5ee219cb7053b8a4f1254a79cb22417a2e1043c1eb8a569ae118c68f24d72a589e8a3dd1824697f47d6bd4fb4bebb93a3bdf53545e721661
checksum: 10c0/6be1ed93023aef570ccc5ea8d162b065840f3ef12f0d1bb3114cade844de7a357d5dc558201d9a65101e70885a6fa56b17462f520e6b0d426195510618a154d0
languageName: node
linkType: hard
@ -22778,7 +22845,7 @@ __metadata:
languageName: node
linkType: hard
"rimraf@npm:^5.0.10":
"rimraf@npm:^5.0.1, rimraf@npm:^5.0.10":
version: 5.0.10
resolution: "rimraf@npm:5.0.10"
dependencies: