* feat(options): implement deep merging for provider options
Add deep merge functionality to preserve nested properties when combining provider options. The new implementation handles object merging recursively while maintaining type safety.
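A minimal sketch of the merge behavior described above (names are illustrative, not the actual exports): nested objects are merged recursively, while primitives and arrays from the override replace the base values — the array-replacement behavior the later factory tests verify.

```typescript
type PlainObject = Record<string, unknown>

function isPlainObject(value: unknown): value is PlainObject {
  return typeof value === 'object' && value !== null && !Array.isArray(value)
}

export function deepMergeOptions(base: PlainObject, override: PlainObject): PlainObject {
  const result: PlainObject = { ...base }
  for (const [key, value] of Object.entries(override)) {
    const existing = result[key]
    if (isPlainObject(existing) && isPlainObject(value)) {
      result[key] = deepMergeOptions(existing, value) // recurse into nested objects
    } else {
      result[key] = value // primitives and arrays replace the base value
    }
  }
  return result
}
```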
* refactor(tsconfig): reorganize include paths in tsconfig files
Clean up and reorder include paths for better maintainability and consistency between tsconfig.node.json and tsconfig.web.json
* test: add aiCore test configuration and script
Add new test configuration for aiCore package and corresponding test script in package.json to enable running tests specifically for the aiCore module.
* fix: format
* fix(aiCore): resolve test failures and update test infrastructure
- Add vitest setup file with global mocks for @cherrystudio/ai-sdk-provider
- Fix context assertions: use 'model' instead of 'modelId' in plugin tests
- Fix error handling tests: update expected error messages to match actual behavior
- Fix streamText tests: use 'maxOutputTokens' instead of 'maxTokens'
- Fix schemas test: update expected provider list to match actual implementation
- Fix mock-responses: use AI SDK v5 format (inputTokens/outputTokens)
- Update vi.mock to use importOriginal for preserving jsonSchema export
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix(aiCore): add alias mock for @cherrystudio/ai-sdk-provider in tests
The vi.mock in the setup file doesn't work for source code imports.
Use vitest resolve.alias to mock the external package properly.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix(aiCore): disable unused-vars warnings in mock file
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix(aiCore): use import.meta.url for ESM compatibility in vitest config
__dirname is not available in ESM modules; use fileURLToPath instead.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix(aiCore): use absolute paths in vitest config for workspace compatibility
- Use path.resolve for setupFiles and all alias paths
- Extend aiCore vitest.config.ts from root workspace config
- Change aiCore test environment to 'node' instead of 'jsdom'
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
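Taken together, the vitest-config commits above amount to roughly the following sketch (the file locations, mock path, and root config path are assumptions, not the actual repository layout):

```typescript
// packages/aiCore/vitest.config.ts — illustrative sketch only
import path from 'node:path'
import { fileURLToPath } from 'node:url'
import { defineConfig, mergeConfig } from 'vitest/config'

import rootConfig from '../../vitest.config'

// __dirname is unavailable in ESM, so derive it from import.meta.url
const dirname = path.dirname(fileURLToPath(import.meta.url))

export default mergeConfig(
  rootConfig, // extend the root workspace config
  defineConfig({
    resolve: {
      alias: {
        // vi.mock in a setup file does not intercept source-code imports,
        // so alias the external package to a local mock, using an absolute path
        '@cherrystudio/ai-sdk-provider': path.resolve(dirname, './src/test/mocks/ai-sdk-provider.ts')
      }
    },
    test: {
      environment: 'node', // aiCore tests run in node rather than jsdom
      setupFiles: [path.resolve(dirname, './src/test/setup.ts')]
    }
  })
)
```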
* docs(factory): improve mergeProviderOptions documentation
Add detailed explanation of merge behavior with examples
* test(factory): add tests for mergeProviderOptions behavior
Add test cases to verify mergeProviderOptions correctly handles primitive values, arrays, and nested objects during merging
* refactor(tests): clean up mock responses test fixtures
Remove unused mock streaming chunks and error responses to simplify test fixtures
Update warning details structure in mock complete responses
* docs(test): clarify comment in generateImage test
Update comment to use consistent 'model id' terminology instead of 'modelId'
* test(factory): verify array replacement in mergeProviderOptions
---------
Co-authored-by: suyao <sy20010504@gmail.com>
Co-authored-by: Claude <noreply@anthropic.com>
* feat: enhance support for AWS Bedrock and Azure OpenAI providers
* fix: resolve PR review issues for AWS Bedrock support
- Fix header.ts logic bug: change && to || for Vertex/Bedrock provider check
- Fix regex in reasoning.ts to match AWS Bedrock model format (anthropic.claude-*)
- Add test coverage for AWS Bedrock format in isClaude4SeriesModel
- Add Bedrock provider tests including anthropicBeta parameter
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude <noreply@anthropic.com>
* fix(provider): update service tier support logic for OpenAI and Azure providers
* fix(settings): enhance OpenAI settings visibility logic with verbosity support
Previously, JSON-type custom parameters were incorrectly parsed and stored
as objects in the UI layer, causing API requests to fail when getCustomParameters()
attempted to JSON.parse() an already-parsed object.
Changes:
- AssistantModelSettings.tsx: Remove JSON.parse() in onChange handler, store as string
- reasoning.ts: Add comments explaining JSON parsing flow
- BaseApiClient.ts: Add comments for legacy API clients
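A sketch of the intended flow, with hypothetical type and helper names: the UI keeps the raw string for JSON-type parameters, and parsing happens exactly once when request parameters are assembled, so JSON.parse never receives an already-parsed object.

```typescript
interface CustomParameter {
  name: string
  type: 'string' | 'number' | 'boolean' | 'json'
  value: string | number | boolean
}

function resolveCustomParameterValue(param: CustomParameter): unknown {
  if (param.type !== 'json') return param.value
  try {
    // param.value is expected to still be the raw string entered in the UI
    return JSON.parse(String(param.value))
  } catch {
    // Fall back to the raw string if it is not valid JSON
    return param.value
  }
}
```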
* feat(test): e2e framework
Add Playwright-based e2e testing framework for Electron app with:
- Custom fixtures for electronApp and mainWindow
- Page Object Model (POM) pattern implementation
- 15 example test cases covering app launch, navigation, settings, and chat
- Comprehensive README for humans and AI assistants
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* refactor(tests): update imports and improve code readability
- Changed imports from 'import { Page, Locator }' to 'import type { Locator, Page }' for better type clarity across multiple page files.
- Reformatted waitFor calls in ChatPage and HomePage for improved readability.
- Updated index.ts to correct the export order of ChatPage and SidebarPage.
- Minor adjustments in electron.fixture.ts and electron-app.ts for consistency in import statements.
These changes enhance the maintainability and clarity of the test codebase.
* chore: update linting configuration to include tests directory
- Added 'tests/**' to the ignore patterns in .oxlintrc.json and eslint.config.mjs to ensure test files are not linted.
- Minor adjustment in electron.fixture.ts to improve the fixture definition.
These changes streamline the linting process and enhance code organization.
* fix(test): select main window by title to fix flaky e2e tests on Mac
On Mac, the app may create a miniWindow for QuickAssistant alongside the mainWindow.
Using firstWindow() could randomly select the wrong window, causing test failures.
Now we wait for the window with title "Cherry Studio" to ensure we get the main window.
Also removed unused electron-app.ts utility file.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
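The fixture change amounts to something like the following sketch (the helper name and fixture shape are assumptions): check existing windows first, then wait for newly created ones until one reports the expected title.

```typescript
import type { ElectronApplication, Page } from 'playwright'

export async function getMainWindow(electronApp: ElectronApplication): Promise<Page> {
  const isMainWindow = async (page: Page) => (await page.title()) === 'Cherry Studio'

  // Check windows that already exist (e.g. when miniWindow opened first on Mac)
  for (const page of electronApp.windows()) {
    if (await isMainWindow(page)) return page
  }
  // Otherwise wait for new windows until the main one shows up
  for (;;) {
    const page = await electronApp.waitForEvent('window')
    if (await isMainWindow(page)) return page
  }
}
```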
---------
Co-authored-by: Claude <noreply@anthropic.com>
* refactor(models): improve verbosity level handling for GPT-5 models
Replace hardcoded verbosity configuration with validator functions
Add support for GPT-5.1 series models
* test(models): restructure model utility tests into logical groups
Improve test organization by grouping related test cases under descriptive describe blocks for better maintainability and readability. Each model utility function now has its own dedicated test section with clear subcategories for different behaviors.
* fix: add null check for model in getModelSupportedVerbosity
Handle null model case defensively by returning default verbosity
* refactor(config): remove redundant as const from MODEL_SUPPORTED_VERBOSITY array
* refactor(models): simplify validator function in MODEL_SUPPORTED_VERBOSITY
* test(model utils): add tests for undefined/null input handling
* fix(models): handle undefined/null input in getModelSupportedVerbosity
Remove ts-expect-error comments and update type signature to explicitly handle undefined/null inputs. Also add support for GPT-5.1 series models.
* test(models): add test case for gpt-5-pro variant model
- Add `chcp 65001` to Windows batch file to switch CMD.exe to UTF-8 code page,
fixing CLI tool launch failure when working directory contains Chinese or
other non-ASCII characters
- Add directory existence validation before launching terminal to provide
immediate error feedback instead of delayed failure
Closes #11483
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
When updating assistant preset settings, if agent.settings was undefined,
it was assigned the DEFAULT_ASSISTANT_SETTINGS object directly. Since this
object is defined with `as const`, it is readonly and subsequent property
assignments would fail with "Cannot assign to read only property".
Fixed by creating a shallow copy of DEFAULT_ASSISTANT_SETTINGS instead of
referencing it directly.
Closes #11490
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
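A minimal sketch of the fix, with the settings type simplified:

```typescript
// The real defaults live elsewhere in the codebase; this object stands in for them.
const DEFAULT_ASSISTANT_SETTINGS = { temperature: 1, topP: 1 } as const

interface AgentLike {
  settings?: { temperature: number; topP: number }
}

function ensureSettings(agent: AgentLike): void {
  // Create a mutable shallow copy instead of assigning the shared readonly
  // `as const` default by reference, so later property assignments are safe.
  agent.settings ??= { ...DEFAULT_ASSISTANT_SETTINGS }
}
```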
Move color and font-size styles from p selector to container level
in UpdateNotesWrapper. This ensures all content (including li elements
not wrapped in p tags) uses consistent color.
The issue occurred because .replace(/\n/g, '\n\n') creates a "loose list"
in Markdown where most list items get wrapped in <p> tags, but the last
item (without trailing newline) may not, causing it to inherit a different
color from the parent .markdown class.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
- Updated version to 1.7.0-rc.3 in package.json
- Added new features including support for Silicon provider and AIHubMix
- Consolidated bug fixes related to providers, models, UI, and settings
- Improved SDK integration with upgraded dependencies
* Initial plan
* feat: Add proper Poe API reasoning parameters support for GPT-5 and other models
Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>
* test: Add comprehensive tests for Poe API reasoning support
Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>
* fix: Add missing isGPT5SeriesModel import in reasoning.ts
Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>
* fix: Use correct extra_body format for Poe API reasoning parameters
Per Poe API documentation, custom bot parameters like reasoning_effort
and thinking_budget should be passed directly in extra_body, not as
nested structures.
Changed from:
- reasoning_effort: 'low' -> extra_body: { reasoning_effort: 'low' }
- thinking: { type: 'enabled', budget_tokens: X } -> extra_body: { thinking_budget: X }
- extra_body: { google: { thinking_config: {...} } } -> extra_body: { thinking_budget: X }
Updated tests to match the corrected implementation.
Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>
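Sketched from the commit's description (value types simplified; not the full implementation): the reasoning parameters are passed directly inside extra_body rather than as nested provider structures.

```typescript
type PoeReasoningParams = {
  extra_body: { reasoning_effort: 'low' | 'medium' | 'high' } | { thinking_budget: number }
}

function buildPoeReasoningParams(
  effort?: 'low' | 'medium' | 'high',
  budgetTokens?: number
): PoeReasoningParams | undefined {
  if (effort) {
    return { extra_body: { reasoning_effort: effort } } // effort-style models such as GPT-5
  }
  if (budgetTokens !== undefined) {
    return { extra_body: { thinking_budget: budgetTokens } } // budget-style thinking models
  }
  return undefined
}
```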
* fix: Update reasoning parameters and improve type definitions for GPT-5 support
* fix lint
* docs
* fix(reasoning): handle edge cases for models without token limit configuration
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>
Co-authored-by: suyao <sy20010504@gmail.com>
* fix(anthropic): prevent duplicate /v1 in API endpoints
Anthropic SDK automatically appends /v1 to endpoints, so we should not add it in our formatting. This change ensures URLs are correctly formatted without duplicate path segments.
* fix(anthropic): strip /v1 suffix in getSdkClient to prevent duplicate in models endpoint
The issue was:
- AI SDK (for chat) needs baseURL with /v1 suffix
- Anthropic SDK (for listModels) automatically appends /v1 to all endpoints
Solution:
- Keep /v1 in formatProviderApiHost for AI SDK compatibility
- Strip /v1 in getSdkClient before passing to Anthropic SDK
- This ensures chat works correctly while preventing /v1/v1/models duplication
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix(anthropic): correct preview URL to match actual request behavior
The preview now correctly shows:
- Input: https://api.siliconflow.cn/v2
- Preview: https://api.siliconflow.cn/v2/messages (was incorrectly showing /v2/v1/messages)
- Actual: https://api.siliconflow.cn/v2/messages
This matches the actual behavior where getSdkClient strips /v1 suffix before
passing to Anthropic SDK, which then appends /v1/messages.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix(anthropic): strip all API version suffixes, not just /v1
The Anthropic SDK always appends /v1 to endpoints, regardless of the baseURL.
Previously we only stripped /v1 suffix, causing issues with custom versions like /v2.
Now we strip all version suffixes (/v1, /v2, /v1beta, etc.) before passing to Anthropic SDK.
Examples:
- Input: https://api.siliconflow.cn/v2/
- After strip: https://api.siliconflow.cn
- Actual request: https://api.siliconflow.cn/v1/messages ✅
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
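A sketch of the stripping rule described above (the real helper name and exact regex may differ):

```typescript
function stripApiVersionSuffix(baseURL: string): string {
  // Remove a trailing version segment such as /v1, /v2 or /v1beta (with or
  // without a trailing slash) so the Anthropic SDK can append /v1 itself.
  return baseURL.replace(/\/v\d+(?:beta)?\/?$/i, '')
}

// stripApiVersionSuffix('https://api.siliconflow.cn/v2/') === 'https://api.siliconflow.cn'
// stripApiVersionSuffix('https://api.example.com/v1beta') === 'https://api.example.com'
```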
* fix(anthropic): correct preview to show AI SDK behavior, not Anthropic SDK
The preview was showing the wrong URL because it was reflecting Anthropic SDK behavior
(which strips versions and uses /v1), but checkApi and chat use AI SDK which preserves
the user's version path.
Now preview correctly shows:
- Input: https://api.siliconflow.cn/v2/
- AI SDK (checkApi/chat): https://api.siliconflow.cn/v2/messages ✅
- Preview: https://api.siliconflow.cn/v2/messages ✅
Note: Anthropic SDK (for listModels) still strips versions to use /v1/models,
but this is not shown in preview since it's a different code path.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* refactor(checkApi): remove unnecessary legacy fallback
The legacy fallback logic in checkApi was:
1. Complex and hard to maintain
2. Never actually triggered in practice for Modern SDK supported providers
3. Could cause duplicate API requests
Since Modern AI SDK now handles all major providers correctly,
we can simplify by directly throwing errors instead of falling back.
This also removes unused imports: AiProvider and CompletionsParams.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix(anthropic): restore version stripping in getSdkClient for Anthropic SDK
The Anthropic SDK (used for listModels) always appends /v1 to endpoints,
so we need to strip version suffixes from baseURL to avoid duplication.
This only affects Anthropic SDK operations (like listModels).
AI SDK operations (chat/checkApi) use provider.apiHost directly via
providerToAiSdkConfig, which preserves the user's version path.
Examples:
- AI SDK (chat): https://api.siliconflow.cn/v1 -> /v1/messages ✅
- Anthropic SDK (models): https://api.siliconflow.cn/v1 -> strip v1 -> /v1/models ✅
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix(anthropic): ensure AI SDK gets /v1 in baseURL, strip for Anthropic SDK
The correct behavior is:
1. formatProviderApiHost: Add /v1 to apiHost (for AI SDK compatibility)
2. AI SDK (chat/checkApi): Use apiHost with /v1 -> /v1/messages ✅
3. Anthropic SDK (listModels): Strip /v1 from baseURL -> SDK adds /v1/models ✅
4. Preview: Show AI SDK behavior (main use case) -> /v1/messages ✅
Examples:
- Input: https://api.siliconflow.cn
- Formatted: https://api.siliconflow.cn/v1 (added by formatApiHost)
- AI SDK: https://api.siliconflow.cn/v1/messages ✅
- Anthropic SDK: https://api.siliconflow.cn (stripped) + /v1/models ✅
- Preview: https://api.siliconflow.cn/v1/messages ✅
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* refactor(ai): simplify AiProviderNew initialization and improve docs
Update AiProviderNew constructor to automatically format URLs by default
Add comprehensive documentation explaining constructor behavior and usage
* chore: remove unused play.ts file
* fix(anthropic): strip api version from baseURL to avoid endpoint duplication
---------
Co-authored-by: Claude <noreply@anthropic.com>
* feat: add silicon provider support for Anthropic API compatibility
* fix: update handling of ANTHROPIC_BASE_URL for silicon provider compatibility
* fix: update anthropicApiHost for silicon provider to use the correct endpoint
* fix: remove silicon from CLAUDE_OFFICIAL_SUPPORTED_PROVIDERS
* chore: add comment to clarify silicon model fallback logic in CLAUDE_OFFICIAL_SUPPORTED_PROVIDERS
- Bumped @types/react from ^19.0.12 to ^19.2.7
- Bumped @types/react-dom from ^19.0.4 to ^19.2.3
- Updated csstype dependency from ^3.0.2 to ^3.2.2 in yarn.lock
These updates ensure compatibility with the latest React types and improve type definitions.
* fix: add claude-opus-4-5 pattern to THINKING_TOKEN_MAP
Adds missing regex pattern for claude-opus-4-5 models (e.g., claude-opus-4-5-20251101)
to the THINKING_TOKEN_MAP configuration. Without this pattern, the model was not
recognized, causing findTokenLimit() to return undefined and leading to an
AI_InvalidArgumentError when using Google Vertex AI Anthropic provider.
The fix adds the pattern 'claude-opus-4-5.*$': { min: 1024, max: 64_000 } to
match the existing claude-4 thinking token configuration.
Fixes AI_InvalidArgumentError: invalid anthropic provider options caused by
budgetTokens receiving NaN instead of a number.
Signed-off-by: Shuchen Luo (personal linux) <nemo0806@gmail.com>
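The entry described in the commit, in a simplified sketch of the map and lookup (the surrounding configuration is larger than shown):

```typescript
const THINKING_TOKEN_MAP: Record<string, { min: number; max: number }> = {
  // Added pattern: matches IDs like claude-opus-4-5-20251101
  'claude-opus-4-5.*$': { min: 1024, max: 64_000 }
}

function findTokenLimit(modelId: string): { min: number; max: number } | undefined {
  for (const [pattern, limit] of Object.entries(THINKING_TOKEN_MAP)) {
    if (new RegExp(pattern, 'i').test(modelId)) return limit
  }
  return undefined // previously led to budgetTokens becoming NaN for unmatched models
}
```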
* refactor: make THINKING_TOKEN_MAP constant private
* fix(reasoning): update claude model token limit regex patterns
- Consolidate claude model regex patterns to be more consistent
- Add comprehensive test cases for various claude model variants
- Ensure case insensitivity and proper handling of edge cases
* fix: format
* feat(models): extend claude model regex patterns to support AWS and GCP formats
Update regex patterns in THINKING_TOKEN_MAP to support additional Claude model ID formats used in AWS Bedrock and GCP Vertex AI
Add comprehensive test cases for new model ID formats and reorganize test suite
* fix: format
---------
Signed-off-by: Shuchen Luo (personal linux) <nemo0806@gmail.com>
Co-authored-by: icarus <eurfelux@gmail.com>
- Updated links in CONTRIBUTING.md and README.md to point to the correct Chinese documentation paths.
- Removed outdated files including the English and Chinese versions of the branching strategy, contributing guide, and test plan documents.
- Cleaned up references to non-existent documentation in the project structure to streamline the contributor experience.
* Initial plan
* fix(aiCore): extract AI SDK standard params from custom params for Gemini
Custom parameters like topK, frequencyPenalty, presencePenalty,
stopSequences, and seed should be passed as top-level streamText()
parameters, not in providerOptions. This fixes the issue where these
parameters were being ignored by the AI SDK's @ai-sdk/google module.
Changes:
- Add extractAiSdkStandardParams function to separate standard params
- Update buildProviderOptions to return both providerOptions and standardParams
- Update buildStreamTextParams to spread standardParams into params object
- Update tests to reflect new return structure
Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>
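A sketch of the split described above (names simplified; the dedicated helper is streamlined away again in the following commit): AI SDK standard sampling parameters become top-level streamText() arguments, while everything else stays in providerOptions.

```typescript
const AI_SDK_STANDARD_PARAMS = ['topK', 'frequencyPenalty', 'presencePenalty', 'stopSequences', 'seed'] as const

type StandardParamKey = (typeof AI_SDK_STANDARD_PARAMS)[number]

function splitCustomParams(custom: Record<string, unknown>): {
  standardParams: Partial<Record<StandardParamKey, unknown>>
  providerOptions: Record<string, unknown>
} {
  const standardParams: Partial<Record<StandardParamKey, unknown>> = {}
  const providerOptions: Record<string, unknown> = {}
  for (const [key, value] of Object.entries(custom)) {
    if ((AI_SDK_STANDARD_PARAMS as readonly string[]).includes(key)) {
      standardParams[key as StandardParamKey] = value // spread into top-level streamText() params
    } else {
      providerOptions[key] = value // left for the provider-specific options
    }
  }
  return { standardParams, providerOptions }
}
```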
* refactor(aiCore): remove extractAiSdkStandardParams function and its tests, streamline parameter extraction logic
* chore: type
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>
Co-authored-by: suyao <sy20010504@gmail.com>
* fix: update provider-utils and add patch for header merging logic
* fix: enhance header merging logic to deduplicate values
* fix: handle null values in header merging logic
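The merging behavior these patches target, as an illustrative sketch (behavior-level only, not the patched library code): header values from multiple sources are combined, duplicate values are collapsed, and null/undefined entries are skipped.

```typescript
type HeaderSource = Record<string, string | null | undefined>

function mergeHeaders(...sources: Array<HeaderSource | undefined>): Record<string, string> {
  const merged: Record<string, string> = {}
  for (const source of sources) {
    if (!source) continue
    for (const [key, value] of Object.entries(source)) {
      if (value == null) continue // skip null/undefined header values
      const name = key.toLowerCase()
      const existing = merged[name]
      if (existing) {
        // Deduplicate comma-separated values (e.g. repeated feature flags)
        const values = new Set([...existing.split(','), ...value.split(',')].map((v) => v.trim()))
        merged[name] = Array.from(values).join(',')
      } else {
        merged[name] = value
      }
    }
  }
  return merged
}
```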
* chore: update ai-sdk dependencies and remove obsolete patches
- Updated @ai-sdk/amazon-bedrock from 3.0.56 to 3.0.61
- Updated @ai-sdk/anthropic from 2.0.45 to 2.0.49
- Updated @ai-sdk/gateway from 2.0.13 to 2.0.15
- Updated @ai-sdk/google from 2.0.40 to 2.0.43
- Updated @ai-sdk/google-vertex from 3.0.72 to 3.0.79
- Updated @ai-sdk/openai from 2.0.71 to 2.0.72
- Updated @ai-sdk/provider-utils from patch version to 3.0.17
- Removed obsolete patches for @ai-sdk/openai and @ai-sdk/provider-utils
- Added reasoning_content field to OpenAIChat response and chunk schemas
- Enhanced OpenAIChatLanguageModel to handle reasoning content in responses
* chore
* feat(settings): show OpenAI settings for supported service tier providers
Add support for displaying OpenAI settings when provider supports service tiers.
This includes refactoring the condition check and fixing variable naming consistency.
* fix(settings): set openAI verbosity to undefined by default
* fix(store): bump version to 178 and disable verbosity for groq provider
Add migration to remove verbosity from groq provider and implement provider utility to check verbosity support
Update provider types to include verbosity support flag
* feat(provider): add verbosity option support for providers
Add verbosity parameter support in provider API options settings
* fix(aiCore): check provider support for verbosity before applying
Add provider validation and check for verbosity support to prevent errors when unsupported providers are used with verbosity settings
* feat(settings): add Groq settings group component and translations
add new GroqSettingsGroup component for managing Groq provider settings
update translations for Groq settings in both zh-cn and en-us locales
refactor OpenAISettingsGroup to separate Groq-specific logic
* feat(i18n): add groq settings and verbosity support translations
add translations for groq settings title and verbosity parameter support in multiple languages
* refactor(settings): simplify service tier mode fallback logic
Remove conditional service tier mode fallback and use provider-specific defaults directly
* fix(provider): remove redundant system provider check in verbosity support
* test(provider): add tests for verbosity support detection
* fix(OpenAISettingsGroup): add endpoint_type check for showSummarySetting condition
Add model.endpoint_type check to properly determine when to show summary setting for OpenAI models
* refactor(selector): simplify selector option types and add utility functions
remove undefined and null from selector option types
add utility functions to convert between option values and real values
update groq and openai settings groups to use new utilities
add new translation for "ignore" option
* fix(ApiOptionsSettings): correct checked state for verbosity toggle
* feat(i18n): add "ignore" translation for multiple languages
* refactor(groq): remove unused model prop and related checks
Clean up GroqSettingsGroup component by removing unused model prop and unnecessary service tier checks
* refactor(models): improve text delta support check for qwen-mt models
Replace direct qwen-mt model check with regex pattern matching
Add comprehensive test cases for isNotSupportTextDeltaModel
Update all references to use new function name
The previous implementation used `a = preset` inside forEach, which only
reassigns the local variable and doesn't actually update the array element.
Changed to use findIndex + direct array assignment to properly update
the preset in the state.
Fixes #11451
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
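A minimal sketch of the fix (preset shape simplified): reassigning the forEach callback parameter never touches the array, so locate the preset by id and write back into the array directly.

```typescript
interface Preset {
  id: string
  name: string
}

function updatePreset(presets: Preset[], updated: Preset): void {
  // Broken: presets.forEach((p) => { if (p.id === updated.id) p = updated }) — only rebinds the local `p`
  const index = presets.findIndex((p) => p.id === updated.id)
  if (index !== -1) {
    presets[index] = updated // direct assignment actually updates the array element
  }
}
```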
* fix: respect enableMaxTokens setting when maxTokens is not configured
When enableMaxTokens is disabled, getMaxTokens() should return undefined
to let the API use its own default value, instead of forcing 4096 tokens.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix(modelParameters): handle max tokens when feature is disabled
Check if max tokens feature is enabled before returning undefined to ensure proper API behavior
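Combined, the two commits above give roughly this behavior (the 4096 default comes from the commit text; the real function lives in the model parameter utilities):

```typescript
const DEFAULT_MAX_TOKENS = 4096

function getMaxTokens(assistant: { settings?: { enableMaxTokens?: boolean; maxTokens?: number } }): number | undefined {
  const { enableMaxTokens, maxTokens } = assistant.settings ?? {}
  if (!enableMaxTokens) {
    // Feature disabled: return undefined so the API applies its own default
    return undefined
  }
  // Feature enabled but no explicit value configured: fall back to the default
  return maxTokens ?? DEFAULT_MAX_TOKENS
}
```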
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: icarus <eurfelux@gmail.com>
* feat: update Google and OpenAI SDKs with new features and fixes
- Updated Google SDK to ensure model paths are correctly formatted.
- Enhanced OpenAI SDK to include support for image URLs in chat responses.
- Added reasoning content handling in OpenAI chat responses and chunks.
- Introduced Azure Anthropic provider configuration for Claude integration.
* fix: azure error
* fix: lint
* fix: test
* fix: test
* fix type
* fix comment
* fix: redundant
* chore resolution
* fix: test
* fix: comment
* fix: comment
* fix
* feat: add OpenRouter reasoning middleware to support content filtering
* refactor: improve model filtering with todo for robust conversion
* refactor(aiCore): add AiSdkConfig type and update provider config handling
- Introduce new AiSdkConfig type in aiCoreTypes for better type safety
- Update provider factory and config to use AiSdkConfig consistently
- Simplify getAiSdkProviderId return type to string
- Add config validation in ModernAiProvider
* refactor(aiCore): move ai core types to dedicated module
Consolidate AI core type definitions into a dedicated module under aiCore/types. This improves code organization by keeping related types together and removes circular dependencies between modules. The change includes:
- Moving AiSdkConfig to aiCore/types
- Updating all imports to reference the new location
- Removing duplicate type definitions
* refactor(provider): add return type to createAiSdkProvider function
* feat(messages): add filter for error-only messages and their related pairs
Add new filter function to remove assistant messages containing only error blocks along with their associated user messages, identified by askId. This improves conversation quality by cleaning up error-only responses.
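An illustrative sketch of the filter (the real message and block types are richer than shown): assistant messages whose blocks are all errors are dropped, together with the user message referenced by their askId.

```typescript
interface MessageLike {
  id: string
  role: 'user' | 'assistant'
  askId?: string // id of the user message an assistant reply answers
  blocks: Array<{ type: string }>
}

function filterErrorOnlyPairs(messages: MessageLike[]): MessageLike[] {
  const errorOnly = messages.filter(
    (m) => m.role === 'assistant' && m.blocks.length > 0 && m.blocks.every((b) => b.type === 'error')
  )
  const droppedIds = new Set<string>()
  for (const m of errorOnly) {
    droppedIds.add(m.id)
    if (m.askId) droppedIds.add(m.askId) // also drop the user message that triggered it
  }
  return messages.filter((m) => !droppedIds.has(m.id))
}
```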
* refactor(ConversationService): improve message filtering pipeline readability
Break down complex message filtering chain into clearly labeled steps
Add comments explaining each filtering step's purpose
Maintain same functionality while improving code maintainability
* test(messageUtils): add test cases for message filter utilities
* docs(messageUtils): correct jsdoc for filterUsefulMessages
* refactor(ConversationService): extract message filtering logic into pipeline method
Move message filtering steps into a dedicated static method to improve testability and maintainability. Add comprehensive tests to verify pipeline behavior.
* refactor(ConversationService): add logging and improve message filtering readability
Add logger service to track message pipeline output
Split filterUserRoleStartMessages into separate variable for better debugging