* feat: implement auto-renaming feature for notes
* feat: add motion effects for auto-renaming in notes
* feat: add zh-TW i18n for auto-renaming in notes
* chore: lint
* feat: add GitHub Copilot CLI integration to coding tools
- Add githubCopilotCli to codeTools enum
- Support @github/copilot package installation
- Add 'copilot' executable command mapping
- Update Redux store to include GitHub Copilot CLI state
- Add GitHub Copilot CLI option to UI with proper provider mapping
- Implement environment variable handling for GitHub authentication
- Fix model selection logic to disable model choice for GitHub Copilot CLI
- Update launch validation to not require model selection for GitHub Copilot CLI
- Fix prepareLaunchEnvironment and executeLaunch to handle no-model scenario
This enables users to launch GitHub Copilot CLI directly from Cherry Studio's
code tools interface without needing to select a model, as GitHub Copilot CLI
uses GitHub's built-in models and authentication.
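A minimal sketch of the no-model launch path described above, assuming a simple per-tool metadata map; only the `githubCopilotCli` enum value, the `@github/copilot` package, and the `copilot` command come from this change, while the other names (`toolMeta`, `canLaunch`, the `claudeCode` member) are illustrative:

```ts
// Illustrative sketch, not Cherry Studio's actual code.
export enum codeTools {
  claudeCode = 'claudeCode', // existing member, shown for context
  githubCopilotCli = 'githubCopilotCli' // added by this change
}

// Hypothetical per-tool metadata: executable, npm package, and whether a model is required.
const toolMeta: Record<codeTools, { command: string; npmPackage: string; requiresModel: boolean }> = {
  [codeTools.claudeCode]: { command: 'claude', npmPackage: '@anthropic-ai/claude-code', requiresModel: true },
  [codeTools.githubCopilotCli]: { command: 'copilot', npmPackage: '@github/copilot', requiresModel: false }
}

// Launch validation: GitHub Copilot CLI may launch without a model selection,
// since it uses GitHub's own models and authentication.
export function canLaunch(tool: codeTools, modelId?: string): boolean {
  return toolMeta[tool].requiresModel ? Boolean(modelId) : true
}
```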
Signed-off-by: LeaderOnePro <leaderonepro@outlook.com>
* style: apply code formatting for GitHub Copilot CLI integration
Auto-fix code style inconsistencies using project's Biome formatter.
Resolves semicolon, comma, and quote style issues to match project standards.
Signed-off-by: LeaderOnePro <leaderonepro@outlook.com>
* feat: conditionally render model selector for GitHub Copilot CLI
- Hide model selector component when GitHub Copilot CLI is selected
- Maintain validation logic to allow GitHub Copilot CLI without model selection
- Improve UX by removing empty model dropdown for GitHub Copilot CLI
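A sketch of the conditional rendering, assuming hypothetical component and prop names (the real Cherry Studio UI differs); only the behaviour, hiding the model dropdown and allowing launch without a model when GitHub Copilot CLI is selected, comes from this change:

```tsx
import React from 'react'

type Props = {
  selectedTool: string
  model?: string
  onModelChange: (modelId: string) => void
}

// Hypothetical sketch of the model-selector section of the code tools panel.
export function CodeToolsModelSection({ selectedTool, model, onModelChange }: Props) {
  const isCopilotCli = selectedTool === 'githubCopilotCli'

  return (
    <div>
      {/* Render the model dropdown only for tools that actually use a model */}
      {!isCopilotCli && (
        <select value={model ?? ''} onChange={(e) => onModelChange(e.target.value)}>
          <option value="">Select a model…</option>
        </select>
      )}
      {/* Validation mirrors the UI: Copilot CLI may launch without a model */}
      <button disabled={!isCopilotCli && !model}>Launch</button>
    </div>
  )
}
```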
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Signed-off-by: LeaderOnePro <leaderonepro@outlook.com>
Co-authored-by: Claude <noreply@anthropic.com>
- Unify buildClaudeCodeSystemMessage implementation in shared package
- Refactor MessagesService to provide comprehensive message processing API
- Extract streaming logic, error handling, and header preparation into service methods
- Remove duplicate anthropic config from renderer, use shared implementation
- Update ClaudeCodeService to use append mode for custom instructions
- Improve type safety and request validation in message processing
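A rough sketch of what the resulting service surface might look like; every method name here is an assumption used to illustrate "streaming logic, error handling, and header preparation extracted into service methods", not the actual MessagesService API:

```ts
// Illustrative API shape only.
export interface MessagesServiceApi {
  // Build provider headers (auth, version, beta flags) in one place.
  prepareHeaders(apiKey: string): Record<string, string>
  // Validate the incoming request body before forwarding it upstream.
  validateRequest(body: unknown): { ok: true } | { ok: false; error: string }
  // Stream the upstream response back to the caller chunk by chunk.
  streamResponse(upstream: ReadableStream<Uint8Array>, onChunk: (text: string) => void): Promise<void>
  // Normalize upstream failures into a single error shape.
  handleError(err: unknown): { status: number; message: string }
}
```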
- Bumped versions for several @ai-sdk packages in package.json and yarn.lock to their latest releases, including @ai-sdk/amazon-bedrock, @ai-sdk/google-vertex, @ai-sdk/mistral, and @ai-sdk/perplexity.
- Updated ai package version from 5.0.44 to 5.0.59.
- Updated aiCore package version from 1.0.0-alpha.18 to 1.0.1 and adjusted dependencies accordingly.
- Ensured compatibility with the latest zod version in multiple packages.
- Replace @anthropic-ai/claude-code with @anthropic-ai/claude-agent-sdk@0.1.1
- Update all import statements across 4 files
- Migrate patch for Electron compatibility (fork vs spawn)
- Handle breaking changes: replace appendSystemPrompt with systemPrompt preset
- Add settingSources configuration for filesystem settings
- Update vendor path in build scripts
- Update package name mapping in CodeToolsService
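The breaking changes listed above roughly translate to the following call shape; this is a hedged sketch against @anthropic-ai/claude-agent-sdk 0.1.x, and the prompt text and chosen setting sources are illustrative:

```ts
import { query } from '@anthropic-ai/claude-agent-sdk'

// The old @anthropic-ai/claude-code SDK took `appendSystemPrompt`; the agent SDK
// instead takes a systemPrompt preset with an `append` field, and filesystem
// settings (CLAUDE.md, settings.json) must be opted into via settingSources.
async function run(customInstructions: string) {
  for await (const message of query({
    prompt: 'List the TODO comments in this project', // illustrative prompt
    options: {
      systemPrompt: { type: 'preset', preset: 'claude_code', append: customInstructions },
      settingSources: ['user', 'project', 'local']
    }
  })) {
    if (message.type === 'result') console.log(message)
  }
}
```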
* feat(models): add gpt5_codex model support
Add support for the gpt5_codex model type in model configuration and type definitions. Update getThinkModelType to handle the codex variant of gpt5 models.
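A minimal sketch of how the codex branch might look inside getThinkModelType; the surrounding logic and return values are assumptions, and only the idea of distinguishing the codex variant of gpt-5 model ids comes from the commit:

```ts
// Illustrative only; the real function handles many more model families.
type ThinkModelType = 'default' | 'gpt5' | 'gpt5_codex'

export function getThinkModelType(modelId: string): ThinkModelType {
  const id = modelId.toLowerCase()
  if (id.includes('gpt-5')) {
    // Distinguish the codex variant (e.g. "gpt-5-codex") from plain gpt-5 models
    return id.includes('codex') ? 'gpt5_codex' : 'gpt5'
  }
  return 'default'
}
```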
* feat(models): add gpt-5-codex model logo and update logo mapping
Add new GPT-5-Codex model logo image and include it in the logo mapping configuration
Implement functionality to show files/folders in the system explorer through IPC. Includes the channel definition, preload API, main-process handler, and error handling for non-existent paths.
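A minimal sketch of the main-process side, assuming a hypothetical channel name ('file:show-in-explorer'); ipcMain.handle, shell.showItemInFolder, and fs.existsSync are the standard Electron/Node APIs for this pattern:

```ts
// main process
import { ipcMain, shell } from 'electron'
import fs from 'node:fs'

ipcMain.handle('file:show-in-explorer', async (_event, targetPath: string) => {
  // Reject non-existent paths so the renderer gets a clear error instead of a silent no-op
  if (!fs.existsSync(targetPath)) {
    throw new Error(`Path does not exist: ${targetPath}`)
  }
  shell.showItemInFolder(targetPath)
})
```

On the preload side, the same channel would be exposed to the renderer with contextBridge.exposeInMainWorld wrapping ipcRenderer.invoke.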
* add new provider: OVMS (OpenVINO Model Server)
Signed-off-by: Ma, Kejiang <kj.ma@intel.com>
* remove useless comments
* add note: supported on Windows only
* fix ESLint error; add migration for OVMS provider
Signed-off-by: Ma, Kejiang <kj.ma@intel.com>
* fix ci error after rebase
Signed-off-by: Ma, Kejiang <kj.ma@intel.com>
* modifications based on reviewers' comments
Signed-off-by: Ma, Kejiang <kj.ma@intel.com>
* show intel-ovms provider only on Windows with Intel CPUs
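A sketch of such a platform gate, assuming the check relies on process.platform and the CPU model string from Node's os.cpus(); the function name and placement are illustrative:

```ts
import os from 'node:os'

// Only surface the intel-ovms provider on Windows machines with an Intel CPU.
export function isOvmsSupported(): boolean {
  if (process.platform !== 'win32') return false
  const cpuModel = os.cpus()[0]?.model ?? ''
  return cpuModel.toLowerCase().includes('intel')
}
```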
Signed-off-by: Ma, Kejiang <kj.ma@intel.com>
* complete i18n for Intel OVMS
Signed-off-by: Ma, Kejiang <kj.ma@intel.com>
* update OVMS to 2025.3; apply patch for running the qwen3-8b model locally
Signed-off-by: Ma, Kejiang <kj.ma@intel.com>
* fix lint issues
Signed-off-by: Ma, Kejiang <kj.ma@intel.com>
* fix formatting and type-checking issues
Signed-off-by: Ma, Kejiang <kj.ma@intel.com>
* remove test code
Signed-off-by: Ma, Kejiang <kj.ma@intel.com>
* fix issues after rebase
Signed-off-by: Ma, Kejiang <kj.ma@intel.com>
---------
Signed-off-by: Ma, Kejiang <kj.ma@intel.com>
* feat: add LongCat provider support
- Add LongCat to SystemProviderIds enum
- Add LongCat provider logo and configuration
- Configure API endpoints and URLs based on official docs
- Add two models: LongCat-Flash-Chat and LongCat-Flash-Thinking
- Update provider mappings for proper integration
The LongCat provider uses the OpenAI-compatible API format and supports up to 8K output tokens with a daily free quota of 500K tokens.
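As a rough illustration, the provider entry might look like the sketch below; the field names follow a generic OpenAI-compatible provider shape rather than Cherry Studio's actual Provider type, and the apiHost value is a placeholder for the endpoint given in LongCat's official docs:

```ts
// Illustrative only.
const longcatProvider = {
  id: 'longcat',
  name: 'LongCat',
  type: 'openai', // OpenAI-compatible API format
  apiHost: 'https://<longcat-api-endpoint>', // placeholder, see official docs
  models: ['LongCat-Flash-Chat', 'LongCat-Flash-Thinking']
}
```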
Signed-off-by: LeaderOnePro <leaderonepro@outlook.com>
* feat: add migration for LongCat provider
- Add migration version 158 for LongCat provider
- Ensure existing users get LongCat provider on app update
- Follow standard migration pattern for simple provider additions
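A sketch of the "simple provider addition" migration pattern; the keyed-migration shape and state layout are assumptions, and only the version number 158 and the intent (give existing installs the LongCat provider) come from the commit:

```ts
// Illustrative migration map, keyed by store version.
const migrations = {
  158: (state: { llm: { providers: Array<{ id: string }> } }) => {
    // Add LongCat only if an existing install does not already have it
    if (!state.llm.providers.some((p) => p.id === 'longcat')) {
      state.llm.providers.push({ id: 'longcat' /* name, apiHost, models, … */ })
    }
    return state
  }
}
```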
Signed-off-by: LeaderOnePro <leaderonepro@outlook.com>
---------
Signed-off-by: LeaderOnePro <leaderonepro@outlook.com>
- Introduced a new utility function to determine if a tool is an agent tool, simplifying the tool selection logic in MessageTool.
- Refactored MessageAgentTools to improve rendering logic and added an UnknownToolRenderer for better handling of unrecognized tools.
- Updated BashOutputTool to remove unnecessary Card components, enhancing layout consistency.
- Improved overall code clarity and maintainability by reducing redundancy and adhering to existing patterns.
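A sketch of what the helper could look like; the set of tool names is illustrative, not the actual list used by MessageTool:

```ts
// Hypothetical list of agent tool names for illustration.
const AGENT_TOOL_NAMES = new Set(['Bash', 'BashOutput', 'Read', 'Write', 'Edit'])

export function isAgentTool(toolName: string): boolean {
  return AGENT_TOOL_NAMES.has(toolName)
}
```

MessageTool can then branch once on isAgentTool and hand recognized tools to MessageAgentTools, which falls back to UnknownToolRenderer for anything it does not recognize.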
- Updated the resolution and checksum for the @ai-sdk/google patch in yarn.lock.
- Enhanced the getModelPath function to check for "models/" in the modelId before returning the path, improving its robustness.
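A sketch of the described check; the actual patched function in @ai-sdk/google may differ in detail:

```ts
// Only treat the id as a full path when it already carries the "models/" prefix;
// otherwise prepend it.
function getModelPath(modelId: string): string {
  return modelId.includes('models/') ? modelId : `models/${modelId}`
}
```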