English | 中文 | Official Site | Documents | Development | Feedback
🍒 Cherry Studio
Cherry Studio is a desktop client that supports multiple LLM providers, available on Windows, Mac and Linux.
👏 Join Telegram Group | Discord | QQ Group (575014769)
❤️ Like Cherry Studio? Give it a star 🌟 or Sponsor to support the development!
🌠 Screenshot
🌟 Key Features
- Diverse LLM Provider Support:
- ☁️ Major LLM Cloud Services: OpenAI, Gemini, Anthropic, and more
- 🔗 AI Web Service Integration: Claude, Perplexity, Poe, and others
- 💻 Local Model Support with Ollama, LM Studio
- AI Assistants & Conversations:
- 📚 300+ Pre-configured AI Assistants
- 🤖 Custom Assistant Creation
- 💬 Multi-model Simultaneous Conversations
- Document & Data Processing:
- 📄 Supports Text, Images, Office, PDF, and more
- ☁️ WebDAV File Management and Backup
- 📊 Mermaid Chart Visualization
- 💻 Code Syntax Highlighting
- Practical Tools Integration:
- 🔍 Global Search Functionality
- 📝 Topic Management System
- 🔤 AI-powered Translation
- 🎯 Drag-and-drop Sorting
- 🔌 Mini Program Support
- ⚙️ MCP (Model Context Protocol) Server support (see the example sketch after this list)
- Enhanced User Experience:
- 🖥️ Cross-platform Support for Windows, Mac, and Linux
- 📦 Ready to Use - No Environment Setup Required
- 🎨 Light/Dark Themes and Transparent Window
- 📝 Complete Markdown Rendering
- 🤲 Easy Content Sharing
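To make the MCP feature above more concrete, here is a minimal sketch of a tool-providing MCP server that a desktop client such as Cherry Studio could connect to. This is not code from Cherry Studio itself: it assumes the official `@modelcontextprotocol/sdk` TypeScript package and `zod` for the input schema, and the server name, tool name, and logic are invented for illustration.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical server name and version; any MCP client can connect to it.
const server = new McpServer({ name: "demo-tools", version: "0.1.0" });

// Register a single "add" tool with a typed input schema.
server.tool("add", { a: z.number(), b: z.number() }, async ({ a, b }) => ({
  content: [{ type: "text", text: String(a + b) }],
}));

// Expose the server over stdio so a desktop client can spawn it as a child process.
await server.connect(new StdioServerTransport());
```

An MCP client like Cherry Studio would then typically launch a server of this kind as a local process and call its tools during conversations.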
📝 Roadmap
We're actively working on the following features and improvements:
- 🎯 Core Features
- Selection Assistant with smart content selection enhancement
- Deep Research with advanced research capabilities
- Memory System with global context awareness
- Document Preprocessing with improved document handling
- MCP Marketplace for Model Context Protocol ecosystem
- 🗂 Knowledge Management
- Notes and Collections
- Dynamic Canvas visualization
- OCR capabilities
- TTS (Text-to-Speech) support
- 📱 Platform Support
- HarmonyOS Edition (PC)
- Android App (Phase 1)
- iOS App (Phase 1)
- Multi-Window support
- Window Pinning functionality
- 🔌 Advanced Features
- Plugin System
- ASR (Automatic Speech Recognition)
- Assistant and Topic Interaction Refactoring
Track our progress and contribute on our project board.
Want to influence our roadmap? Join our GitHub Discussions to share your ideas and feedback!
🌈 Theme
- Theme Gallery: https://cherrycss.com
- Aero Theme: https://github.com/hakadao/CherryStudio-Aero
- PaperMaterial Theme: https://github.com/rainoffallingstar/CherryStudio-PaperMaterial
- Claude dynamic-style: https://github.com/bjl101501/CherryStudio-Claudestyle-dynamic
- Maple Neon Theme: https://github.com/BoningtonChen/CherryStudio_themes
PRs for more themes are welcome!
🤝 Contributing
We welcome contributions to Cherry Studio! Here are some ways you can contribute:
- Contribute Code: Develop new features or optimize existing code.
- Fix Bugs: Submit fixes for any bugs you find.
- Maintain Issues: Help manage GitHub issues.
- Product Design: Participate in design discussions.
- Write Documentation: Improve user manuals and guides.
- Community Engagement: Join discussions and help users.
- Promote Usage: Spread the word about Cherry Studio.
Refer to the Branching Strategy for contribution guidelines.
Getting Started
- Fork the Repository: Fork and clone it to your local machine.
- Create a Branch: Create a new branch for your changes.
- Submit Changes: Commit and push your changes.
- Open a Pull Request: Describe your changes and reasons.
For more detailed guidelines, please refer to our Contributing Guide.
Thank you for your support and contributions!
🔧 Developer Co-creation Program
We are launching the Cherry Studio Developer Co-creation Program to foster a healthy and positive-feedback loop within the open-source ecosystem. We believe that great software is built collaboratively, and every merged pull request breathes new life into the project.
We sincerely invite you to join our ranks of contributors and shape the future of Cherry Studio with us.
Contributor Rewards Program
To give back to our core contributors and create a virtuous cycle, we have established the following long-term incentive plan.
The inaugural tracking period for this program will be Q3 2025 (July, August, September). Rewards for this cycle will be distributed on October 1st.
Within any tracking period (e.g., July 1st to September 30th for the first cycle), any developer who contributes more than 30 meaningful commits to any of Cherry Studio's open-source projects on GitHub will be eligible for the following benefits:
- Cursor Subscription Sponsorship: Receive a $70 USD credit or reimbursement for your Cursor subscription, making AI your most efficient coding partner.
- Unlimited Model Access: Get unlimited API calls for the DeepSeek and Qwen models.
- Cutting-Edge Tech Access: Enjoy occasional perks, including API access to models like Claude, Gemini, and OpenAI, keeping you at the forefront of technology.
Growing Together & Future Plans
A vibrant community is the driving force behind any sustainable open-source project. As Cherry Studio grows, so will our rewards program. We are committed to continuously aligning our benefits with the best-in-class tools and resources in the industry. This ensures our core contributors receive meaningful support, creating a positive cycle where developers, the community, and the project grow together.
Moving forward, the project will also embrace an increasingly open stance to give back to the entire open-source community.
How to Get Started?
We look forward to your first Pull Request!
You can start by exploring our repositories, picking up a good first issue, or proposing your own enhancements. Every commit is a testament to the spirit of open source.
Thank you for your interest and contributions.
Let's build together.
🏢 Enterprise Edition
Building on the Community Edition, we are proud to introduce Cherry Studio Enterprise Edition, a privately deployable AI productivity and management platform designed for modern teams and enterprises.
The Enterprise Edition addresses core challenges in team collaboration by centralizing the management of AI resources, knowledge, and data. It empowers organizations to enhance efficiency, foster innovation, and ensure compliance, all while maintaining 100% control over their data in a secure environment.
Core Advantages
- Unified Model Management: Centrally integrate and manage various cloud-based LLMs (e.g., OpenAI, Anthropic, Google Gemini) and locally deployed private models. Employees can use them out-of-the-box without individual configuration.
- Enterprise-Grade Knowledge Base: Build, manage, and share team-wide knowledge bases. Ensures knowledge retention and consistency, enabling team members to interact with AI based on unified and accurate information.
- Fine-Grained Access Control: Easily manage employee accounts and assign role-based permissions for different models, knowledge bases, and features through a unified admin backend.
- Fully Private Deployment: Deploy the entire backend service on your on-premises servers or private cloud, ensuring your data remains 100% private and under your control to meet the strictest security and compliance standards.
- Reliable Backend Services: Provides stable API services and enterprise-grade data backup and recovery mechanisms to ensure business continuity.
✨ Online Demo
🚧 Public Beta Notice
The Enterprise Edition is currently in its early public beta stage, and we are actively iterating and optimizing its features. We are aware that it may not be perfectly stable yet. If you encounter any issues or have valuable suggestions during your trial, we would be very grateful if you could contact us via email to provide feedback.
Version Comparison
| Feature | Community Edition | Enterprise Edition |
|---|---|---|
| Open Source | ✅ Yes | ⭕️ Partially released to customers |
| Cost | Free for Personal Use / Commercial License | Buyout / Subscription Fee |
| Admin Backend | — | ● Centralized Model Access ● Employee Management ● Shared Knowledge Base ● Access Control ● Data Backup |
| Server | — | ✅ Dedicated Private Deployment |
Get the Enterprise Edition
We believe the Enterprise Edition will become your team's AI productivity engine. If you are interested in Cherry Studio Enterprise Edition and would like to learn more, request a quote, or schedule a demo, please feel free to contact us.
- For Business Inquiries & Purchasing: 📧 bd@cherry-ai.com
🔗 Related Projects
- one-api: LLM API management and distribution system supporting mainstream models like OpenAI, Azure, and Anthropic. Features a unified API interface, suitable for key management and secondary distribution.
- ublacklist: Blocks specific sites from appearing in Google search results.