banner

English | 中文 | Official Site | Documents | Development | Feedback

Featured on HelloGitHub | Trendshift | Product Hunt

🍒 Cherry Studio

Cherry Studio is a desktop client that supports multiple LLM providers, available on Windows, Mac and Linux.

👏 Join our Telegram Group | Discord | QQ Group (575014769)

❤️ Like Cherry Studio? Give it a star 🌟 or Sponsor to support the development!

🌠 Screenshot

🌟 Key Features

  1. Diverse LLM Provider Support:
  • ☁️ Major LLM Cloud Services: OpenAI, Gemini, Anthropic, and more
  • 🔗 AI Web Service Integration: Claude, Perplexity, Poe, and others
  • 💻 Local Model Support with Ollama and LM Studio (see the request sketch after this list)
  2. AI Assistants & Conversations:
  • 📚 300+ Pre-configured AI Assistants
  • 🤖 Custom Assistant Creation
  • 💬 Multi-model Simultaneous Conversations
  3. Document & Data Processing:
  • 📄 Supports Text, Images, Office, PDF, and more
  • ☁️ WebDAV File Management and Backup
  • 📊 Mermaid Chart Visualization
  • 💻 Code Syntax Highlighting
  4. Practical Tools Integration:
  • 🔍 Global Search Functionality
  • 📝 Topic Management System
  • 🔤 AI-powered Translation
  • 🎯 Drag-and-drop Sorting
  • 🔌 Mini Program Support
  • ⚙️ MCP (Model Context Protocol) Server
  5. Enhanced User Experience:
  • 🖥️ Cross-platform Support for Windows, Mac, and Linux
  • 📦 Ready to Use - No Environment Setup Required
  • 🎨 Light/Dark Themes and Transparent Window
  • 📝 Complete Markdown Rendering
  • 🤲 Easy Content Sharing
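
To make the "Local Model Support" bullet above concrete, here is a minimal, illustrative TypeScript sketch of a chat request against an OpenAI-compatible endpoint such as the one Ollama serves locally. It is not Cherry Studio's internal code; the URL, port, and model name are assumptions chosen for the example.

```ts
// Illustrative sketch only, not Cherry Studio source code.
// Ollama listens on http://localhost:11434 by default and exposes an
// OpenAI-compatible /v1/chat/completions endpoint. The model name below
// is an assumption; use any model you have already pulled locally.

interface ChatMessage {
  role: 'system' | 'user' | 'assistant'
  content: string
}

async function chatWithLocalModel(prompt: string): Promise<string> {
  const messages: ChatMessage[] = [{ role: 'user', content: prompt }]

  const response = await fetch('http://localhost:11434/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'qwen3:4b', messages })
  })

  if (!response.ok) {
    throw new Error(`Local model request failed with status ${response.status}`)
  }

  // OpenAI-compatible servers return the reply under choices[0].message.content
  const data = await response.json()
  return data.choices[0].message.content as string
}

chatWithLocalModel('Say hello from a local model').then(console.log).catch(console.error)
```

The same request shape works for LM Studio's local server (default port 1234) and for cloud providers that follow the OpenAI API, which is what makes a single client able to talk to many providers.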

📝 Roadmap

We're actively working on the following features and improvements:

  1. 🎯 Core Features
  • Selection Assistant with smart content selection enhancement
  • Deep Research with advanced research capabilities
  • Memory System with global context awareness
  • Document Preprocessing with improved document handling
  • MCP Marketplace for Model Context Protocol ecosystem
  2. 🗂 Knowledge Management
  • Notes and Collections
  • Dynamic Canvas visualization
  • OCR capabilities
  • TTS (Text-to-Speech) support
  3. 📱 Platform Support
  • HarmonyOS Edition (PC)
  • Android App (Phase 1)
  • iOS App (Phase 1)
  • Multi-Window support
  • Window Pinning functionality
  • Intel AI PC (Core Ultra) Support
  4. 🔌 Advanced Features
  • Plugin System
  • ASR (Automatic Speech Recognition)
  • Assistant and Topic Interaction Refactoring

Track our progress and contribute on our project board.

Want to influence our roadmap? Join our GitHub Discussions to share your ideas and feedback!

🌈 Theme

PRs for more themes are welcome.

🤝 Contributing

We welcome contributions to Cherry Studio! Here are some ways you can contribute:

  1. Contribute Code: Develop new features or optimize existing code.
  2. Fix Bugs: Submit fixes for any bugs you find.
  3. Maintain Issues: Help manage GitHub issues.
  4. Product Design: Participate in design discussions.
  5. Write Documentation: Improve user manuals and guides.
  6. Community Engagement: Join discussions and help users.
  7. Promote Usage: Spread the word about Cherry Studio.

Refer to the Branching Strategy for contribution guidelines.

Getting Started

  1. Fork the Repository: Fork the repository and clone it to your local machine.
  2. Create a Branch: Create a new branch for your changes.
  3. Submit Changes: Commit and push your changes.
  4. Open a Pull Request: Describe your changes and the reasoning behind them.

For more detailed guidelines, please refer to our Contributing Guide.

Thank you for your support and contributions!

🔧 Developer Co-creation Program

We are launching the Cherry Studio Developer Co-creation Program to foster a healthy and positive-feedback loop within the open-source ecosystem. We believe that great software is built collaboratively, and every merged pull request breathes new life into the project.

We sincerely invite you to join our ranks of contributors and shape the future of Cherry Studio with us.

Contributor Rewards Program

To give back to our core contributors and create a virtuous cycle, we have established the following long-term incentive plan.

The inaugural tracking period for this program will be Q3 2025 (July, August, September). Rewards for this cycle will be distributed on October 1st.

Within any tracking period (e.g., July 1st to September 30th for the first cycle), any developer who contributes more than 30 meaningful commits to any of Cherry Studio's open-source projects on GitHub will be eligible for the following benefits:

  • Cursor Subscription Sponsorship: Receive a $70 USD credit or reimbursement for your Cursor subscription, making AI your most efficient coding partner.
  • Unlimited Model Access: Get unlimited API calls for the DeepSeek and Qwen models.
  • Cutting-Edge Tech Access: Enjoy occasional perks, including API access to models like Claude, Gemini, and OpenAI, keeping you at the forefront of technology.

Growing Together & Future Plans

A vibrant community is the driving force behind any sustainable open-source project. As Cherry Studio grows, so will our rewards program. We are committed to continuously aligning our benefits with the best-in-class tools and resources in the industry. This ensures our core contributors receive meaningful support, creating a positive cycle where developers, the community, and the project grow together.

Moving forward, the project will also embrace an increasingly open stance to give back to the entire open-source community.

How to Get Started?

We look forward to your first Pull Request!

You can start by exploring our repositories, picking up a good first issue, or proposing your own enhancements. Every commit is a testament to the spirit of open source.

Thank you for your interest and contributions.

Let's build together.

🏢 Enterprise Edition

Building on the Community Edition, we are proud to introduce Cherry Studio Enterprise Edition—a privately-deployable AI productivity and management platform designed for modern teams and enterprises.

The Enterprise Edition addresses core challenges in team collaboration by centralizing the management of AI resources, knowledge, and data. It empowers organizations to enhance efficiency, foster innovation, and ensure compliance, all while maintaining 100% control over their data in a secure environment.

Core Advantages

  • Unified Model Management: Centrally integrate and manage various cloud-based LLMs (e.g., OpenAI, Anthropic, Google Gemini) and locally deployed private models. Employees can use them out-of-the-box without individual configuration.
  • Enterprise-Grade Knowledge Base: Build, manage, and share team-wide knowledge bases. Ensures knowledge retention and consistency, enabling team members to interact with AI based on unified and accurate information.
  • Fine-Grained Access Control: Easily manage employee accounts and assign role-based permissions for different models, knowledge bases, and features through a unified admin backend.
  • Fully Private Deployment: Deploy the entire backend service on your on-premises servers or private cloud, ensuring your data remains 100% private and under your control to meet the strictest security and compliance standards.
  • Reliable Backend Services: Provides stable API services and enterprise-grade data backup and recovery mechanisms to ensure business continuity.

Online Demo

🔗 Cherry Studio Enterprise

Version Comparison

Feature | Community Edition | Enterprise Edition
Open Source | Yes | Partially released to customers
Cost | AGPL-3.0 License | Buyout / Subscription Fee
Admin Backend | — | Centralized Model Access, Employee Management, Shared Knowledge Base, Access Control, Data Backup
Server | — | Dedicated Private Deployment

Get the Enterprise Edition

We believe the Enterprise Edition will become your team's AI productivity engine. If you are interested in Cherry Studio Enterprise Edition and would like to learn more, request a quote, or schedule a demo, please feel free to contact us.

🔗 Related Projects

  • new-api: A next-generation LLM gateway and AI asset management system with multi-language support.

  • one-api: LLM API management and distribution system supporting mainstream models like OpenAI, Azure, and Anthropic. Features a unified API interface, suitable for key management and secondary distribution.

  • Poe: Poe gives you access to the best AI, all in one place. Explore GPT-5, Claude Opus 4.1, DeepSeek-R1, Veo 3, ElevenLabs, and millions of others.

  • ublacklist: Blocks specific sites from appearing in Google search results.

🚀 Contributors



📊 GitHub Stats

Stats

Star History

Star History Chart

📜 License

The Cherry Studio Community Edition is governed by the standard GNU Affero General Public License v3.0 (AGPL-3.0), available at https://www.gnu.org/licenses/agpl-3.0.html.

Use of the Cherry Studio Community Edition for commercial purposes is permitted, subject to full compliance with the terms and conditions of the AGPL-3.0 license.

Should you require a commercial license that provides an exemption from the AGPL-3.0 requirements, please contact us at bd@cherry-ai.com.