English | 中文 | Official Site | Documents | Development | Feedback
🍒 Cherry Studio
Cherry Studio is a desktop client that supports multiple LLM providers, available on Windows, Mac and Linux.
👏 Join Telegram Group | Discord | QQ Group (575014769)
❤️ Like Cherry Studio? Give it a star 🌟 or Sponsor to support the development!
🌠 Screenshot
🌟 Key Features
- Diverse LLM Provider Support:
- ☁️ Major LLM Cloud Services: OpenAI, Gemini, Anthropic, and more
- 🔗 AI Web Service Integration: Claude, Perplexity, Poe, and others
- 💻 Local Model Support with Ollama, LM Studio
- AI Assistants & Conversations:
- 📚 300+ Pre-configured AI Assistants
- 🤖 Custom Assistant Creation
- 💬 Multi-model Simultaneous Conversations
- Document & Data Processing:
- 📄 Supports Text, Images, Office, PDF, and more
- ☁️ WebDAV File Management and Backup
- 📊 Mermaid Chart Visualization
- 💻 Code Syntax Highlighting
- Practical Tools Integration:
- 🔍 Global Search Functionality
- 📝 Topic Management System
- 🔤 AI-powered Translation
- 🎯 Drag-and-drop Sorting
- 🔌 Mini Program Support
- ⚙️ MCP (Model Context Protocol) Server
- Enhanced User Experience:
- 🖥️ Cross-platform Support for Windows, Mac, and Linux
- 📦 Ready to Use - No Environment Setup Required
- 🎨 Light/Dark Themes and Transparent Window
- 📝 Complete Markdown Rendering
- 🤲 Easy Content Sharing
📝 Roadmap
We're actively working on the following features and improvements:
- 🎯 Core Features
- Selection Assistant with smart content selection enhancement
- Deep Research with advanced research capabilities
- Memory System with global context awareness
- Document Preprocessing with improved document handling
- MCP Marketplace for Model Context Protocol ecosystem
- 🗂 Knowledge Management
- Notes and Collections
- Dynamic Canvas visualization
- OCR capabilities
- TTS (Text-to-Speech) support
- 📱 Platform Support
- HarmonyOS Edition (PC)
- Android App (Phase 1)
- iOS App (Phase 1)
- Multi-Window support
- Window Pinning functionality
- 🔌 Advanced Features
- Plugin System
- ASR (Automatic Speech Recognition)
- Assistant and Topic Interaction Refactoring
Track our progress and contribute on our project board.
Want to influence our roadmap? Join our GitHub Discussions to share your ideas and feedback!
🌈 Theme
- Theme Gallery: https://cherrycss.com
- Aero Theme: https://github.com/hakadao/CherryStudio-Aero
- PaperMaterial Theme: https://github.com/rainoffallingstar/CherryStudio-PaperMaterial
- Claude dynamic-style: https://github.com/bjl101501/CherryStudio-Claudestyle-dynamic
- Maple Neon Theme: https://github.com/BoningtonChen/CherryStudio_themes
PRs for more themes are welcome!
🤝 Contributing
We welcome contributions to Cherry Studio! Here are some ways you can contribute:
- Contribute Code: Develop new features or optimize existing code.
- Fix Bugs: Submit fixes for any bugs you find.
- Maintain Issues: Help manage GitHub issues.
- Product Design: Participate in design discussions.
- Write Documentation: Improve user manuals and guides.
- Community Engagement: Join discussions and help users.
- Promote Usage: Spread the word about Cherry Studio.
Refer to the Branching Strategy for contribution guidelines.
Getting Started
- Fork the Repository: Fork and clone it to your local machine.
- Create a Branch: Create a new branch for your changes.
- Submit Changes: Commit and push your changes.
- Open a Pull Request: Describe your changes and the reasons behind them (see the example workflow below).
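The commands below are a minimal sketch of one way to follow these steps. The fork URL, username, and branch name are placeholders for illustration, not exact values from this repository.

```bash
# Clone your fork (replace <your-username> with your GitHub username)
git clone https://github.com/<your-username>/cherry-studio.git
cd cherry-studio

# Create a branch for your changes
git checkout -b feat/my-improvement

# Commit and push your changes
git add .
git commit -m "feat: describe your change"
git push -u origin feat/my-improvement

# Then open a Pull Request against the upstream repository on GitHub
```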
For more detailed guidelines, please refer to our Contributing Guide.
Thank you for your support and contributions!
🔧 Developer Co-creation Program
We are launching the Cherry Studio Developer Co-creation Program to foster a healthy and positive-feedback loop within the open-source ecosystem. We believe that great software is built collaboratively, and every merged pull request breathes new life into the project.
We sincerely invite you to join our ranks of contributors and shape the future of Cherry Studio with us.
Contributor Rewards Program
To give back to our core contributors and create a virtuous cycle, we have established the following long-term incentive plan.
The inaugural tracking period for this program will be Q3 2025 (July, August, September). Rewards for this cycle will be distributed on October 1st.
Within any tracking period (e.g., July 1st to September 30th for the first cycle), any developer who contributes more than 30 meaningful commits to any of Cherry Studio's open-source projects on GitHub will be eligible for the following benefits:
- Cursor Subscription Sponsorship: Receive a $70 USD credit or reimbursement for your Cursor subscription, making AI your most efficient coding partner.
- Unlimited Model Access: Get unlimited API calls for the DeepSeek and Qwen models.
- Cutting-Edge Tech Access: Enjoy occasional perks, including API access to models like Claude, Gemini, and OpenAI, keeping you at the forefront of technology.
Growing Together & Future Plans
A vibrant community is the driving force behind any sustainable open-source project. As Cherry Studio grows, so will our rewards program. We are committed to continuously aligning our benefits with the best-in-class tools and resources in the industry. This ensures our core contributors receive meaningful support, creating a positive cycle where developers, the community, and the project grow together.
Moving forward, the project will also embrace an increasingly open stance to give back to the entire open-source community.
How to Get Started?
We look forward to your first Pull Request!
You can start by exploring our repositories, picking up a good first issue, or proposing your own enhancements. Every commit is a testament to the spirit of open source.
Thank you for your interest and contributions.
Let's build together.
🏢 Enterprise Edition
Building on the Community Edition, we are proud to introduce Cherry Studio Enterprise Edition—a privately-deployable AI productivity and management platform designed for modern teams and enterprises.
The Enterprise Edition addresses core challenges in team collaboration by centralizing the management of AI resources, knowledge, and data. It empowers organizations to enhance efficiency, foster innovation, and ensure compliance, all while maintaining 100% control over their data in a secure environment.
Core Advantages
- Unified Model Management: Centrally integrate and manage various cloud-based LLMs (e.g., OpenAI, Anthropic, Google Gemini) and locally deployed private models. Employees can use them out-of-the-box without individual configuration.
- Enterprise-Grade Knowledge Base: Build, manage, and share team-wide knowledge bases. Ensures knowledge retention and consistency, enabling team members to interact with AI based on unified and accurate information.
- Fine-Grained Access Control: Easily manage employee accounts and assign role-based permissions for different models, knowledge bases, and features through a unified admin backend.
- Fully Private Deployment: Deploy the entire backend service on your on-premises servers or private cloud, ensuring your data remains 100% private and under your control to meet the strictest security and compliance standards.
- Reliable Backend Services: Provides stable API services and enterprise-grade data backup and recovery mechanisms to ensure business continuity.
✨ Online Demo
🚧 Public Beta Notice
The Enterprise Edition is currently in its early public beta stage, and we are actively iterating and optimizing its features. We are aware that it may not be perfectly stable yet. If you encounter any issues or have valuable suggestions during your trial, we would be very grateful if you could contact us via email to provide feedback.
Version Comparison
| Feature | Community Edition | Enterprise Edition |
|---|---|---|
| Open Source | ✅ Yes | ⭕️ Partially released to customers |
| Cost | Free for Personal Use / Commercial License | Buyout / Subscription Fee |
| Admin Backend | — | ● Centralized Model Access ● Employee Management ● Shared Knowledge Base ● Access Control ● Data Backup |
| Server | — | ✅ Dedicated Private Deployment |
Get the Enterprise Edition
We believe the Enterprise Edition will become your team's AI productivity engine. If you are interested in Cherry Studio Enterprise Edition and would like to learn more, request a quote, or schedule a demo, please feel free to contact us.
- For Business Inquiries & Purchasing: 📧 bd@cherry-ai.com
🔗 Related Projects
- one-api: LLM API management and distribution system supporting mainstream models like OpenAI, Azure, and Anthropic. Features a unified API interface, suitable for key management and secondary distribution.
- ublacklist: Blocks specific sites from appearing in Google search results.