Mirror of https://github.com/CherryHQ/cherry-studio.git, synced 2025-12-19 06:30:10 +08:00
84 Commits

ede2b75cd0
feat: remove feature-flag-related code and simplify message storage logic

b493172090
feat: add session ID extraction and update the message sending logic

f5a41e9c78
feat: add agent session renaming functionality based on message summary

e40e1d0b36
feat: refactor message persistence and block update logic for agent sessions

e5aa58722c
Optimize agent message streaming with throttled persistence
- Prevent unnecessary message reloads by checking existing messages before loading session messages
- Implement LRU cache and throttled persistence for streaming agent messages to reduce backend load
- Add streaming state detection and proper cleanup for complete messages to improve performance
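
The throttled-persistence idea in the commit above can be illustrated with a short sketch. This is a hypothetical TypeScript outline, assuming a placeholder `persistMessage` backend call and a simplified `StreamingMessage` shape (neither is taken from the Cherry Studio sources): a Map doubles as an LRU cache of in-flight messages, streaming updates schedule at most one write per interval, and completed messages flush immediately and clean up their timer.

```typescript
// Hedged sketch: LRU-bounded, throttled persistence for streaming messages.
// `persistMessage` and the `StreamingMessage` shape are hypothetical placeholders.
type StreamingMessage = { id: string; content: string; status: 'streaming' | 'complete' }

const MAX_CACHE = 50          // LRU capacity
const FLUSH_INTERVAL = 500    // ms between persisted writes per message

const cache = new Map<string, StreamingMessage>()   // Map preserves insertion order -> LRU
const timers = new Map<string, ReturnType<typeof setTimeout>>()

async function persistMessage(msg: StreamingMessage): Promise<void> {
  // stand-in for the real backend call (IPC / DB write)
  console.log(`persisting ${msg.id} (${msg.content.length} chars, ${msg.status})`)
}

function touch(msg: StreamingMessage): void {
  // Re-insert to move the entry to the "most recently used" end; evict the oldest when over capacity.
  cache.delete(msg.id)
  cache.set(msg.id, msg)
  if (cache.size > MAX_CACHE) {
    const oldest = cache.keys().next().value as string
    cache.delete(oldest)
  }
}

export function onStreamingUpdate(msg: StreamingMessage): void {
  touch(msg)
  if (msg.status === 'complete') {
    // Complete messages flush immediately and clear any pending throttle timer.
    const t = timers.get(msg.id)
    if (t) clearTimeout(t)
    timers.delete(msg.id)
    cache.delete(msg.id)
    void persistMessage(msg)
    return
  }
  // While streaming, schedule at most one write per FLUSH_INTERVAL per message.
  if (!timers.has(msg.id)) {
    timers.set(msg.id, setTimeout(() => {
      timers.delete(msg.id)
      const latest = cache.get(msg.id)
      if (latest) void persistMessage(latest)
    }, FLUSH_INTERVAL))
  }
}
```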

15f216b050
Implement agent session message persistence and streaming state management
- Add comprehensive solution documentation for status persistence and streaming state
- Implement message update functionality in AgentMessageDataSource for agent sessions
- Remove redundant persistAgentExchange logic to eliminate duplicate saves
- Streamline message persistence flow to use appendMessage and updateMessageAndBlocks consistently

a17a198912
Add V2 database service integration with feature flag support
- Integrate V2 implementations for message operations (save, update, delete, clear) with feature flag control
- Add topic creation fallback in DexieMessageDataSource when loading non-existent topics
- Create integration status documentation tracking completed and pending V2 migrations
- Update Topic type to include TopicType enum for proper topic classification

1d5761b1fd
WIP

86a2780e2c
Add initial chunk emission for agent response processing
- Emit `ChunkType.LLM_RESPONSE_CREATED` event to mirror assistant behavior
- Ensure UI reflects pending state immediately during response processing
- Maintain consistency with existing stream processing callback structure

1fd44a68b0
Refactor agent streaming from EventEmitter to ReadableStream
Replaced EventEmitter-based agent streaming with ReadableStream for better compatibility with AI SDK patterns. Modified SessionMessageService to return a stream/completion pair instead of an event emitter, updated HTTP handlers to use stream pumping, and added an IPC contract for renderer-side message persistence.
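
To make that pattern concrete, here is a minimal, hypothetical sketch of a stream/completion pair like the one described above; the chunk type, `streamSession`, and the pumping loop are illustrative placeholders rather than the actual SessionMessageService API.

```typescript
// Hedged sketch: a ReadableStream plus a completion promise instead of an EventEmitter.
// All names here are illustrative placeholders.
type SessionChunk = { type: 'text-delta' | 'finish'; text?: string }

function streamSession(prompt: string): { stream: ReadableStream<SessionChunk>; completion: Promise<string> } {
  let resolveCompletion!: (full: string) => void
  const completion = new Promise<string>((resolve) => (resolveCompletion = resolve))

  const stream = new ReadableStream<SessionChunk>({
    async start(controller) {
      let full = ''
      // Stand-in for the real model call producing incremental chunks.
      for (const piece of ['Hello', ', ', 'world']) {
        full += piece
        controller.enqueue({ type: 'text-delta', text: piece })
        await new Promise((r) => setTimeout(r, 10))
      }
      controller.enqueue({ type: 'finish' })
      controller.close()
      resolveCompletion(full)
    }
  })

  return { stream, completion }
}

// "Stream pumping" on the consuming side (e.g. an HTTP handler or IPC bridge):
async function pump(onChunk: (c: SessionChunk) => void): Promise<string> {
  const { stream, completion } = streamSession('hi')
  const reader = stream.getReader()
  for (;;) {
    const { done, value } = await reader.read()
    if (done) break
    onChunk(value) // forward each chunk to the renderer / HTTP response
  }
  return completion // resolves with the fully accumulated text
}
```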

4d1d3e316f
feat: use oxlint to speed up lint (#10168)
* build: add eslint-plugin-oxlint dependency. Add new eslint plugin to enhance linting capabilities with oxlint rules
* build(eslint): add oxlint plugin to eslint config. Add oxlint plugin as recommended in the documentation to enhance linting capabilities
* build: add oxlint v1.15.0 as a dependency
* build: add oxlint to linting commands. Add oxlint alongside eslint in test:lint and lint scripts for enhanced static analysis
* build: add oxlint configuration file. Configure oxlint with a comprehensive set of rules for JavaScript/TypeScript code quality checks
* chore: update oxlint configuration and related settings
- Add oxc to editor code actions on save
- Update oxlint configs to use eslint, typescript, and unicorn presets
- Extend ignore patterns in oxlint configuration
- Simplify oxlint command in package.json scripts
- Add oxlint-tsgolint dependency
* fix: lint warning
* chore: update oxlintrc from eslint.recommended
* refactor(lint): update eslint and oxlint configurations
- Add src/preload to eslint ignore patterns
- Update oxlint env to es2022 and add environment overrides
- Adjust several lint rule severities and configurations
* fix: lint error
* fix(file): replace eslint-disable with oxlint-disable in sanitizeFilename. The linter was changed from ESLint to oxlint, so the directive needs to be updated accordingly.
* fix: enforce stricter linting by failing on warnings in test:lint script
* feat: add recommended ts-eslint rules into oxlint
* docs: remove outdated comment in oxlint config file
* style: disable typescript/no-require-imports rule in oxlint config
* docs(utils): fix comment typo from NODE to NOTE
* fix(MessageErrorBoundary): correct error description display condition. The error description was incorrectly showing in production and hiding in development. Fix the logic to show detailed errors only in development mode
* chore: add oxc-vscode extension to recommended list
* ci(workflows): reorder format check step in pr-ci.yml
* chore: update yarn.lock

80afb3a86e
feat: add Perplexity SDK integration (#10137)
* feat: add Perplexity SDK integration
* feat: enhance AiSdkToChunkAdapter with web search capabilities
- Added support for web search in AiSdkToChunkAdapter, allowing for dynamic link conversion based on provider type.
- Updated constructor to accept provider type and web search enablement flag.
- Improved link handling logic to buffer and process incomplete links.
- Enhanced message block handling in the store to accommodate new message structure.
- Updated middleware configuration to include web search option.
* fix
* fix
* chore: remove unuseful code
* fix: ci
* chore: log
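
The "buffer and process incomplete links" point above lends itself to a small sketch. The following is an assumed illustration of the idea (not the real AiSdkToChunkAdapter code): streamed text is held back while a markdown link is still open, so the renderer never receives a half-formed link.

```typescript
// Hedged sketch: hold back streamed text while a markdown link is incomplete.
// This illustrates the idea only; it is not the adapter's actual implementation.
let pending = ''

function emitSafe(delta: string, emit: (text: string) => void): void {
  pending += delta
  // Find the last '[' that has not yet been closed by a matching ')'.
  const openBracket = pending.lastIndexOf('[')
  const lastClose = pending.lastIndexOf(')')
  if (openBracket > lastClose) {
    // A link may still be streaming in; emit everything before it and keep the rest buffered.
    emit(pending.slice(0, openBracket))
    pending = pending.slice(openBracket)
  } else {
    emit(pending)
    pending = ''
  }
}

// Usage: chunks arrive as 'see [docs](https://exa' then 'mple.com) for details'
emitSafe('see [docs](https://exa', console.log)   // emits 'see ', buffers the open link
emitSafe('mple.com) for details', console.log)    // emits '[docs](https://example.com) for details'
```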

cf7584bb63
refactor(toast): migrate message to toast (#10023)
* feat(toast): add toast utility functions to global interface. Expose error, success, warning, and info toast functions globally for consistent notification handling
* refactor(toast): use ToastPropsColored type to enforce color consistency. Create ToastPropsColored type to explicitly omit color property from ToastProps
* refactor(toast): simplify toast functions using factory pattern. Create a factory function to generate toast functions instead of repeating similar code. This improves maintainability and reduces code duplication.
* fix: replace window.message with window.toast for copy notifications
* refactor(toast): update type definition to use Parameters utility. Use Parameters utility type to derive ToastPropsColored from addToast parameters for better type safety
* feat(types): add RequireSome utility type for making specific properties required
* feat(toast): add loading toast functionality. Add loading toast type to support promises in toast notifications. This enables showing loading states for async operations.
* chore: add claude script to package.json
* build(eslint): add packages dist folder to ignore patterns
* refactor: migrate message to toast
* refactor: update toast import path from @heroui/react to @heroui/toast
* docs(toast): add JSDoc comments for toast functions
* fix(toast): set default timeout for loading toasts. Make loading toasts disappear immediately by default when timeout is not specified
* fix(translate): replace window.message with window.toast for consistency. Use window.toast consistently across the translation page for displaying notifications to maintain a uniform user experience and simplify the codebase.
* refactor: remove deprecated message interface from window. The MessageInstance interface from antd was marked as deprecated and is no longer needed in the window object.
* refactor(toast): consolidate toast utilities into single export. Move all toast-related functions into a single utility export to reduce code duplication and improve maintainability. Update all imports to use the new utility function.
* docs(useOcr): remove redundant comment in ocr function
* docs: update comments from Chinese to English. Update error log messages in CodeToolsPage to use English instead of Chinese for better consistency and maintainability
* feat(toast): add no-drag style and adjust toast placement. Add custom CSS class to disable drag on toast elements, move toast placement to top-center and adjust timeout settings
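
The factory pattern mentioned in this commit ("generate toast functions instead of repeating similar code") can be sketched as below; the `addToast` signature, option names, and color values are assumptions for illustration rather than the exact HeroUI API.

```typescript
// Hedged sketch of a toast factory. `addToast`, its options, and the color names are
// illustrative stand-ins, not the exact HeroUI / Cherry Studio API.
type ToastColor = 'danger' | 'success' | 'warning' | 'primary'
type ToastOptions = { title: string; description?: string; timeout?: number }

function addToast(opts: ToastOptions & { color: ToastColor }): void {
  // stand-in for the real toast renderer
  console.log(`[${opts.color}] ${opts.title}`, opts.description ?? '')
}

// Factory: each color variant becomes a one-liner instead of four near-identical function bodies.
const makeToast = (color: ToastColor) => (opts: ToastOptions) => addToast({ ...opts, color })

export const toast = {
  error: makeToast('danger'),
  success: makeToast('success'),
  warning: makeToast('warning'),
  info: makeToast('primary')
}

// Usage: toast.error({ title: 'Copy failed', description: 'Clipboard unavailable' })
```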

a227f6dcb9
Feat/aisdk package (#7404)
* feat: enhance AI SDK middleware integration and support
- Added AiSdkMiddlewareBuilder for dynamic middleware construction based on various conditions.
- Updated ModernAiProvider to utilize new middleware configuration, improving flexibility in handling completions.
- Refactored ApiService to pass middleware configuration during AI completions, enabling better control over processing.
- Introduced new README documentation for the middleware builder, outlining usage and supported conditions.
* feat: enhance AI core functionality with smoothStream integration
- Added smoothStream to the middleware exports in index.ts for improved streaming capabilities.
- Updated PluginEnabledAiClient to conditionally apply middlewares, removing the default simulateStreamingMiddleware.
- Modified ModernAiProvider to utilize smoothStream in streamText, enhancing text processing with configurable chunking and delay options.
* feat: enhance AI SDK documentation and client functionality
- Added detailed usage examples for the native provider registry in the README.md, demonstrating how to create and utilize custom provider registries.
- Updated ApiClientFactory to enforce type safety for model instances.
- Refactored PluginEnabledAiClient methods to support both built-in logic and custom registry usage for text and object generation, improving flexibility and usability.
* refactor: update AiSdkToChunkAdapter and middleware for improved chunk handling
- Modified convertAndEmitChunk method to handle new chunk types and streamline processing.
- Adjusted thinkingTimeMiddleware to remove unnecessary parameters and enhance clarity.
- Updated middleware integration in AiSdkMiddlewareBuilder for better middleware management.
* feat: add Cherry Studio transformation and settings plugins
- Introduced cherryStudioTransformPlugin for converting Cherry Studio messages to AI SDK format, enhancing compatibility.
- Added cherryStudioSettingsPlugin to manage Assistant settings like temperature and TopP.
- Implemented createCherryStudioContext function for preparing context metadata for Cherry Studio calls.
* fix: refine experimental_transform handling and improve chunking logic
- Updated PluginEnabledAiClient to streamline the handling of experimental_transform parameters.
- Adjusted ModernAiProvider's smoothStream configuration for better chunking of text, enhancing processing efficiency.
- Re-enabled block updates in messageThunk for improved state management.
* refactor: update ApiClientFactory and index_new for improved type handling and provider mapping
- Changed the type of options in ClientConfig to 'any' for flexibility.
- Overloaded createImageClient method to support different provider settings.
- Added vertexai mapping to the provider type mapping in index_new.ts for enhanced compatibility.
* feat: enhance AI SDK documentation and client functionality
- Added detailed usage examples for the native provider registry in the README.md, demonstrating how to create and utilize custom provider registries.
- Updated ApiClientFactory to enforce type safety for model instances.
- Refactored PluginEnabledAiClient methods to support both built-in logic and custom registry usage for text and object generation, improving flexibility and usability.
* feat: add openai-compatible provider and enhance provider configuration
- Introduced the @ai-sdk/openai-compatible package to support compatibility with OpenAI.
- Added a new ProviderConfigFactory and ProviderConfigBuilder for streamlined provider configuration.
- Updated the provider registry to include the new Google Vertex AI import path.
- Enhanced the index.ts to export new provider configuration utilities for better type safety and usability.
- Refactored ApiService and middleware to integrate the new provider configurations effectively.
* feat: enhance Vertex AI provider integration and configuration
- Added support for Google Vertex AI credentials in the provider configuration.
- Refactored the VertexAPIClient to handle both standard and VertexProvider types.
- Implemented utility functions to check Vertex AI configuration completeness and create VertexProvider instances.
- Updated provider mapping in index_new.ts to ensure proper handling of Vertex AI settings.
* feat: enhance provider options and examples for AI SDK
- Introduced new utility functions for creating and merging provider options, improving type safety and usability.
- Added comprehensive examples for OpenAI, Anthropic, Google, and generic provider options to demonstrate usage.
- Refactored existing code to streamline provider configuration and enhance clarity in the options management.
- Updated the PluginEnabledAiClient to simplify the handling of model parameters and improve overall functionality.
* feat: add patch for Google Vertex AI and enhance private key handling
- Introduced a patch for the @ai-sdk/google-vertex package to improve URL handling based on region.
- Added a new utility function to format private keys, ensuring correct PEM structure and validation.
- Updated the ProviderConfigBuilder to utilize the new private key formatting function for Google credentials.
- Created a pnpm workspace configuration to manage patched dependencies effectively.
* feat: add OpenAI Compatible provider and enhance provider configuration
- Introduced a new OpenAI Compatible provider to the AiProviderRegistry, allowing for integration with the @ai-sdk/openai-compatible package.
- Updated provider configuration logic to support the new provider, including adjustments to API host formatting and options management.
- Refactored middleware to streamline handling of OpenAI-specific configurations.
* fix: enhance anthropic provider configuration and middleware handling
- Updated providerToAiSdkConfig to support both OpenAI and Anthropic providers, improving flexibility in API host formatting.
- Refactored thinkingTimeMiddleware to ensure all chunks are correctly enqueued, enhancing middleware functionality.
- Corrected parameter naming in getAnthropicReasoningParams for consistency and clarity in configuration.
* feat: enhance AI SDK chunk handling and tool call processing
- Introduced ToolCallChunkHandler for managing tool call events and results, improving the handling of tool interactions.
- Updated AiSdkToChunkAdapter to utilize the new handler, streamlining the processing of tool call chunks.
- Refactored transformParameters to support dynamic tool integration and improved parameter handling.
- Adjusted provider mapping in factory.ts to include new provider types, enhancing compatibility with various AI services.
- Removed obsolete cherryStudioTransformPlugin to clean up the codebase and focus on more relevant functionality.
* feat: enhance provider ID resolution in AI SDK
- Updated getAiSdkProviderId function to include mapping for provider types, improving compatibility with third-party SDKs.
- Refined return logic to ensure correct provider ID resolution, enhancing overall functionality and support for various providers.
* refactor: restructure AI Core architecture and enhance client functionality
- Updated the AI Core documentation to reflect the new architecture and design principles, emphasizing modularity and type safety.
- Refactored the client structure by removing obsolete files and consolidating client creation logic into a more streamlined format.
- Introduced a new core module for managing execution and middleware, improving the overall organization of the codebase.
- Enhanced the orchestration layer to provide a clearer API for users, integrating the creation and execution processes more effectively.
- Added comprehensive type definitions and utility functions for better type safety and usability across the SDK.
* refactor: simplify AI Core architecture and enhance runtime execution
- Restructured the AI Core documentation to reflect a simplified two-layer architecture, focusing on clear responsibilities between models and runtime layers.
- Removed the orchestration layer and consolidated its functionality into the runtime layer, streamlining the API for users.
- Introduced a new runtime executor for managing plugin-enhanced AI calls, improving the handling of execution and middleware.
- Updated the core modules to enhance type safety and usability, including comprehensive type definitions for model creation and execution configurations.
- Removed obsolete files and refactored existing code to improve organization and maintainability across the SDK.
* feat: enhance AI Core runtime with advanced model handling and middleware support
- Introduced new high-level APIs for model creation and configuration, improving usability for advanced users.
- Enhanced the RuntimeExecutor to support both direct model usage and model ID resolution, allowing for more flexible execution options.
- Updated existing methods to accept middleware configurations, streamlining the integration of custom processing logic.
- Refactored the plugin system to better accommodate middleware, enhancing the overall extensibility of the AI Core.
- Improved documentation to reflect the new capabilities and usage patterns for the runtime APIs.
* feat: enhance plugin system with new reasoning and text plugins
- Introduced `reasonPlugin` and `textPlugin` to improve chunk processing and handling of reasoning content.
- Updated `transformStream` method signatures for better type safety and usability.
- Enhanced `ThinkingTimeMiddleware` to accurately track thinking time using `performance.now()`.
- Refactored `ThinkingBlock` component to utilize block thinking time directly, improving performance and clarity.
- Added logging for middleware builder to assist in debugging and monitoring middleware configurations.
* refactor: update RuntimeExecutor and introduce MCP Prompt Plugin
- Changed `pluginClient` to `pluginEngine` in `RuntimeExecutor` for clarity and consistency.
- Updated method calls in `RuntimeExecutor` to use the new `pluginEngine`.
- Enhanced `AiSdkMiddlewareBuilder` to include `mcpTools` in the middleware configuration.
- Added `MCPPromptPlugin` to support tool calls within prompts, enabling recursive processing and improved handling of tool interactions.
- Updated `ApiService` to pass `mcpTools` during chat completion requests, enhancing integration with the new plugin system.
* feat: enhance MCP Prompt plugin and recursive call capabilities
- Updated `tsconfig.web.json` to support wildcard imports for `@cherrystudio/ai-core`.
- Enhanced `package.json` to include type definitions and imports for built-in plugins.
- Introduced recursive call functionality in `PluginManager` and `PluginEngine`, allowing for improved handling of tool interactions.
- Added `MCPPromptPlugin` to facilitate tool calls within prompts, enabling recursive processing of tool results.
- Refactored `transformStream` methods across plugins to accommodate new parameters and improve type safety.
* <type>: <subject>
<body>
<footer>
Used to briefly describe the impact of this change; an overview is sufficient.
* feat: enhance MCP Prompt plugin with recursive call support and context handling
- Updated `AiRequestContext` to enforce `recursiveCall` and added `isRecursiveCall` for better state management.
- Modified `createContext` to initialize `recursiveCall` with a placeholder function.
- Enhanced `MCPPromptPlugin` to utilize a custom `createSystemMessage` function for improved message handling during recursive calls.
- Refactored `PluginEngine` to manage recursive call states, ensuring proper execution flow and context integrity.
* feat: update package dependencies and introduce new patches for AI SDK tools
- Added patches for `@ai-sdk/google-vertex` and `@ai-sdk/openai-compatible` to enhance functionality and fix issues.
- Updated `package.json` to reflect new dependency versions and patch paths.
- Refactored `transformParameters` and `ApiService` to support new tool configurations and improve parameter handling.
- Introduced utility functions for setting up tools and managing options, enhancing the overall integration of tools within the AI SDK.
* refactor: disable console logging in MCP Prompt plugin for cleaner output
- Commented out console log statements in the `createMCPPromptPlugin` to reduce noise during execution.
- Maintained the structure and functionality of the plugin while improving readability and performance.
* feat: enhance ModernAiProvider with new reasoning plugins and dynamic middleware construction
- Introduced `reasoningTimePlugin` and `smoothReasoningPlugin` to improve reasoning content handling and processing.
- Refactored `ModernAiProvider` to dynamically build plugin arrays based on middleware configuration, enhancing flexibility.
- Removed the obsolete `ThinkingTimeMiddleware` to streamline middleware management.
- Updated `buildAiSdkMiddlewares` to reflect changes in middleware handling and improve clarity in the configuration process.
- Enhanced logging for better visibility into plugin and middleware configurations during execution.
* feat: introduce MCP Prompt Plugin and refactor built-in plugin structure
- Added `mcpPromptPlugin.ts` to encapsulate MCP Prompt functionality, providing a structured approach for tool calls within prompts.
- Updated `index.ts` to reference the new `mcpPromptPlugin`, enhancing modularity and clarity in the built-in plugins.
- Removed the outdated `example-plugins.ts` file to streamline the plugin directory and focus on essential components.
* refactor: rename and restructure message handling in Conversation and Orchestrate services
- Renamed `prepareMessagesForLlm` to `prepareMessagesForModel` in `ConversationService` for clarity.
- Updated `OrchestrationService` to use the new method name and introduced a new function `transformMessagesAndFetch` for improved message processing.
- Adjusted imports in `messageThunk` to reflect the changes in the orchestration service, enhancing code readability and maintainability.
* refactor: streamline error handling and logging in ModernAiProvider
- Commented out the try-catch block in the `ModernAiProvider` class to simplify the code structure.
- Enhanced readability by removing unnecessary error logging while maintaining the core functionality of the AI processing flow.
- Updated `messageThunk` to incorporate an abort controller for improved request management during message processing.
* fix: update reasoningTimePlugin and smoothReasoningPlugin for improved performance tracking
- Changed the invocation of `reasoningTimePlugin` to a direct reference in `ModernAiProvider`.
- Initialized `thinkingStartTime` with `performance.now()` in `reasoningTimePlugin` for accurate timing.
- Removed `thinking_millsec` from the enqueued chunks in `smoothReasoningPlugin` to streamline data handling.
- Added console logging for performance tracking in `reasoningTimePlugin` to aid in debugging.
* feat: extend buildStreamTextParams to include capabilities for enhanced AI functionality
- Updated the return type of `buildStreamTextParams` to include `capabilities` for reasoning, web search, and image generation.
- Modified `fetchChatCompletion` to utilize the new capabilities structure, improving middleware configuration based on model capabilities.
* refactor: simplify provider validation and enhance plugin configuration
- Commented out the provider support check in `RuntimeExecutor` to streamline initialization.
- Updated `providerToAiSdkConfig` to utilize `AiCore.isSupported` for improved provider validation.
- Enhanced middleware configuration in `ModernAiProvider` to ensure tools are only added when enabled and available.
- Added comments in `transformParameters` for clarity on parameter handling and plugin activation.
* refactor: remove OpenRouter provider support and streamline reasoning logic
- Commented out the OpenRouter provider in `registry.ts` and related configurations due to excessive bugs.
- Simplified reasoning logic in `transformParameters.ts` and `options.ts` by removing unnecessary checks for `enableReasoning`.
- Enhanced logging in `transformParameters.ts` to provide better insights into reasoning capabilities.
- Updated `getReasoningEffort` to handle cases where reasoning effort is not defined, improving model compatibility.
* chore: update OpenRouter provider to version 0.7.2 and add support functions
- Updated the OpenRouter provider dependency in `package.json` and `yarn.lock` to version 0.7.2.
- Added a new function `createOpenRouterOptions` in `factory.ts` for creating OpenRouter provider options.
- Updated type definitions in `types.ts` and `registry.ts` to include OpenRouter provider settings, enhancing provider management.
* refactor: streamline reasoning plugins and remove unused components
- Removed the `reasoningTimePlugin` and `mcpPromptPlugin` to simplify the plugin architecture.
- Updated the `smoothReasoningPlugin` to enhance its functionality and reduce delay in processing.
- Adjusted the `textPlugin` to align with the new delay settings for smoother output.
- Modified the `ModernAiProvider` to utilize the updated `smoothReasoningPlugin` without the removed plugins.
* refactor: update reasoning plugins and enhance performance
- Replaced `smoothReasoningPlugin` with `reasoningTimePlugin` to improve reasoning time tracking.
- Commented out the unused `textPlugin` in the plugin list for better clarity.
- Adjusted delay settings in both `smoothReasoningPlugin` and `textPlugin` for optimized processing.
- Enhanced logging in reasoning plugins for better debugging and performance insights.
* feat: implement useSmoothStream hook for dynamic text rendering
- Added a new custom hook `useSmoothStream` to manage smooth text streaming with adjustable delays.
- Integrated the `useSmoothStream` hook into the `Markdown` component to enhance content display during streaming.
- Improved state management for displayed content and stream completion status in the `Markdown` component.
* feat: enhance OpenAI provider handling and add providerParams utility module
- Updated the `createBaseModel` function to handle OpenAI provider responses in strict mode.
- Modified `providerToAiSdkConfig` to include specific options for OpenAI when in strict mode.
- Introduced a new utility module `providerParams.ts` for managing provider-specific parameters, including OpenAI, Anthropic, and Gemini configurations.
- Added functions to retrieve service tiers, specific parameters, and reasoning efforts for various providers, improving overall provider management.
* feat: enhance OpenAI model handling with utility function
- Introduced `isOpenAIChatCompletionOnlyModel` utility function to determine if a model ID corresponds to OpenAI's chat completion-only models.
- Updated `createBaseModel` function to utilize the new utility for improved handling of OpenAI provider responses in strict mode.
- Refactored reasoning parameters in `getOpenAIReasoningParams` for consistency and clarity.
* refactor: remove providerParams utility module
* feat: add web search plugin for enhanced AI provider capabilities
- Introduced a new `webSearchPlugin` to provide unified web search functionality across multiple AI providers.
- Added helper functions for adapting web search parameters for OpenAI, Gemini, and Anthropic providers.
- Updated the built-in plugin index to export the new web search plugin and its configuration type.
- Created a new `helper.ts` file to encapsulate web search adaptation logic and support checks for provider compatibility.
* chore: migrate to v5
* refactor: migrate to v5 patch-1
* fix: unexpected chunk
* refactor: streamline model configuration and factory functions
- Updated the `createModel` function to accept a simplified `ModelConfig` interface, enhancing clarity and usability.
- Refactored `createBaseModel` to destructure parameters for better readability and maintainability.
- Removed the `ModelCreator.ts` file as its functionality has been integrated into the factory functions.
- Adjusted type definitions in `types.ts` to reflect changes in model configuration structure, ensuring consistency across the codebase.
* refactor: migrate to v5 patch-2
* fix(provider): config error
* fix(provider): config error patch-1
* feat: support image
* refactor: enhance model configuration and plugin execution
- Simplified the `createModel` function to directly accept the `ModelConfig` object, improving clarity.
- Updated `createBaseModel` to include `extraModelConfig` for extended configuration options.
- Introduced `executeConfigureContext` method in `PluginManager` to handle context configuration for plugins.
- Adjusted type definitions in `types.ts` to ensure consistency with the new configuration structure.
- Refactored plugin execution methods in `PluginEngine` to utilize the resolved model directly, enhancing the flow of data through the plugin system.
* fix: openai-gemini support
* refactor: update type exports and enhance web search functionality
- Added `ReasoningPart`, `FilePart`, and `ImagePart` to type exports in `index.ts`.
- Refactored `transformParameters.ts` to include `enableWebSearch` option and integrate web search tools.
- Introduced new utility `getWebSearchTools` in `websearch.ts` to manage web search tool configurations based on model type.
- Commented out deprecated code in `smoothReasoningPlugin.ts` and `textPlugin.ts` for potential removal.
* fix: format apihost
* feat: add XAI provider options and enhance web search plugin
- Introduced `createXaiOptions` function for XAI provider configuration.
- Added `XaiProviderOptions` type and validation schema in `xai.ts`.
- Updated `ProviderOptionsMap` to include XAI options.
- Enhanced `webSearchPlugin` to support XAI-specific search parameters.
- Refactored helper functions to integrate new XAI options into provider configurations.
* feat: conditionally enable web search plugin based on configuration
- Updated the logic to add the `webSearchPlugin` only if `middlewareConfig.enableWebSearch` is true.
- Added comments to clarify the use of default search parameters and configuration options.
* feat: aihubmix support
* refactor: improve web search plugin and middleware integration
- Cleaned up the web search plugin code by commenting out unused sections for clarity.
- Enhanced middleware handling for the OpenAI provider by wrapping the logic in a block for better readability.
- Removed redundant imports from various files to streamline the codebase.
- Added `enableWebSearch` parameter to the fetchChatCompletion function for improved functionality.
* fix: azure-openai provider
* feat: enhance web search plugin configuration
- Added `sources` array to the default web search configuration, allowing for multiple source types including 'web', 'x', and 'news'.
- This update improves the flexibility and functionality of the web search plugin.
* chore: update ai package version to 5.0.0-beta.9 in package.json and yarn.lock
* feat: enhance model handling and provider integration
- Updated `createBaseModel` to differentiate between OpenAI chat and response models.
- Introduced new utility functions for model identification: `isOpenAIReasoningModel`, `isOpenAILLMModel`, and `getModelToProviderId`.
- Improved `transformParameters` to conditionally set the system prompt based on the assistant's prompt.
- Refactored `getAiSdkProviderIdForAihubmix` to simplify provider identification logic.
- Enhanced `getAiSdkProviderId` to support provider type checks.
* feat: enhance web search plugin and tool handling
- Refactored `helper.ts` to export new types `AnthropicSearchInput` and `AnthropicSearchOutput` for better integration with the web search plugin.
- Updated `index.ts` to include the new types in the exports for improved type safety.
- Modified `AiSdkToChunkAdapter.ts` to handle tool calls more flexibly by introducing a `GenericProviderTool` type, allowing for better differentiation between MCP tools and provider-executed tools.
- Adjusted `handleTooCallChunk.ts` to accommodate the new tool type structure, enhancing the handling of tool call responses.
- Updated type definitions in `index.ts` to reflect changes in tool handling logic.
* feat: enhance provider settings and model configuration
- Updated `ModelConfig` to include a `mode` property for better differentiation between 'chat' and 'responses'.
- Modified `createBaseModel` to conditionally set the provider based on the new `mode` property in `providerSettings`.
- Refactored `RuntimeExecutor` to utilize the updated `ModelConfig` for improved type safety and clarity in provider settings.
- Adjusted imports in `executor.ts` and `types.ts` to align with the new model configuration structure.
* feat: enhance AiSdkToChunkAdapter for web search results handling
- Updated `AiSdkToChunkAdapter` to include `webSearchResults` in the final output structure for improved web search integration.
- Modified `convertAndEmitChunk` method to handle `finish-step` events, differentiating between Google and other web search results.
- Adjusted the handling of `source` events to accumulate web search results for better processing.
- Enhanced citation formatting in `messageBlock.ts` to support new web search result structures.
* feat: integrate web search tool and enhance tool handling
- Added `webSearchTool` to facilitate web search functionality within the SDK.
- Updated `AiSdkToChunkAdapter` to utilize `BaseTool` for improved type handling.
- Refactored `transformParameters` to support `webSearchProviderId` for enhanced web search integration.
- Introduced new `BaseTool` type structure to unify tool definitions across the codebase.
- Adjusted imports and type definitions to align with the new tool handling logic.
* feat: enhance web search functionality and tool integration
- Introduced `extractSearchKeywords` function to facilitate keyword extraction from user messages for web searches.
- Updated `webSearchTool` to streamline the execution of web searches without requiring a request ID.
- Enhanced `WebSearchService` methods to be static for improved accessibility and clarity.
- Modified `ApiService` to pass `webSearchProviderId` for better integration with the web search functionality.
- Improved `ToolCallChunkHandler` to handle built-in tools more effectively.
* feat: add type property to server tools in MCPService
- Enhanced the server tool structure by adding a `type` property set to 'mcp' for better identification and handling of tools within the MCPService.
* chore: update dependencies and versions in package.json and yarn.lock
- Upgraded various SDK packages to their latest beta versions for improved functionality and compatibility.
- Updated `@ai-sdk/provider-utils` to version 3.0.0-beta.3.
- Adjusted dependencies in `package.json` to reflect the latest versions, including `@ai-sdk/amazon-bedrock`, `@ai-sdk/anthropic`, `@ai-sdk/azure`, and others.
- Removed outdated versions from `yarn.lock` and ensured consistency across the project.
* fix: update provider identification logic in aiCore
- Refactored the provider identification in `index_new.ts` to use `actualProvider.type` instead of `actualProvider.id` for better clarity and accuracy in determining OpenAI response modes.
- Removed redundant type checks in `factory.ts` to streamline the provider ID retrieval process.
* chore: update package dependencies and improve AI SDK chunk handling
- Bumped versions of several dependencies in package.json, including `@swc/plugin-styled-components` to 8.0.4 and `@vitejs/plugin-react-swc` to 3.10.2.
- Enhanced `AiSdkToChunkAdapter` to streamline chunk processing, including better handling of text and reasoning events.
- Added console logging for debugging in `BlockManager` and `messageThunk` to track state changes and callback executions.
- Updated integration tests to reflect changes in message structure and types.
* refactor: reorganize AiSdkToChunkAdapter and enhance tool call handling
- Moved AiSdkToChunkAdapter to a new directory structure for better organization.
- Implemented detailed handling for tool call events in ToolCallChunkHandler, including creation, updates, and completions.
- Added a new method to handle tool call creation and improved state management for active tool calls.
- Updated StreamProcessingService to support new chunk types and callbacks for block creation.
- Enhanced type definitions and added comments for clarity in the new chunk handling logic.
* refactor: enhance provider settings and update web search plugin configuration
- Updated providerSettings to allow optional 'mode' parameter for various providers, enhancing flexibility in model configuration.
- Refactored web search plugin to integrate Google search capabilities and streamline provider options handling.
- Removed deprecated code and improved type definitions for better clarity and maintainability.
- Added console logging for debugging purposes in the provider configuration process.
* chore: add repository metadata and homepage to package.json
- Included repository URL, bugs URL, and homepage in package.json for better project visibility and issue tracking.
- This update enhances the package's metadata, making it easier for users to find relevant resources and report issues.
* chore: remove deprecated patches for @ai-sdk/google-vertex and @ai-sdk/openai-compatible
- Deleted outdated patch files for @ai-sdk/google-vertex and @ai-sdk/openai-compatible from the project.
- Updated package.json to reflect the removal of these patches, streamlining dependency management.
* chore: update package.json and add tsdown configuration for build process
- Changed the main and types entries in package.json to point to the dist directory for better output management.
- Added a new tsdown.config.ts file to define the build configuration, specifying entry points, output directory, and formats for the project.
* chore(aiCore/version): update version to 1.0.0-alpha.0
* feat: enhance AI core functionality and introduce new tool components
- Updated README to reflect the addition of a powerful plugin system and built-in web search capabilities.
- Refactored tool call handling in `ToolCallChunkHandler` to improve state management and response formatting.
- Introduced new components `MessageMcpTool`, `MessageTool`, and `MessageTools` for better handling of tool responses and user interactions.
- Updated type definitions to support new tool response structures and improved overall code organization.
- Enhanced spinner component to accept React nodes for more flexible content rendering.
* feat: add React Native support to aiCore package
Add React Native compatibility configuration to package.json, including the
react-native field and updated exports mapping. Include documentation for
React Native usage with metro.config.js setup instructions.
* chore: bump @cherrystudio/ai-core version to 1.0.0-alpha.1
* chore: bump @cherrystudio/ai-core version to 1.0.0-alpha.2 and update exports
- Updated version in package.json to 1.0.0-alpha.2.
- Added new path mapping for @cherrystudio/ai-core in tsconfig.web.json.
- Refactored export paths in tsdown.config.ts and index.ts for consistency.
- Cleaned up type exports in index.ts and types.ts for better organization.
* refactor: reorganize provider and model exports for improved structure
- Updated exports in index.ts and related files to streamline provider and model management.
- Introduced a new ModelCreator module for better encapsulation of model creation logic.
- Refactored type imports to enhance clarity and maintainability across the codebase.
- Removed deprecated provider configurations and cleaned up unused code for better performance.
* chore: update @cherrystudio/ai-core version to 1.0.0-alpha.4 and clean up dependencies
- Bumped version in package.json to 1.0.0-alpha.4.
- Removed deprecated dependencies from package.json and yarn.lock for improved clarity.
- Updated README to reflect changes in supported providers and installation instructions.
- Refactored provider registration and usage examples for better clarity and usability.
* refactor: streamline AI provider registration by replacing dynamic imports with direct creator functions
- Updated the AiProviderRegistry to use direct references to creator functions for each AI provider, improving clarity and performance.
- Removed dynamic import statements for providers, simplifying the registration process and enhancing maintainability.
* chore: bump @cherrystudio/ai-core version to 1.0.0-alpha.5
- Updated version in package.json to 1.0.0-alpha.5.
- Enhanced provider configuration validation in createProvider function for improved error handling.
* docs: update AI SDK architecture and README for enhanced clarity and new features
- Revised AI SDK architecture diagram to reflect changes in component relationships, replacing PluginEngine with RuntimeExecutor.
- Updated README to highlight core features, including a refined plugin system, improved architecture design, and new built-in plugins.
- Added detailed examples for using built-in plugins and creating custom plugins, enhancing documentation for better usability.
- Included future version roadmap and related resources for user reference.
* feat: enhance web search tool functionality and type definitions
- Introduced new `WebSearchToolOutputSchema` type to standardize output from web search tools.
- Updated `webSearchTool` and `webSearchToolWithExtraction` to utilize Zod for input and output schema validation.
- Refactored tool execution logic to improve error handling and response formatting.
- Cleaned up unused type imports and comments for better code clarity.
* fix: conditionally enable reasoning middleware for OpenAI and Azure providers
- Added a check to enable the 'thinking-tag-extraction' middleware only if reasoning is enabled in the configuration for OpenAI and Azure providers.
- Commented out the provider type check in `getAiSdkProviderId` to prevent issues with retrieving provider options.
* feat: update OpenAI provider integration and enhance type definitions
- Bumped version of `@ai-sdk/openai-compatible` to 1.0.0-beta.8 in package.json.
- Introduced a new provider configuration for 'OpenAI Responses' in AiProviderRegistry, allowing for more flexible response handling.
- Updated type definitions to include 'openai-responses' in ProviderSettingsMap for improved type safety.
- Refactored getModelToProviderId function to return a more specific ProviderId type.
* feat: enhance ToolCallChunkHandler with detailed chunk handling and remove unused plugins
- Updated `handleToolCallCreated` method to support additional chunk types with optional provider metadata.
- Removed deprecated `smoothReasoningPlugin` and `textPlugin` files to clean up the codebase.
- Cleaned up unused type imports in `tool.ts` for improved clarity and maintainability.
* <type>: <subject>
<body>
<footer>
Used to briefly describe the impact of this change; an overview is sufficient.
* refactor: update message handling in searchOrchestrationPlugin for improved type safety
- Replaced `Message` type with `ModelMessage` in various functions to enhance type consistency.
- Refactored `getMessageContent` function to utilize the new `ModelMessage` type for better content extraction.
- Updated `storeConversationMemory` and `analyzeSearchIntent` functions to align with the new type definitions, ensuring clearer memory storage and intent analysis processes.
* chore: bump @cherrystudio/ai-core version to 1.0.0-alpha.6 and refactor web search tool
- Updated version in package.json to 1.0.0-alpha.6.
- Simplified response structure in ToolCallChunkHandler by removing unnecessary nesting.
- Refactored input schema for web search tool to enhance type safety and clarity.
- Cleaned up commented-out code in MessageTool for improved readability.
* refactor: consolidate queue utility imports in messageThunk.ts
- Combined separate imports of `getTopicQueue` and `waitForTopicQueue` from the queue utility into a single import statement for improved code clarity and organization.
* feat: implement knowledge search tool and enhance search orchestration logic
- Added a new `knowledgeSearchTool` to facilitate knowledge base searches based on user queries and intent analysis.
- Refactored `analyzeSearchIntent` to simplify message context construction and improve prompt formatting.
- Introduced a flag to prevent concurrent analysis processes in `searchOrchestrationPlugin`.
- Updated tool configuration logic to conditionally add the knowledge search tool based on the presence of knowledge bases and user settings.
- Cleaned up commented-out code for better readability and maintainability.
* refactor: enhance search orchestration and web search tool integration
- Updated `searchOrchestrationPlugin` to improve handling of assistant configurations and prevent concurrent analysis.
- Refactored `webSearchTool` to utilize pre-extracted keywords for more efficient web searches.
- Introduced a new `MessageKnowledgeSearch` component for displaying knowledge search results.
- Cleaned up commented-out code and improved type safety across various components.
- Enhanced the integration of web search results in the UI for better user experience.
* refactor: streamline async function syntax and enhance plugin event handling
- Simplified async function syntax in `RuntimeExecutor` and `PluginEngine` for improved readability.
- Updated `AiSdkToChunkAdapter` to refine condition checks for Google metadata.
- Enhanced `searchOrchestrationPlugin` to log conversation messages and improve memory storage logic.
- Improved memory processing by ensuring fallback for existing memories.
- Added new citation block handling in `toolCallbacks` for better integration with web search results.
* feat: integrate image generation capabilities and enhance testing framework
- Added support for image generation in the `RuntimeExecutor` with a new `generateImage` method.
- Updated `aiCore` package to include `vitest` for testing, with new test scripts added.
- Enhanced type definitions to accommodate image model handling in plugins.
- Introduced new methods for resolving and executing image generation with plugins.
- Updated package dependencies in `package.json` to include `vitest` and ensure compatibility with new features.
* refactor: enhance image generation handling and tool integration
- Updated image generation logic to support new model types and improved size handling.
- Refactored middleware configuration to better manage tool usage and reasoning capabilities.
- Introduced new utility functions for checking model compatibility with image generation.
- Enhanced the integration of plugins for improved functionality during image generation processes.
- Removed deprecated knowledge search tool to streamline the codebase.
* refactor: migrate to v5 patch-1
* fix: migrate to v5-patch2
* refactor: restructure aiCore for improved modularity and legacy support
- Introduced a new `index_new.ts` file to facilitate the modern AI provider while maintaining backward compatibility with the legacy `index.ts`.
- Created a `legacy` directory to house existing clients and middleware, ensuring a clear separation from new implementations.
- Updated import paths across various modules to reflect the new structure, enhancing code organization and maintainability.
- Added comprehensive middleware and utility functions to support the new architecture, improving overall functionality and extensibility.
- Enhanced plugin management with a dedicated `PluginBuilder` for better integration and configuration of AI plugins.
* chore(yarn.lock): remove deprecated provider entries and clean up dependencies
* chore(package.json): bump version to 1.0.0-alpha.7
* feat(tests): add unit tests for utility functions in utils.test.ts
- Implemented tests for `createErrorChunk`, `capitalize`, and `isAsyncIterable` functions.
- Ensured comprehensive coverage for various input scenarios, including error handling and edge cases.
* feat(aiCore): enhance AI SDK with tracing and telemetry support
- Integrated tracing capabilities into the ModernAiProvider, allowing for better tracking of AI completions and image generation processes.
- Added a new TelemetryPlugin to inject telemetry data into AI SDK requests, ensuring compatibility with existing tracing systems.
- Updated middleware and plugin configurations to support topic-based tracing, improving the overall observability of AI interactions.
- Introduced comprehensive logging throughout the AI SDK processes to facilitate debugging and performance monitoring.
- Added unit tests for new functionalities to ensure reliability and maintainability.
* fix: update MessageKnowledgeSearch to use knowledgeReferences
- Modified MessageKnowledgeSearch component to display additional context from toolInput.
- Updated the fetch complete message to reflect the count of knowledgeReferences instead of toolOutput.
- Adjusted the mapping of results to iterate over knowledgeReferences for rendering.
* refactor(aiCore): update ModernAiProvider constructor and clean up unused code
- Modified the constructor of ModernAiProvider to accept an optional provider parameter, enhancing flexibility in provider selection.
- Removed deprecated and unused functions related to search keyword extraction and search summary fetching, streamlining the codebase.
- Updated import statements and adjusted related logic to reflect the removal of obsolete functions, improving maintainability.
* feat(Dropdown): replace RightOutlined with ChevronRight icon and update useIcons to use ChevronDown
- Introduced a patch to replace the RightOutlined icon with ChevronRight in the Dropdown component for improved visual consistency.
- Updated the useIcons hook to utilize ChevronDown instead of DownOutlined, enhancing the icon set with Lucide icons.
- Adjusted icon properties for better customization and styling options.
* feat(aiCore): add enableUrlContext capability and new support function
- Enhanced the buildStreamTextParams function to include enableUrlContext in the capabilities object, improving the parameter set for AI interactions.
- Introduced a new isSupportedFlexServiceTier function to streamline model support checks, enhancing code clarity and maintainability.
* chore: update dependencies and remove unused patches
- Updated various package versions in yarn.lock for improved compatibility and performance.
- Removed obsolete patches for antd and openai, streamlining the dependency management.
- Adjusted icon imports in Dropdown and useIcons to utilize Lucide icons for better visual consistency.
* refactor(types): update ProviderId type definition for better compatibility
- Changed ProviderId type from a union of keyof ExtensibleProviderSettingsMap and string to an intersection, enhancing type safety.
- Renamed appendTrace method to appendMessageTrace in SpanManagerService for clarity and consistency.
- Updated references to appendTrace in useMessageOperations and ApiService to use the new method name.
- Added a new appendTrace method in SpanManagerService to bind existing traces, improving trace management.
- Adjusted topicId handling in fetchMessagesSummary to default to an empty string for better consistency.
* refactor(types): modify ProviderId type definition for improved flexibility
- Updated ProviderId type from an intersection of keyof ExtensibleProviderSettingsMap and string to a union, allowing for greater compatibility with dynamic provider settings.
* fix: file name
* fix: missing dependencies
* fix: remove default renderer from MessageTool
* feat(provider): enhance provider registration and validation system
- Introduced a new Zod-based schema for provider validation, improving type safety and consistency.
- Added support for dynamic provider IDs and enhanced the provider registration process.
- Updated the AiProviderRegistry to utilize the new validation functions, ensuring robust provider management.
- Added tests for the provider registry to validate dynamic imports and functionality.
- Updated yarn.lock to reflect the latest dependency versions.
* feat(toolUsePlugin): enhance tool parsing and extraction functionality
- Updated the `defaultParseToolUse` function to return both parsed results and remaining content, improving usability.
- Introduced a new `TagExtractor` class for flexible tag extraction, supporting various tag formats.
- Modified type definitions to reflect changes in parsing function signatures.
- Enhanced handling of tool call events in the `ToolCallChunkHandler` for better integration with the new parsing logic.
- Added `isBuiltIn` property to the `MCPTool` interface for clearer tool categorization.
* feat(aiCore): update vitest version and enhance provider validation
- Upgraded `vitest` dependency to version 3.2.4 in package.json and yarn.lock for improved testing capabilities.
- Removed console error logging in provider validation functions to streamline error handling.
- Added comprehensive tests for the AiProviderRegistry functionality, ensuring robust provider management and dynamic registration.
- Introduced new test cases for provider schemas to validate configurations and IDs.
- Deleted outdated registry test file to maintain a clean test suite.
* feat(toolUsePlugin): refactor tool execution and event management
- Extracted `StreamEventManager` and `ToolExecutor` classes from `promptToolUsePlugin.ts` to improve code organization and reduce complexity.
- Enhanced tool execution logic with better error handling and event management.
- Updated the `createPromptToolUsePlugin` function to utilize the new classes for cleaner implementation.
- Improved recursive call handling and result formatting for tool executions.
- Streamlined the overall flow of tool calls and event emissions within the plugin.
* feat(dependencies): update ai-sdk packages and improve logging
- Upgraded `@ai-sdk/gateway` to version 1.0.8 and `@ai-sdk/provider-utils` to version 3.0.4 in yarn.lock for enhanced functionality.
- Updated `ai` dependency in package.json to version ^5.0.16 for better compatibility.
- Added logging functionality in `AiSdkToChunkAdapter` to track chunk types and improve debugging.
- Refactored plugin imports to streamline code and enhance readability.
- Removed unnecessary console logs in `searchOrchestrationPlugin` to clean up the codebase.
* feat(dependencies): update ai-sdk packages and improve type safety
- Upgraded multiple `@ai-sdk` packages in `yarn.lock` and `package.json` to their latest versions for enhanced functionality and compatibility.
- Improved type safety in `searchOrchestrationPlugin` by adding optional chaining to handle potential undefined values in knowledge bases.
- Cleaned up dependency declarations to use caret (^) for versioning, ensuring compatibility with future updates.
* feat(aiCore): enhance tool response handling and type definitions
- Updated the ToolCallChunkHandler to support both MCPTool and NormalToolResponse types, improving flexibility in tool response management.
- Refactored type definitions for MCPToolResponse and introduced NormalToolResponse to better differentiate between tool response types.
- Enhanced logging in MCP utility functions for improved error tracking and debugging.
- Cleaned up type imports and ensured consistent handling of tool responses across various chunks.
* feat(tools): refactor MemorySearchTool and WebSearchTool for improved response handling
- Updated MemorySearchTool to utilize aiSdk for better integration and removed unused imports.
- Refactored WebSearchTool to streamline search results handling, changing from an array to a structured object for clarity.
- Adjusted MessageTool and MessageWebSearchTool components to reflect changes in tool response structure.
- Enhanced error handling and logging in tool callbacks for improved debugging and user feedback.
* feat(aiCore): update ai-sdk-provider and enhance message conversion logic
- Upgraded `@openrouter/ai-sdk-provider` to version ^1.1.2 in package.json and yarn.lock for improved functionality.
- Enhanced `convertMessageToSdkParam` and related functions to support additional model parameters, improving message conversion for various AI models.
- Integrated logging for error handling in file processing functions to aid in debugging and user feedback.
- Added support for native PDF input handling based on model capabilities, enhancing file processing features.
* feat(dependencies): update @ai-sdk/openai and @ai-sdk/provider-utils versions
- Upgraded `@ai-sdk/openai` to version 2.0.19 in `yarn.lock` and `package.json` for improved functionality and compatibility.
- Updated `@ai-sdk/provider-utils` to version 3.0.5, enhancing dependency management.
- Added `TypedToolError` type export in `index.ts` for better error handling.
- Removed unnecessary console logs in `webSearchPlugin` for cleaner code.
- Refactored type handling in `createProvider` to ensure proper type assertions.
- Enforced `topicId` as a required field in the `ModernAiProvider` configuration for stricter validation.
* [WIP]refactor(aiCore): restructure models and introduce ModelResolver
- Removed outdated ConfigManager and factory files to streamline model management.
- Added ModelResolver for improved model ID resolution, supporting both traditional and namespaced formats.
- Introduced DynamicProviderRegistry for dynamic provider management, enhancing flexibility in model handling.
- Updated index exports to reflect the new structure and maintain compatibility with existing functionality.
* feat(aiCore): introduce Hub Provider and enhance provider management
- Added a new example file demonstrating the usage of the Hub Provider for routing to multiple underlying providers.
- Implemented the Hub Provider to support model ID parsing and routing based on a specified format.
- Refactored provider management by introducing a Registry Management class for better organization and retrieval of provider instances.
- Updated the Provider Initializer to streamline the initialization and registration of providers, enhancing overall flexibility and usability.
- Removed outdated files related to provider creation and dynamic registration to simplify the codebase.
* feat(aiCore): enhance dynamic provider registration and refactor HubProvider
- Introduced dynamic provider registration functionality, allowing for flexible management of providers through a new registry system.
- Refactored HubProvider to streamline model resolution and improve error handling for unsupported models.
- Added utility functions for managing dynamic providers, including registration, cleanup, and alias resolution.
- Updated index exports to include new dynamic provider APIs, enhancing overall usability and integration.
- Removed outdated provider files and simplified the provider management structure for better maintainability.
* chore(aiCore): bump version to 1.0.0-alpha.9 in package.json
* feat(aiCore): add MemorySearchTool and WebSearchTool components
- Introduced MessageMemorySearch and MessageWebSearch components for handling memory and web search tool responses.
- Updated MemorySearchTool and WebSearchTool to improve response handling and integrate with the new components.
- Removed unused console logs and streamlined code for better readability and maintainability.
- Added new dependencies in package.json for enhanced functionality.
* refactor(aiCore): improve type handling and response structures
- Updated AiSdkToChunkAdapter to refine web search result handling.
- Modified McpToolChunkMiddleware to ensure consistent type usage for tool responses.
- Enhanced type definitions in chunk.ts and index.ts for better clarity and type safety.
- Adjusted MessageWebSearch styles for improved UI consistency.
- Refactored parseToolUse function to align with updated MCPTool response structures.
* feat(aiCore): enhance provider management and registration system
- Added support for a new provider configuration structure in package.json, enabling better integration of provider types.
- Updated tsdown.config.ts to include new entry points for provider modules, improving build organization.
- Refactored index.ts to streamline exports and enhance type handling for provider-related functionalities.
- Simplified provider initialization and registration processes, allowing for more flexible provider management.
- Improved type definitions and removed deprecated methods to enhance code clarity and maintainability.
* chore(aiCore): bump version to 1.0.0-alpha.10 in package.json
* fix(aiCore): update tool call status and enhance execution flow
- Changed tool call status from 'invoking' to 'pending' for better clarity in execution state.
- Updated the tool execution logic to include user confirmation for non-auto-approved tools, improving user interaction.
- Refactored the handling of experimental context in the tool execution parameters to support chunk streaming.
- Commented out unused tool input event cases in AiSdkToChunkAdapter for cleaner code.
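Purely as an illustration of the confirmation flow described in this entry (the tool shape and the confirm callback are assumptions, not the project's actual API), a minimal sketch:

```ts
// Minimal sketch of confirmation-gated tool execution; names and shapes are illustrative.
type ToolRunResult<T> = { status: 'done'; result: T } | { status: 'cancelled' }

async function executeWithConfirmation<T>(
  tool: { id: string; isAutoApproved?: boolean },
  run: () => Promise<T>,
  confirm: (toolId: string) => Promise<boolean> // UI-provided confirmation dialog
): Promise<ToolRunResult<T>> {
  // The call starts out 'pending'; tools that are not auto-approved wait for the user.
  if (!tool.isAutoApproved && !(await confirm(tool.id))) {
    return { status: 'cancelled' }
  }
  return { status: 'done', result: await run() }
}
```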
* feat(types): add VertexProvider type for Google Cloud integration
Introduced a new VertexProvider type that includes properties for Google credentials and project details, enhancing type safety and support for Google Cloud functionalities.
* refactor(logging): replace console with logger to unify log management
* fix(i18n): update the translation format of websearch.fetch_complete in the locale files
Changed the wording from "Completed X searches" to "X search results" across languages and added a translation for the avatar.builtin field
* refactor(aiCore): streamline type exports and enhance provider registration
Removed unused type exports from the aiCore module and consolidated type definitions for better clarity. Updated provider registration tests to reflect new configurations and improved error handling for non-existent providers. Enhanced the overall structure of the provider management system, ensuring better type safety and consistency across the codebase.
* chore(dependencies): update ai and related packages to version 5.0.26 and 1.0.15
Updated the 'ai' package to version 5.0.26 and '@ai-sdk/gateway' to version 1.0.15. Also, updated '@ai-sdk/provider-utils' to version 3.0.7 and 'eventsource-parser' to version 3.0.5. Adjusted type definitions in aiCore for better type safety in plugin parameters and results.
* chore(config): add new aliases for ai-core in Vite and TypeScript configuration
Updated the Vite and TypeScript configuration files to include new path aliases for the ai-core package, enhancing module resolution for core providers and built-in plugins. This change improves the organization and accessibility of the ai-core components within the project.
* feat(release): update version to 1.6.0-beta.1 and enhance release notes with new features, improvements, and bug fixes
- Integrated a new AI SDK architecture for better performance
- Added OCR functionality for image text recognition
- Introduced a code tools page with environment variable settings
- Enhanced the MCP server list with search capabilities
- Improved SVG preview and HTML content rendering
- Fixed multiple issues including document preprocessing failures and path handling on Windows
- Optimized performance and memory usage across various components
* refactor(modelResolver): replace ':' with '|' as the default separator for model IDs
Updated the ModelResolver and related components to use '|' as the default separator instead of ':'. This change improves compatibility and resolves potential conflicts with model ID suffixes. Adjusted model resolution logic accordingly to ensure consistent behavior across the application.
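As an illustration of the separator change (the function and type names here are hypothetical, not the actual ModelResolver API), a namespaced ID could be split on the first '|' so that the model ID itself may still contain ':' or '/':

```ts
// Hypothetical sketch: split "provider|modelId" on the first '|' only, so the model ID
// may keep suffixes like ':beta' (e.g. "openrouter|anthropic/claude-3.5-sonnet:beta").
type ResolvedModel = { providerId?: string; modelId: string }

function resolveModelId(raw: string, separator = '|'): ResolvedModel {
  const index = raw.indexOf(separator)
  if (index === -1) {
    return { modelId: raw } // traditional, non-namespaced format
  }
  return {
    providerId: raw.slice(0, index),
    modelId: raw.slice(index + separator.length)
  }
}
```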
* feat(aihubmix): add 'type' property to provider configuration for Gemini integration
* refactor(errorHandling): improve error serialization and update error handling in callbacks
- Updated the error handling in the `onError` callback to use `AISDKError` type for better type safety.
- Introduced a new `serializeError` function to standardize error serialization.
- Modified the `ErrorBlock` component to directly access the error message.
- Removed deprecated error formatting functions to streamline the error utility module.
* feat(i18n): add error detail translations and enhance error handling UI
- Added new translation keys for error details in multiple languages, including 'detail', 'details', 'message', 'requestBody', 'requestUrl', 'stack', and 'status'.
- Updated the ErrorBlock component to display a modal with detailed error information, allowing users to view and copy error details easily.
- Improved the user experience by providing a clear and accessible way to understand error messages and their context.
* feat(inputbar): enhance MCP tools visibility with prompt tool support
- Updated the Inputbar component to include the `isPromptToolUse` function, allowing for better visibility of MCP tools based on the assistant's capabilities.
- This change improves user experience by expanding the conditions under which MCP tools are displayed.
* refactor(aiCore): enhance completions methods with developer mode support
- Introduced a check for developer mode in the completions methods to enable tracing capabilities when a topic ID is provided.
- Updated the method signatures and internal calls to streamline the handling of completions with and without tracing.
- Improved code organization by making the completionsForTrace method private and renaming it for clarity.
* feat(ErrorBlock): add centered modal display for error details
- Updated the ErrorBlock component to include a centered modal for displaying error details, enhancing the user interface and accessibility of error information.
* chore(aiCore): update version to 1.0.0-alpha.11 and refactor model resolution logic
- Bumped the version of the ai-core package to 1.0.0-alpha.11.
- Removed the `isOpenAIChatCompletionOnlyModel` utility function to simplify model resolution.
- Adjusted the `providerToAiSdkConfig` function to accept a model parameter for improved configuration handling.
- Updated the `ModernAiProvider` class to utilize the new model parameter in its configuration.
- Cleaned up deprecated code related to search keyword extraction and reasoning parameters.
* fix(inputbar): conditionally display knowledge icon based on MCP tools visibility
- Updated the Inputbar component to conditionally show the knowledge icon only when both `showKnowledgeIcon` and `showMcpTools` are true. This change enhances the visibility logic for the knowledge icon, improving the user interface based on the current context.
* feat(aiCore): introduce provider configuration enhancements and initialization
- Added a new provider configuration module to handle special provider logic and formatting.
- Implemented asynchronous preparation of special provider configurations in the ModernAiProvider class.
- Refactored provider initialization logic to support dynamic registration of new AI providers.
- Updated utility functions to streamline provider option building and improve compatibility with new provider configurations.
* refactor(aiCore): streamline provider options and enhance OpenAI handling
- Simplified the OpenAI mode handling in the provider configuration.
- Added service tier settings to provider-specific options for better configuration management.
- Refactored the `buildOpenAIProviderOptions` function to remove redundant parameters and improve clarity.
* refactor(aiCore): enhance temperature and TopP parameter handling
- Updated `getTemperature` and `getTopP` functions to incorporate reasoning effort checks for Claude models.
- Refactored logic to ensure temperature and TopP settings are only returned when applicable.
- Improved clarity and maintainability of parameter retrieval functions.
* feat(releaseNotes): update release notes with new features and improvements
- Added a modal for detailed error information with multi-language support.
- Enhanced AI Core with version upgrade and improved parameter handling.
- Refactored error handling for better type safety and performance.
- Removed deprecated code and improved provider initialization logic.
* refactor(aiCore): clean up KnowledgeSearchTool and searchOrchestrationPlugin
- Commented out unused parameters and code in the KnowledgeSearchTool to improve clarity and maintainability.
- Removed the tool choice assignment in searchOrchestrationPlugin to streamline the logic.
- Updated instructions handling in KnowledgeSearchTool to eliminate unnecessary fields.
* chore(package): bump version to 1.6.0-beta.2
* refactor(transformParameters): remove DEFAULT_MAX_TOKENS and replace console.log with logger.debug
Removed the unused DEFAULT_MAX_TOKENS constant and used the provided maxTokens parameter directly
Switched debug output from console.log to the more idiomatic logger.debug
* docs(OrchestrateService): add comments explaining the purpose of currently unused classes and functions
* feat(OrchestrateService): add prompt variable replacement
Variables in assistant.prompt are replaced before the API is called, enabling dynamic prompt content (a minimal sketch follows below)
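For illustration only, a minimal sketch of this kind of substitution; the variable syntax ({{name}}) and the helper name are assumptions, not the project's actual promptVariableReplacer API:

```ts
// Hypothetical sketch of prompt variable replacement; the real logic lives in
// promptVariableReplacer and may use a different syntax and variable set.
function replacePromptVariables(prompt: string, vars: Record<string, string>): string {
  return prompt.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in vars ? vars[key] : match // unknown variables are left untouched as a fallback
  )
}

// Example:
// replacePromptVariables('Today is {{date}}, model: {{model_name}}',
//   { date: new Date().toLocaleDateString(), model_name: 'gpt-4o' })
```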
* refactor(providers): restructure the base provider definitions and type exports
Moved the base provider IDs and schema definitions to the top of the file
Removed the logic that generated IDs dynamically from baseProviders
Constrained the baseProviders type with the satisfies operator
* refactor(aiCore): rework provider option building to support more provider types
Refactored the buildProviderOptions function to validate providerId against the schema and to handle base providers and custom providers separately
Added support for providers such as deepseek and openai-compatible
* feat(reasoning): strengthen model reasoning control to support more providers and model types
Added support for models such as Hunyuan and DeepSeek hybrid reasoning
Improved the reasoning control logic for providers such as OpenRouter and Qwen
Unified how thinking control is handled across providers
Added logging and error handling
* refactor(aiCore): improve provider registration and model resolution logic
- Enhanced the model resolution logic to support dynamic provider IDs for chat modes.
- Updated provider registration to handle both OpenAI and Azure with a unified approach for chat variants.
- Refactored the `buildOpenAIProviderOptions` function to streamline parameter handling and removed unnecessary parameters.
- Added configuration options for Azure to manage API versions and deployment URLs effectively.
* feat(aiCore): add support for Zhipu AI models
Extended the getReasoningEffort function to detect Zhipu AI model types and set the corresponding reasoning parameters
* feat(translate): add special handling of translation options for Qwen MT models
When the assistant type is translation, translation options are set for Qwen MT models; an error is thrown if the target language is not supported
* refactor(aiCore): replace console.log with logger.debug to improve logging
* refactor(aiCore): extract the ModernAiProviderConfig type for reuse
Consolidated several duplicated configuration type definitions into a single ModernAiProviderConfig type to improve maintainability
* refactor(services): extract the FetchChatCompletionOptions type to improve maintainability
* refactor(ApiService): rework the fetchChatCompletion parameter type definitions
Extracted the parameters into a standalone type that accepts either messages or a prompt
* fix(ApiService): make the FetchChatCompletionOptions parameter optional
The options parameter is now optional to keep the interface flexible
* fix(ApiService): fix the prompt parameter type and add message conversion logic
Following vercel/ai#8363, the prompt parameter type was changed from StreamTextParams['prompt'] to string
When a prompt is passed in, it is automatically converted into the messages format (see the sketch below)
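A minimal sketch of that conversion, assuming a simplified message shape (the real code builds AI SDK messages):

```ts
// Sketch only: a plain string prompt becomes a single user message when no messages are given.
type SimpleMessage = { role: 'system' | 'user' | 'assistant'; content: string }

function normalizeInput(options: { prompt?: string; messages?: SimpleMessage[] }): SimpleMessage[] {
  if (options.messages?.length) return options.messages
  return [{ role: 'user', content: options.prompt ?? '' }]
}
```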
* refactor(translate): rework language detection to use the generic chat interface instead of a dedicated one
- Removed the dedicated language detection interface fetchLanguageDetection
- Implemented language detection on top of the existing fetchChatCompletion interface
- Handled the special-case logic for QwenMT models
- Improved the prompt and parameter handling for language detection
* refactor(TranslateService): rework the translation service to use the new API interface
Removed the old translation logic and reimplemented translation on top of the new fetchChatCompletion interface
Simplified the code structure and removed dependencies that are no longer needed
* feat(abortController): add a readyToAbort helper to create and register an AbortController
Provides a convenience method that creates an AbortController and registers it in the global map automatically, simplifying cancellation (a minimal sketch follows below)
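A minimal sketch of what such a helper could look like, assuming a module-level map keyed by some id (message or topic id); the actual readyToAbort implementation in abortController.ts may differ:

```ts
// Illustrative sketch; the real helper lives in abortController.ts.
const abortControllers = new Map<string, AbortController>()

function readyToAbort(id: string): AbortController {
  const controller = new AbortController()
  abortControllers.set(id, controller)
  // Drop the entry once the request is aborted so the map does not grow unbounded.
  controller.signal.addEventListener('abort', () => abortControllers.delete(id))
  return controller
}

function abortById(id: string): void {
  abortControllers.get(id)?.abort()
}
```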
* feat(aiCore): make text delta accumulation configurable
Whether text content is accumulated now depends on whether the model supports text deltas, controlled by a new accumulate parameter (sketched below)
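A sketch of the accumulate switch, assuming some models emit incremental deltas while others re-send the full text each time; names are illustrative:

```ts
// When accumulate is true, deltas are appended; otherwise the latest chunk replaces the text.
function createTextCollector(accumulate: boolean) {
  let text = ''
  return (chunk: string): string => {
    text = accumulate ? text + chunk : chunk
    return text
  }
}

// e.g. const collect = createTextCollector(true) for models that stream deltas
```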
* fix(translate): fix abort-controller error handling during translation
Corrected how the abort controller's callback is invoked in abortController.ts
Unified the formatting of translation error messages
Improved the abort logic and error propagation in the translation service
* docs(i18n): add internationalized translations for the request path
* fix(api): fix error handling and cancellation logic during API checks
Added handling for errors raised during API checks and supported cancelling the request via abortController
Avoided errors caused by an empty system prompt and made the check flow more robust
* feat(mini window): wrap the content in an ErrorBoundary component
Added an error boundary around the mini window content so uncaught errors no longer crash the whole application
* feat(release): bump version to 1.6.0-beta.3 with new features and optimizations
- Added several features, including the ErrorBoundary component, Zhipu AI model support, system OCR, and batch deletion on the files page
- Reworked the translation service and language detection for better extensibility and type safety
- Fixed multiple issues and improved error handling during API checks and translation
- Optimized the build configuration and performance, improving initialization efficiency
* test: update the mock path for ApiClientFactory in the ApiService tests
* refactor(vertexai): move the VertexAI configuration check from ApiClientFactory to VertexAPIClient
Moved the configuration check logic from the factory class into the concrete client implementation
* test(aiCore): update mock data for the client compatibility tests
Added mock functions such as isOpenAIModel for testing
Fixed the mock path for AnthropicVertexClient
Added a comment noting that the service layer should not call React Hooks
* fix: add a comment noting that Redux settings state should not be used by the service layer
* test(ActionUtils): add mock-based tests for the ConversationService and models modules
* test(Spinner): update snapshots
* test(ThinkingBlock): remove the real-time timing test case that is no longer needed
* refactor(ThinkingBlock): simplify the conditional rendering logic for readability
* test(streamCallback): add a mock implementation of serializeError
* fix(registry): enhance provider config validation and update error handling in tests
- Added a check for null or undefined config in registerProviderConfig function.
- Updated tests to ensure proper error messages are thrown when no providers are registered.
- Adjusted mock implementations in generateImage tests to reflect changes in provider model identifiers.
* refactor(aiCore): consolidate StreamTextParams type imports and enhance type definitions
- Moved StreamTextParams type definition to a new file for better organization.
- Updated imports across multiple files to reference the new location.
- Adjusted related type definitions in ApiService and ConversationService for consistency.
* feat(providerConfig): add AWS Bedrock support in provider configuration
- Implemented AWS Bedrock integration by adding access key, region, and secret access key retrieval.
- Enhanced providerToAiSdkConfig function to handle Bedrock-specific options.
* fix(providerConfig): update Google Vertex AI import and creator function name
- Changed the import path for Google Vertex AI to '@ai-sdk/google-vertex/edge'.
- Updated the creator function name from 'createGoogleVertex' to 'createVertex' for consistency.
* feat(providerConfig): implement API key rotation logic for providers
- Added a new function to handle rotating API keys for providers, reusing legacy multi-key logic.
- Updated the providerToAiSdkConfig function to utilize the new key rotation mechanism.
- Enhanced TranslateService imports for better organization.
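A minimal sketch of round-robin key rotation, assuming keys are stored as a comma-separated string as in the legacy multi-key logic referenced above; function and variable names are illustrative:

```ts
// Keep the last used index per provider and hand out the next key on each request.
const lastKeyIndex = new Map<string, number>()

function getRotatedApiKey(providerId: string, rawKeys: string): string {
  const keys = rawKeys.split(',').map(k => k.trim()).filter(Boolean)
  if (keys.length <= 1) return keys[0] ?? ''
  const next = ((lastKeyIndex.get(providerId) ?? -1) + 1) % keys.length
  lastKeyIndex.set(providerId, next)
  return keys[next]
}
```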
* fix(aiCore): update file data and mediaType extraction in convertFileBlockToFilePart function
- Modified the conversion logic to correctly extract base64 data and MIME type from the API response.
- Ensured that the filename remains unchanged during the conversion process.
* feat(aiCore): enhance file processing logic in convertFileBlockToFilePart function
- Added support for handling image files and document types beyond PDF.
- Implemented file size limit checks and fallback mechanisms for text extraction.
- Improved logging for file processing outcomes, including success and failure cases.
- Introduced functions to check model support for image input and large file uploads.
* chore(release): update version to 1.6.0-beta.4 and enhance release notes
- Updated version in package.json to 1.6.0-beta.4.
- Revised release notes in electron-builder.yml to reflect new features, improvements, and bug fixes, including API key rotation, AWS Bedrock support, and enhanced file processing capabilities.
* refactor(aiCore): restructure parameter handling and file processing modules
- Removed the transformParameters module and replaced it with a more organized structure under prepareParams.
- Introduced new modules for file processing, model capabilities, and parameter handling to improve code maintainability and clarity.
- Updated relevant imports across services to reflect the new module structure.
- Enhanced logging and error handling in file processing functions.
* refactor(providerConfig): improve provider handling and configuration logic
- Introduced formatPrivateKey utility for better private key management.
- Updated handleSpecialProviders function to streamline provider type checks and error handling.
- Enhanced providerToAiSdkConfig function to include Google Vertex AI configuration with private key formatting.
- Removed commented-out code for clarity and maintainability.
* chore(dependencies): update ai package version and enhance aiCore functionality
- Updated the 'ai' package version from 5.0.26 to 5.0.29 in package.json and yarn.lock.
- Refactored aiCore's provider schemas to introduce new provider types and improve type safety.
- Enhanced the RuntimeExecutor class to streamline model handling and plugin execution for various AI tasks.
- Updated tests to reflect changes in parameter handling and ensure compatibility with new provider configurations.
* refactor(AiSdkToChunkAdapter): streamline web search result handling
- Replaced the switch statement with a source mapping object for improved readability and maintainability.
- Enhanced handling of various web search providers by mapping them to their respective sources.
- Simplified the logic for processing web search results in the onChunk method.
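A sketch of the lookup-table approach; the provider keys and source values shown here are assumptions, not the actual enum used by AiSdkToChunkAdapter:

```ts
// Mapping providers to their web search source instead of branching in a switch statement.
const WEB_SEARCH_SOURCE_MAP: Record<string, string> = {
  openai: 'OPENAI',
  gemini: 'GEMINI',
  anthropic: 'ANTHROPIC',
  perplexity: 'PERPLEXITY',
  openrouter: 'OPENROUTER'
}

function mapWebSearchSource(providerId: string): string {
  // Fall back to a generic source when the provider is not in the map (illustrative default).
  return WEB_SEARCH_SOURCE_MAP[providerId] ?? 'WEBSEARCH'
}
```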
* chore: update release notes and version to 1.6.0-beta.5
- Enhanced web search functionality and optimized knowledge base settings for better performance.
- Added automatic image generation for Gemini 2.5 Flash Image model and improved OCR service compatibility on Linux.
- Refactored AI core architecture and improved message retry mechanisms.
- Fixed various UI component issues and improved overall stability.
* feat: enhance provider configuration and stream text parameters
- Added maxRetries option to buildStreamTextParams for improved error handling.
- Implemented custom fetch logic for 'cherryin' provider to include signature generation in requests.
* chore: update release notes and version to 1.6.0-beta.6
- Refined performance optimizations for AI services and improved compatibility with various providers.
- Enhanced error handling and stability in the ModernAiProvider class.
- Updated version in package.json to 1.6.0-beta.6.
* refactor(ErrorBlock): simplify CodeViewer component usage
- Removed unnecessary props from CodeViewer in ErrorBlock for cleaner code and improved readability.
* refactor(CodeViewer): change children prop type to React.ReactNode
- Updated the children prop type in CodeViewer from string to React.ReactNode for improved flexibility in rendering various content types.
* chore: update migration version and add migration logic for version 147
- Incremented migration version from 146 to 147.
- Implemented migration logic to trim trailing slashes from the apiHost of the anthropic provider.
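A sketch of what that migration step might look like, assuming providers live in an array on the persisted state (the slice name and shape are assumptions); the actual migration code may differ:

```ts
// Migration 147 (sketch): strip trailing slashes from the anthropic provider's apiHost.
interface ProviderState { id: string; apiHost?: string }

function migrateTo147(state: { llm: { providers: ProviderState[] } }) {
  for (const provider of state.llm.providers) {
    if (provider.id === 'anthropic' && provider.apiHost) {
      provider.apiHost = provider.apiHost.replace(/\/+$/, '')
    }
  }
  return state
}
```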
* feat(error handling): strengthen AI SDK error handling and add i18n support
Added the AiSdkErrorUnion type for uniform handling of AI SDK errors
Implemented a safeToString function that safely converts arbitrary values to strings
Added formatAiSdkError and formatError functions for formatting error information
Reworked how error details are rendered in ErrorBlock to support multiple error types
Added i18n fields for displaying error information
* feat(i18n): add error-related translation keys for multiple languages
* feat(error handling): handle errors uniformly with the SerializedError type and improve type safety
Reworked the error handling logic to use the SerializedError type from @reduxjs/toolkit throughout
Added an error type definition file and extended the SerializedError interface
Updated the related components and utility functions to work with the new error types
* refactor(CodeViewer): make children prop optional for improved flexibility
* feat(ErrorBlock): add success message on clipboard copy action
* refactor(types): rework the error type definitions and add serialization support
- Moved SerializedError from @reduxjs/toolkit into a local definition
- Added a Serializable type and serialization utility functions
- Updated error type references in the affected files
- Implemented safe serialization for error handling
* fix(error handling): fix how error information is displayed in the error block
Corrected the display of error information in the error block component:
1. Fixed the BuiltinError component showing the error name as the message
2. Fixed the AiSdkError component showing the cause as the message
3. Fixed the AiApiCallError component showing various fields as the message
4. Used the CodeViewer component to render JSON data correctly
5. Removed the unnecessary children prop from the CodeViewer usage
6. Made pretty default to true in serialize
* feat(error handling): add a serialized-error type guard and update error formatting
Added an isSerializedError type guard for detecting serialized errors (a minimal sketch follows below)
Updated the formatError and formatAiSdkError functions to handle the serialized error type
Changed the ErrorBlock component to use the new type guard instead of instanceof checks
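An illustrative version of such a type guard; the actual SerializedError shape is defined in the project's error types module and may include more fields:

```ts
interface SerializedError {
  name?: string
  message?: string
  stack?: string
  code?: string
}

// A plain object (not an Error instance) with at least one known string field
// is treated as a serialized error in this sketch.
function isSerializedError(value: unknown): value is SerializedError {
  if (typeof value !== 'object' || value === null || value instanceof Error) return false
  const candidate = value as Record<string, unknown>
  return ['name', 'message', 'stack', 'code'].some(key => typeof candidate[key] === 'string')
}
```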
* fix: replace the Record type of errorData with SerializedError
* fix: add name and stack fields to error objects
Error objects produced by tool execution failures and database upgrades now carry name and stack fields for more complete error information
* fix(error handling): format response headers with JSON.stringify for better readability
* fix(ErrorBlock): fix the status code check to also support the statusCode field
* fix(ErrorBlock): fix the status code check to also support the statusCode field
* feat(AI Provider): Update image generation handling to use legacy implementation for enhanced features
- Refactor image generation logic to utilize legacy completions for better support of image editing.
- Introduce uiMessages as a required parameter for image generation endpoints.
- Update related types and middleware configurations to accommodate new message structures.
- Adjust ConversationService and OrchestrateService to handle model and UI messages separately.
* docs(types): add a comment describing the prompt type
* feat: Update message handling to include optional uiMessages parameter
- Modify BaseParams type to make uiMessages optional.
- Refactor message preparation in HomeWindow and ActionUtils to handle modelMessages and uiMessages separately.
- Ensure compatibility with updated message structures in fetchChatCompletion calls.
* refactor(types): Enhance Serializable type to include SerializableValue for improved serialization handling
- Updated Serializable type to use SerializableValue for better clarity and structure.
- Ensured compatibility with nested objects and arrays in serialization logic.
* refactor(serialize): rename SerializableValue to SerializablePrimitive for readability
Renamed the SerializableValue type to SerializablePrimitive to better reflect that it covers only primitive types
* Revert "refactor(serialize): rename SerializableValue to SerializablePrimitive for readability"
This reverts commit
|
||
|
|
2480822690
|
Remove loading state blocks input, Add "Retry failed messages" button (#9513)
* feat(chat/input): allow sending during streaming; remove loading guard - Inputbar: drop loading check; keep Send clickable; right-click to pause - Message/MessageMenubar: render menubar even while streaming - Minor cleanup of unused imports and lints * feat(chat/menubar): one-click regenerate (remove confirm); add empty-context fallback - MessageMenubar: remove Popconfirm; click to regenerate - ApiService: ensure last user message is included if filters empty (avoid messages=[]) - Minor cleanup * feat(chat/ui): decouple generation from sending; enable actions during streaming - MessageGroupMenuBar: add “Retry all” (ReloadOutlined) to retry errored/empty/no-block replies; tooltip via i18n - Context fallback: always include last user message in both messageThunk and ApiService - i18n: add message.group.retry_failed (zh-CN, en-US, ja-JP, ru-RU, zh-TW) - Cleanup: remove unused imports; fix lints * feat(chat/settings): Add confirmation settings for message actions - Add "Confirm before deleting message" and "Confirm before regenerating message" options to the settings page, allowing users to customize action confirmations. - Update internationalization files to support multi-language prompt messages. - Modify the message menu bar to integrate the confirmation logic, enhancing the user experience. * fix(chat/ui): Apply regeneration confirmation to user messages Previously, the "Regenerate" action on user messages would trigger immediately, bypassing the `confirmRegenerateMessage` setting. This behavior was inconsistent with the regeneration logic for assistant messages, which correctly showed a confirmation dialog. This commit wraps the user message's regenerate button in a `Popconfirm` component, conditioned on the `confirmRegenerateMessage` setting. This aligns its behavior with the existing logic for assistant messages. Now, all regeneration actions are uniformly governed by the user's confirmation preference, creating a more consistent and predictable user experience. * fix(ui): Only show 'Retry failed' button when errors exist fix(ui): Conditionally render the 'Retry failed' button The 'Retry failed messages' button in the message group menu bar was previously always visible, even when no messages had failed. - The 'Retry failed' button is now conditionally rendered and will only appear if one or more messages in the group meet the failure criteria. * feat(chat/ui): Add dedicated button to pause message generation Replaced the undiscoverable right-click-to-pause functionality on the send button with a dedicated, visible "Pause" button. This new button only appears during message generation, making the action intuitive and accessible. - Removed `onPause` and context menu logic from `SendMessageButton`. - Added a conditional `CirclePause` button to the `Inputbar` when loading. * feat(settings/migrate): initialize confirm message flags for legacy users - Add migration 138 to default confirmDeleteMessage/confirmRegenerateMessage - No behavior change for fresh installs (uses initialState) fix(format): correct indentation in MessageMenubar fix(settings/migrate): fix persistedReducer verison * fix(ui): resolve React Hook dependency warnings - Remove unnecessary `topic.prompt` dependencies from Message components - Remove `loading` dependency from Inputbar useCallback Resolves ESLint exhaustive-deps warnings and conflicts --------- Co-authored-by: n2yt584v2t4nh7y <117180266+n2yt584v2t4nh7y@users.noreply.github.com> |
||
|
|
267b41242d
|
fix: move topic prompt handling to message thunk and fix prompt logic (#9569) | ||
|
|
ee5e420419 |
feat(ApiService): enhance chat completion handling with chunk reception
- Added onChunkReceived call to fetchChatCompletion to improve response handling. - Removed redundant onChunkReceived call from the completion parameters section in fetchChatCompletion. |
||
|
|
10b7c70a59
|
fix(prompt): resolve variable replacement in function mode and add UI features (#6581)
* fix(prompt): fix variable replacement in function mode
* fix(i18n): update available variables in prompt tip
* feat(prompt): replace prompt variable in Prompt and AssistantPromptSettings components
* fix(prompt): add fallback value if replace failed
* feat(prompt): add hook and settings for automatic prompt replacement
* feat(prompt): add supported variables and utility function to check if they exist
* feat(prompt): enhance variable handling in prompt settings and tooltips
* feat(i18n): add prompt settings translations for multiple languages
* refactor(prompt): remove debug log from prompt processing
* fix(prompt): handle model name variables and improve prompt processing
* fix: correct variable replacement setting and update migration defaults
* remove prompt settings
* refactor: simplify model name replacement logic
- Remove unnecessary assistant parameter from buildSystemPrompt function
- Update all API clients to use the simplified function signature
- Centralize model name replacement logic in promptVariableReplacer
- Improve code maintainability by reducing parameter coupling
* fix: eslint error
* refactor: remove unused interval handling in usePromptProcessor
* test: add tests, remove redundant replacing
* feat: animate prompt substitution
* chore: prepare for merge
* refactor: update utils
* refactor: remove getStoreSettings
* refactor: update utils
* style(Message/Prompt): disable text selection to improve the user experience
* fix(Prompt): fix a memory leak by clearing the internal timer
* refactor: move prompt replacement to api service
--------- Co-authored-by: one <wangan.cs@gmail.com> Co-authored-by: icarus <eurfelux@gmail.com> |
||
|
|
84e78560f4
|
refactor: streamline system prompt handling and introduce built-in tools (#7714)
* refactor: streamline system prompt handling and introduce built-in tools - Removed the static SYSTEM_PROMPT_THRESHOLD from BaseApiClient and replaced it with a constant in constant.ts. - Updated API clients (AnthropicAPIClient, GeminiAPIClient, OpenAIApiClient, OpenAIResponseAPIClient) to simplify system prompt logic by eliminating unnecessary checks and using new utility functions for prompt building. - Introduced built-in tools functionality, including a new 'think' tool, to enhance the tool usage experience. - Refactored ApiService to integrate built-in tools and adjust system prompt modifications accordingly. - Added utility functions for managing built-in tools in mcp-tools.ts and created a new tools index for better organization. * refactor(tests): update prompt tests to use new buildSystemPromptWithTools function - Renamed the function used in prompt tests from buildSystemPrompt to buildSystemPromptWithTools to reflect recent changes in prompt handling. - Adjusted test cases to ensure compatibility with the updated function, maintaining the integrity of user prompt handling. * refactor(ApiService, mcp-tools, prompt): enhance tool usage and prompt handling - Updated ApiService to improve system prompt construction based on tool usage mode, ensuring clearer logic for tool integration. - Enhanced mcp-tools with a new response structure for the 'think' tool, allowing for better handling of tool responses. - Expanded prompt utility functions to include detailed instructions for using the 'think' tool, improving user guidance. - Refactored tests to validate new prompt building logic and tool integration, ensuring robust functionality across scenarios. * fix: enhance prompt * feat(McpToolChunkMiddleware): enhance tool call handling with built-in tool support - Added support for built-in tools in the parseAndCallTools function, allowing for conditional tool invocation based on tool type. - Implemented a check to return early if the tool call response is null, improving error handling. |
||
|
|
622d15d5d7
|
fix(TopicsTab): persist pending state via Redux and update after task completion (#8376)
* fix(TopicsTab): the pending state now closes automatically and is no longer managed by the component
* feat(message state): add a topic-completion indicator and related logic
- Added a fulfilledByTopic field to the message state to record topic completion
- Added a setTopicFulfilled action to update the completion state
- Reset the completion state to false when switching topics
- Set the completion state to true once loading finishes
- Added a completion indicator component shown in the topic list
* fix(TopicsTab): fix the current topic's fulfilled state not being reset when the topic is not switched
* refactor(messageThunk): rename handleChangeLoadingOfTopic to finishTopicLoading for a clearer name that better describes its purpose |
||
|
|
c2086fdb15
|
refactor[Logger]: strict type check for Logger (#8363)
* fix: strict type check of logger * feat: logger format in renderer * fix: error type |
||
|
|
3b123863b5
|
feat: Support LLM Tracing by Alibaba Cloud EDAS product (#7895)
* feat: add tracing modules * Initial commit * fix: problem * fix: update trace web * fix: trace view * fix: trace view * fix: fix some problem * fix: knowledge and mcp trace * feat: save trace to user home dir * feat: open trace with electron browser window * fix: root trace outputs * feat: trace internationalization and add trace icon * feat: add trace title * feat: update * add a Windows run script to package.json * feat: update window title * fix: mcp trace param * fix: error show * fix: listTool result * fix: merge error * feat: add stream usage and response * feat: change trace stream * fix: change stream adapter * fix: span detail show problem * fix: process show by time * fix: stream outputs * fix: merge problem * fix: stream outputs * fix: output text * fix: EDAS support text * fix: change trace footer style * fix: topicId is loaded multiple times * fix: span reload problem & attribute with cache * fix: refresh optimization * Change Powered by text. * resolve upstream conflicts * fix: build-time type exception * fix: exceptions not used when building * fix: resend no trace * fix: resend trace list * fix: delete temporary files * feat: trace for resend * fix: trace for resend message with edit * fix: directory structure and construction method of mcp-trace * fix: change CRLF to LF * fix: add function call outputs * Revert "fix: change CRLF to LF" * fix: reorganize multi-model display * fix: append model trace binding topic * fix: some problems * fix: code optimization * fix: delete async * fix: UI optimization * fix: sort import --------- Co-authored-by: 崔顺发 <csf01409784@alibaba-inc.com> Co-authored-by: 管鑫荣 <gxr01409783@alibaba-inc.com> |
||
|
|
40f9601379
|
refactor: Unified Logger / 统一日志管理 (#8207)
* Revert "feat: optimize minapp cache with LRU (#8160)"
This reverts commit
|
||
|
|
6560369b98
|
refactor(ActionUtils): streamline message processing logic (#8226)
* refactor(ActionUtils): streamline message processing logic - Removed unnecessary content accumulation for thinking and text blocks. - Updated handling of message chunks to directly use incoming text for updates. - Improved state management for thinking and text blocks during streaming. - Enhanced the logic for creating and updating message blocks to ensure proper status and content handling. * chore: remove log * feat(ActionUtils): update message block instruction during processing - Added dispatch to update the message block instruction with the current block ID when processing messages. - Enhanced state management for message updates to ensure accurate tracking of block instructions during streaming. * feat(ActionUtils): enhance message processing with text block content tracking - Introduced a new variable to store the content of the text block during message processing. - Updated the logic to dispatch the current text block content upon completion of message chunks, improving state management and accuracy in message updates. * feat(ActionUtils): refine message processing error handling and status updates - Enhanced the logic to update the message status based on error conditions, ensuring accurate representation of message states. - Improved handling of text block content during processing, allowing for better state management and completion tracking. - Streamlined the dispatch of updates for message blocks, particularly in error scenarios, to maintain consistency in message processing. * feat(messageThunk): export throttled block update functions for improved message processing - Changed the visibility of `throttledBlockUpdate` and `cancelThrottledBlockUpdate` functions to export, allowing their use in other modules. - Updated `processMessages` in ActionUtils to utilize the newly exported functions for handling message updates, enhancing the efficiency of block updates during message processing. * fix(ActionUtils): correct text block content handling in message processing - Changed the declaration of `textBlockContent` to a constant to prevent unintended modifications. - Updated the logic in `processMessages` to use the text block ID for throttled updates instead of the content, ensuring accurate message updates during processing. * feat(HomeWindow): improve message processing with throttled updates - Integrated `throttledBlockUpdate` and `cancelThrottledBlockUpdate` for managing thinking and text block updates, enhancing performance during message streaming. - Updated logic to handle chunk types more effectively, ensuring accurate content updates and status management for message blocks. - Streamlined the dispatch of message updates upon completion and error handling, improving overall state management. |
||
|
|
7e471bfea4
|
feat: implement BlockManager and associated callbacks for message str… (#8167)
* feat: implement BlockManager and associated callbacks for message streaming - Introduced BlockManager to manage message blocks with smart update strategies. - Added various callback handlers for different message types including text, image, citation, and tool responses. - Enhanced state management for active blocks and transitions between different message types. - Created utility functions for handling block updates and transitions, improving overall message processing flow. - Refactored message thunk to utilize BlockManager for better organization and maintainability. This implementation lays the groundwork for more efficient message streaming and processing in the application. * refactor: clean up BlockManager and callback implementations - Removed redundant assignments of lastBlockType in various callback files. - Updated error handling logic to ensure correct message status updates. - Added console logs for debugging purposes in BlockManager and citation callbacks. - Enhanced smartBlockUpdate method call in citation callbacks for better state management. * refactor: streamline BlockManager and callback logic - Removed unnecessary accumulated content variables in text and thinking callbacks. - Updated content handling in callbacks to directly use incoming text instead of accumulating. - Enhanced smartBlockUpdate calls for better state management in message streaming. - Cleaned up console log statements for improved readability and debugging. |
||
|
|
8384bbfc0a
|
fix: handle mentions when resending message (#7819)
* fix(messageThunk): 修复重置消息时模型未正确继承的问题 * fix(消息重发): 修复重发消息时模型选择逻辑 确保当原始消息模型被提及时才使用该模型,否则使用助手默认模型 * style(PasteService): 统一文件换行符为LF格式 * Revert "style(PasteService): 统一文件换行符为LF格式" This reverts commit |
||
|
|
a6db53873a
|
fix: add channel property to notifications for backup and assistant messages (#8120)
* fix: add channel property to notifications for backup and assistant messages * Add notification tip and improve assistant notification logic Added a tooltip in the notification settings UI to clarify that only messages exceeding 30 seconds will trigger a reminder. Updated i18n files for all supported languages with the new tip. Modified notification logic to only send notifications for assistant responses or errors if the message duration exceeds 30 seconds and the user is not on the home page or the window is not focused. * Remove duplicate InfoCircleOutlined import Consolidated the import of InfoCircleOutlined from '@ant-design/icons' to avoid redundancy in GeneralSettings.tsx. * fix: add isFocused mock and simplify createMockStore Added a mock for isFocused in the window utility and refactored createMockStore to return the configured store directly. This improves test setup clarity and ensures all necessary window utilities are mocked. |
||
|
|
76de357cbf
|
test: add integration test for message thunk and fix some bugs (#8148)
* test: add integration test for message thunk and fix some bugs * fix: ci |
||
|
|
4dd99b5240
|
feat: enhance Anthropic and OpenAI API clients with incremental output support (#8104)
- Added support for incremental output in AnthropicAPIClient by introducing TEXT_START and THINKING_START chunk types. - Updated OpenAIAPIClient to conditionally enable incremental output for specific models. - Modified messageThunk to handle updated smartBlockUpdate calls with an isComplete parameter for better state management. - Introduced incremental_output parameter in ReasoningEffortOptionalParams type for enhanced configuration options. |
||
|
|
8340922263
|
fix: smartblock update not persist to db (#8046)
* chore(version): 1.4.10 * feat: enhance ThinkingTagExtractionMiddleware and update smartBlockUpdate function - Added support for THINKING_START and TEXT_START chunk types in ThinkingTagExtractionMiddleware. - Updated smartBlockUpdate function to include an isComplete parameter for better block state management. - Ensured proper handling of block updates based on completion status across various message types. * fix: refine block update logic in messageThunk - Adjusted conditions for canceling throttled block updates based on block type changes and completion status. - Improved handling of block updates to ensure accurate state management during message processing. * chore: add comment * fix: update message block status handling - Changed the status of image blocks from STREAMING to PENDING to better reflect the processing state. - Refined logic in OpenAIResponseAPIClient to ensure user messages are correctly handled based on assistant message content. - Improved rendering conditions in ImageBlock component for better user experience during image loading. --------- Co-authored-by: kangfenmao <kangfenmao@qq.com> |
||
|
|
3afa81eb5d
|
fix(Anthropic): content truncation (#7942)
* fix(Anthropic): content truncation * feat: add start event and fix content truncation * fix (gemini): some event * revert: index.tsx * revert(messageThunk): error block * fix: ci * chore: unuse log |
||
|
|
2a33a9af64
|
revert: timing for adding citation references (#7953) | ||
|
|
fba6c1642d
|
feat: implement tool call progress handling and status updates (#7303)
* feat: implement tool call progress handling and status updates - Update MCP tool response handling to include 'pending' and 'cancelled' statuses. - Introduce new IPC channel for progress updates. - Enhance UI components to reflect tool call statuses, including pending and cancelled states. - Add localization for new status messages in multiple languages. - Refactor message handling logic to accommodate new tool response types. * fix: adjust alignment of action tool container in MessageTools component - Change justify-content from flex-end to flex-start to improve layout consistency. * feat: enhance tool confirmation handling and update related components - Introduced a new tool confirmation mechanism in userConfirmation.ts, allowing for individual tool confirmations. - Updated GeminiAPIClient and OpenAIResponseAPIClient to include tool configuration options. - Refactored MessageTools component to utilize new confirmation functions and improved styling. - Enhanced mcp-tools.ts to manage tool invocation and confirmation processes more effectively, ensuring real-time status updates. * refactor(McpToolChunkMiddleware): enhance tool execution handling and confirmation tracking - Updated createToolHandlingTransform to manage confirmed tool calls and results more effectively. - Refactored executeToolCalls and executeToolUseResponses to return both tool results and confirmed tool calls. - Adjusted buildParamsWithToolResults to utilize confirmed tool calls for building new request messages. - Improved error handling in messageThunk for tool call status updates, ensuring accurate block ID mapping. * feat(McpToolChunkMiddleware, ToolUseExtractionMiddleware, mcp-tools, userConfirmation): enhance tool execution and confirmation handling - Updated McpToolChunkMiddleware to execute tool calls and responses asynchronously, improving performance and response handling. - Enhanced ToolUseExtractionMiddleware to generate unique tool IDs for better tracking. - Modified parseToolUse function to accept a starting index for tool extraction. - Improved user confirmation handling with abort signal support to manage tool action confirmations more effectively. - Updated SYSTEM_PROMPT to clarify the use of multiple tools per message. * fix(tagExtraction): update test expectations for tag extraction results - Adjusted expected length of results from 7 to 9 to reflect changes in tag extraction logic. - Modified content assertions for specific tag contents to ensure accurate validation of extracted tags. * refactor(GeminiAPIClient, OpenAIResponseAPIClient): remove unused function calling configurations - Removed the unused FunctionCallingConfigMode from GeminiAPIClient to streamline the code. - Eliminated the parallel_tool_calls property from OpenAIResponseAPIClient, simplifying the tool call configuration. * feat(McpToolChunkMiddleware): enhance LLM response handling and tool call confirmation - Added notification to UI for new LLM response processing before recursive calls in createToolHandlingTransform. - Improved tool call confirmation logic in executeToolCalls to match tool IDs more accurately, enhancing response validation. * refactor(McpToolChunkMiddleware, ToolUseExtractionMiddleware, messageThunk): remove unnecessary console logs - Eliminated redundant console log statements in McpToolChunkMiddleware, ToolUseExtractionMiddleware, and messageThunk to clean up the code and improve performance. - Focused on enhancing readability and maintainability by reducing clutter in the logging output. 
* refactor(McpToolChunkMiddleware): remove redundant logging statements - Eliminated unnecessary logging in createToolHandlingTransform to streamline the code and enhance readability. - Focused on reducing clutter in the logging output while maintaining error handling functionality. * feat: enhance action button functionality with cancel and confirm options * refactor(AbortHandlerMiddleware, McpToolChunkMiddleware, ToolUseExtractionMiddleware, messageThunk): improve error handling and code clarity - Updated AbortHandlerMiddleware to skip abort status checks if an error chunk is received, enhancing error handling logic. - Replaced console.error with Logger.error in McpToolChunkMiddleware for consistent logging practices. - Refined ToolUseExtractionMiddleware to improve tool use extraction logic and ensure proper handling of tool_use tags. - Enhanced messageThunk to include initialPlaceholderBlockId in block ID checks, improving error state management. * refactor(ToolUseExtractionMiddleware): enhance tool use parsing logic with counter - Introduced a toolCounter to track the number of tool use responses processed. - Updated parseToolUse function calls to include the toolCounter, improving the extraction logic and ensuring accurate response handling. * feat(McpService, IpcChannel, MessageTools): implement tool abort functionality - Added Mcp_AbortTool channel to handle tool abortion requests. - Implemented abortTool method in McpService to manage active tool calls and provide logging. - Updated MessageTools component to include an abort button for ongoing tool calls, enhancing user control. - Modified API calls to support optional callId for better tracking of tool executions. - Added localization strings for tool abort messages in multiple languages. --------- Co-authored-by: Vaayne <liu.vaayne@gmail.com> |
||
|
|
134ea51b0f
|
fix: websearch block and citation formatting (#7776)
* feat: enhance citation handling for Perplexity web search results - Implemented formatting for Perplexity citations in MainTextBlock, including data-citation attributes. - Updated citation processing in message store and thunk to support new citation structure. - Added utility functions for link completion based on web search results. - Enhanced tests to verify correct handling of Perplexity citations and links. * refactor: streamline chunk processing in OpenAIApiClient - Replaced single choice handling with a loop to process all choices in the chunk. - Improved handling of content sources, ensuring fallback mechanisms are in place for delta and message fields. - Enhanced tool call processing to accommodate missing function names and arguments. - Maintained existing functionality for web search data and reasoning content processing. * fix: improve citation handling and web search integration - Enhanced citation formatting to support legacy data compatibility in messageBlock.ts. - Updated messageThunk.ts to manage main text block references and citation updates more effectively. - Removed unnecessary web search flag and streamlined block processing logic. * fix: improve citation transforms to skip code blocks - Add withCitationTags for better code structure - Add tests - Remove outdated code - The Citation type in @renderer/types/index.ts is not referenced anywhere, so removed - Move the actual Citation type from @renderer/pages/home/Messages/CitationsList.tsx to @renderer/types/index.ts - Allow text selecting in tooltip * test: update tests * refactor(messageThunk): streamline citation handling in response processing - Removed redundant citation block source retrieval during text chunk processing. - Updated citation references handling to ensure proper inclusion only when available. - Simplified the logic for managing citation references in both streaming and final text updates. * refactor: simplify determineCitationSource for backward compatibility --------- Co-authored-by: one <wangan.cs@gmail.com> |
||
|
|
2fad7c0ff6
|
refactor(messageThunk): streamline loading state management for topics (#7809)
* refactor(messageThunk): streamline loading state management for topics - Reintroduced the handleChangeLoadingOfTopic function to manage loading states more effectively. - Updated thunk implementations to ensure loading state is correctly set after message processing. - Removed commented-out code for clarity and maintainability. * fix(messageThunk): ensure loading state is managed correctly after message sending - Added a finally block to guarantee that the loading state is updated after the sendMessage thunk execution. - Removed commented-out code for improved clarity and maintainability. |
||
|
|
870f794796
|
fix(messageThunk): handle missing user message in response creation (#7375)
* fix(messageThunk): handle missing user message in response creation * fix(i18n): add missing user message translations * fix(messageThunk): show error popup for missing user message instead of creating error block * fix(messageThunk): validate askId and show error popup for missing user message --------- Co-authored-by: suyao <sy20010504@gmail.com> |
||
|
|
e35b4d9cd1
|
feat(knowledge): support doc2x, mistral, MacOS, MinerU... OCR (#3734)
Co-authored-by: suyao <sy20010504@gmail.com> Co-authored-by: 亢奋猫 <kangfenmao@qq.com> |
||
|
|
0c3720123d
|
feat(TopicsHistory): add sorting functionality for topics and update UI components (#7673)
* feat(TopicsHistory): add sorting functionality for topics and update UI components * refactor(assistants): remove console log from updateTopicUpdatedAt function * refactor(TopicsHistory): update topic date display to use dynamic sorting type |
||
|
|
f500cc6c9a
|
refactor(inputbar): enforce image upload and model mentioning restrictions (#7314)
* feat(inputbar): enforce image upload restrictions
- allow image uploads when mentioning vision models
- disallow image uploads when non-vision models are mentioned
* refactor(Inputbar): improve handleDrop
* fix(Inputbar): Quick panel does not refresh when file changes
* fix(AttachmentButton): Fix the conditional judgment logic when mentionedModels is optional
* stash
* fix(Inputbar): Fix the issue where quickPanel does not close when files are updated
Use useRef to track changes in files, ensuring that quickPanel is properly closed when files are updated
* refactor(Inputbar): restructure the attachment button and toolbar logic and simplify the file type support checks
Moved the file type support check from the component into the parent component, passing couldAddImageFile and extensions via props
Removed unnecessary dependencies and computations to improve component performance
* fix(Inputbar): correct the file upload logic and rename the quick panel method
Fixed the incorrect couldAddTextFile condition
Renamed openQuickPanel to openAttachmentQuickPanel to clarify its purpose
* feat(MessageEditor): restrict file types based on the topic ID
Dynamically limit which file types can be added based on the model of the associated message
* fix(MessageEditor): only show the attachment button for user messages
Whether the attachment button is shown now depends on the message role, avoiding unnecessary attachment controls on non-user messages
* feat(MessageMenu): add model filtering to support vision model selection
Dynamically filter mentionable models based on the associated message content; when a user message contains images, only vision models are shown
* fix: fix the default value handling of the model filter
Fixed the default value handling in SelectModelPopup when modelFilter is not provided; relying on the default value caused the UI to freeze
* feat(inputbar): add model-set helpers and improve file type support
Added the isVisionModels and isGenerateImageModels utility functions for checking model sets
Improved the input bar's file type support logic and renamed supportExts to supportedExts
Removed debug logs and simplified the model support checks
* refactor(Inputbar): remove the unused model prop and tidy the code structure
Cleaned up the unused model prop in the AttachmentButton and InputbarTools components
Improved state management in MessageEditor by using useAppSelector instead of store.getState
Fixed a typo (failback -> fallback) |
||
|
|
5138f5b314
|
fix: clear search cache on resending (#7510) | ||
|
|
ce32fd32b6
|
fix: include image files in block retrieval for improved file handling (#7231) | ||
|
|
5f4d73b00d
|
feat: add middleware support for provider (#6176)
* feat: add middleware support for OpenAIProvider with logging capabilities
- Introduced middleware functionality in OpenAIProvider to enhance completions processing.
- Created AiProviderMiddlewareTypes for defining middleware interfaces and contexts.
- Implemented sampleLoggingMiddleware for logging message content and processing times.
- Updated OpenAIProvider constructor to accept middleware as an optional parameter.
- Refactored completions method to utilize middleware for improved extensibility and logging.

* refactor: streamline OpenAIProvider initialization and middleware application
- Removed optional middleware parameter from OpenAIProvider constructor for simplicity.
- Refactored ProviderFactory to create instances of providers and apply logging middleware consistently.
- Enhanced completions method visibility by changing it from private to public.
- Cleaned up unused code related to middleware handling in OpenAIProvider.

* feat: enhance AiProvider with new middleware capabilities and completion context
- Added public getter for provider info in BaseProvider.
- Introduced finalizeSdkRequestParams hook for middleware to modify SDK-specific request parameters.
- Refactored completions method in OpenAIProvider to accept a context object, improving middleware integration.
- Updated middleware types to include new context structure and callback functions for better extensibility.
- Enhanced logging middleware to utilize new context structure for improved logging capabilities.

* refactor: enhance middleware structure and context handling in AiProvider
- Updated BaseProvider and AiProvider to utilize AiProviderMiddlewareCompletionsContext for completions method.
- Introduced new utility functions for middleware context creation and execution.
- Refactored middleware application logic to improve extensibility and maintainability.
- Replaced sampleLoggingMiddleware with a more robust LoggingMiddleware implementation.
- Added new context management features for better middleware integration.

* refactor: update AiProvider and middleware structure for improved completions handling
- Refactored BaseProvider and AiProvider to change completions method signature from context to params.
- Removed unused AiProviderMiddlewareCompletionsContext and related code for cleaner implementation.
- Enhanced middleware configuration by introducing a dedicated middleware registration file.
- Implemented logging middleware for completions to improve observability during processing.
- Streamlined middleware application logic in ProviderFactory for better maintainability.

* docs: add middleware authoring guide
- Added the《如何为 AI Provider 编写中间件》(How to Write Middleware for an AI Provider) document, covering the middleware architecture, middleware types, and authoring examples.
- Explained middleware execution order, registration, and best practices to help developers create and maintain middleware effectively.

* refactor: update completions method signatures and introduce CompletionsResult type
- Changed the completions method signature in BaseProvider and AiProvider to return CompletionsResult instead of void.
- Added CompletionsResult type definition to encapsulate streaming and usage metrics.
- Updated middleware and related components to handle the new CompletionsResult structure, ensuring compatibility with existing functionality.
- Introduced new middleware for stream adaptation to enhance chunk processing during completions.

* refactor: enhance AiProvider middleware and streaming handling
- Updated CompletionsResult type to support both OpenAI SDK stream and ReadableStream.
- Modified CompletionsMiddleware to return CompletionsResult, improving type safety.
- Introduced StreamAdapterMiddleware to adapt OpenAI SDK streams to application-specific chunk streams.
- Enhanced logging in CompletionsLoggingMiddleware to capture and return results from next middleware calls.

* refactor: update AiProvider and middleware for OpenAI completions handling
- Renamed CompletionsResult to CompletionsOpenAIResult for clarity and updated its structure to support both OpenAI SDK and application-specific streams.
- Modified completions method signatures in AiProvider and OpenAIProvider to return CompletionsOpenAIResult.
- Enhanced middleware to process and adapt OpenAI SDK streams into standard chunk formats, improving overall streaming handling.
- Introduced new middleware components: FinalChunkConsumerAndNotifierMiddleware and OpenAISDKChunkToStandardChunkMiddleware for better chunk processing and logging.

* Removed the ExtractReasoningCompletionsMiddleware.ts file, cleaning up unused middleware code to improve code cleanliness and maintainability.

* refactor: consolidate middleware types and improve imports
- Replaced references to AiProviderMiddlewareTypes with the new middlewareTypes file across various middleware components for better organization.
- Introduced TextChunkMiddleware to enhance chunk processing from OpenAI SDK streams.
- Cleaned up imports in multiple files to reflect the new structure, improving code clarity and maintainability.

* feat: enhance abort handling with AbortController in middleware chain
- Update CompletionsOpenAIResult interface to use AbortController instead of AbortSignal
- Modify OpenAIProvider to pass abortController in completions method return
- Update AbortHandlerMiddleware to use controller from upstream result
- Improve abort handling flexibility by exposing full controller capabilities
- Enable middleware to actively control abort operations beyond passive monitoring
This change provides better control over request cancellation and enables more sophisticated abort handling patterns in the middleware pipeline.

* refactor: enhance AiProvider and middleware for improved completions handling
- Updated BaseProvider to expose additional methods and properties, including getMessageParam and createAbortController.
- Modified OpenAIProvider to streamline completions processing and integrate new middleware for tool handling.
- Introduced TransformParamsBeforeCompletions middleware to standardize parameter transformation before completions.
- Added McpToolChunkMiddleware for managing tool calls within the completions stream.
- Enhanced middleware types to support new functionalities and improve overall structure.
These changes improve the flexibility and maintainability of the AiProvider and its middleware, facilitating better handling of OpenAI completions and tool interactions.

* refactor: enhance middleware for recursive handling and internal state management
- Introduced internal state management in middleware to support recursive calls, including enhanced dispatch functionality.
- Updated middleware types to include new internal fields for managing recursion depth and call status.
- Improved logging for better traceability of recursive calls and state transitions.
- Adjusted various middleware components to utilize the new internal state, ensuring consistent behavior during recursive processing.
These changes enhance the middleware's ability to handle complex scenarios involving recursive calls, improving overall robustness and maintainability.

* fix(OpenAIProvider): return empty object for missing sdkParams in completions handling
- Updated OpenAIProvider to return an empty object instead of undefined when sdkParams are not found, ensuring consistent return types.
- Enhanced TransformParamsBeforeCompletions middleware to include a flag for built-in web search functionality based on assistant settings.

* refactor(OpenAIProvider): enhance completions handling and middleware integration
- Updated the completions method in OpenAIProvider to include an onChunk callback for improved streaming support.
- Enabled the ThinkChunkMiddleware in the middleware registration for better handling of reasoning content.
- Increased the maximum recursion depth in McpToolChunkMiddleware to prevent infinite loops.
- Refined TextChunkMiddleware to directly enqueue chunks without unnecessary type checks.
- Improved the ThinkChunkMiddleware to better manage reasoning tags and streamline chunk processing.
These changes enhance the overall functionality and robustness of the AI provider and middleware components.

* feat(WebSearchMiddleware): add web search handling and integration
- Introduced WebSearchMiddleware to process various web search results, including annotations and citations, and generate LLM_WEB_SEARCH_COMPLETE chunks.
- Enhanced TextChunkMiddleware to support link conversion based on the model and assistant settings, improving the handling of TEXT_DELTA chunks.
- Updated middleware registration to include WebSearchMiddleware for comprehensive search result processing.
These changes enhance the AI provider's capabilities in handling web search functionalities and improve the overall middleware architecture.

* fix(middleware): improve optional chaining for chunk processing
- Updated McpToolChunkMiddleware and ThinkChunkMiddleware to use optional chaining for accessing choices, enhancing robustness against undefined values.
- Removed commented-out code in ThinkChunkMiddleware to streamline the chunk handling process.
These changes improve the reliability of middleware when processing OpenAI API responses.

* feat(middleware): enhance AbortHandlerMiddleware with recursion handling
- Added logic to detect and handle recursive calls, preventing unnecessary creation of AbortControllers.
- Improved logging for better visibility into middleware operations, including recursion depth and cleanup processes.
- Streamlined cleanup process for non-stream responses to ensure resources are released promptly.
These changes enhance the robustness and efficiency of the AbortHandlerMiddleware in managing API requests.

* docs(middleware): migration steps

* feat(middleware): implement FinalChunkConsumerMiddleware for usage and metrics accumulation
- Introduced FinalChunkConsumerMiddleware to replace the deprecated FinalChunkConsumerAndNotifierMiddleware.
- This new middleware accumulates usage and metrics data from OpenAI API responses, enhancing tracking capabilities.
- Updated middleware registration to utilize the new FinalChunkConsumerMiddleware, ensuring proper integration.
- Added support for handling recursive calls and improved logging for better debugging and monitoring.
These changes enhance the middleware's ability to manage and report usage metrics effectively during API interactions.

* refactor(migrate): update API request and response structures to TypeScript types
- Changed the definitions of `CoreCompletionsRequest` and `Chunk` to use TypeScript types instead of Zod Schemas for better type safety and clarity.
- Updated middleware and service classes to handle the new `Chunk` type, ensuring compatibility with the revised API client structure.
- Enhanced the response processing logic to standardize the handling of raw SDK chunks into application-level `Chunk` objects.
- Adjusted middleware to consume the new `Chunk` type, streamlining the overall architecture and improving maintainability.
These changes facilitate a more robust and type-safe integration with AI provider APIs.

* feat(AiProvider): implement API client architecture
- Introduced ApiClientFactory for creating instances of API clients based on provider configuration.
- Added BaseApiClient as an abstract class to provide common functionality for specific client implementations.
- Implemented OpenAIApiClient for OpenAI and Azure OpenAI, including request and response handling.
- Defined types and interfaces for API client operations, enhancing type safety and clarity.
- Established middleware schemas for standardized request processing across AI providers.
These changes lay the groundwork for a modular and extensible API client architecture, improving the integration of various AI providers.

* refactor(StreamAdapterMiddleware): simplify stream adaptation logic
- Updated StreamAdapterMiddleware to directly use AsyncIterable instead of wrapping it with rawSdkChunkAdapter, streamlining the adaptation process.
- Modified asyncGeneratorToReadableStream to accept AsyncIterable, enhancing its flexibility and usability.
These changes improve the efficiency of stream handling in the middleware.

* refactor(AiProvider): simplify ResponseChunkTransformer interface and streamline OpenAIApiClient response handling
- Changed ResponseChunkTransformer from an interface to a type for improved clarity and simplicity.
- Refactored OpenAIApiClient to streamline the response transformation logic, reducing unnecessary complexity in handling tool calls and reasoning content.
- Enhanced type safety by ensuring consistent handling of optional properties in response processing.
These changes improve the maintainability and readability of the codebase while ensuring robust response handling in the API client.

* doc(technicalArchitecture): add comprehensive documentation for AI Provider architecture

* feat(architecture): introduce AI Core Design documentation and middleware specification
- Added a comprehensive technical architecture document for the new AI Provider (`aiCore`), outlining core design principles, component details, and execution flow.
- Established a middleware specification document to define the design, implementation, and usage of middleware within the `aiCore` module, promoting a flexible and maintainable system.
- These additions provide clarity and guidance for future development and integration of AI functionalities within Cherry Studio.

* refactor(middleware): consolidate and enhance middleware architecture
- Removed deprecated extractReasoningMiddleware and integrated its functionality into existing middleware.
- Streamlined middleware registration and improved type definitions for better clarity and maintainability.
- Introduced new middleware components for handling chunk processing, web search, and reasoning tags, enhancing overall functionality.
- Updated various middleware to utilize the new structures and improve logging for better debugging.
These changes enhance the middleware's efficiency and maintainability, providing a more robust framework for API interactions.

* refactor(AiProvider): enhance API client and middleware integration
- Updated ApiClientFactory to include new SDK types for improved type safety and clarity.
- Refactored BaseApiClient to support additional parameters in the completions method, enhancing flexibility for processing states.
- Streamlined OpenAIApiClient to better handle tool calls and responses, including the introduction of new chunk types for tool management.
- Improved middleware architecture by integrating processing states and refining message handling, ensuring a more robust interaction with the API.
These changes enhance the overall maintainability and functionality of the API client and middleware, providing a more efficient framework for AI interactions.

* fix(McpToolChunkMiddleware): remove redundant logging in recursion state update

* refactor(McpToolChunkMiddleware): update tool call handling and type definitions
- Replaced ChatCompletionMessageToolCall with SdkToolCall for improved type consistency.
- Updated return types of executeToolCalls and executeToolUses functions to SdkMessage[], enhancing clarity in message handling.
- Removed unused import to streamline the code.
These changes enhance the maintainability and type safety of the middleware, ensuring better integration with the SDK.

* refactor(middleware): enhance middleware structure and type handling
- Updated middleware components to utilize new SDK types, improving type safety and clarity across the board.
- Refactored various middleware to streamline processing logic, including enhanced handling of SDK messages and tool calls.
- Improved logging and error handling for better debugging and maintainability.
- Consolidated middleware functions to reduce redundancy and improve overall architecture.
These changes enhance the robustness and maintainability of the middleware framework, ensuring a more efficient interaction with the API.

* refactor(middleware): unify type imports and enhance middleware structure
- Updated middleware components to import types from a unified 'types' file, improving consistency and clarity across the codebase.
- Removed the deprecated 'type.ts' file to streamline the middleware structure.
- Enhanced middleware registration and export mechanisms for better accessibility and maintainability.
These changes contribute to a more organized and efficient middleware framework, facilitating easier future development and integration.

* refactor(AiProvider): enhance API client and middleware integration
- Updated AiProvider components to support new SDK types, improving type safety and clarity.
- Refactored middleware to streamline processing logic, including enhanced handling of tool calls and responses.
- Introduced new middleware for tool use extraction and raw stream listening, improving overall functionality.
- Improved logging and error handling for better debugging and maintainability.
These changes enhance the robustness and maintainability of the API client and middleware, ensuring a more efficient interaction with the API.

* feat(middleware): add new middleware components for raw stream listening and tool use extraction
- Introduced RawStreamListenerMiddleware and ToolUseExtractionMiddleware to enhance middleware capabilities.
- Updated MiddlewareRegistry to include new middleware entries, improving overall functionality and extensibility.
These changes expand the middleware framework, facilitating better handling of streaming and tool usage scenarios.

* refactor(AiProvider): integrate new API client and middleware architecture
- Replaced BaseProvider with ApiClientFactory to enhance API client instantiation.
- Updated completions method to utilize new middleware architecture for improved processing.
- Added TODOs for refactoring remaining methods to align with the new API client structure.
- Removed deprecated middleware wrapping logic from ApiClientFactory for cleaner implementation.
These changes improve the overall structure and maintainability of the AiProvider, facilitating better integration with the new middleware system.

* refactor(middleware): update middleware architecture and documentation
- Revised middleware naming conventions and introduced a centralized MiddlewareRegistry for better management and accessibility.
- Enhanced MiddlewareBuilder to support named middleware and streamline the construction of middleware chains.
- Updated documentation to reflect changes in middleware usage and structure, improving clarity for future development.
These changes improve the organization and usability of the middleware framework, facilitating easier integration and maintenance.

* refactor(AiProvider): enhance completions middleware logic and API client handling
- Updated the completions method to conditionally remove middleware based on parameters, improving flexibility in processing.
- Refactored the response chunk transformer in OpenAIApiClient and AnthropicAPIClient to utilize a more streamlined approach with TransformStream.
- Simplified middleware context handling by removing unnecessary custom state management.
- Improved logging and error handling across middleware components for better debugging and maintainability.
These changes enhance the efficiency and clarity of the AiProvider's middleware integration, ensuring a more adaptable and robust processing framework.

* refactor(AiProvider, middleware): clean up logging and improve method naming
- Removed unnecessary logging of parameters in AiProvider to streamline the code.
- Updated method name assignment in middleware to enhance clarity and consistency.
These changes contribute to a cleaner codebase and improve the readability of the middleware and provider components.

* feat(middleware): enhance middleware types and add RawStreamListenerMiddleware
- Introduced RawStreamListenerMiddleware to the MiddlewareName enum for improved middleware capabilities.
- Updated type definitions across middleware components to enhance type safety and clarity, including the addition of new SDK types.
- Refactored context and middleware API interfaces to support more specific type parameters, improving overall maintainability.
These changes expand the middleware framework, facilitating better handling of streaming scenarios and enhancing type safety across the codebase.

* refactor(messageThunk): convert callback functions to async and handle errors during database updates
This commit updates several callback functions in the messageThunk to be asynchronous, ensuring that block transitions are awaited properly. Additionally, error handling is added for the database update function to log any failures when saving blocks. This improves the reliability and responsiveness of the message processing flow.

* refactor: enhance message block handling in messageThunk
This commit refactors the message processing logic in messageThunk to improve the management of message blocks. Key changes include the introduction of dedicated IDs for different block types (main text, thinking, tool, and image) to streamline updates and transitions. The handling of placeholder blocks has been improved, ensuring that they are correctly converted to their respective types during processing. Additionally, error handling has been enhanced for better reliability in database updates.

* feat(AiProvider): add default timeout configuration and enhance API client abort handler
- Introduced a default timeout constant to the configuration for improved API client timeout management.
- Updated BaseApiClient and its derived classes to utilize the new timeout setting, ensuring consistent timeout behavior across different API clients.
- Enhanced middleware to pass the timeout value during API calls, improving error handling and responsiveness.
These changes improve the overall robustness and configurability of the API client interactions, facilitating better control over request timeouts.

* feat(GeminiProvider): implement Gemini API client and enhance file handling
- Introduced GeminiAPIClient to facilitate interactions with the Gemini API, replacing the previous GoogleGenAI integration.
- Refactored GeminiProvider to utilize the new API client, improving code organization and maintainability.
- Enhanced file handling capabilities, including support for PDF uploads and retrieval of file metadata.
- Updated message processing to accommodate new SDK types and improve content generation logic.
These changes significantly enhance the functionality and robustness of the GeminiProvider, enabling better integration with the Gemini API and improving overall user experience.

* refactor(AiProvider, middleware): streamline API client and middleware integration
- Removed deprecated methods and types from various API clients, enhancing code clarity and maintainability.
- Updated the CompletionsParams interface to support messages as a string or array, improving flexibility in message handling.
- Refactored middleware components to eliminate unnecessary state management and improve type safety.
- Enhanced the handling of streaming responses and added utility functions for better stream management.
These changes contribute to a more robust and efficient architecture for the AiProvider and its associated middleware, facilitating improved API interactions and user experience.

* refactor(middleware): adapt the translation feature to the new middleware
- Deleted SdkCallMiddleware to streamline middleware architecture and improve maintainability.
- Commented out references to SdkCallModule in examples and registration files to prevent usage.
- Enhanced logging in AbortHandlerMiddleware for better debugging and tracking of middleware execution.
- Updated parameters in ResponseTransformMiddleware to improve flexibility in handling response settings.
These changes contribute to a cleaner and more efficient middleware framework, facilitating better integration and performance.

* refactor(ApiCheck): streamline API validation and error handling
- Updated the API check logic to simplify validation processes and improve error handling across various components.
- Refactored the `checkApi` function to throw errors directly instead of returning validation objects, enhancing clarity in error management.
- Improved the handling of API key checks in `checkModelWithMultipleKeys` to provide more informative error messages.
- Added a new method `getEmbeddingDimensions` in the `AiProvider` class to facilitate embedding dimension retrieval, enhancing model compatibility checks.
These changes contribute to a more robust and maintainable API validation framework, improving overall user experience and error reporting.

* refactor(HealthCheckService, ModelService): improve error handling and performance metrics
- Updated error handling in `checkModelWithMultipleKeys` to truncate error messages for better readability.
- Refactored `performModelCheck` to remove unnecessary error handling, focusing on performance metrics by returning only latency.
- Enhanced the `checkModel` function to ensure consistent return types, improving clarity in API interactions.
These changes contribute to a more efficient and user-friendly error reporting and performance tracking system.

* refactor(AiProvider, models): enhance model handling and API client integration
- Updated the `listModels` method in various API clients to improve model retrieval and ensure consistent return types.
- Refactored the `EditModelsPopup` component to handle model properties more robustly, including fallback options for `id`, `name`, and other attributes.
- Enhanced type definitions for models in the SDK to support new integrations and improve type safety.
These changes contribute to a more reliable and maintainable model management system within the AiProvider, enhancing overall user experience and API interactions.

* refactor(AiProvider, clients): implement image generation functionality
- Refactored the `generateImage` method in the `AiProvider` class to utilize the `apiClient` for image generation, replacing the previous placeholder implementation.
- Updated the `BaseApiClient` to include an abstract `generateImage` method, ensuring all derived clients implement this functionality.
- Implemented the `generateImage` method in `GeminiAPIClient` and `OpenAIAPIClient`, providing specific logic for image generation based on the respective SDKs.
- Added type definitions for `GenerateImageParams` across relevant files to enhance type safety and clarity in image generation parameters.
These changes enhance the image generation capabilities of the AiProvider, improving integration with various API clients and overall user experience.

* refactor(AiProvider, clients): restructure API client architecture and remove deprecated components
- Refactored the `ProviderFactory` and removed the `AihubmixProvider` to streamline the API client architecture.
- Updated the import paths for `isOpenAIProvider` to reflect the new structure.
- Introduced `AihubmixAPIClient` and `OpenAIResponseAPIClient` to enhance client handling based on model types.
- Improved the `AiProvider` class to utilize the new clients for better model-specific API interactions.
- Enhanced type definitions and error handling across various components to improve maintainability and clarity.
These changes contribute to a more efficient and organized API client structure, enhancing overall integration and user experience.

* fix: update system prompt handling in API clients to use await for asynchronous operations
- Modified the `AnthropicAPIClient`, `GeminiAPIClient`, `OpenAIAPIClient`, and `OpenAIResponseAPIClient` to ensure `buildSystemPrompt` is awaited, improving the handling of system prompts.
- Adjusted the `fetchMessagesSummary` function to utilize the last five user messages for better context in API calls and added a utility function to clean up topic names.
These changes enhance the reliability of prompt generation and improve the overall API interaction experience.

* refactor(middleware): remove examples.ts to streamline middleware documentation
- Deleted the `examples.ts` file containing various middleware usage examples to simplify the middleware structure and documentation.
- This change contributes to a cleaner codebase and focuses on essential middleware components, enhancing maintainability.

* refactor(AiProvider, middleware): enhance middleware handling and error management
- Updated the `CompletionsParams` interface to include a new `callType` property for better middleware decision-making based on the context of the API call.
- Introduced `ErrorHandlerMiddleware` to standardize error handling across middleware, allowing errors to be captured and processed as `ErrorChunk` objects.
- Modified the `AbortHandlerMiddleware` to conditionally remove itself based on the `callType`, improving middleware efficiency.
- Cleaned up logging in `AbortHandlerMiddleware` to reduce console output and enhance performance.
- Updated middleware registration to include the new `ErrorHandlerMiddleware`, ensuring comprehensive error management in the middleware pipeline.
These changes contribute to a more robust and maintainable middleware architecture, improving error handling and overall API interaction efficiency.

* feat: implement token estimation for message handling
- Added an abstract method `estimateMessageTokens` to the `BaseApiClient` class for estimating token usage based on message content.
- Implemented the `estimateMessageTokens` method in `AnthropicAPIClient`, `GeminiAPIClient`, `OpenAIAPIClient`, and `OpenAIResponseAPIClient` to calculate token consumption for various message types.
- Enhanced middleware to accumulate token usage for new messages, improving tracking of API call costs.
These changes improve the efficiency of message processing and provide better insights into token usage across different API clients.

* feat: add support for image generation and model handling
- Introduced `SUPPORTED_DISABLE_GENERATION_MODELS` to manage models that disable image generation.
- Updated `isSupportedDisableGenerationModel` function to check model compatibility.
- Enhanced `Inputbar` logic to conditionally enable image generation based on model support.
- Modified API clients to handle image generation calls and responses, including new chunk types for image data.
- Updated middleware and service layers to incorporate image generation parameters and improve overall processing.
These changes enhance the application's capabilities for image generation and improve the handling of various model types.

* feat: enhance GeminiAPIClient for image generation support
- Added `getGenerateImageParameter` method to configure image generation parameters.
- Updated request handling in `GeminiAPIClient` to include image generation options.
- Enhanced response processing to handle image data and enqueue it correctly.
These changes improve the GeminiAPIClient's capabilities for generating and processing images, aligning with recent enhancements in image generation support.

* feat: enhance image generation handling in OpenAIResponseAPIClient and middleware
- Updated OpenAIResponseAPIClient to improve user message processing for image generation.
- Added handling for image creation events in TransformCoreToSdkParamsMiddleware.
- Adjusted ApiService to streamline image generation event handling.
- Modified messageThunk to reflect changes in image block status during processing.
These enhancements improve the integration and responsiveness of image generation features across the application.

* refactor: remove unused AI provider classes
- Deleted `AihubmixProvider`, `AnthropicProvider`, `BaseProvider`, `GeminiProvider`, and `OpenAIProvider` as they are no longer utilized in the codebase.
- This cleanup reduces code complexity and improves maintainability by removing obsolete components related to AI provider functionality.

* chore: remove obsolete test files for middleware
- Deleted test files for `AbortHandlerMiddleware`, `LoggingMiddleware`, `TextChunkMiddleware`, `ThinkChunkMiddleware`, and `WebSearchMiddleware` as they are no longer needed.
- This cleanup helps streamline the codebase and reduces maintenance overhead by removing outdated tests.

* chore: remove Suggestions component and related functionality
- Deleted the `Suggestions` component from the home page as it is no longer needed.
- Removed associated imports and functions related to suggestion fetching, streamlining the codebase.
- This cleanup helps improve maintainability by eliminating unused components.

* feat: enhance OpenAIAPIClient and StreamProcessingService for tool call handling
- Updated OpenAIAPIClient to conditionally include tool calls in the assistant message, improving message processing logic.
- Enhanced tool call handling in the response transformer to correctly manage and enqueue tool call data.
- Added a new callback for LLM response completion in StreamProcessingService, allowing better integration of response handling.
These changes improve the functionality and responsiveness of the OpenAI API client and stream processing capabilities.

* fix: copilot error

* fix: improve chunk handling in TextChunkMiddleware and ThinkChunkMiddleware
- Updated TextChunkMiddleware to enqueue LLM_RESPONSE_COMPLETE chunks based on accumulated text content.
- Refactored ThinkChunkMiddleware to generate THINKING_COMPLETE chunks when receiving non-THINKING_DELTA chunks, ensuring proper handling of accumulated thinking content.
- These changes enhance the middleware's responsiveness and accuracy in processing text and thinking chunks.

* chore: update dependencies and improve styling
- Updated `selection-hook` dependency to version 0.9.23 in `package.json` and `yarn.lock`.
- Removed unused styles from `container.scss` and adjusted padding in `index.scss`.
- Enhanced message rendering and layout in various components, including `Message`, `MessageHeader`, and `MessageMenubar`.
- Added tooltip support for message divider settings in `SettingsTab`.
- Improved handling of citation display in `CitationsList` and `CitationBlock`.
These changes streamline the codebase and enhance the user interface for better usability.

* feat: implement image generation middleware and enhance model handling
- Added `ImageGenerationMiddleware` to handle dedicated image generation models, integrating image processing and OpenAI's image generation API.
- Updated `AiProvider` to utilize the new middleware for dedicated image models, ensuring proper middleware chaining.
- Introduced constants for dedicated image models in `models.ts` to streamline model identification.
- Refactored error handling in `ErrorHandlerMiddleware` to use a utility function for better error management.
- Cleaned up imports and removed unused code in various files for improved maintainability.

* fix: update dedicated image models identification logic
- Modified the `DEDICATED_IMAGE_MODELS` array to include 'grok-2-image' for improved model handling.
- Enhanced the `isDedicatedImageGenerationModel` function to use a more robust check for model identification, ensuring better accuracy in middleware processing.

* refactor: remove OpenAIResponseProvider class
- Deleted the `OpenAIResponseProvider` class from the `AiProvider` module, streamlining the codebase by eliminating unused code.
- This change enhances maintainability and reduces complexity in the provider architecture.

* fix: usermessage

* refactor: simplify AbortHandlerMiddleware for improved abort handling
- Removed direct dependency on ApiClient for creating AbortController, enhancing modularity.
- Introduced utility functions to manage abort controllers, streamlining the middleware's responsibilities.
- Delegated abort signal handling to downstream middlewares, allowing for cleaner separation of concerns.

* refactor(aiCore): Consolidate AI provider and middleware architecture
This commit refactors the AI-related modules by unifying the `clients` and `middleware` directories under a single `aiCore` directory. This change simplifies the project structure, improves modularity, and makes the architecture more cohesive.
Key changes:
- Relocated provider-specific clients and middleware into the `aiCore` directory, removing the previous `providers/AiProvider` structure.
- Updated the architectural documentation (`AI_CORE_DESIGN.md`) to accurately reflect the new, streamlined directory layout and execution flow.
- The main `AiProvider` class is now the primary export of `aiCore/index.ts`, serving as the central access point for AI functionalities.

* refactor: update imports and enhance middleware functionality
- Adjusted import statements in `AnthropicAPIClient` and `GeminiAPIClient` for better organization.
- Improved `AbortHandlerMiddleware` to handle abort signals more effectively, including the conversion of streams to handle abort scenarios.
- Enhanced `ErrorHandlerMiddleware` to differentiate between abort errors and other types, ensuring proper error handling.
- Cleaned up commented-out code in `FinalChunkConsumerMiddleware` for better readability and maintainability.

* refactor: streamline middleware logging and improve error handling
- Removed excessive debug logging from various middleware components, including `AbortHandlerMiddleware`, `FinalChunkConsumerMiddleware`, and `McpToolChunkMiddleware`, to enhance readability and performance.
- Updated logging levels to use warnings for potential issues in `ResponseTransformMiddleware`, `TextChunkMiddleware`, and `ThinkChunkMiddleware`, ensuring better visibility of important messages.
- Cleaned up commented-out code and unnecessary debug statements across multiple middleware files for improved maintainability.

---------

Co-authored-by: suyao <sy20010504@gmail.com>
Co-authored-by: eeee0717 <chentao020717Work@outlook.com>
Co-authored-by: lizhixuan <zhixuan.li@banosuperapp.com> |
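The squash-merged PR above is, at its core, building an onion-style completions middleware chain (registry, builder, shared context, logging, abort and error handling). As a rough orientation only, that pattern can be sketched as below; the type shapes and the `applyMiddlewares` helper are simplified assumptions for illustration, not the actual signatures in the `aiCore` module.

```ts
// Illustrative sketch only: the real types live in aiCore; the shapes below
// (CompletionsParams, CompletionsResult, CompletionsContext) are assumptions.
type CompletionsParams = { messages: unknown[]; callType?: string; onChunk?: (chunk: unknown) => void }
type CompletionsResult = { stream?: ReadableStream<unknown> }
type CompletionsContext = Record<string, unknown>

// A middleware wraps the "next" step, mirroring the onion model described above.
type CompletionsMiddleware = (
  ctx: CompletionsContext,
  next: (params: CompletionsParams) => Promise<CompletionsResult>
) => (params: CompletionsParams) => Promise<CompletionsResult>

// Compose right-to-left so the first registered middleware runs outermost.
function applyMiddlewares(
  core: (params: CompletionsParams) => Promise<CompletionsResult>,
  middlewares: CompletionsMiddleware[],
  ctx: CompletionsContext
) {
  return middlewares.reduceRight((next, mw) => mw(ctx, next), core)
}

// Example in the spirit of the logging middleware mentioned in the commits.
const loggingMiddleware: CompletionsMiddleware = (_ctx, next) => async (params) => {
  const started = Date.now()
  const result = await next(params)
  console.log(`completions finished in ${Date.now() - started}ms`)
  return result
}
```

A chain would then be assembled once per request, for example `applyMiddlewares(sdkCall, [loggingMiddleware], {})`; building and naming that chain is the role the MiddlewareBuilder and MiddlewareRegistry mentioned above appear to play.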
||
|
|
7e201522d0 |
fix: removing a topic or message did not delete related files
This reverts commit
|
||
|
|
b402cdf7ff
|
perf: improve responsiveness on streaming formulas (#6659)
* perf: improve performance on streaming formulas
* refactor: create throttlers for blocks
* refactor: use LRU cache for better memory management |
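The commits above throttle block updates while formula content streams in. A minimal sketch of that idea, assuming one throttled writer per block id kept in an LRU cache; `lru-cache` and `lodash/throttle` are stand-ins here, and the repository's actual helpers may differ:

```ts
import { LRUCache } from 'lru-cache'
import throttle from 'lodash/throttle'

// Hypothetical sketch: one throttled updater per block id, evicted when unused.
type BlockUpdater = (content: string) => void

const throttlers = new LRUCache<string, BlockUpdater>({ max: 100 })

function getBlockUpdater(blockId: string, save: (id: string, content: string) => void): BlockUpdater {
  let updater = throttlers.get(blockId)
  if (!updater) {
    // At most one store/DB write per block every 150 ms while deltas stream in.
    updater = throttle((content: string) => save(blockId, content), 150)
    throttlers.set(blockId, updater)
  }
  return updater
}
```

Each streaming delta then calls `getBlockUpdater(blockId, saveBlock)(accumulatedText)`, so re-render and persistence costs stay bounded regardless of how fast chunks arrive.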
||
|
|
e906d5db25
|
fix: Optimize error message formatting (#5988)
* fix: Optimize error message formatting
* fix: improve error unit test

* refactor: simplify error handling in ErrorBlock component
- Replaced custom StyledAlert with a more streamlined Alert component for error messages.
- Reduced complexity by removing unnecessary JSX wrappers and improving readability.
- Adjusted styling for the Alert component to maintain visual consistency.

* fix: update error handling in ErrorBlock component
- Removed unnecessary message prop from Alert component to simplify error display.
- Maintained existing error handling logic while improving code clarity. |
||
|
|
e162da55bd
|
fix: enhance backup and restore functionality with skip option (#6294)
* fix: enhance backup and restore functionality with skip option
- Updated `restore` and `restoreFromWebdav` methods in `BackupManager` to include a `skipBackupFile` parameter, allowing users to skip restoring the Data directory if desired.
- Modified corresponding IPC calls in `preload` and `BackupService` to support the new parameter.
- Added success message translations in multiple languages for improved user feedback.

* fix: integrate skipBackupFile option in RestorePopup for enhanced restore functionality

* refactor: remove skipBackupFile parameter from restore methods for simplified usage
- Updated `restore` and `restoreFromWebdav` methods in `BackupManager`, `BackupService`, and `NutstoreService` to remove the `skipBackupFile` parameter, streamlining the restore process.
- Adjusted corresponding IPC calls in `preload` and `RestorePopup` to reflect the changes, ensuring consistent functionality across the application. |
||
|
|
a5738fdae5
|
feat: add functionality to insert messages at a specific index in the… (#6299)
* feat: add functionality to insert messages at a specific index in the Redux store
- Introduced a new interface for inserting messages at a specified index.
- Implemented the insertMessageAtIndex reducer to handle message insertion.
- Updated saveMessageAndBlocksToDB to support message insertion logic.
- Modified appendAssistantResponseThunk to utilize the new insertion functionality.

* feat: integrate multi-select mode handling in MessageGroup component
- Added useChatContext hook to access multi-select mode state.
- Updated isGrouped logic to account for multi-select mode, ensuring proper message grouping behavior.
- Enhanced MessageWrapper styles for better layout management in different modes.

---------

Co-authored-by: kangfenmao <kangfenmao@qq.com> |
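The insertMessageAtIndex reducer described above is a small Redux Toolkit addition. A minimal sketch with an assumed, simplified slice shape (the real store organises messages per topic and carries more fields) could look like this:

```ts
import { createSlice, type PayloadAction } from '@reduxjs/toolkit'

// Assumed, simplified state shape for illustration only.
interface Message { id: string; role: 'user' | 'assistant'; content: string }
interface MessagesState { messagesByTopic: Record<string, Message[]> }

const initialState: MessagesState = { messagesByTopic: {} }

const messagesSlice = createSlice({
  name: 'messages',
  initialState,
  reducers: {
    insertMessageAtIndex(state, action: PayloadAction<{ topicId: string; index: number; message: Message }>) {
      const { topicId, index, message } = action.payload
      const list = state.messagesByTopic[topicId] ?? (state.messagesByTopic[topicId] = [])
      // Clamp the index so out-of-range inserts append instead of leaving holes.
      const at = Math.max(0, Math.min(index, list.length))
      list.splice(at, 0, message)
    }
  }
})

export const { insertMessageAtIndex } = messagesSlice.actions
export default messagesSlice.reducer
```

A thunk such as appendAssistantResponseThunk would then dispatch `insertMessageAtIndex({ topicId, index, message })` and mirror the same position when persisting to the database.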
||
|
|
b30a30efa8 |
refactor: enhance notification handling based on page context
- Updated NotificationProvider to truncate long messages for better display.
- Modified message sending logic in messageThunk to prevent notifications on the home page, improving user experience. |
||
|
|
2798bd9d9d
|
feat/notification (#6144)
* WIP

* feat: integrate notification system using Electron's Notification API
- Replaced the previous notification implementation with Electron's Notification API for better integration.
- Updated notification types and structures to support new features.
- Added translations for notification messages in multiple languages.
- Cleaned up unused dependencies related to notifications.

* refactor: remove unused node-notifier dependency from Electron config

* clean: remove node-notifier from asarUnpack in Electron config

* fix: update notification type in preload script to align with new structure

* feat: Integrate NotificationService for user notifications across various components
- Implemented NotificationService in useUpdateHandler to notify users of available updates.
- Enhanced KnowledgeQueue to send success and error notifications during item processing.
- Updated BackupService to notify users upon successful restoration and backup completion.
- Added error notifications in message handling to improve user feedback on assistant responses.
This update improves user experience by providing timely notifications for important actions and events.

* feat: Refactor notification handling and integrate new notification settings
- Moved SYSTEM_MODELS to a new file for better organization.
- Enhanced notification handling across various components, including BackupService and KnowledgeQueue, to include a source attribute for better context.
- Introduced notification settings in the GeneralSettings component, allowing users to toggle notifications for assistant messages, backups, and knowledge embedding.
- Updated the Redux store to manage notification settings and added migration logic for the new settings structure.
- Improved user feedback by ensuring notifications are sent based on user preferences.
This update enhances the user experience by providing customizable notification options and clearer context for notifications.

* feat: Add 'update' source to NotificationSource type for enhanced notification context

* feat: Integrate electron-notification-state for improved notification handling
- Added electron-notification-state package to manage Do Not Disturb settings.
- Updated NotificationService to respect user DND preferences when sending notifications.
- Adjusted notification settings in various components (BackupService, KnowledgeQueue, useUpdateHandler) to ensure notifications are sent based on user preferences.
- Enhanced user feedback by allowing notifications to be silenced based on DND status.

* feat: Add notification icon to Electron notifications

* fix: import SYSTEM_MODELS in EditModelsPopup for improved model handling

* feat(i18n): add knowledge base notifications in multiple languages
- Added success and error messages for knowledge base operations in English, Japanese, Russian, Chinese (Simplified and Traditional).
- Updated the KnowledgeQueue to utilize the new notification messages for better user feedback.

* feat(notification): introduce NotificationProvider and integrate into App component
- Added NotificationProvider to manage notifications within the application.
- Updated App component to include NotificationProvider, enhancing user feedback through notifications.
- Refactored NotificationQueue to support multiple listeners for notification handling. |
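The notification work above centres on Electron's Notification API in the main process, gated by per-source user settings and the OS Do Not Disturb state. A rough sketch of that flow, assuming electron-notification-state exposes a `getDoNotDisturb()` helper and that the per-source toggles are passed in by the caller (the project's actual NotificationService will differ):

```ts
import { Notification } from 'electron'
import { getDoNotDisturb } from 'electron-notification-state' // assumed export

// Assumed shape; the real notification type carries more context.
interface AppNotification {
  title: string
  message: string
  source: 'assistant' | 'backup' | 'knowledge' | 'update'
}

// Hypothetical helper: skip notifications while the OS is in Do Not Disturb,
// and respect the per-source toggles exposed in the settings page.
function sendNotification(n: AppNotification, enabledSources: Set<AppNotification['source']>) {
  if (!Notification.isSupported()) return
  if (getDoNotDisturb()) return
  if (!enabledSources.has(n.source)) return

  new Notification({ title: n.title, body: n.message }).show()
}
```

Keeping the DND and preference checks in one main-process helper is what lets the renderer-side callers (BackupService, KnowledgeQueue, useUpdateHandler) simply emit events without worrying about whether a popup should actually appear.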
||
|
|
d42bf89045 | Merge branch 'develop' |