- Upgraded various SDK packages to their latest beta versions for improved functionality and compatibility.
- Updated `@ai-sdk/provider-utils` to version 3.0.0-beta.3.
- Adjusted dependencies in `package.json` to reflect the latest versions, including `@ai-sdk/amazon-bedrock`, `@ai-sdk/anthropic`, `@ai-sdk/azure`, and others.
- Removed outdated versions from `yarn.lock` and ensured consistency across the project.
- Added `webSearchTool` to facilitate web search functionality within the SDK.
- Updated `AiSdkToChunkAdapter` to utilize `BaseTool` for improved type handling.
- Refactored `transformParameters` to support `webSearchProviderId` for enhanced web search integration.
- Introduced new `BaseTool` type structure to unify tool definitions across the codebase.
- Adjusted imports and type definitions to align with the new tool handling logic.
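
As a rough illustration of the unified tool shape described above (the field names here are assumptions, not the actual `BaseTool` definition), something along these lines lets the chunk adapter distinguish provider-executed tools from MCP tools:

```ts
// Sketch only: a discriminant on every tool lets the adapter tell apart
// provider-executed tools (e.g. built-in web search), MCP tools, and plain
// function tools when it converts AI SDK chunks.
interface BaseToolLike {
  name: string
  description?: string
  kind: 'provider' | 'mcp' | 'function'
}

function isProviderExecuted(tool: BaseToolLike): boolean {
  return tool.kind === 'provider'
}
```
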
- Updated `ModelConfig` to include a `mode` property for better differentiation between 'chat' and 'responses'.
- Modified `createBaseModel` to conditionally set the provider based on the new `mode` property in `providerSettings`.
- Refactored `RuntimeExecutor` to utilize the updated `ModelConfig` for improved type safety and clarity in provider settings.
- Adjusted imports in `executor.ts` and `types.ts` to align with the new model configuration structure.
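
A minimal sketch of how the `mode` flag could steer model creation, assuming the OpenAI provider exposes `.chat()` and `.responses()` factories as recent `@ai-sdk/openai` versions do; the `ModelConfigSketch` shape below is illustrative, not the real `ModelConfig`:

```ts
import { createOpenAI } from '@ai-sdk/openai'

interface ModelConfigSketch {
  modelId: string
  providerSettings: { apiKey?: string; baseURL?: string; mode?: 'chat' | 'responses' }
}

function createOpenAIModel({ modelId, providerSettings }: ModelConfigSketch) {
  const openai = createOpenAI({ apiKey: providerSettings.apiKey, baseURL: providerSettings.baseURL })
  // Pick the Responses API or the classic Chat Completions API based on `mode`.
  return providerSettings.mode === 'responses' ? openai.responses(modelId) : openai.chat(modelId)
}
```
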
- Refactored `helper.ts` to export new types `AnthropicSearchInput` and `AnthropicSearchOutput` for better integration with the web search plugin.
- Updated `index.ts` to include the new types in the exports for improved type safety.
- Modified `AiSdkToChunkAdapter.ts` to handle tool calls more flexibly by introducing a `GenericProviderTool` type, allowing for better differentiation between MCP tools and provider-executed tools.
- Adjusted `handleTooCallChunk.ts` to accommodate the new tool type structure, enhancing the handling of tool call responses.
- Updated type definitions in `index.ts` to reflect changes in tool handling logic.
- Updated `createBaseModel` to differentiate between OpenAI chat and response models.
- Introduced new utility functions for model identification: `isOpenAIReasoningModel`, `isOpenAILLMModel`, and `getModelToProviderId`.
- Improved `transformParameters` to conditionally set the system prompt based on the assistant's prompt.
- Refactored `getAiSdkProviderIdForAihubmix` to simplify provider identification logic.
- Enhanced `getAiSdkProviderId` to support provider type checks.
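
The identification helpers are presumably simple string heuristics on the model ID; a hedged sketch (the real patterns in the codebase may differ):

```ts
// Heuristics only; the actual predicates may cover more model families.
export function isOpenAIReasoningModel(modelId: string): boolean {
  return /^o\d/.test(modelId) // o1, o3-mini, o4-mini, ...
}

export function isOpenAILLMModel(modelId: string): boolean {
  return modelId.startsWith('gpt-') || isOpenAIReasoningModel(modelId)
}
```
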
- Added a `sources` array to the default web search configuration, allowing multiple source types ('web', 'x', and 'news') and making the web search plugin more flexible (a rough sketch of the resulting config shape follows below).
- Cleaned up the web search plugin code by commenting out unused sections for clarity.
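
The default configuration probably ends up looking roughly like this; the key names are assumptions, and the `'x'` and `'news'` source types mirror xAI-style live search:

```ts
// Assumed shape of the plugin's default web search configuration.
const defaultWebSearchConfig = {
  maxResults: 5,
  // 'x' and 'news' join the plain 'web' source type.
  sources: [{ type: 'web' }, { type: 'x' }, { type: 'news' }]
} as const
```
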
- Enhanced middleware handling for the OpenAI provider by wrapping the logic in a block for better readability.
- Removed redundant imports from various files to streamline the codebase.
- Added an `enableWebSearch` parameter to the `fetchChatCompletion` function.
- Introduced a `createXaiOptions` function for xAI provider configuration.
- Added the `XaiProviderOptions` type and validation schema in `xai.ts`.
- Updated `ProviderOptionsMap` to include xAI options.
- Enhanced `webSearchPlugin` to support xAI-specific search parameters.
- Refactored helper functions to integrate the new xAI options into provider configurations.
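
A sketch of what the validation schema in `xai.ts` might cover, using zod; field names and defaults are assumptions rather than the exact `XaiProviderOptions` definition:

```ts
import { z } from 'zod'

export const xaiProviderOptionsSchema = z.object({
  searchParameters: z
    .object({
      mode: z.enum(['off', 'auto', 'on']).optional(),
      returnCitations: z.boolean().optional(),
      maxSearchResults: z.number().int().positive().optional(),
      sources: z.array(z.object({ type: z.enum(['web', 'x', 'news']) })).optional()
    })
    .optional()
})

export type XaiProviderOptions = z.infer<typeof xaiProviderOptionsSchema>
```
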
- Added `ReasoningPart`, `FilePart`, and `ImagePart` to type exports in `index.ts`.
- Refactored `transformParameters.ts` to include `enableWebSearch` option and integrate web search tools.
- Introduced new utility `getWebSearchTools` in `websearch.ts` to manage web search tool configurations based on model type.
- Commented out deprecated code in `smoothReasoningPlugin.ts` and `textPlugin.ts` for potential removal.
- Simplified the `createModel` function to directly accept the `ModelConfig` object, improving clarity.
- Updated `createBaseModel` to include `extraModelConfig` for extended configuration options.
- Introduced `executeConfigureContext` method in `PluginManager` to handle context configuration for plugins.
- Adjusted type definitions in `types.ts` to ensure consistency with the new configuration structure.
- Refactored plugin execution methods in `PluginEngine` to utilize the resolved model directly, enhancing the flow of data through the plugin system.
- Updated the `createModel` function to accept a simplified `ModelConfig` interface, enhancing clarity and usability.
- Refactored `createBaseModel` to destructure parameters for better readability and maintainability.
- Removed the `ModelCreator.ts` file as its functionality has been integrated into the factory functions.
- Adjusted type definitions in `types.ts` to reflect changes in model configuration structure, ensuring consistency across the codebase.
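
Taken together, the factory change amounts to passing one configuration object straight through; a sketch under assumed property names (the actual `ModelConfig` fields may differ):

```ts
import type { LanguageModel } from 'ai'

// Assumed field names; the real ModelConfig may differ.
interface ModelConfig {
  providerId: string
  modelId: string
  providerSettings: Record<string, unknown>
  extraModelConfig?: Record<string, unknown>
}

// Assume createBaseModel is defined elsewhere in the factory module.
declare function createBaseModel(config: ModelConfig): LanguageModel

// The public factory now takes the config object directly instead of a
// longer positional parameter list.
export function createModel(config: ModelConfig): LanguageModel {
  return createBaseModel(config)
}
```
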
- Introduced a new `webSearchPlugin` to provide unified web search functionality across multiple AI providers.
- Added helper functions for adapting web search parameters for OpenAI, Gemini, and Anthropic providers.
- Updated the built-in plugin index to export the new web search plugin and its configuration type.
- Created a new `helper.ts` file to encapsulate web search adaptation logic and support checks for provider compatibility.
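
The helper presumably maps one provider-neutral request onto each provider's native search options; a rough sketch in which the option keys are assumptions, not the exact adaptation logic:

```ts
type WebSearchProviderId = 'openai' | 'anthropic' | 'google' | 'xai'

// Sketch: translate a neutral "search the web, up to N results" request into
// provider-specific options. The real helper.ts covers more cases.
function adaptWebSearchParams(providerId: WebSearchProviderId, maxResults: number) {
  switch (providerId) {
    case 'anthropic':
      return { webSearch: { maxUses: maxResults } }
    case 'google':
      return { searchGrounding: true }
    case 'xai':
      return { searchParameters: { mode: 'auto', maxSearchResults: maxResults } }
    default:
      return { webSearchOptions: { maxResults } }
  }
}
```
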
- Introduced `isOpenAIChatCompletionOnlyModel` utility function to determine if a model ID corresponds to OpenAI's chat completion-only models.
- Updated `createBaseModel` function to utilize the new utility for improved handling of OpenAI provider responses in strict mode.
- Refactored reasoning parameters in `getOpenAIReasoningParams` for consistency and clarity.
- Updated the `createBaseModel` function to handle OpenAI provider responses in strict mode.
- Modified `providerToAiSdkConfig` to include specific options for OpenAI when in strict mode.
- Introduced a new utility module `providerParams.ts` for managing provider-specific parameters, including OpenAI, Anthropic, and Gemini configurations.
- Added functions to retrieve service tiers, specific parameters, and reasoning efforts for various providers, improving overall provider management.
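
The new `providerParams.ts` module likely groups small helpers of this kind; the names and values below are illustrative only:

```ts
type ReasoningEffort = 'low' | 'medium' | 'high'

// Map a user-facing tier setting onto whatever the provider accepts,
// falling back to an automatic choice.
export function getOpenAIServiceTier(tier?: string): 'auto' | 'flex' | 'default' {
  return tier === 'flex' || tier === 'default' ? tier : 'auto'
}

// Reasoning models get an effort hint; other models get no extra parameter.
export function getOpenAIReasoningParams(effort?: ReasoningEffort) {
  return effort ? { reasoningEffort: effort } : {}
}
```
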
- Updated the OpenRouter provider dependency in `package.json` and `yarn.lock` to version 0.7.2.
- Added a new function `createOpenRouterOptions` in `factory.ts` for creating OpenRouter provider options.
- Updated type definitions in `types.ts` and `registry.ts` to include OpenRouter provider settings, enhancing provider management.
- Commented out the OpenRouter provider in `registry.ts` and related configurations because it is still too buggy to enable.
- Simplified reasoning logic in `transformParameters.ts` and `options.ts` by removing unnecessary checks for `enableReasoning`.
- Enhanced logging in `transformParameters.ts` to provide better insights into reasoning capabilities.
- Updated `getReasoningEffort` to handle cases where reasoning effort is not defined, improving model compatibility.
- Commented out the provider support check in `RuntimeExecutor` to streamline initialization.
- Updated `providerToAiSdkConfig` to utilize `AiCore.isSupported` for improved provider validation.
- Enhanced middleware configuration in `ModernAiProvider` to ensure tools are only added when enabled and available.
- Added comments in `transformParameters` for clarity on parameter handling and plugin activation.
- Added `mcpPromptPlugin.ts` to encapsulate MCP Prompt functionality, providing a structured approach for tool calls within prompts.
- Updated `index.ts` to reference the new `mcpPromptPlugin`, enhancing modularity and clarity in the built-in plugins.
- Removed the outdated `example-plugins.ts` file to streamline the plugin directory and focus on essential components.
- Commented out console log statements in `createMCPPromptPlugin` to reduce noise during execution.
- Preserved the plugin's structure and behavior while improving readability and performance.
- Added patches for `@ai-sdk/google-vertex` and `@ai-sdk/openai-compatible` to enhance functionality and fix issues.
- Updated `package.json` to reflect new dependency versions and patch paths.
- Refactored `transformParameters` and `ApiService` to support new tool configurations and improve parameter handling.
- Introduced utility functions for setting up tools and managing options, enhancing the overall integration of tools within the AI SDK.
- Updated `AiRequestContext` to make `recursiveCall` required and added `isRecursiveCall` for better state management.
- Modified `createContext` to initialize `recursiveCall` with a placeholder function.
- Enhanced `MCPPromptPlugin` to utilize a custom `createSystemMessage` function for improved message handling during recursive calls.
- Refactored `PluginEngine` to manage recursive call states, ensuring proper execution flow and context integrity.
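
Conceptually, the context change looks something like this; the names are assumptions based on the description above:

```ts
interface AiRequestContextSketch {
  // Always present: lets the MCP prompt plugin re-enter the pipeline with
  // tool results appended to the conversation.
  recursiveCall: (params: unknown) => Promise<unknown>
  // Marks whether the current pass is itself a recursive call, so plugins
  // can avoid re-injecting system prompts or re-triggering tools.
  isRecursiveCall: boolean
}

function createContext(): AiRequestContextSketch {
  return {
    // Placeholder until the plugin engine wires in the real implementation.
    recursiveCall: async () => {
      throw new Error('recursiveCall has not been initialized')
    },
    isRecursiveCall: false
  }
}
```
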
- Updated `tsconfig.web.json` to support wildcard imports for `@cherrystudio/ai-core`.
- Enhanced `package.json` to include type definitions and imports for built-in plugins.
- Introduced recursive call functionality in `PluginManager` and `PluginEngine`, allowing for improved handling of tool interactions.
- Added `MCPPromptPlugin` to facilitate tool calls within prompts, enabling recursive processing of tool results.
- Refactored `transformStream` methods across plugins to accommodate new parameters and improve type safety.
- Changed `pluginClient` to `pluginEngine` in `RuntimeExecutor` for clarity and consistency.
- Updated method calls in `RuntimeExecutor` to use the new `pluginEngine`.
- Enhanced `AiSdkMiddlewareBuilder` to include `mcpTools` in the middleware configuration.
- Added `MCPPromptPlugin` to support tool calls within prompts, enabling recursive processing and improved handling of tool interactions.
- Updated `ApiService` to pass `mcpTools` during chat completion requests, enhancing integration with the new plugin system.
- Introduced `reasonPlugin` and `textPlugin` to improve chunk processing and handling of reasoning content.
- Updated `transformStream` method signatures for better type safety and usability.
- Enhanced `ThinkingTimeMiddleware` to accurately track thinking time using `performance.now()`.
- Refactored `ThinkingBlock` component to utilize block thinking time directly, improving performance and clarity.
- Added logging for middleware builder to assist in debugging and monitoring middleware configurations.
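
The timing change boils down to using a monotonic clock; a minimal sketch of the idea (not the middleware's actual hook signatures):

```ts
// Stamp the first reasoning chunk and report elapsed milliseconds when
// reasoning finishes; performance.now() is monotonic, unlike Date.now().
function createThinkingTimer() {
  let startedAt: number | null = null
  return {
    onReasoningStart() {
      if (startedAt === null) startedAt = performance.now()
    },
    onReasoningEnd(): number {
      return startedAt === null ? 0 : performance.now() - startedAt
    }
  }
}
```
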
- Introduced new high-level APIs for model creation and configuration, improving usability for advanced users.
- Enhanced the `RuntimeExecutor` to support both direct model usage and model ID resolution, allowing for more flexible execution options.
- Updated existing methods to accept middleware configurations, streamlining the integration of custom processing logic.
- Refactored the plugin system to better accommodate middleware, enhancing the overall extensibility of the AI Core.
- Improved documentation to reflect the new capabilities and usage patterns for the runtime APIs.
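
Illustrative usage of the two entry points described above; the executor's method and option names are assumptions, not the exact public API:

```ts
import { createOpenAI } from '@ai-sdk/openai'

// Stand-in for the RuntimeExecutor surface; the real API may differ.
declare const executor: {
  streamText(options: { model: unknown; prompt: string }): Promise<unknown>
}

// 1) Direct model usage: the caller resolves the model itself.
const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY })
await executor.streamText({ model: openai.chat('gpt-4o-mini'), prompt: 'Hello' })

// 2) Model ID resolution: the executor builds the model from provider settings.
await executor.streamText({ model: 'gpt-4o-mini', prompt: 'Hello' })
```
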
- Restructured the AI Core documentation to reflect a simplified two-layer architecture, focusing on clear responsibilities between models and runtime layers.
- Removed the orchestration layer and consolidated its functionality into the runtime layer, streamlining the API for users.
- Introduced a new runtime executor for managing plugin-enhanced AI calls, improving the handling of execution and middleware.
- Updated the core modules to enhance type safety and usability, including comprehensive type definitions for model creation and execution configurations.
- Removed obsolete files and refactored existing code to improve organization and maintainability across the SDK.
- Updated the AI Core documentation to reflect the new architecture and design principles, emphasizing modularity and type safety.
- Refactored the client structure by removing obsolete files and consolidating client creation logic into a more streamlined format.
- Introduced a new core module for managing execution and middleware, improving the overall organization of the codebase.
- Enhanced the orchestration layer to provide a clearer API for users, integrating the creation and execution processes more effectively.
- Added comprehensive type definitions and utility functions for better type safety and usability across the SDK.
- Introduced `ToolCallChunkHandler` for managing tool call events and results, improving the handling of tool interactions.
- Updated `AiSdkToChunkAdapter` to utilize the new handler, streamlining the processing of tool call chunks.
- Refactored `transformParameters` to support dynamic tool integration and improved parameter handling.
- Adjusted provider mapping in `factory.ts` to include new provider types, enhancing compatibility with various AI services.
- Removed the obsolete `cherryStudioTransformPlugin` to clean up the codebase and focus on more relevant functionality.
- Introduced a new OpenAI Compatible provider to the `AiProviderRegistry`, allowing for integration with the `@ai-sdk/openai-compatible` package.
- Updated provider configuration logic to support the new provider, including adjustments to API host formatting and options management.
- Refactored middleware to streamline handling of OpenAI-specific configurations.
- Introduced a patch for the `@ai-sdk/google-vertex` package to improve URL handling based on region.
- Added a new utility function to format private keys, ensuring correct PEM structure and validation.
- Updated the `ProviderConfigBuilder` to utilize the new private key formatting function for Google credentials.
- Created a pnpm workspace configuration to manage patched dependencies effectively.
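
The private key helper presumably normalizes keys copied out of service-account JSON; a sketch under that assumption (the real utility may handle more cases):

```ts
const PEM_HEADER = '-----BEGIN PRIVATE KEY-----'
const PEM_FOOTER = '-----END PRIVATE KEY-----'

export function formatPrivateKey(raw: string): string {
  // Keys pasted from JSON often carry literal "\n" escape sequences.
  const key = raw.replace(/\\n/g, '\n').trim()
  if (!key.includes(PEM_HEADER) || !key.includes(PEM_FOOTER)) {
    throw new Error('Private key is missing its PEM header or footer')
  }
  return key.endsWith('\n') ? key : key + '\n'
}
```
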
- Introduced new utility functions for creating and merging provider options, improving type safety and usability.
- Added comprehensive examples for OpenAI, Anthropic, Google, and generic provider options to demonstrate usage.
- Refactored existing code to streamline provider configuration and enhance clarity in the options management.
- Updated the `PluginEnabledAiClient` to simplify the handling of model parameters and improve overall functionality.
- Added support for Google Vertex AI credentials in the provider configuration.
- Refactored the `VertexAPIClient` to handle both standard and `VertexProvider` types.
- Implemented utility functions to check Vertex AI configuration completeness and create `VertexProvider` instances.
- Updated the provider mapping in `index_new.ts` to ensure proper handling of Vertex AI settings.
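
The completeness check is presumably a guard like the following before a `VertexProvider` is constructed; the field names are assumptions:

```ts
interface VertexSettingsSketch {
  project?: string
  location?: string
  googleCredentials?: { clientEmail?: string; privateKey?: string }
}

// Only build a Vertex provider when every required credential field is set.
function isVertexConfigComplete(s: VertexSettingsSketch): boolean {
  return Boolean(
    s.project && s.location && s.googleCredentials?.clientEmail && s.googleCredentials?.privateKey
  )
}
```
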
- Introduced the `@ai-sdk/openai-compatible` package to support OpenAI-compatible endpoints.
- Added a new `ProviderConfigFactory` and `ProviderConfigBuilder` for streamlined provider configuration.
- Updated the provider registry to include the new Google Vertex AI import path.
- Enhanced `index.ts` to export new provider configuration utilities for better type safety and usability.
- Refactored `ApiService` and the middleware to integrate the new provider configurations effectively.
- Changed the type of `options` in `ClientConfig` to `any` for flexibility.
- Overloaded the `createImageClient` method to support different provider settings.
- Added a `vertexai` mapping to the provider type mapping in `index_new.ts` for enhanced compatibility.
- Updated `PluginEnabledAiClient` to streamline the handling of `experimental_transform` parameters.
- Adjusted `ModernAiProvider`'s `smoothStream` configuration for better chunking of text, enhancing processing efficiency.
- Re-enabled block updates in `messageThunk` for improved state management.
- Added detailed usage examples for the native provider registry in the README.md, demonstrating how to create and utilize custom provider registries.
- Updated `ApiClientFactory` to enforce type safety for model instances.
- Refactored `PluginEnabledAiClient` methods to support both built-in logic and custom registry usage for text and object generation, improving flexibility and usability.
- Added `smoothStream` to the middleware exports in `index.ts` for improved streaming capabilities.
- Updated `PluginEnabledAiClient` to conditionally apply middlewares, removing the default `simulateStreamingMiddleware`.
- Modified `ModernAiProvider` to utilize `smoothStream` in `streamText`, enhancing text processing with configurable chunking and delay options.
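
For reference, this is roughly how `smoothStream` plugs into `streamText` in the AI SDK; the chunking and delay values below are placeholders, not the ones `ModernAiProvider` actually uses:

```ts
import { streamText, smoothStream } from 'ai'
import { createOpenAI } from '@ai-sdk/openai'

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY })

const result = streamText({
  model: openai.chat('gpt-4o-mini'),
  prompt: 'Write a short greeting.',
  // Re-chunk the incoming stream word by word with a small delay so the UI
  // renders smoothly instead of in large bursts.
  experimental_transform: smoothStream({ chunking: 'word', delayInMs: 20 })
})

for await (const text of result.textStream) {
  process.stdout.write(text)
}
```
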
- Added `AiSdkMiddlewareBuilder` for dynamic middleware construction based on various conditions.
- Updated `ModernAiProvider` to utilize the new middleware configuration, improving flexibility in handling completions.
- Refactored `ApiService` to pass the middleware configuration during AI completions, enabling better control over processing.
- Introduced new README documentation for the middleware builder, outlining usage and supported conditions.
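
The builder likely follows a "push a middleware only when its condition holds" pattern; a sketch with illustrative names (not the real `AiSdkMiddlewareBuilder` API):

```ts
type Middleware = unknown

interface MiddlewareConditions {
  enableReasoning?: boolean
  enableWebSearch?: boolean
  mcpTools?: unknown[]
}

// Assemble the middleware chain from whatever conditions the request carries.
function buildMiddlewares(
  conditions: MiddlewareConditions,
  registry: { thinkingTime: Middleware; webSearch: Middleware; toolUse: Middleware }
): Middleware[] {
  const middlewares: Middleware[] = []
  if (conditions.enableReasoning) middlewares.push(registry.thinkingTime)
  if (conditions.enableWebSearch) middlewares.push(registry.webSearch)
  if (conditions.mcpTools?.length) middlewares.push(registry.toolUse)
  return middlewares
}
```
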