feat: enhance AI Core with plugin system and middleware support

- Introduced a plugin system in the AI Core package, allowing for flexible request handling and middleware integration.
- Added support for various hook types: First, Sequential, Parallel, and Stream, enabling developers to customize request processing.
- Implemented a PluginManager for managing and executing plugins, enhancing extensibility and modularity.
- Updated architecture documentation to reflect new plugin capabilities and usage examples.
- Included new middleware types and examples to demonstrate the plugin system's functionality.

This update aims to improve the developer experience by providing a robust framework for extending AI Core's capabilities.
This commit is contained in:
MyPrototypeWhat 2025-06-17 19:48:14 +08:00
parent 7187e63ce2
commit 453a2bcb68
14 changed files with 1419 additions and 773 deletions

View File

@ -3,21 +3,23 @@
## 1. Architecture Design Philosophy
### 1.1 Design Goals
- **Unified interface**: use the Vercel AI SDK to smooth over interface differences between AI providers
- **Dynamic imports**: load providers on demand via dynamic imports to reduce bundle size
- **Minimal wrapping**: use AI SDK types and interfaces directly, avoiding duplicate definitions
- **Middleware enhancement**: widen the middleware's reach to cover the full request lifecycle (planned)
- **Plugin system**: hook-based plugin architecture that supports extension across the full request lifecycle
- **Type safety**: rely on TypeScript and the AI SDK's type system to guarantee type safety
- **Lightweight**: focus on core functionality and keep the package lean and efficient
- **Standalone package**: managed as an independent package for easier reuse and maintenance
### 1.2 Core Advantages
- **Standardization**: the AI SDK provides a unified model interface, reducing adaptation work
- **Simpler maintenance**: retire the complex XxxApiClient classes in favor of a factory-function pattern
- **Better developer experience**: full TypeScript support and a rich ecosystem
- **Performance**: built-in optimizations and best practices from the AI SDK
- **Modular design**: standalone package structure, reusable across projects
- **Extensible middleware**: custom logic can be inserted anywhere in the request lifecycle
- **Extensible plugins**: hook-based plugin system supporting flexible feature extension and stream transforms
## 2. Overall Architecture Diagram
@ -32,7 +34,7 @@ graph TD
ApiClientFactory["ApiClientFactory (factory class)"]
UniversalClient["UniversalAiSdkClient (unified client)"]
ProviderRegistry["Provider registry"]
MiddlewareChain["Middleware chain (planned)"]
PluginManager["Plugin manager"]
end
subgraph "Dynamic import layer"
@ -48,37 +50,33 @@ graph TD
Others["19+ other providers"]
end
subgraph "中间件生态 (规划中)"
PreRequest["请求预处理"]
StreamTransform["流转换"]
PostProcess["后处理"]
ErrorHandle["错误处理"]
Logging["日志记录"]
Cache["缓存"]
subgraph "插件生态"
FirstHooks["First Hooks (resolveModel, loadTemplate)"]
SequentialHooks["Sequential Hooks (transformParams, transformResult)"]
ParallelHooks["Parallel Hooks (onRequestStart, onRequestEnd, onError)"]
StreamHooks["Stream Hooks (transformStream)"]
end
UI --> ApiClientFactory
Components --> ApiClientFactory
ApiClientFactory --> UniversalClient
UniversalClient --> MiddlewareChain
MiddlewareChain --> ProviderRegistry
UniversalClient --> PluginManager
PluginManager --> ProviderRegistry
ProviderRegistry --> DynamicImport
DynamicImport --> OpenAI
DynamicImport --> Anthropic
DynamicImport --> Google
DynamicImport --> XAI
DynamicImport --> Others
UniversalClient --> AICore
AICore --> streamText
AICore --> generateText
MiddlewareChain --> PreRequest
MiddlewareChain --> StreamTransform
MiddlewareChain --> PostProcess
MiddlewareChain --> ErrorHandle
MiddlewareChain --> Logging
MiddlewareChain --> Cache
PluginManager --> FirstHooks
PluginManager --> SequentialHooks
PluginManager --> ParallelHooks
PluginManager --> StreamHooks
```
## 3. Package Structure
@ -94,24 +92,14 @@ packages/aiCore/
│ ├── clients/
│ │ ├── UniversalAiSdkClient.ts # Unified AI SDK client ✅
│ │ └── ApiClientFactory.ts # Client factory ✅
│ ├── middleware/ # Middleware system (planned)
│ │ ├── lifecycle/ # Lifecycle middleware
│ │ │ ├── PreRequestMiddleware.ts
│ │ │ ├── PostResponseMiddleware.ts
│ │ │ ├── ErrorHandlingMiddleware.ts
│ │ │ └── CacheMiddleware.ts
│ │ ├── core/ # Core middleware
│ │ │ ├── StreamProcessingMiddleware.ts
│ │ │ ├── RequestValidationMiddleware.ts
│ │ │ └── ResponseTransformMiddleware.ts
│ │ ├── feat/ # Feature middleware
│ │ │ ├── ThinkingMiddleware.ts
│ │ │ ├── ToolCallMiddleware.ts
│ │ │ └── WebSearchMiddleware.ts
│ │ ├── builder.ts # Middleware builder
│ │ ├── composer.ts # Middleware composer
│ │ ├── register.ts # Middleware registry
│ │ └── types.ts # Middleware type definitions
│ ├── middleware/ # Plugin system ✅
│ │ ├── types.ts # Plugin type definitions ✅
│ │ ├── manager.ts # Plugin manager ✅
│ │ ├── examples/ # Example plugins ✅
│ │ │ ├── example-plugins.ts # Example plugin implementations ✅
│ │ │ └── example-usage.ts # Usage examples ✅
│ │ ├── README.md # Plugin system documentation ✅
│ │ └── index.ts # Plugin module entry ✅
│ ├── services/ # High-level services (planned)
│ │ ├── AiCoreService.ts # Unified service entry point
│ │ ├── CompletionsService.ts # Text generation service
@ -125,33 +113,10 @@ packages/aiCore/
```
**Legend:**
- ✅ Implemented
- Planned: design complete, implementation pending
### 3.2 Package Configuration (package.json)
```json
{
"name": "@cherry-studio/ai-core",
"version": "1.0.0",
"description": "Cherry Studio AI Core - 基于 Vercel AI SDK 的统一 AI Provider 接口",
"main": "dist/index.js",
"types": "dist/index.d.ts",
"dependencies": {
"ai": "^4.3.16"
},
"peerDependenciesMeta": {
"@ai-sdk/openai": { "optional": true },
"@ai-sdk/anthropic": { "optional": true },
"@ai-sdk/google": { "optional": true },
"@ai-sdk/xai": { "optional": true }
},
"keywords": [
"ai", "sdk", "vercel-ai-sdk", "cherry-studio"
]
}
```
## 4. Core Components
### 4.1 Provider Registry (`providers/registry.ts`)
@ -159,12 +124,14 @@
Centrally manages registration and dynamic importing of all AI providers.
**Key features:**
- Dynamically imports AI SDK providers
- Provides a unified provider creation interface (see the registration sketch at the end of this section)
- Supports 19+ official AI SDK providers
- Type-safe provider configuration
**Core API:**
```typescript
export interface ProviderConfig {
id: string
@ -182,6 +149,7 @@ export class AiProviderRegistry {
```
**Supported providers:**
- OpenAI, Anthropic, Google, XAI
- Azure OpenAI, Amazon Bedrock, Google Vertex
- Groq, Together.ai, Fireworks, DeepSeek
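A minimal sketch of registering an additional provider, assuming the exported `registerProvider` helper accepts a `ProviderConfig` as defined above (the target package here is only an illustrative choice):
```typescript
import { registerProvider } from '@cherry-studio/ai-core'

// Hypothetical registration of a provider that is not in the built-in table.
// The import() callback keeps the dependency out of the main bundle.
registerProvider({
  id: 'openai-compatible',
  name: 'OpenAI Compatible',
  import: () => import('@ai-sdk/openai-compatible'), // illustrative package choice
  creatorFunctionName: 'createOpenAICompatible'
})
```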
@ -193,12 +161,14 @@ export class AiProviderRegistry {
Wraps different AI providers behind a unified interface.
**Key features:**
- Asynchronous initialization and dynamic loading
- Unified stream() and generate() methods
- Uses the AI SDK's streamText() and generateText() directly
- Configuration validation and error handling
**Core API:**
```typescript
export class UniversalAiSdkClient {
async initialize(): Promise<void>
@ -215,12 +185,14 @@ export class UniversalAiSdkClient {
Centralizes creation and management of AI SDK clients.
**Key features:**
- Unified client creation interface
- Smart caching and reuse
- Batch creation and health checks
- Error handling and retries
**Core API:**
```typescript
export class ApiClientFactory {
static async createAiSdkClient(providerId: string, options: any): Promise<UniversalAiSdkClient>
@ -231,56 +203,83 @@ export class ApiClientFactory {
}
```
### 4.4 Enhanced Middleware System (planned)
### 4.4 Hook-Style Plugin System ✅
Extends the middleware architecture so it can intervene across the full request lifecycle
A hook-based plugin architecture that provides a flexible extension system
**Lifecycle stages:**
1. **Pre-Request**: request pre-processing, parameter validation, cache checks
2. **Request**: the actual AI SDK call
3. **Stream Processing**: streaming response handling, real-time transforms
4. **Post-Response**: response post-processing, result aggregation
5. **Error Handling**: error handling, retries, fallback
**Hook types:**
**Middleware categories:**
1. **First Hooks**: stop at the first hook that returns a usable result
2. **Sequential Hooks**: run in order as a chain, each step may transform the data
3. **Parallel Hooks**: run concurrently, used for side effects
4. **Stream Hooks**: stream transforms handed straight to the AI SDK
**Lifecycle middleware:**
- `PreRequestMiddleware`: pre-request handling, parameter validation, permission checks
- `PostResponseMiddleware`: post-response handling, result transformation, metrics recording
- `ErrorHandlingMiddleware`: error handling, retry mechanism, fallback strategy
- `CacheMiddleware`: caching middleware, request and result caching
**Priority system:**
**Core middleware:**
- `StreamProcessingMiddleware`: stream processing, chunk transforms, progress tracking
- `RequestValidationMiddleware`: request validation, schema validation, safety checks
- `ResponseTransformMiddleware`: response transformation, format normalization, type conversion
- `pre`: pre-processing (-100 to -1)
- `normal`: standard processing (0 to 99)
- `post`: post-processing (100 to 199)
**Feature middleware:**
- `ThinkingMiddleware`: thinking-process middleware, records reasoning steps
- `ToolCallMiddleware`: tool-call middleware, handles function calls
- `WebSearchMiddleware`: web-search middleware, integrates search
**Core hooks:**
**First Hooks (first usable result)**
- `resolveModel`: model resolution, returns the first matching model
- `loadTemplate`: template loading, returns the first template found
**Sequential Hooks (chained transforms)**
- `transformParams`: parameter transformation, transforms request params in turn
- `transformResult`: result transformation, transforms the response in turn
**Parallel Hooks (concurrent side effects)**
- `onRequestStart`: fired when a request starts
- `onRequestEnd`: fired when a request completes
- `onError`: fired when an error occurs
**Stream Hooks (stream transforms)**
- `transformStream`: stream transform, returns an AI SDK transform function
**Plugin API design:**
**Middleware API design:**
```typescript
export interface Middleware {
export interface Plugin {
name: string
priority: number
execute(context: MiddlewareContext, next: () => Promise<void>): Promise<void>
enforce?: 'pre' | 'normal' | 'post'
// First hooks - stop at the first usable result
resolveModel?(params: ResolveModelParams): Promise<string | null>
loadTemplate?(params: LoadTemplateParams): Promise<Template | null>
// Sequential hooks - chained transforms
transformParams?(params: any, context: PluginContext): Promise<any>
transformResult?(result: any, context: PluginContext): Promise<any>
// Parallel hooks - concurrent side effects
onRequestStart?(context: PluginContext): Promise<void>
onRequestEnd?(context: PluginContext): Promise<void>
onError?(error: Error, context: PluginContext): Promise<void>
// Stream hooks - AI SDK stream transforms
transformStream?(context: PluginContext): Promise<(readable: ReadableStream) => ReadableStream>
}
export interface MiddlewareContext {
request: AiCoreRequest
response?: AiCoreResponse
error?: Error
export interface PluginContext {
request: any
response?: any
metadata: Record<string, any>
provider: string
model: string
}
export class MiddlewareChain {
use(middleware: Middleware): this
compose(): (context: MiddlewareContext) => Promise<void>
execute(context: MiddlewareContext): Promise<void>
export class PluginManager {
use(plugin: Plugin): this
executeFirstHook<T>(hookName: string, ...args: any[]): Promise<T | null>
executeSequentialHook<T>(hookName: string, initialValue: T, context: PluginContext): Promise<T>
executeParallelHook(hookName: string, ...args: any[]): Promise<void>
collectStreamTransforms(context: PluginContext): Promise<Array<(readable: ReadableStream) => ReadableStream>>
}
```
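A rough sketch of driving a single request through the manager, using the method names documented above; `callAiSdk` is a placeholder, not a real export, and the exact integration code may differ:
```typescript
// Hypothetical request driver built on the interfaces above.
declare function callAiSdk(model: string, params: any, transforms: unknown[]): Promise<any>

async function runWithPlugins(manager: PluginManager, context: PluginContext, params: any) {
  await manager.executeParallelHook('onRequestStart', context)
  // First: stop at the first plugin that resolves a model
  const model = (await manager.executeFirstHook<string>('resolveModel', context)) ?? context.model
  // Sequential: each plugin may transform the params in turn
  const finalParams = await manager.executeSequentialHook('transformParams', params, context)
  // Stream: collected transforms are handed to the AI SDK call
  const transforms = await manager.collectStreamTransforms(context)
  const result = await callAiSdk(model, finalParams, transforms) // placeholder for the actual call
  const finalResult = await manager.executeSequentialHook('transformResult', result, context)
  await manager.executeParallelHook('onRequestEnd', context)
  return finalResult
}
```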
@ -289,6 +288,7 @@ export class MiddlewareChain {
Serves as the package's main public interface, providing high-level AI features.
**Service methods:**
- `completions()`: text generation
- `streamCompletions()`: streaming text generation
- `generateObject()`: structured data generation
@ -296,16 +296,17 @@ export class MiddlewareChain {
- `embed()`: text embeddings
**API design:**
```typescript
export class AiCoreService {
constructor(middlewares?: Middleware[])
async completions(request: CompletionRequest): Promise<CompletionResponse>
async streamCompletions(request: CompletionRequest): Promise<StreamCompletionResponse>
async generateObject<T>(request: ObjectGenerationRequest): Promise<T>
async generateImage(request: ImageGenerationRequest): Promise<ImageResponse>
async embed(request: EmbeddingRequest): Promise<EmbeddingResponse>
use(middleware: Middleware): this
configure(config: AiCoreConfig): this
}
@ -313,49 +314,7 @@ export class AiCoreService {
## 5. Usage
### 5.1 Basic Usage
```typescript
import { createAiSdkClient } from '@cherry-studio/ai-core'
// Create an OpenAI client
const client = await createAiSdkClient('openai', {
apiKey: 'your-api-key'
})
// Streaming generation
const result = await client.stream({
modelId: 'gpt-4',
messages: [{ role: 'user', content: 'Hello!' }]
})
// Non-streaming generation
const response = await client.generate({
modelId: 'gpt-4',
messages: [{ role: 'user', content: 'Hello!' }]
})
```
### 5.2 Convenience Functions
```typescript
import { createOpenAIClient, streamGeneration } from '@cherry-studio/ai-core'
// Quickly create a client for a specific provider
const client = await createOpenAIClient({
apiKey: 'your-api-key'
})
// Convenient one-shot call
const result = await streamGeneration(
'anthropic',
'claude-3-sonnet',
[{ role: 'user', content: 'Hello!' }],
{ apiKey: 'your-api-key' }
)
```
### 5.3 Multi-Provider Support
### 5.1 Multi-Provider Support
```typescript
import { createAiSdkClient, AiCore } from '@cherry-studio/ai-core'
@ -371,7 +330,7 @@ const google = await createAiSdkClient('google', { apiKey: 'google-key' })
const xai = await createAiSdkClient('xai', { apiKey: 'xai-key' })
```
### 5.4 Integration in Cherry Studio
### 5.2 Integration in Cherry Studio
```typescript
// Replaces the existing XxxApiClient classes
@ -390,54 +349,10 @@ const createProviderClient = async (provider: CherryProvider) => {
}
```
### 5.5 Middleware Usage (planned)
```typescript
import {
AiCoreService,
ThinkingMiddleware,
CacheMiddleware,
LoggingMiddleware
} from '@cherry-studio/ai-core'
// Create a service with middleware
const aiService = new AiCoreService()
.use(new CacheMiddleware({ ttl: 3600 }))
.use(new LoggingMiddleware({ level: 'info' }))
.use(new ThinkingMiddleware({ recordSteps: true }))
// Use the enhanced service
const result = await aiService.streamCompletions({
provider: 'openai',
model: 'gpt-4',
messages: [{ role: 'user', content: 'Explain quantum computing' }],
middleware: {
thinking: { enabled: true },
cache: { enabled: true, key: 'quantum-explanation' }
}
})
// Custom middleware
class CustomMiddleware implements Middleware {
name = 'custom'
priority = 100
async execute(context: MiddlewareContext, next: () => Promise<void>): Promise<void> {
console.log('Before request:', context.request)
await next() // run the next middleware or the actual request
console.log('After response:', context.response)
}
}
aiService.use(new CustomMiddleware())
```
### 5.6 Complete Workflow Example (planned)
```typescript
import {
import {
createAiSdkClient,
AiCoreService,
MiddlewareChain,
@ -450,21 +365,27 @@ import {
const createEnhancedAiService = async () => {
// Create the middleware chain
const middlewareChain = new MiddlewareChain()
.use(new PreRequestMiddleware({
validateApiKey: true,
checkRateLimit: true
}))
.use(new StreamProcessingMiddleware({
enableProgressTracking: true,
chunkTransform: (chunk) => ({
...chunk,
timestamp: Date.now()
.use(
new PreRequestMiddleware({
validateApiKey: true,
checkRateLimit: true
})
}))
.use(new PostResponseMiddleware({
saveToHistory: true,
calculateMetrics: true
}))
)
.use(
new StreamProcessingMiddleware({
enableProgressTracking: true,
chunkTransform: (chunk) => ({
...chunk,
timestamp: Date.now()
})
})
)
.use(
new PostResponseMiddleware({
saveToHistory: true,
calculateMetrics: true
})
)
// Create the service instance
const service = new AiCoreService(middlewareChain.middlewares)
@ -478,9 +399,7 @@ const enhancedService = await createEnhancedAiService()
const response = await enhancedService.completions({
provider: 'anthropic',
model: 'claude-3-sonnet',
messages: [
{ role: 'user', content: 'Write a technical blog post about AI middleware' }
],
messages: [{ role: 'user', content: 'Write a technical blog post about AI middleware' }],
options: {
temperature: 0.7,
maxTokens: 2000
@ -494,34 +413,24 @@ const response = await enhancedService.completions({
})
```
## 6. Comparison with the Existing Architecture
## 6. Simplified Design Principles
| Aspect | Existing architecture | New architecture (AI Core package) |
|------|----------|-------------------|
| **Code organization** | Embedded in the main app | Standalone package with modular management |
| **Provider management** | A separate XxxApiClient per provider | Unified provider registry + factory |
| **Interface standardization** | Manual adaptation of each provider's differences | Unified AI SDK interface |
| **Type safety** | Partial type safety | Full TypeScript support |
| **Maintenance cost** | Each provider maintained separately | Maintained in one place; new providers onboard quickly |
| **Bundle size** | All providers bundled | Loaded on demand via dynamic imports |
| **Reusability** | Limited to the current project | Reusable across projects |
| **Extensibility** | Adding a new provider is complex | Just add an entry to the registry |
### 6.1 Minimal Wrapping Principle
## 7. Simplified Design Principles
### 7.1 Minimal Wrapping Principle
- Use AI SDK types directly instead of re-defining them
- Avoid over-abstraction and complex intermediate layers
- Stay consistent with the AI SDK's native API
### 7.2 Dynamic Import Optimization
### 6.2 Dynamic Import Optimization
```typescript
// Load on demand to reduce bundle size
const module = await import('@ai-sdk/openai')
const createOpenAI = module.createOpenAI
```
### 7.3 Type Safety
### 6.3 Type Safety
```typescript
// Use AI SDK types directly
import { streamText, generateText } from 'ai'
@ -530,159 +439,184 @@ import { streamText, generateText } from 'ai'
return streamText({ model, ...request })
```
### 7.4 Simplified Configuration
### 6.4 Simplified Configuration
```typescript
// Simplified provider configuration
interface ProviderConfig {
id: string // provider identifier
name: string // display name
import: () => Promise<any> // dynamic import function
creatorFunctionName: string // name of the creator function
id: string // provider identifier
name: string // display name
import: () => Promise<any> // dynamic import function
creatorFunctionName: string // name of the creator function
}
```
## 8. Technical Notes
## 7. Technical Notes
### 7.1 Dynamic Import Strategy
### 8.1 Dynamic Import Strategy
- **Load on demand**: only the providers the user actually uses are loaded
- **Caching**: avoid repeated imports and initialization
- **Error handling**: handle import failures gracefully (see the sketch below)
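A minimal sketch of how these three points combine; this is an assumed helper, not the registry's actual code. The module promise is cached per provider, and a failed import is evicted so it can be retried:
```typescript
const moduleCache = new Map<string, Promise<any>>()

async function loadProviderModule(config: ProviderConfig): Promise<any> {
  if (!moduleCache.has(config.id)) {
    const loading = config.import().catch((error) => {
      moduleCache.delete(config.id) // allow a later retry after a failed import
      throw new Error(`Failed to load provider "${config.id}": ${String(error)}`)
    })
    moduleCache.set(config.id, loading)
  }
  return moduleCache.get(config.id)!
}
```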
### 8.2 Dependency Management Strategy
### 7.2 Dependency Management Strategy
- **Core dependency**: the `ai` library is the required core dependency
- **Optional dependencies**: every `@ai-sdk/*` package is optional (see the sketch below)
- **Version compatibility**: supports AI SDK v3-v5
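Because every `@ai-sdk/*` package is optional, a missing peer dependency only surfaces when that provider is first requested. A hedged sketch (hypothetical helper) of turning that into an actionable error:
```typescript
async function importOptionalProvider(packageName: string): Promise<any> {
  try {
    return await import(packageName)
  } catch {
    throw new Error(
      `Provider package "${packageName}" is not installed. ` +
        `Add it to the consuming app, e.g. \`yarn add ${packageName}\`.`
    )
  }
}
```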
### 8.3 Caching Strategy
### 7.3 Caching Strategy
- **Client caching**: smart caching keyed by provider + options
- **Config hashing**: API keys are hashed safely (see the sketch below)
- **Lifecycle management**: supports cache cleanup and validation
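A condensed sketch of the cache-key idea, with assumed helper names: only a short fingerprint of the API key is embedded in the key, never the full secret.
```typescript
function hashApiKey(apiKey: string): string {
  // keep only a prefix/suffix fingerprint of the key
  return apiKey.length <= 12 ? apiKey.slice(0, 4) + '...' : apiKey.slice(0, 8) + '...' + apiKey.slice(-4)
}

function clientCacheKey(providerId: string, options: { apiKey?: string; baseURL?: string }): string {
  return JSON.stringify({
    providerId,
    apiKey: options.apiKey ? hashApiKey(options.apiKey) : undefined,
    baseURL: options.baseURL
  })
}
```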
## 9. Migration Strategy
## 8. Migration Strategy
### 8.1 Phase 1: Package Scaffolding (Week 1) ✅ Complete
### 9.1 Phase 1: Package Scaffolding (Week 1) ✅ Complete
1. ✅ Create the simplified package structure
2. ✅ Implement the provider registry
3. ✅ Create the unified client and factory
4. ✅ Configure the build and type system
### 9.2 Phase 2: Core Feature Completion (Week 2) ✅ Complete
### 8.2 Phase 2: Core Feature Completion (Week 2) ✅ Complete
1. ✅ Support 19+ official AI SDK providers
2. ✅ Implement caching and error handling
3. ✅ Refine type safety and API design
4. ✅ Add convenience functions and tooling
### 9.3 Phase 3: Integration Testing (Week 3) 🔄 In progress
### 8.3 Phase 3: Integration Testing (Week 3) 🔄 In progress
1. Integration testing inside Cherry Studio
2. Feature-completeness verification
3. Performance benchmarking
4. Compatibility fixes
### 9.4 Phase 4: Middleware System Implementation (Weeks 4-5) 📋 Planned
1. **Middleware core architecture**
- Implement `MiddlewareChain` and `MiddlewareContext`
- Create the middleware interface and base types
- Establish middleware lifecycle management
### 8.4 Phase 4: Plugin System Implementation ✅ Complete
2. **Lifecycle middleware**
- `PreRequestMiddleware`: request pre-processing
- `PostResponseMiddleware`: response post-processing
- `ErrorHandlingMiddleware`: error handling
- `CacheMiddleware`: caching
1. **Plugin core architecture**
3. **Core middleware**
- `StreamProcessingMiddleware`: stream processing
- `RequestValidationMiddleware`: request validation
- `ResponseTransformMiddleware`: response transformation
- Implement `PluginManager` and `PluginContext`
- Create the hook-style plugin interface and type system
- Establish the execution mechanisms for the four hook types
4. **Integrate into the existing architecture**
- Integrate the middleware chain into `UniversalAiSdkClient`
- Update `ApiClientFactory` to support middleware configuration
- Create the `AiCoreService` unified service interface
2. **Hook system**
### 9.5 Phase 5: Feature Middleware (Week 6) 📋 Planned
1. **Cherry Studio feature middleware**
- `ThinkingMiddleware`: records the thinking process
- `ToolCallMiddleware`: tool-call handling
- `WebSearchMiddleware`: web-search integration
- `First Hooks`: run until the first usable result
- `Sequential Hooks`: chained data transformation
- `Parallel Hooks`: concurrent side-effect handling
- `Stream Hooks`: AI SDK stream-transform integration
3. **Priority and ordering**
- `pre`/`normal`/`post` priority system
- Preserve plugin registration order
- Error handling and plugin isolation
4. **Integrate into the existing architecture**
- Integrate the plugin manager into `UniversalAiSdkClient`
- Update `ApiClientFactory` to support plugin configuration
- Create example plugins and usage documentation
### 8.5 Phase 5: Feature Plugin Extensions (planned)
1. **Cherry Studio feature plugins**
- `ThinkingPlugin`: record and extract the thinking process
- `ToolCallPlugin`: tool-call handling and enhancement
- `WebSearchPlugin`: web-search integration
2. **Advanced features**
- Middleware composer and builder
- Dynamic middleware loading
- Middleware configuration management
- Plugin composition and conditional execution
- Dynamic plugin loading system
- Plugin configuration management and persistence
### 8.6 Phase 6: Documentation and Release (Week 7) 📋 Planned
### 9.6 Phase 6: Documentation and Release (Week 7) 📋 Planned
1. Complete usage documentation and examples
2. Middleware development guide
2. Plugin development guide and best practices
3. Prepare the npm release
4. Establish a maintenance process
### 9.7 Phase 7: Ecosystem Expansion (Week 8+) 🚀 Future
1. Community middleware plugin system
2. Visual middleware orchestration tooling
### 8.7 Phase 7: Ecosystem Expansion (Week 8+) 🚀 Future
1. Community plugin ecosystem
2. Visual plugin orchestration tooling
3. Performance monitoring and analysis
4. Advanced caching strategies
4. Advanced caching and optimization strategies
## 10. Expected Benefits
## 9. Expected Benefits
### 9.1 Development Efficiency
### 10.1 Development Efficiency
- **90%** less time to onboard a new provider (just add a registry entry)
- **70%** less maintenance effort
- **95%** better developer experience (unified interface + type safety)
- **Independent development**: can be developed and tested independently of the main app
### 10.2 Code Quality Improvements
### 9.2 Code Quality Improvements
- Full TypeScript type safety
- Unified error handling
- Standardized AI SDK interface
- Better test coverage
### 10.3 Architectural Advantages
### 9.3 Architectural Advantages
- **Lightweight**: minimal wrapping layer
- **Reusable**: other projects can use it directly
- **Maintainable**: independent versioning and releases
- **Extensible**: a new provider only needs configuration
### 10.4 Ecosystem Value
### 9.4 Ecosystem Value
- Supports the full AI SDK ecosystem
- Can be published to npm independently
- Contributes value to the open-source community
- Establishes a unified AI infrastructure
## 11. Risk Assessment and Mitigation
## 10. Risk Assessment and Mitigation
### 10.1 Technical Risks
### 11.1 Technical Risks
- **AI SDK version compatibility**: multi-version compatibility strategy
- **Dependency management**: sensible use of peerDependencies
- **Type consistency**: use AI SDK types directly
- **Performance impact**: minimize wrapping-layer overhead
### 11.2 Migration Risks
### 10.2 Migration Risks
- **Feature parity**: ensure every existing feature can be implemented
- **API compatibility**: provide a smooth migration path
- **Integration complexity**: keep the integration simple
- **Learning curve**: provide clear usage documentation
## 12. Summary
## 11. Summary
The simplified AI Core architecture focuses on core value:
### 12.1 Core Value
### 11.1 Core Value
- **Unified interface**: one API surface for 19+ AI providers
- **On-demand loading**: only the providers a user actually uses are bundled
- **Type safety**: full TypeScript support
- **Lightweight and efficient**: minimal wrapping layer
### 12.2 Design Philosophy
- **Use the AI SDK directly**: avoid reinventing the wheel
- **Minimal wrapping**: add abstraction only when necessary
- **Developer friendly**: simple, easy-to-use API design
- **Ecosystem compatible**: make full use of the AI SDK ecosystem
### 11.2 Design Philosophy
- **Use the AI SDK directly**: avoid reinventing the wheel and make full use of native capabilities
- **Minimal wrapping**: add abstraction only when necessary, staying lightweight and efficient
- **Developer friendly**: simple API design with a familiar hook style
- **Ecosystem compatible**: leverage the AI SDK ecosystem and its native stream transforms
- **Plugin first**: hook-based extension model that supports flexible composition
### 11.3 Keys to Success
### 12.3 Keys to Success
1. **Keep it simple**: focus on core functionality and avoid over-engineering
2. **Test thoroughly**: ensure feature completeness and stability
3. **Migrate incrementally**: transition smoothly to reduce risk
4. **Document well**: support both quick onboarding and deep usage
This simplified architecture gives Cherry Studio a lightweight, efficient, and maintainable AI foundation while contributing a high-quality open-source package to the community.
This hook-based plugin architecture gives Cherry Studio a lightweight, efficient, and maintainable AI foundation. Through a familiar hook model and native AI SDK integration it offers developers powerful yet simple extensibility, while contributing a high-quality open-source package to the community.

View File

@ -40,7 +40,11 @@
"@ai-sdk/togetherai": "^0.2.14",
"@ai-sdk/vercel": "^0.0.1",
"@ai-sdk/xai": "^1.2.16",
"ai": "^4.3.16"
"ai": "^4.3.16",
"anthropic-vertex-ai": "^1.0.2",
"ollama-ai-provider": "^1.2.0",
"qwen-ai-provider": "^0.1.0",
"zhipu-ai-provider": "^0.1.1"
},
"peerDependenciesMeta": {
"@ai-sdk/amazon-bedrock": {

View File

@ -3,15 +3,15 @@
* API client factory
*/
import type { LanguageModelV1 } from 'ai'
import { aiProviderRegistry } from '../providers/registry'
import type { CacheStats as BaseCacheStats, ClientConfig as BaseClientConfig } from '../providers/types'
import { UniversalAiSdkClient } from './UniversalAiSdkClient'
// Client configuration interface
export interface ClientConfig extends BaseClientConfig {}
// Cache statistics
export interface CacheStats extends BaseCacheStats {}
export interface ClientConfig {
providerId: string
options?: any
}
// Error types
export class ClientFactoryError extends Error {
@ -30,62 +30,48 @@ export class ClientFactoryError extends Error {
* AI SDK client factory
*/
export class ApiClientFactory {
private static instance: ApiClientFactory
private static sdkClients = new Map<string, UniversalAiSdkClient>()
private static lastCleanup = new Date()
private constructor() {
// Private constructor for singleton pattern
}
public static getInstance(): ApiClientFactory {
if (!ApiClientFactory.instance) {
ApiClientFactory.instance = new ApiClientFactory()
}
return ApiClientFactory.instance
}
/**
* [NEW METHOD] Create a new universal client for ai-sdk providers.
* Create an AI SDK model client for the given provider.
* Performs the dynamic import via the registry and
* returns a LanguageModelV1 that can be passed directly to streamText/generateText.
*/
static async createAiSdkClient(providerId: string, options: any = {}): Promise<UniversalAiSdkClient> {
static async createClient(
providerId: string,
modelId: string = 'default',
options: any = {}
): Promise<LanguageModelV1> {
try {
// Check that the provider is supported
if (!aiProviderRegistry.isSupported(providerId)) {
throw new ClientFactoryError(`Provider "${providerId}" is not supported`, providerId)
}
// Generate a cache key - use a finer-grained key for providers with auth options
const cacheKey = this.generateCacheKey(providerId, options)
// Check the cache
if (this.sdkClients.has(cacheKey)) {
const cachedClient = this.sdkClients.get(cacheKey)!
// Verify that the cached client is still valid
if (cachedClient.isInitialized() && cachedClient.validateConfig()) {
return cachedClient
} else {
// If the cached client is invalid, clean it up
this.sdkClients.delete(cacheKey)
cachedClient.cleanup()
}
// Fetch the provider configuration
const providerConfig = aiProviderRegistry.getProvider(providerId)
if (!providerConfig) {
throw new ClientFactoryError(`Provider "${providerId}" is not registered`, providerId)
}
// 1. Create a new universal client instance
const client = new UniversalAiSdkClient(providerId, options)
// Dynamically import the provider module
const module = await providerConfig.import()
// 2. Initialize it (this performs the dynamic import)
await client.initialize()
// Look up the creator function
const creatorFunction = module[providerConfig.creatorFunctionName]
// 3. Validate the configuration
if (!client.validateConfig()) {
throw new ClientFactoryError(`Invalid configuration for provider "${providerId}"`, providerId)
if (typeof creatorFunction !== 'function') {
throw new ClientFactoryError(
`Creator function "${providerConfig.creatorFunctionName}" not found in the imported module for provider "${providerId}"`
)
}
// 4. Cache and return
this.sdkClients.set(cacheKey, client)
return client
// Create the provider instance
const provider = creatorFunction(options)
// Return the model instance
if (typeof provider === 'function') {
return provider(modelId)
} else {
throw new ClientFactoryError(`Unknown model access pattern for provider "${providerId}"`)
}
} catch (error) {
if (error instanceof ClientFactoryError) {
throw error
@ -99,243 +85,39 @@ export class ApiClientFactory {
}
/**
* Get a cached client
*/
static getCachedClient(providerId: string, options: any = {}): UniversalAiSdkClient | undefined {
const cacheKey = this.generateCacheKey(providerId, options)
return this.sdkClients.get(cacheKey)
}
/**
* Check whether a cached client exists
*/
static hasCachedClient(providerId: string, options: any = {}): boolean {
const cacheKey = this.generateCacheKey(providerId, options)
return this.sdkClients.has(cacheKey)
}
/**
* Generate a cache key
*/
private static generateCacheKey(providerId: string, options: any): string {
// Build a key from the relevant configuration without including full sensitive values
const keyData = {
providerId,
apiKey: options.apiKey ? this.hashApiKey(options.apiKey) : undefined,
baseURL: options.baseURL,
organization: options.organization,
project: options.project,
// Other relevant but non-sensitive configuration
model: options.model,
region: options.region
}
// Remove undefined values
Object.keys(keyData).forEach((key) => {
if (keyData[key as keyof typeof keyData] === undefined) {
delete keyData[key as keyof typeof keyData]
}
})
return JSON.stringify(keyData)
}
/**
* Hash the API key
*/
private static hashApiKey(apiKey: string): string {
// Simple hash: keep only the first 8 and last 4 characters
if (apiKey.length <= 12) {
return apiKey.slice(0, 4) + '...'
}
return apiKey.slice(0, 8) + '...' + apiKey.slice(-4)
}
/**
* Clear all cached clients
*/
static clearCache(): void {
// Clean up every client
this.sdkClients.forEach((client) => {
try {
client.cleanup()
} catch (error) {
console.warn('Error cleaning up client:', error)
}
})
this.sdkClients.clear()
this.lastCleanup = new Date()
}
/**
* Clear the cache for a specific provider
*/
static clearProviderCache(providerId: string): void {
const keysToDelete: string[] = []
this.sdkClients.forEach((client, key) => {
if (key.includes(`"providerId":"${providerId}"`)) {
try {
client.cleanup()
} catch (error) {
console.warn(`Error cleaning up client for ${providerId}:`, error)
}
keysToDelete.push(key)
}
})
keysToDelete.forEach((key) => {
this.sdkClients.delete(key)
})
}
/**
* Get cache statistics
*/
static getCacheStats(): CacheStats {
return {
size: this.sdkClients.size,
keys: Array.from(this.sdkClients.keys()),
lastCleanup: this.lastCleanup
}
}
/**
* Warm up clients
*/
static async warmupClients(configs: ClientConfig[]): Promise<void> {
const warmupPromises = configs.map(async (config) => {
try {
const { providerId, ...options } = config
await this.createAiSdkClient(providerId, options)
console.log(`✅ Warmed up client for provider: ${providerId}`)
} catch (error) {
console.warn(`⚠️ Failed to warm up client for ${config.providerId}:`, error)
}
})
await Promise.allSettled(warmupPromises)
}
/**
* Get supported providers info
* Get the supported providers
*/
static getSupportedProviders(): Array<{
id: string
name: string
hasCachedClient: boolean
}> {
const providers = aiProviderRegistry.getAllProviders()
return providers.map((provider) => ({
return aiProviderRegistry.getAllProviders().map((provider) => ({
id: provider.id,
name: provider.name,
hasCachedClient: Array.from(this.sdkClients.keys()).some((key) => key.includes(`"providerId":"${provider.id}"`))
name: provider.name
}))
}
/**
* Create multiple clients in batch
* Get client info for a provider
*/
static async createMultipleClients(configs: ClientConfig[]): Promise<{
success: Array<{ providerId: string; client: UniversalAiSdkClient }>
errors: Array<{ providerId: string; error: Error }>
}> {
const success: Array<{ providerId: string; client: UniversalAiSdkClient }> = []
const errors: Array<{ providerId: string; error: Error }> = []
await Promise.allSettled(
configs.map(async (config) => {
try {
const { providerId, ...options } = config
const client = await this.createAiSdkClient(providerId, options)
success.push({ providerId, client })
} catch (error) {
errors.push({
providerId: config.providerId,
error: error instanceof Error ? error : new Error('Unknown error')
})
}
})
)
return { success, errors }
}
/**
* Health check - verify cached clients
*/
static async healthCheck(): Promise<{
healthy: number
unhealthy: number
total: number
details: Array<{
providerId: string
status: 'healthy' | 'unhealthy'
error?: string
}>
}> {
const details: Array<{
providerId: string
status: 'healthy' | 'unhealthy'
error?: string
}> = []
let healthy = 0
let unhealthy = 0
for (const [, client] of this.sdkClients) {
try {
const info = client.getProviderInfo()
if (client.isInitialized() && client.validateConfig()) {
healthy++
details.push({
providerId: info.id,
status: 'healthy'
})
} else {
unhealthy++
details.push({
providerId: info.id,
status: 'unhealthy',
error: 'Client not properly initialized or invalid config'
})
}
} catch (error) {
unhealthy++
details.push({
providerId: 'unknown',
status: 'unhealthy',
error: error instanceof Error ? error.message : 'Unknown error'
})
}
}
static getClientInfo(providerId: string): {
id: string
name: string
isSupported: boolean
} {
const provider = aiProviderRegistry.getProvider(providerId)
return {
healthy,
unhealthy,
total: this.sdkClients.size,
details
id: providerId,
name: provider?.name || providerId,
isSupported: aiProviderRegistry.isSupported(providerId)
}
}
}
// Export the singleton instance and convenience functions
export const apiClientFactory = ApiClientFactory.getInstance()
// Convenience export functions
export const createClient = (providerId: string, modelId?: string, options?: any) =>
ApiClientFactory.createClient(providerId, modelId, options)
// Convenience functions
export const createAiSdkClient = (providerId: string, options?: any) =>
ApiClientFactory.createAiSdkClient(providerId, options)
export const getSupportedProviders = () => ApiClientFactory.getSupportedProviders()
export const getCachedClient = (providerId: string, options?: any) =>
ApiClientFactory.getCachedClient(providerId, options)
export const clearCache = () => ApiClientFactory.clearCache()
export const warmupClients = (configs: ClientConfig[]) => ApiClientFactory.warmupClients(configs)
export const healthCheck = () => ApiClientFactory.healthCheck()
// Default export
export default ApiClientFactory
export const getClientInfo = (providerId: string) => ApiClientFactory.getClientInfo(providerId)

View File

@ -1,201 +1,81 @@
/**
* Universal AI SDK Client
* Unified AI SDK client
*/
import { generateText, streamText } from 'ai'
import { aiProviderRegistry } from '../providers/registry'
/**
* Universal AI SDK Client
* Unified AI SDK client implementation
*/
export class UniversalAiSdkClient {
private provider: any // The instantiated provider (e.g., from createOpenAI)
private initialized = false
private providerConfig: any
import { generateObject, generateText, streamObject, streamText } from 'ai'
import { ApiClientFactory } from './ApiClientFactory'
/**
* Unified AI SDK client
* Entry point for model calls
*/
export class UniversalAiSdkClient {
constructor(
private providerName: string,
private options: any // API keys, etc.
private readonly providerId: string,
private readonly options: any = {}
) {}
/**
* Initialize the client - performs the dynamic import
* Streaming text generation
* Calls AI SDK streamText directly
*/
async initialize(): Promise<void> {
if (this.initialized) return
// Fetch the provider configuration
this.providerConfig = aiProviderRegistry.getProvider(this.providerName)
if (!this.providerConfig) {
throw new Error(`Provider "${this.providerName}" is not registered.`)
}
try {
// Use the registry's dynamic import
const module = await this.providerConfig.import()
// Look up the creator function
const creatorFunction = module[this.providerConfig.creatorFunctionName]
if (typeof creatorFunction !== 'function') {
throw new Error(
`Creator function "${this.providerConfig.creatorFunctionName}" not found in the imported module for provider "${this.providerName}".`
)
}
// Create the provider instance
this.provider = creatorFunction(this.options)
this.initialized = true
} catch (error) {
if (error instanceof Error) {
throw new Error(`Failed to initialize provider "${this.providerName}": ${error.message}`)
}
throw new Error(`An unknown error occurred while initializing provider "${this.providerName}".`)
}
}
/**
* Whether the client has been initialized
*/
isInitialized(): boolean {
return this.initialized
}
/**
* Get a model instance
*/
private getModel(modelId: string): any {
if (!this.initialized) throw new Error('Client not initialized')
// Most providers support the direct-call pattern: provider(modelId)
if (typeof this.provider === 'function') {
return this.provider(modelId)
}
throw new Error(`Unknown model access pattern for provider "${this.providerName}"`)
}
/**
* Calls the standard ai-sdk function directly
*/
async stream(request: any): Promise<any> {
if (!this.initialized) await this.initialize()
const model = this.getModel(request.modelId)
// 直接调用标准ai-sdk函数
return streamText({
async streamText(modelId: string, params: Omit<Parameters<typeof streamText>[0], 'model'>) {
const model = await ApiClientFactory.createClient(this.providerId, modelId, this.options)
return await streamText({
model,
...request
...params
})
}
/**
* Non-streaming generation
* Text generation
* Calls AI SDK generateText directly
*/
async generate(request: any): Promise<any> {
if (!this.initialized) await this.initialize()
const model = this.getModel(request.modelId)
return generateText({
async generateText(modelId: string, params: Omit<Parameters<typeof generateText>[0], 'model'>) {
const model = await ApiClientFactory.createClient(this.providerId, modelId, this.options)
return await generateText({
model,
...request
...params
})
}
/**
* Validate the configuration
* Structured object generation
* Calls AI SDK generateObject directly
*/
validateConfig(): boolean {
try {
// Basic validation
if (!this.providerName) return false
if (!this.providerConfig) return false
// API key validation (when required)
if (this.requiresApiKey() && !this.options?.apiKey) {
return false
}
return true
} catch {
return false
}
async generateObject(modelId: string, params: Omit<Parameters<typeof generateObject>[0], 'model'>) {
const model = await ApiClientFactory.createClient(this.providerId, modelId, this.options)
return await generateObject({
model,
...params
})
}
/**
* Whether the provider requires an API key
* Streaming structured object generation
* Calls AI SDK streamObject directly
*/
private requiresApiKey(): boolean {
// Most cloud providers require an API key
const noApiKeyProviders = ['local', 'ollama'] // locally running providers
return !noApiKeyProviders.includes(this.providerName)
async streamObject(modelId: string, params: Omit<Parameters<typeof streamObject>[0], 'model'>) {
const model = await ApiClientFactory.createClient(this.providerId, modelId, this.options)
return await streamObject({
model,
...params
})
}
/**
* Get provider information
* Get client information
*/
getProviderInfo(): {
id: string
name: string
isInitialized: boolean
} {
return {
id: this.providerName,
name: this.providerConfig?.name || this.providerName,
isInitialized: this.initialized
}
}
/**
* Clean up resources
*/
cleanup(): void {
this.provider = null
this.initialized = false
this.providerConfig = null
getClientInfo() {
return ApiClientFactory.getClientInfo(this.providerId)
}
}
// Factory function for convenient client creation
export async function createUniversalClient(providerName: string, options: any = {}): Promise<UniversalAiSdkClient> {
const client = new UniversalAiSdkClient(providerName, options)
await client.initialize()
return client
}
// Convenience streaming generation helper
export async function streamGeneration(
providerName: string,
modelId: string,
messages: any[],
options: any = {}
): Promise<any> {
const client = await createUniversalClient(providerName, options)
return client.stream({
modelId,
messages,
...options
})
}
// Convenience non-streaming generation helper
export async function generateCompletion(
providerName: string,
modelId: string,
messages: any[],
options: any = {}
): Promise<any> {
const client = await createUniversalClient(providerName, options)
return client.generate({
modelId,
messages,
...options
})
/**
* Factory helper for creating a client
*/
export function createUniversalClient(providerId: string, options: any = {}): UniversalAiSdkClient {
return new UniversalAiSdkClient(providerId, options)
}

View File

@ -3,30 +3,28 @@
* Unified AI Provider interface built on the Vercel AI SDK
*/
// Imports for internal use
import { ApiClientFactory } from './clients/ApiClientFactory'
import { createUniversalClient } from './clients/UniversalAiSdkClient'
import { aiProviderRegistry, isProviderSupported } from './providers/registry'
// Core exports
export { ApiClientFactory, apiClientFactory } from './clients/ApiClientFactory'
export { UniversalAiSdkClient } from './clients/UniversalAiSdkClient'
export { aiProviderRegistry, PROVIDER_REGISTRY } from './providers/registry'
export { ApiClientFactory } from './clients/ApiClientFactory'
export { createUniversalClient, UniversalAiSdkClient } from './clients/UniversalAiSdkClient'
export { aiProviderRegistry } from './providers/registry'
// Type exports
export type { CacheStats, ClientConfig, ClientFactoryError } from './clients/ApiClientFactory'
export type { ClientFactoryError } from './clients/ApiClientFactory'
export type { ProviderConfig } from './providers/registry'
export type { ProviderError } from './providers/types'
// Convenience function exports
export { clearCache, createAiSdkClient, getCachedClient, healthCheck, warmupClients } from './clients/ApiClientFactory'
export { createUniversalClient, generateCompletion, streamGeneration } from './clients/UniversalAiSdkClient'
export { createClient, getClientInfo, getSupportedProviders } from './clients/ApiClientFactory'
export { getAllProviders, getProvider, isProviderSupported, registerProvider } from './providers/registry'
// Default export - the main factory class
export { ApiClientFactory as default } from './clients/ApiClientFactory'
// Imports for internal use
import { ApiClientFactory } from './clients/ApiClientFactory'
import { clearCache, createAiSdkClient, healthCheck } from './clients/ApiClientFactory'
import { aiProviderRegistry } from './providers/registry'
import { getAllProviders, isProviderSupported } from './providers/registry'
// Package info
export const AI_CORE_VERSION = '1.0.0'
export const AI_CORE_NAME = '@cherry-studio/ai-core'
@ -37,13 +35,18 @@ export const AiCore = {
name: AI_CORE_NAME,
// Quickly create a client
async createClient(providerId: string, options: any = {}) {
return createAiSdkClient(providerId, options)
async createClient(providerId: string, modelId: string = 'default', options: any = {}) {
return ApiClientFactory.createClient(providerId, modelId, options)
},
// Create a universal client
createUniversalClient(providerId: string, options: any = {}) {
return createUniversalClient(providerId, options)
},
// Get the supported providers
getSupportedProviders() {
return getAllProviders()
return ApiClientFactory.getSupportedProviders()
},
// Check provider support
@ -51,38 +54,27 @@ export const AiCore = {
return isProviderSupported(providerId)
},
// Get cache statistics
getCacheStats() {
return ApiClientFactory.getCacheStats()
},
// Health check
async healthCheck() {
return healthCheck()
},
// Clean up all resources
cleanup() {
clearCache()
aiProviderRegistry.cleanup()
// Get client info
getClientInfo(providerId: string) {
return ApiClientFactory.getClientInfo(providerId)
}
}
// Convenience helpers for creating pre-configured clients
export const createOpenAIClient = async (options: { apiKey: string; baseURL?: string }) => {
return createAiSdkClient('openai', options)
export const createOpenAIClient = (options: { apiKey: string; baseURL?: string }) => {
return createUniversalClient('openai', options)
}
export const createAnthropicClient = async (options: { apiKey: string; baseURL?: string }) => {
return createAiSdkClient('anthropic', options)
export const createAnthropicClient = (options: { apiKey: string; baseURL?: string }) => {
return createUniversalClient('anthropic', options)
}
export const createGoogleClient = async (options: { apiKey: string; baseURL?: string }) => {
return createAiSdkClient('google', options)
export const createGoogleClient = (options: { apiKey: string; baseURL?: string }) => {
return createUniversalClient('google', options)
}
export const createXAIClient = async (options: { apiKey: string; baseURL?: string }) => {
return createAiSdkClient('xai', options)
export const createXAIClient = (options: { apiKey: string; baseURL?: string }) => {
return createUniversalClient('xai', options)
}
// Debugging and development tools
@ -98,13 +90,13 @@ export const DevTools = {
// Test a provider connection
async testProvider(providerId: string, options: any) {
try {
const client = await createAiSdkClient(providerId, options)
const info = client.getProviderInfo()
const client = createUniversalClient(providerId, options)
const info = client.getClientInfo()
return {
success: true,
providerId: info.id,
name: info.name,
isInitialized: info.isInitialized
isSupported: info.isSupported
}
} catch (error) {
return {
@ -115,16 +107,17 @@ export const DevTools = {
}
},
// Get detailed cache info
getCacheDetails() {
const stats = ApiClientFactory.getCacheStats()
// Get detailed provider info
getProviderDetails() {
const providers = aiProviderRegistry.getAllProviders()
return {
cacheStats: stats,
supportedProviders: providers.length,
registeredProviders: aiProviderRegistry.getAllProviders().length,
activeClients: stats.size
registeredProviders: providers.length,
providers: providers.map((p) => ({
id: p.id,
name: p.name
}))
}
}
}

View File

@ -0,0 +1,259 @@
# AI Core Plugin System
Supports four hook types: **First**, **Sequential**, **Parallel**, and **Stream**
## 🎯 Design Philosophy
Borrows the proven plugin ideas of Rollup/Vite:
- **Clear semantics**: each hook type has its own execution semantics
- **Type safe**: full TypeScript support
- **Performance**: First short-circuits, Parallel runs concurrently, Sequential chains
- **Easy to extend**: `enforce` ordering + functional grouping
## 📋 Hook Types
### 🥇 First Hooks - First Usable Result
```typescript
// Only the first plugin that returns a value wins; used for resolution and lookups
resolveModel?: (modelId: string, context: AiRequestContext) => string | null
loadTemplate?: (templateName: string, context: AiRequestContext) => any | null
```
### 🔄 Sequential Hooks - Chained Data Transformation
```typescript
// Run in order as a chain; each plugin may modify the data
transformParams?: (params: any, context: AiRequestContext) => any
transformResult?: (result: any, context: AiRequestContext) => any
```
### ⚡ Parallel Hooks - Concurrent Side Effects
```typescript
// Run concurrently; used for side effects such as logging and monitoring
onRequestStart?: (context: AiRequestContext) => void
onRequestEnd?: (context: AiRequestContext, result: any) => void
onError?: (error: Error, context: AiRequestContext) => void
```
### 🌊 Stream Hooks - Stream Processing
```typescript
// Uses the AI SDK's TransformStream directly
transformStream?: () => (options) => TransformStream<TextStreamPart, TextStreamPart>
```
## 🚀 Quick Start
### Basic Usage
```typescript
import { PluginManager, createContext, definePlugin } from '@cherry-studio/ai-core/middleware'
// Create the plugin manager
const pluginManager = new PluginManager()
// Register a plugin
pluginManager.use({
name: 'my-plugin',
async transformParams(params, context) {
return { ...params, temperature: 0.7 }
}
})
// Use the plugins
const context = createContext('openai', 'gpt-4', { messages: [] })
const transformedParams = await pluginManager.executeSequential(
'transformParams',
{ messages: [{ role: 'user', content: 'Hello' }] },
context
)
```
### Complete Example
```typescript
import {
PluginManager,
ModelAliasPlugin,
LoggingPlugin,
ParamsValidationPlugin,
createContext
} from '@cherry-studio/ai-core/middleware'
// Create the plugin manager
const manager = new PluginManager([
ModelAliasPlugin, // model alias resolution
ParamsValidationPlugin, // parameter validation
LoggingPlugin // logging
])
// AI request flow
async function aiRequest(providerId: string, modelId: string, params: any) {
const context = createContext(providerId, modelId, params)
try {
// 1. [Parallel] fire the request-start event
await manager.executeParallel('onRequestStart', context)
// 2. [First] resolve the model alias
const resolvedModel = await manager.executeFirst('resolveModel', modelId, context)
context.modelId = resolvedModel || modelId
// 3. [Sequential] transform the request parameters
const transformedParams = await manager.executeSequential('transformParams', params, context)
// 4. [Stream] collect the stream transforms (the AI SDK natively accepts an array)
const streamTransforms = manager.collectStreamTransforms()
// 5. Call the AI SDK (implementation omitted here)
const result = await callAiSdk(transformedParams, streamTransforms)
// 6. [Sequential] transform the response result
const transformedResult = await manager.executeSequential('transformResult', result, context)
// 7. [Parallel] fire the request-end event
await manager.executeParallel('onRequestEnd', context, transformedResult)
return transformedResult
} catch (error) {
// 8. [Parallel] fire the error event
await manager.executeParallel('onError', context, undefined, error)
throw error
}
}
```
## 🔧 Custom Plugins
### Model Alias Plugin
```typescript
const ModelAliasPlugin = definePlugin({
name: 'model-alias',
enforce: 'pre', // runs first
async resolveModel(modelId) {
const aliases = {
gpt4: 'gpt-4-turbo-preview',
claude: 'claude-3-sonnet-20240229'
}
return aliases[modelId] || null
}
})
```
### Parameter Validation Plugin
```typescript
const ValidationPlugin = definePlugin({
name: 'validation',
async transformParams(params) {
if (!params.messages) {
throw new Error('messages is required')
}
return {
...params,
temperature: params.temperature ?? 0.7,
max_tokens: params.max_tokens ?? 4096
}
}
})
```
### Monitoring Plugin
```typescript
const MonitoringPlugin = definePlugin({
name: 'monitoring',
enforce: 'post', // runs last
async onRequestEnd(context, result) {
const duration = Date.now() - context.startTime
console.log(`Request took ${duration}ms`)
}
})
```
### Content Filter Plugin
```typescript
const FilterPlugin = definePlugin({
name: 'content-filter',
transformStream() {
return () =>
new TransformStream({
transform(chunk, controller) {
if (chunk.type === 'text-delta') {
const filtered = chunk.textDelta.replace(/敏感词/g, '***')
controller.enqueue({ ...chunk, textDelta: filtered })
} else {
controller.enqueue(chunk)
}
}
})
}
})
```
## 📊 Execution Order
### Plugin Ordering
```
enforce: 'pre' → normal → enforce: 'post'
```
### Hook Execution Flow
```mermaid
graph TD
A[Request start] --> B[onRequestStart runs in parallel]
B --> C[resolveModel first usable result]
C --> D[loadTemplate first usable result]
D --> E[transformParams runs sequentially]
E --> F[collectStreamTransforms]
F --> G[AI SDK call]
G --> H[transformResult runs sequentially]
H --> I[onRequestEnd runs in parallel]
G --> J[Exception handling]
J --> K[onError runs in parallel]
```
## 💡 Best Practices
1. **Single purpose**: each plugin focuses on one concern
2. **Idempotency**: plugins should be idempotent; running them again must not produce extra side effects
3. **Error handling**: handle exceptions inside the plugin rather than letting them propagate upward
4. **Performance**: pick the right hook type (First vs Sequential vs Parallel)
5. **Naming**: use semantically meaningful plugin names
## 🔍 Debugging Tools
```typescript
// Inspect plugin statistics
const stats = manager.getStats()
console.log('Plugin stats:', stats)
// List all plugins
const plugins = manager.getPlugins()
console.log(
'Registered plugins:',
plugins.map((p) => p.name)
)
```
## ⚡ Performance Benefits
- **First hooks**: stop as soon as a result is found, avoiding wasted work
- **Parallel hooks**: truly concurrent execution that does not block the main flow
- **Sequential hooks**: guarantee ordered data transformation
- **Stream hooks**: integrate directly with the AI SDK with zero overhead
This design balances simplicity with power, giving AI Core a flexible and efficient extension mechanism.

View File

@ -0,0 +1,192 @@
import type { AiPlugin } from '../types'
/**
* First hook example: model alias plugin
*/
export const ModelAliasPlugin: AiPlugin = {
name: 'model-alias',
enforce: 'pre',
async resolveModel(modelId) {
const aliases: Record<string, string> = {
gpt4: 'gpt-4-turbo-preview',
claude: 'claude-3-sonnet-20240229',
gemini: 'gemini-pro'
}
return aliases[modelId] || null
}
}
/**
* Sequential hook example: parameter validation and transformation plugin
*/
export const ParamsValidationPlugin: AiPlugin = {
name: 'params-validation',
async transformParams(params) {
// Parameter validation
if (!params.messages || !Array.isArray(params.messages)) {
throw new Error('Invalid messages parameter')
}
// Parameter transformation: add default configuration
return {
...params,
temperature: params.temperature ?? 0.7,
max_tokens: params.max_tokens ?? 4096,
stream: params.stream ?? true
}
},
async transformResult(result, context) {
// Result post-processing: attach metadata
return {
...result,
metadata: {
...result.metadata,
processedAt: new Date().toISOString(),
provider: context.providerId,
model: context.modelId
}
}
}
}
/**
* Parallel hook example: logging plugin
*/
export const LoggingPlugin: AiPlugin = {
name: 'logging',
async onRequestStart(context) {
console.log(`🚀 AI request started: ${context.providerId}/${context.modelId}`, {
requestId: context.requestId,
timestamp: new Date().toISOString()
})
},
async onRequestEnd(context, result) {
const duration = Date.now() - context.startTime
console.log(`✅ AI request finished: ${context.requestId} (${duration}ms)`, {
provider: context.providerId,
model: context.modelId,
hasResult: !!result
})
},
async onError(error, context) {
const duration = Date.now() - context.startTime
console.error(`❌ AI request failed: ${context.requestId} (${duration}ms)`, {
provider: context.providerId,
model: context.modelId,
error: error.message,
stack: error.stack
})
}
}
/**
* Parallel hook example: performance monitoring plugin
*/
export const PerformancePlugin: AiPlugin = {
name: 'performance',
enforce: 'post',
async onRequestEnd(context) {
const duration = Date.now() - context.startTime
// Record performance metrics
const metrics = {
requestId: context.requestId,
provider: context.providerId,
model: context.modelId,
duration,
timestamp: context.startTime,
success: true
}
// Send to a monitoring system (example only)
// await sendMetrics(metrics)
console.log('📊 Performance metrics:', metrics)
},
async onError(error, context) {
const duration = Date.now() - context.startTime
const metrics = {
requestId: context.requestId,
provider: context.providerId,
model: context.modelId,
duration,
timestamp: context.startTime,
success: false,
errorType: error.constructor.name
}
console.log('📊 Error metrics:', metrics)
}
}
/**
* Stream hook example: content filter plugin
*/
export const ContentFilterPlugin: AiPlugin = {
name: 'content-filter',
transformStream() {
return () =>
new TransformStream({
transform(chunk, controller) {
// Filter sensitive content
if (chunk.type === 'text-delta') {
const filtered = chunk.textDelta.replace(/\b(敏感词|违禁词)\b/g, '***')
controller.enqueue({
...chunk,
textDelta: filtered
})
} else {
controller.enqueue(chunk)
}
}
})
}
}
/**
* First hook example: template loader plugin
*/
export const TemplatePlugin: AiPlugin = {
name: 'template-loader',
async loadTemplate(templateName) {
const templates: Record<string, any> = {
chat: {
systemPrompt: 'You are a helpful AI assistant',
temperature: 0.7
},
coding: {
systemPrompt: 'You are a professional coding assistant; provide clear, high-quality code',
temperature: 0.3
},
creative: {
systemPrompt: 'You are a creative writing assistant; use your imagination',
temperature: 0.9
}
}
return templates[templateName] || null
}
}
/**
* Default plugin set
*/
export const defaultPlugins: AiPlugin[] = [
ModelAliasPlugin,
TemplatePlugin,
ParamsValidationPlugin,
LoggingPlugin,
PerformancePlugin,
ContentFilterPlugin
]

View File

@ -0,0 +1,102 @@
import { openai } from '@ai-sdk/openai'
import { streamText } from 'ai'
import { createContext, PluginManager } from '..'
import { ContentFilterPlugin, LoggingPlugin } from './example-plugins'
/**
* Complete example: a plugin-enhanced AI SDK request
*/
export async function exampleAiRequest() {
// 1. Create the plugin manager
const pluginManager = new PluginManager([LoggingPlugin, ContentFilterPlugin])
// 2. Create the request context
const context = createContext('openai', 'gpt-4', {
messages: [{ role: 'user', content: 'Hello!' }]
})
try {
// 3. Fire the request-start event
await pluginManager.executeParallel('onRequestStart', context)
// 4. Resolve the model alias
// const resolvedModel = await pluginManager.executeFirst('resolveModel', 'gpt-4', context)
// const modelId = resolvedModel || 'gpt-4'
// 5. Transform the request parameters
const params = {
messages: [{ role: 'user' as const, content: 'Hello, AI!' }],
temperature: 0.7
}
const transformedParams = await pluginManager.executeSequential('transformParams', params, context)
// 6. Collect the stream transforms (key point: the AI SDK natively accepts an array!)
const streamTransforms = pluginManager.collectStreamTransforms()
// 7. Call the AI SDK, passing the array of transform factories directly
const result = await streamText({
model: openai('gpt-4'),
...transformedParams,
experimental_transform: streamTransforms // pass the factory array directly
})
// 8. Process the result
let fullText = ''
for await (const textPart of result.textStream) {
fullText += textPart
console.log('Streaming:', textPart)
}
// 9. Transform the final result
const finalResult = { text: fullText, usage: await result.usage }
const transformedResult = await pluginManager.executeSequential('transformResult', finalResult, context)
// 10. Fire the completion event
await pluginManager.executeParallel('onRequestEnd', context, transformedResult)
return transformedResult
} catch (error) {
// 11. Fire the error event
await pluginManager.executeParallel('onError', context, undefined, error as Error)
throw error
}
}
/**
* Demonstrates how stream transforms are collected and used
*/
export function demonstrateStreamTransforms() {
const pluginManager = new PluginManager([
ContentFilterPlugin,
{
name: 'text-replacer',
transformStream() {
return () =>
new TransformStream({
transform(chunk, controller) {
if (chunk.type === 'text-delta') {
const replaced = chunk.textDelta.replace(/hello/gi, 'hi')
controller.enqueue({ ...chunk, textDelta: replaced })
} else {
controller.enqueue(chunk)
}
}
})
}
}
])
// Collect all stream transforms
const transforms = pluginManager.collectStreamTransforms()
console.log(`Collected ${transforms.length} stream transforms`)
// Each transform can also be used on its own
transforms.forEach((factory, index) => {
console.log(`Transform ${index + 1} is ready`)
const transform = factory({ stopStream: () => {} })
console.log('Transform created:', transform)
})
return transforms
}

View File

@ -0,0 +1,23 @@
// Core types and interfaces
export type { AiPlugin, AiRequestContext, HookResult, HookType, PluginManagerConfig } from './types'
import type { AiPlugin, AiRequestContext } from './types'
// Plugin manager
export { PluginManager } from './manager'
// Utility functions
export function createContext(providerId: string, modelId: string, originalParams: any): AiRequestContext {
return {
providerId,
modelId,
originalParams,
metadata: {},
startTime: Date.now(),
requestId: `${providerId}-${modelId}-${Date.now()}-${Math.random().toString(36).slice(2)}`
}
}
// Plugin builder - convenience helper for authoring plugins
export function definePlugin(plugin: AiPlugin): AiPlugin {
return plugin
}

View File

@ -0,0 +1,182 @@
import type { TextStreamPart, ToolSet } from 'ai'
import { AiPlugin, AiRequestContext } from './types'
/**
* Plugin manager - Rollup-style hook execution
*/
export class PluginManager {
private plugins: AiPlugin[] = []
constructor(plugins: AiPlugin[] = []) {
this.plugins = this.sortPlugins(plugins)
}
/**
* Register a plugin
*/
use(plugin: AiPlugin): this {
this.plugins = this.sortPlugins([...this.plugins, plugin])
return this
}
/**
* Remove a plugin by name
*/
remove(pluginName: string): this {
this.plugins = this.plugins.filter((p) => p.name !== pluginName)
return this
}
/**
* Sort plugins: pre -> normal -> post
*/
private sortPlugins(plugins: AiPlugin[]): AiPlugin[] {
const pre: AiPlugin[] = []
const normal: AiPlugin[] = []
const post: AiPlugin[] = []
plugins.forEach((plugin) => {
if (plugin.enforce === 'pre') {
pre.push(plugin)
} else if (plugin.enforce === 'post') {
post.push(plugin)
} else {
normal.push(plugin)
}
})
return [...pre, ...normal, ...post]
}
/**
* First hooks - return the first non-null result
*/
async executeFirst<T>(
hookName: 'resolveModel' | 'loadTemplate',
arg: string,
context: AiRequestContext
): Promise<T | null> {
for (const plugin of this.plugins) {
const hook = plugin[hookName]
if (hook) {
const result = await hook(arg, context)
if (result !== null && result !== undefined) {
return result as T
}
}
}
return null
}
/**
* Sequential hooks - chained transformation
*/
async executeSequential<T>(
hookName: 'transformParams' | 'transformResult',
initialValue: T,
context: AiRequestContext
): Promise<T> {
let result = initialValue
for (const plugin of this.plugins) {
const hook = plugin[hookName]
if (hook) {
result = await hook(result, context)
}
}
return result
}
/**
* Parallel hooks - run side effects concurrently
*/
async executeParallel(
hookName: 'onRequestStart' | 'onRequestEnd' | 'onError',
context: AiRequestContext,
result?: any,
error?: Error
): Promise<void> {
const promises = this.plugins
.map((plugin) => {
const hook = plugin[hookName]
if (!hook) return null
if (hookName === 'onError' && error) {
return (hook as any)(error, context)
} else if (hookName === 'onRequestEnd' && result !== undefined) {
return (hook as any)(context, result)
} else if (hookName === 'onRequestStart') {
return (hook as any)(context)
}
return null
})
.filter(Boolean)
// Use Promise.all instead of allSettled so plugin errors propagate
await Promise.all(promises)
}
/**
* Collect stream transforms to pass to the AI SDK
*/
collectStreamTransforms<TOOLS extends ToolSet>(): Array<
(options: {
tools?: TOOLS
stopStream: () => void
}) => TransformStream<TextStreamPart<TOOLS>, TextStreamPart<TOOLS>>
> {
return this.plugins.map((plugin) => plugin.transformStream?.()).filter(Boolean) as Array<
(options: {
tools?: TOOLS
stopStream: () => void
}) => TransformStream<TextStreamPart<TOOLS>, TextStreamPart<TOOLS>>
>
}
/**
* Get all registered plugins
*/
getPlugins(): AiPlugin[] {
return [...this.plugins]
}
/**
* Get plugin statistics
*/
getStats() {
const stats = {
total: this.plugins.length,
pre: 0,
normal: 0,
post: 0,
hooks: {
resolveModel: 0,
loadTemplate: 0,
transformParams: 0,
transformResult: 0,
onRequestStart: 0,
onRequestEnd: 0,
onError: 0,
transformStream: 0
}
}
this.plugins.forEach((plugin) => {
// Count enforce categories
if (plugin.enforce === 'pre') stats.pre++
else if (plugin.enforce === 'post') stats.post++
else stats.normal++
// Count hook usage
Object.keys(stats.hooks).forEach((hookName) => {
if (plugin[hookName as keyof AiPlugin]) {
stats.hooks[hookName as keyof typeof stats.hooks]++
}
})
})
return stats
}
}

View File

@ -0,0 +1,61 @@
import type { TextStreamPart, ToolSet } from 'ai'
/**
* AI request context
*/
export interface AiRequestContext {
providerId: string
modelId: string
originalParams: any
metadata: Record<string, any>
startTime: number
requestId: string
}
/**
* Hook-style plugin interface, inspired by Rollup/Vite
*/
export interface AiPlugin {
name: string
enforce?: 'pre' | 'post'
// [First] hooks - only the first plugin that returns a value wins
resolveModel?: (modelId: string, context: AiRequestContext) => string | null | Promise<string | null>
loadTemplate?: (templateName: string, context: AiRequestContext) => any | null | Promise<any | null>
// [Sequential] hooks - run as a chain, supporting data transformation
transformParams?: (params: any, context: AiRequestContext) => any | Promise<any>
transformResult?: (result: any, context: AiRequestContext) => any | Promise<any>
// [Parallel] hooks - order-independent, used for side effects
onRequestStart?: (context: AiRequestContext) => void | Promise<void>
onRequestEnd?: (context: AiRequestContext, result: any) => void | Promise<void>
onError?: (error: Error, context: AiRequestContext) => void | Promise<void>
// [Stream] processing - uses the AI SDK directly
transformStream?: <TOOLS extends ToolSet>() => (options: {
tools?: TOOLS
stopStream: () => void
}) => TransformStream<TextStreamPart<TOOLS>, TextStreamPart<TOOLS>>
}
/**
* Plugin manager configuration
*/
export interface PluginManagerConfig {
plugins: AiPlugin[]
context: Partial<AiRequestContext>
}
/**
* Hook type
*/
export type HookType = 'first' | 'sequential' | 'parallel' | 'stream'
/**
* Hook execution result
*/
export interface HookResult<T = any> {
value: T
stop?: boolean
}

View File

@ -155,8 +155,37 @@ export class AiProviderRegistry {
}
]
// Initialize the registry
providers.forEach((config) => {
// Community-provided providers
const communityProviders: ProviderConfig[] = [
{
id: 'ollama',
name: 'Ollama',
import: () => import('ollama-ai-provider'),
creatorFunctionName: 'createOllama'
},
{
id: 'qwen',
name: 'Qwen',
import: () => import('qwen-ai-provider'),
creatorFunctionName: 'createQwen'
},
{
id: 'zhipu',
name: 'Zhipu AI',
import: () => import('zhipu-ai-provider'),
creatorFunctionName: 'createZhipu'
},
{
id: 'anthropic-vertex',
name: 'Anthropic Vertex AI',
import: () => import('anthropic-vertex-ai'),
creatorFunctionName: 'createAnthropicVertex'
}
]
// Register all providers (official + community)
const allProviders = [...providers, ...communityProviders]
allProviders.forEach((config) => {
this.registry.set(config.id, config)
})
}

View File

@ -0,0 +1,66 @@
/**
* AI Core type definitions
* Re-exports the Vercel AI SDK types
*/
// Re-export AI SDK types directly to avoid duplicate definitions
export type {
// Common types
CoreMessage,
CoreTool,
CoreToolChoice,
// Other useful types
FinishReason,
GenerateTextResult,
LanguageModelV1,
// Parameter and return types of the core functions
StreamTextResult,
// Streaming-related
TextStreamPart,
ToolSet
} from 'ai'
/**
* Request lifecycle stages
*/
export enum LifecycleStage {
PRE_REQUEST = 'pre-request', // request pre-processing
REQUEST_EXECUTION = 'execution', // request execution
STREAM_PROCESSING = 'stream', // stream processing (streaming mode only)
POST_RESPONSE = 'post-response', // response post-processing
ERROR_HANDLING = 'error' // error handling
}
/**
* Lifecycle context
*/
export interface LifecycleContext {
currentStage: LifecycleStage
startTime: number
stageStartTime: number
completedStages: Set<LifecycleStage>
stageDurations: Map<LifecycleStage, number>
metadata: Record<string, any>
}
/**
* AI request context
*/
export interface AiRequestContext {
// Lifecycle info
lifecycle: LifecycleContext
// Request info
method: 'streamText' | 'generateText'
providerId: string
originalParams: any // use any and let the AI SDK do its own type checking
// Mutable state
state: {
transformedParams?: any
result?: any
error?: Error
aborted?: boolean
metadata: Record<string, any>
}
}

149
yarn.lock
View File

@ -277,7 +277,41 @@ __metadata:
languageName: node
linkType: hard
"@ai-sdk/provider-utils@npm:2.2.8":
"@ai-sdk/provider-utils@npm:1.0.20":
version: 1.0.20
resolution: "@ai-sdk/provider-utils@npm:1.0.20"
dependencies:
"@ai-sdk/provider": "npm:0.0.24"
eventsource-parser: "npm:1.1.2"
nanoid: "npm:3.3.6"
secure-json-parse: "npm:2.7.0"
peerDependencies:
zod: ^3.0.0
peerDependenciesMeta:
zod:
optional: true
checksum: 10c0/40b3a9f3188904ba4e56d857d9bf7297ac2787bf92e2af26d95e435dc04cee6a12d82af71a04e1e2bea15e5b3cf7ddffc33323d2e06c372de0d853624f60f6fb
languageName: node
linkType: hard
"@ai-sdk/provider-utils@npm:2.1.10":
version: 2.1.10
resolution: "@ai-sdk/provider-utils@npm:2.1.10"
dependencies:
"@ai-sdk/provider": "npm:1.0.9"
eventsource-parser: "npm:^3.0.0"
nanoid: "npm:^3.3.8"
secure-json-parse: "npm:^2.7.0"
peerDependencies:
zod: ^3.0.0
peerDependenciesMeta:
zod:
optional: true
checksum: 10c0/d33bbe18f05b3713870ee400378d356e3ccd4a648e2c1bcd492fd3517781b8f7dae91e2916265641098861c4a447e23c178ad22026e2c47e286f56ecfd50b156
languageName: node
linkType: hard
"@ai-sdk/provider-utils@npm:2.2.8, @ai-sdk/provider-utils@npm:^2.0.0, @ai-sdk/provider-utils@npm:^2.1.6":
version: 2.2.8
resolution: "@ai-sdk/provider-utils@npm:2.2.8"
dependencies:
@ -290,7 +324,25 @@ __metadata:
languageName: node
linkType: hard
"@ai-sdk/provider@npm:1.1.3":
"@ai-sdk/provider@npm:0.0.24":
version: 0.0.24
resolution: "@ai-sdk/provider@npm:0.0.24"
dependencies:
json-schema: "npm:0.4.0"
checksum: 10c0/6e550c33ce6375636897b24ad8dfb2a605ff91d92aabd3c7aba2049f3d943c3a5534a1441e9ae4d7ef35c864687dc41c15704d19f11dcc6624fa1e705255c103
languageName: node
linkType: hard
"@ai-sdk/provider@npm:1.0.9":
version: 1.0.9
resolution: "@ai-sdk/provider@npm:1.0.9"
dependencies:
json-schema: "npm:^0.4.0"
checksum: 10c0/49ecd7e69e949c0290159bab15ac0228ae51f2eeb5b7694b19bc98f1058891a570ef75bcc5748afdff5fa607f6da50d9d426500d4a1651f922ff18e74ba2a840
languageName: node
linkType: hard
"@ai-sdk/provider@npm:1.1.3, @ai-sdk/provider@npm:^1.0.0, @ai-sdk/provider@npm:^1.0.7":
version: 1.1.3
resolution: "@ai-sdk/provider@npm:1.1.3"
dependencies:
@ -872,7 +924,11 @@ __metadata:
"@ai-sdk/vercel": "npm:^0.0.1"
"@ai-sdk/xai": "npm:^1.2.16"
ai: "npm:^4.3.16"
anthropic-vertex-ai: "npm:^1.0.2"
ollama-ai-provider: "npm:^1.2.0"
qwen-ai-provider: "npm:^0.1.0"
typescript: "npm:^5.0.0"
zhipu-ai-provider: "npm:^0.1.1"
peerDependenciesMeta:
"@ai-sdk/amazon-bedrock":
optional: true
@ -6512,6 +6568,19 @@ __metadata:
languageName: node
linkType: hard
"anthropic-vertex-ai@npm:^1.0.2":
version: 1.0.2
resolution: "anthropic-vertex-ai@npm:1.0.2"
dependencies:
"@ai-sdk/provider": "npm:0.0.24"
"@ai-sdk/provider-utils": "npm:1.0.20"
google-auth-library: "npm:^9.14.1"
peerDependencies:
zod: ^3.0.0
checksum: 10c0/e250c6a4319ab9ea236e0bff2bcbd0541bbc9a493bfd0ae36125f8ad98ecb591b33d8f2d82da74a29d8ab9029f7c6c3a7a00cdd3f424f8bf35a9f0c895c68f11
languageName: node
linkType: hard
"app-builder-bin@npm:5.0.0-alpha.12":
version: 5.0.0-alpha.12
resolution: "app-builder-bin@npm:5.0.0-alpha.12"
@ -9894,6 +9963,20 @@ __metadata:
languageName: node
linkType: hard
"eventsource-parser@npm:1.1.2":
version: 1.1.2
resolution: "eventsource-parser@npm:1.1.2"
checksum: 10c0/b38948bc81ae6c2a8b9c88383d4f8c2bfbaf23955827a9af68d39bc0550ae83cc400b197e814bea9aef6e0cdc9bae5afd95787418ee3d9ad01ffc4774cf1b84a
languageName: node
linkType: hard
"eventsource-parser@npm:^3.0.0":
version: 3.0.2
resolution: "eventsource-parser@npm:3.0.2"
checksum: 10c0/067c6e60b7c68a4577630cc7e11d2aaeef52005e377a213308c7c2350596a175d5a179671d85f570726dce3f451c15d174ece4479ce68a1805686c88950d08dd
languageName: node
linkType: hard
"eventsource-parser@npm:^3.0.1":
version: 3.0.1
resolution: "eventsource-parser@npm:3.0.1"
@ -10800,7 +10883,7 @@ __metadata:
languageName: node
linkType: hard
"google-auth-library@npm:^9.14.2, google-auth-library@npm:^9.15.0":
"google-auth-library@npm:^9.14.1, google-auth-library@npm:^9.14.2, google-auth-library@npm:^9.15.0":
version: 9.15.1
resolution: "google-auth-library@npm:9.15.1"
dependencies:
@ -12000,7 +12083,7 @@ __metadata:
languageName: node
linkType: hard
"json-schema@npm:^0.4.0":
"json-schema@npm:0.4.0, json-schema@npm:^0.4.0":
version: 0.4.0
resolution: "json-schema@npm:0.4.0"
checksum: 10c0/d4a637ec1d83544857c1c163232f3da46912e971d5bf054ba44fdb88f07d8d359a462b4aec46f2745efbc57053365608d88bc1d7b1729f7b4fc3369765639ed3
@ -14176,6 +14259,15 @@ __metadata:
languageName: node
linkType: hard
"nanoid@npm:3.3.6":
version: 3.3.6
resolution: "nanoid@npm:3.3.6"
bin:
nanoid: bin/nanoid.cjs
checksum: 10c0/606b355960d0fcbe3d27924c4c52ef7d47d3b57208808ece73279420d91469b01ec1dce10fae512b6d4a8c5a5432b352b228336a8b2202a6ea68e67fa348e2ee
languageName: node
linkType: hard
"nanoid@npm:^3.3.7, nanoid@npm:^3.3.8":
version: 3.3.11
resolution: "nanoid@npm:3.3.11"
@ -14531,6 +14623,22 @@ __metadata:
languageName: node
linkType: hard
"ollama-ai-provider@npm:^1.2.0":
version: 1.2.0
resolution: "ollama-ai-provider@npm:1.2.0"
dependencies:
"@ai-sdk/provider": "npm:^1.0.0"
"@ai-sdk/provider-utils": "npm:^2.0.0"
partial-json: "npm:0.1.7"
peerDependencies:
zod: ^3.0.0
peerDependenciesMeta:
zod:
optional: true
checksum: 10c0/d8db4e3e764de179cc04d2ee460118c468a9417ab20a2d13980862ff4df08ab7d41449dad4c49b1c6cd04f3b16517e0b3304365f64b73e90c008b01b4ec40e4b
languageName: node
linkType: hard
"ollama@npm:^0.5.12":
version: 0.5.16
resolution: "ollama@npm:0.5.16"
@ -14956,6 +15064,13 @@ __metadata:
languageName: node
linkType: hard
"partial-json@npm:0.1.7":
version: 0.1.7
resolution: "partial-json@npm:0.1.7"
checksum: 10c0/cd5f994c3a5ca903918c028a6947ebc1d46459234c1c57c7ab98e234d8dca49cb46b05a71889ee422b39d1f66b95c59a5ce3a6ae06966aca95a8960ad20c12d2
languageName: node
linkType: hard
"path-data-parser@npm:0.1.0, path-data-parser@npm:^0.1.0":
version: 0.1.0
resolution: "path-data-parser@npm:0.1.0"
@ -15498,6 +15613,18 @@ __metadata:
languageName: node
linkType: hard
"qwen-ai-provider@npm:^0.1.0":
version: 0.1.0
resolution: "qwen-ai-provider@npm:0.1.0"
dependencies:
"@ai-sdk/provider": "npm:^1.0.7"
"@ai-sdk/provider-utils": "npm:^2.1.6"
peerDependencies:
zod: ^3.24.1
checksum: 10c0/032d18f9ccb868bcafd0034e393364e2bac0211d7567d5673e090e79a2a5cf4d5233f761e04357ab2a48a3b858e9b992840579fb8d2f0f937ad5a6b9b8fa0f6f
languageName: node
linkType: hard
"raf-schd@npm:^4.0.3":
version: 4.0.3
resolution: "raf-schd@npm:4.0.3"
@ -17000,7 +17127,7 @@ __metadata:
languageName: node
linkType: hard
"secure-json-parse@npm:^2.7.0":
"secure-json-parse@npm:2.7.0, secure-json-parse@npm:^2.7.0":
version: 2.7.0
resolution: "secure-json-parse@npm:2.7.0"
checksum: 10c0/f57eb6a44a38a3eeaf3548228585d769d788f59007454214fab9ed7f01fbf2e0f1929111da6db28cf0bcc1a2e89db5219a59e83eeaec3a54e413a0197ce879e4
@ -19439,6 +19566,18 @@ __metadata:
languageName: node
linkType: hard
"zhipu-ai-provider@npm:^0.1.1":
version: 0.1.1
resolution: "zhipu-ai-provider@npm:0.1.1"
dependencies:
"@ai-sdk/provider": "npm:1.0.9"
"@ai-sdk/provider-utils": "npm:2.1.10"
peerDependencies:
zod: ^3.0.0
checksum: 10c0/ccdb6b105d817f1eb9c69387c8794935e707ca4f2ae3878d49582d6069ef6e07834252d85473233a77ff914f39fae32c1cb9fa265ea40cbb20fcf53dc71651f4
languageName: node
linkType: hard
"zip-stream@npm:^6.0.1":
version: 6.0.1
resolution: "zip-stream@npm:6.0.1"