feat: add utility functions for merging models and providers, including deep merge capabilities

- Implemented a mergeObjects function for smart object merging, preserving existing values with configurable overwrite options.
- Added mergeModelsList and mergeProvidersList functions to handle merging of model and provider lists, respectively, with case-insensitive ID matching.
- Introduced preset merge strategies for common use cases.
- Created a new API route for syncing provider models, handling data import and merge operations.
- Developed ModelEditForm and ProviderEditForm components for editing model and provider details, respectively, with form validation and state management.
- Added UI components for labels, selects, and notifications to enhance user experience.
suyao 2025-12-24 01:29:07 +08:00
parent f842ea2ab0
commit 5b009769c3
61 changed files with 84754 additions and 34256 deletions


@ -0,0 +1,53 @@
# Provider API Keys for Model Synchronization
# Copy this file to .env and fill in your API keys
# Aggregators (China)
CHERRYIN_API_KEY=
SILICON_API_KEY=
OCOOLAI_API_KEY=
DMXAPI_API_KEY=
AIONLY_API_KEY=
BURNCLOUD_API_KEY=
AI_302_API_KEY=
CEPHALON_API_KEY=
LANYUN_API_KEY=
PH8_API_KEY=
SOPHNET_API_KEY=
PPIO_API_KEY=
QINIU_API_KEY=
# Official Providers
OPENAI_API_KEY=
GITHUB_API_KEY=
COPILOT_API_KEY=
YI_API_KEY=
MOONSHOT_API_KEY=
BAICHUAN_API_KEY=
DASHSCOPE_API_KEY=
STEPFUN_API_KEY=
DOUBAO_API_KEY=
INFINI_API_KEY=
MINIMAX_API_KEY=
GROQ_API_KEY=
TOGETHER_API_KEY=
FIREWORKS_API_KEY=
NVIDIA_API_KEY=
GROK_API_KEY=
HYPERBOLIC_API_KEY=
MISTRAL_API_KEY=
JINA_API_KEY=
PERPLEXITY_API_KEY=
MODELSCOPE_API_KEY=
XIRANG_API_KEY=
HUNYUAN_API_KEY=
TENCENT_CLOUD_TI_API_KEY=
BAIDU_CLOUD_API_KEY=
VOYAGEAI_API_KEY=
POE_API_KEY=
LONGCAT_API_KEY=
HUGGINGFACE_API_KEY=
CEREBRAS_API_KEY=
# Authoritative sources (use import scripts instead)
# OPENROUTER_API_KEY=
# AIHUBMIX_API_KEY=

packages/catalog/.gitignore

@ -0,0 +1,25 @@
# Environment variables
.env
.env.local
.env.*.local
# Build output
dist/
*.tsbuildinfo
# Node modules
node_modules/
# Test coverage
coverage/
# IDE
.vscode/
.idea/
*.swp
*.swo
*~
# OS
.DS_Store
Thumbs.db


@ -1 +1,196 @@
# catalog
# Cherry Studio Catalog
Comprehensive AI model catalog with provider information, pricing, capabilities, and automatic synchronization.
## Quick Start
### 1. Setup API Keys
Most providers require API keys to list models:
```bash
# Copy example file
cp .env.example .env
# Edit .env and add your API keys
# OPENAI_API_KEY=sk-...
# GROQ_API_KEY=gsk_...
# DEEPSEEK_API_KEY=...
```
### 2. Sync Provider Models
**Option A: Sync all providers (batch)**
```bash
npm run sync:all
```
**Option B: Import authoritative sources**
```bash
# OpenRouter (360+ models)
npm run import:openrouter
# AIHubMix (600+ models)
npm run import:aihubmix
```
**Option C: Use Web UI**
```bash
cd web
npm run dev
# Open http://localhost:3000/providers
# Click "Sync" button on any provider
```
## Features
### Provider Management
- ✅ 51 providers configured with API endpoints
- ✅ Automatic model discovery via `models_api`
- ✅ Support for multiple API formats (OpenAI, Anthropic, Gemini)
- ✅ Custom transformers for aggregators
### Model Catalog
- ✅ 1000+ models from various providers
- ✅ Comprehensive metadata (pricing, capabilities, limits)
- ✅ Input/output modalities
- ✅ Case-insensitive model IDs
### Override System
- ✅ Provider-specific model overrides
- ✅ Tracks all provider-supported models (even if identical)
- ✅ Smart merging (preserves manual edits)
- ✅ Priority system (auto < 100 < manual)
- ✅ Automatic deduplication
### Synchronization
- ✅ Batch sync all providers
- ✅ Per-provider sync via Web UI
- ✅ API key management
- ✅ Rate limiting and error handling
## Data Files
```
data/
├── models.json # Base model catalog (authoritative)
├── providers.json # Provider configurations with models_api
└── overrides.json # Provider-specific model overrides
```
## Scripts
| Command | Description |
|---------|-------------|
| `npm run sync:all` | Sync all providers (except OpenRouter/AIHubMix) |
| `npm run import:openrouter` | Import models from OpenRouter |
| `npm run import:aihubmix` | Import models from AIHubMix |
| `npm run build` | Build TypeScript package |
| `npm run test` | Run test suite |
## Architecture
### Transformers
Transform provider API responses to internal format:
- **OpenAI-compatible** (default): Standard `/v1/models` format
- **OpenRouter**: Custom aggregator format with advanced capabilities
- **AIHubMix**: CSV-based format with type/feature parsing
### Data Flow
```
Provider API → Transformer → ModelConfig[]
                    ↓
          Compare with models.json
                    ↓
        ┌───────────┴───────────┐
        ↓                       ↓
    New Model            Existing Model
        ↓                       ↓
Add to models.json      Generate Override
                                ↓
                        Merge with existing
                                ↓
                       Save to overrides.json
```
## Documentation
- [Sync Guide](./docs/SYNC_GUIDE.md) - Detailed synchronization documentation
- [Schema Documentation](./src/schemas/README.md) - Data schemas and validation
## Development
### Prerequisites
- Node.js 18+
- Yarn 4+
### Setup
```bash
# Install dependencies
yarn install
# Run tests
npm run test
# Build package
npm run build
# Watch mode
npm run dev
```
### Adding a Provider
1. Add provider config to `data/providers.json`:
```json
{
"id": "new-provider",
"name": "New Provider",
"models_api": {
"endpoints": [
{
"url": "https://api.provider.com/v1/models",
"endpoint_type": "CHAT_COMPLETIONS",
"format": "OPENAI"
}
],
"enabled": true,
"update_frequency": "daily"
}
}
```
2. Add API key mapping in `scripts/sync-all-providers.ts`:
```typescript
const PROVIDER_ENV_MAP: Record<string, string> = {
// ...
'new-provider': 'NEW_PROVIDER_API_KEY'
}
```
3. Add to `.env.example`:
```bash
NEW_PROVIDER_API_KEY=
```
4. Run sync:
```bash
npm run sync:all
```
### Adding a Custom Transformer
See [Transformers Guide](./docs/SYNC_GUIDE.md#transformers) for details.
## License
MIT
## Contributing
Contributions welcome! Please read the [Sync Guide](./docs/SYNC_GUIDE.md) first.


@ -1,88 +0,0 @@
{
"timestamp": "2025-11-24T06:41:03.487Z",
"summary": {
"total_providers": 104,
"total_base_models": 241,
"total_overrides": 1164,
"provider_categories": {
"direct": 2,
"cloud": 6,
"proxy": 3,
"self_hosted": 5
},
"models_by_provider": {
"openai": 79,
"anthropic": 20,
"dashscope": 22,
"deepseek": 7,
"gemini": 50,
"mistral": 31,
"xai": 32
},
"overrides_by_provider": {
"bedrock": 152,
"bedrock_converse": 56,
"anyscale": 12,
"azure": 112,
"azure_ai": 45,
"cerebras": 5,
"vertex_ai-chat-models": 5,
"nlp_cloud": 1,
"cloudflare": 4,
"vertex_ai-code-text-models": 1,
"vertex_ai-code-chat-models": 6,
"codestral": 2,
"cohere_chat": 7,
"databricks": 9,
"deepinfra": 67,
"featherless_ai": 2,
"fireworks_ai": 27,
"friendliai": 2,
"openai": 8,
"vertex_ai-language-models": 46,
"vertex_ai-vision-models": 3,
"gradient_ai": 13,
"groq": 27,
"heroku": 4,
"hyperbolic": 16,
"ai21": 9,
"lambda_ai": 20,
"lemonade": 5,
"aleph_alpha": 3,
"meta_llama": 4,
"moonshot": 17,
"morph": 2,
"nscale": 14,
"oci": 13,
"ollama": 21,
"openrouter": 92,
"ovhcloud": 15,
"palm": 2,
"perplexity": 25,
"replicate": 13,
"sagemaker": 3,
"sambanova": 16,
"snowflake": 24,
"together_ai": 36,
"v0": 3,
"vercel_ai_gateway": 85,
"vertex_ai-anthropic_models": 22,
"vertex_ai-mistral_models": 19,
"vertex_ai-deepseek_models": 2,
"vertex_ai": 1,
"vertex_ai-ai21_models": 5,
"vertex_ai-llama_models": 11,
"vertex_ai-minimax_models": 1,
"vertex_ai-moonshot_models": 1,
"vertex_ai-openai_models": 2,
"vertex_ai-qwen_models": 4,
"wandb": 14,
"watsonx": 28
}
},
"files": {
"providers": "providers.json",
"models": "models.json",
"overrides": "overrides.json"
}
}

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -0,0 +1,407 @@
# Provider Model Synchronization Guide
This guide explains how to use the provider model synchronization system to automatically fetch and update model catalogs from provider APIs.
## Overview
The synchronization system consists of three main components:
1. **Provider API Configuration** (`models_api` in providers.json)
2. **Web UI Sync Button** (Manual sync per provider)
3. **Batch Sync Script** (Automated sync for all providers)
## Provider API Configuration
### Schema
Each provider can have a `models_api` configuration:
```json
{
"id": "openrouter",
"models_api": {
"endpoints": [
{
"url": "https://openrouter.ai/api/v1/models",
"endpoint_type": "CHAT_COMPLETIONS",
"format": "OPENAI",
"transformer": "openrouter"
}
],
"enabled": true,
"update_frequency": "realtime",
"last_synced": "2025-01-15T10:30:00.000Z"
}
}
```
### Fields
- **`endpoints`**: Array of API endpoints to fetch models from
- `url`: Full API endpoint URL
- `endpoint_type`: Type of models (CHAT_COMPLETIONS, EMBEDDINGS, etc.)
- `format`: API format (OPENAI, ANTHROPIC, GEMINI)
- `transformer`: Optional custom transformer name (openrouter, aihubmix)
- **`enabled`**: Whether sync is enabled for this provider
- **`update_frequency`**: Suggested sync frequency
- `realtime`: Aggregators that change frequently (OpenRouter, AIHubMix)
- `daily`: Most official providers
- `weekly`: Stable providers
- `manual`: Manual sync only
- **`last_synced`**: ISO timestamp of last successful sync (auto-updated)
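For reference, the same shape expressed as a minimal TypeScript sketch (field names follow this guide; the package's Zod schema remains the source of truth):
```typescript
// Minimal sketch of the models_api shape described above.
type UpdateFrequency = 'realtime' | 'daily' | 'weekly' | 'manual'

interface ModelsApiEndpoint {
  url: string            // full models endpoint URL
  endpoint_type: string  // e.g. 'CHAT_COMPLETIONS', 'EMBEDDINGS'
  format: 'OPENAI' | 'ANTHROPIC' | 'GEMINI'
  transformer?: string   // optional custom transformer name, e.g. 'openrouter'
}

interface ModelsApiConfig {
  endpoints: ModelsApiEndpoint[]
  enabled: boolean
  update_frequency: UpdateFrequency
  last_synced?: string   // ISO timestamp, updated automatically after a successful sync
}
```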
## Setup
### Environment Variables
Most providers require API keys to list their models. Configure your API keys:
1. **Copy the example file:**
```bash
cd packages/catalog
cp .env.example .env
```
2. **Edit `.env` and add your API keys:**
```bash
# Official Providers
OPENAI_API_KEY=sk-...
GROQ_API_KEY=gsk_...
TOGETHER_API_KEY=...
# China Aggregators
DEEPSEEK_API_KEY=...
SILICON_API_KEY=...
```
3. **Keep `.env` secure:**
- Never commit `.env` to git (already in `.gitignore`)
- Use different keys for development and production
- Rotate keys periodically
### API Key Format
Each provider has a corresponding environment variable:
| Provider ID | Environment Variable | Example Format |
|------------|---------------------|----------------|
| openai | `OPENAI_API_KEY` | `sk-...` |
| groq | `GROQ_API_KEY` | `gsk_...` |
| deepseek | `DEEPSEEK_API_KEY` | `sk-...` |
| silicon | `SILICON_API_KEY` | `sk-...` |
| together | `TOGETHER_API_KEY` | `...` |
| mistral | `MISTRAL_API_KEY` | `...` |
| perplexity | `PERPLEXITY_API_KEY` | `pplx-...` |
See `.env.example` for the complete list.
## Usage
### Method 1: Web UI (Per Provider)
1. Open the provider management page (`/providers`)
2. Find a provider with `models_api` enabled
3. Click the **Sync** button in the Actions column
4. Wait for the sync to complete (toast notification will show progress)
5. Review the statistics (fetched, new models, overrides)
**Features:**
- Real-time progress feedback
- Detailed statistics
- Manual trigger control
- Per-provider sync
**Use Cases:**
- Testing new provider configurations
- Emergency updates for specific providers
- Validating API changes
### Method 2: Batch Sync Script (All Providers)
Run the batch sync script to sync all providers at once:
```bash
cd packages/catalog
npm run sync:all
```
**Features:**
- Syncs all providers with `models_api.enabled = true`
- Skips OpenRouter and AIHubMix (use dedicated import scripts)
- Adds delays to avoid rate limiting
- Comprehensive progress logging
- Summary statistics
**Use Cases:**
- Scheduled updates (cron jobs, CI/CD)
- Initial bulk import
- Regular maintenance updates
**Output Example:**
```
============================================================
Batch Provider Model Sync
============================================================
Loading data files...
Loaded:
- 51 providers
- 604 models
- 120 overrides
Providers to sync: 49
Skipping: openrouter, aihubmix (authoritative sources)
API Keys Status:
✓ Found: 12
✗ Missing: 37
Providers without API keys (will likely fail):
- cherryin (env: CHERRYIN_API_KEY)
- silicon (env: SILICON_API_KEY)
...
To configure API keys:
1. Copy .env.example to .env
2. Fill in your API keys
3. Re-run this script
[deepseek] Syncing models...
- Fetching from https://api.deepseek.com/v1/models
✓ Fetched 3 models
+ Adding 1 new models to models.json
+ Generated 2 new overrides
...
============================================================
Sync Summary
============================================================
Total providers: 49
✓ Successful: 47
✗ Failed: 2
Statistics:
- Total models fetched: 520
- New models added: 45
- Overrides generated: 178
- Overrides merged: 12
✓ Batch sync completed
============================================================
```
## How It Works
### Data Flow
```
Provider API → Transformer → ModelConfig
                    ↓
          Compare with models.json
                    ↓
        ┌───────────┴───────────┐
        ↓                       ↓
    New Model            Existing Model
        ↓                       ↓
Add to models.json      Generate Override
                                ↓
                        Merge with existing
                                ↓
                       Save to overrides.json
```
### Override Generation
The system automatically generates overrides for **all models** supported by a provider, even if identical to the base model. This serves two purposes:
1. **Provider Support Tracking**: Mark which providers support which models
2. **Difference Recording**: Record any differences from the base model
**Override Types:**
1. **Empty Override** (identical models):
```json
{
"provider_id": "groq",
"model_id": "llama-3.1-8b",
"priority": 0
}
```
This marks that the provider supports the model with no differences.
2. **Override with Differences**:
```json
{
"provider_id": "provider-x",
"model_id": "gpt-4",
"priority": 0,
"pricing": {
"input": { "per_million_tokens": 5.0, "currency": "USD" },
"output": { "per_million_tokens": 15.0, "currency": "USD" }
},
"limits": {
"context_window": 32000
}
}
```
**Priority System:**
- `priority < 100`: Auto-generated overrides (replaced on sync)
- `priority >= 100`: Manual overrides (preserved during sync)
### Merge Strategy
When syncing:
1. **New Models**: Added directly to `models.json`
2. **Existing Models with Differences**: Override created/updated in `overrides.json`
3. **Manual Overrides**: Preserved (priority >= 100)
4. **Auto Overrides**: Replaced with latest data (priority < 100)
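The decision boils down to a priority check. A hedged sketch, with illustrative names rather than the actual sync code:
```typescript
// Sketch of the merge decision described above (names are illustrative).
interface Override {
  provider_id: string
  model_id: string
  priority?: number
  [key: string]: unknown
}

const MANUAL_PRIORITY = 100

function resolveOverride(existing: Override | undefined, incoming: Override): Override {
  // Manual overrides (priority >= 100) survive every sync
  if (existing && (existing.priority ?? 0) >= MANUAL_PRIORITY) {
    return existing
  }
  // Auto-generated overrides (priority < 100) are replaced with the latest data
  return incoming
}
```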
## Transformers
### Built-in Transformers
1. **OpenAI-compatible** (default): Standard OpenAI API format
- Used by most providers (deepseek, groq, together, etc.)
- Handles `{ data: [...] }` responses
- Basic capability inference
2. **OpenRouter**: Custom transformer for OpenRouter aggregator
- Normalizes model IDs to lowercase
- Extracts provider from model ID format (`openai/gpt-4`)
- Advanced capability inference from supported_parameters
- Pricing conversion (per-token → per-million)
3. **AIHubMix**: Custom transformer for AIHubMix aggregator
- Normalizes model IDs to lowercase
- Parses CSV fields (types, features, input_modalities)
- Capability mapping (thinking → REASONING, etc.)
- Provider extraction from model ID
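The per-token → per-million pricing conversion mentioned for the OpenRouter transformer is a single multiplication. A minimal sketch, assuming OpenRouter-style per-token prices reported as strings:
```typescript
// Hedged sketch of the per-token → per-million conversion.
// Assumes the provider reports USD-per-token prices as strings or numbers.
function perTokenToPerMillion(perToken: string | number): number {
  return Number(perToken) * 1_000_000
}

// e.g. a reported prompt price of '0.000003' per token ≈ 3 USD per million tokens
perTokenToPerMillion('0.000003')
```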
### Adding Custom Transformers
To add a custom transformer:
1. Create `src/utils/importers/{provider}/transformer.ts`
2. Implement `ITransformer` interface
3. Update sync endpoint to use your transformer
4. Add transformer name to provider config
Example:
```typescript
import type { ModelConfig } from '../../../schemas'
import type { ITransformer } from '../base/base-transformer'
export class CustomTransformer implements ITransformer<CustomModel> {
extractModels(response: any): CustomModel[] {
// Extract models from API response
}
transform(apiModel: CustomModel): ModelConfig {
// Transform to internal format
}
}
```
## Best Practices
### 1. Authoritative Sources
OpenRouter and AIHubMix are treated as **authoritative sources** because:
- They aggregate models from multiple providers
- They have custom transformers with advanced logic
- They should be imported using dedicated scripts:
```bash
npm run import:openrouter
npm run import:aihubmix
```
### 2. Sync Frequency
Recommended sync frequencies:
| Provider Type | Frequency | Reason |
|--------------|-----------|--------|
| Aggregators | Daily | Models change frequently |
| Official APIs | Weekly | Stable, infrequent updates |
| Beta/Experimental | Manual | May have unstable APIs |
### 3. API Keys
Most providers require API keys for model listing:
**For Batch Script:**
- Configure in `.env` file (see Setup section above)
- Script will automatically use the appropriate key for each provider
- Missing keys will trigger warnings but won't stop the sync
**For Web UI:**
- Currently uses same `.env` file (server-side)
- Future enhancement: API key input field in UI
### 4. Rate Limiting
The batch script includes:
- 1-second delay between providers
- Error handling to continue on failures
- Retry logic (future enhancement)
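A sketch of that pacing loop, with hypothetical names rather than the script's actual code:
```typescript
// Illustrative pacing loop: continue on failure, wait between providers.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms))

async function syncAllWithPacing(providers: { id: string }[], syncOne: (id: string) => Promise<void>) {
  for (const provider of providers) {
    try {
      await syncOne(provider.id)
    } catch (error) {
      // Keep going so one failing provider does not abort the whole batch
      console.error(`Sync failed for ${provider.id}, continuing:`, error)
    }
    await sleep(1000) // 1-second delay between providers to avoid rate limits
  }
}
```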
### 5. Manual Overrides
To create manual overrides that won't be replaced:
1. Set `priority >= 100` in `overrides.json`
2. Add reason field to document why it's manual
3. These will be preserved during sync
Example:
```json
{
"provider_id": "custom-provider",
"model_id": "special-model",
"priority": 100,
"reason": "Custom pricing negotiated with provider",
"pricing": {
"input": { "per_million_tokens": 1.0, "currency": "USD" },
"output": { "per_million_tokens": 2.0, "currency": "USD" }
}
}
```
## Troubleshooting
### Provider Sync Fails
1. Check if `models_api.enabled = true`
2. Verify API endpoint URL is accessible
3. Check if API key is required
4. Review transformer compatibility
### Models Not Appearing
1. Check if model IDs are normalized to lowercase
2. Verify transformer is extracting models correctly
3. Check console logs for transformation errors
### Overrides Not Generated
1. Verify model exists in base `models.json`
2. Check if differences actually exist (pricing, capabilities, etc.)
3. Review merge strategy settings
## Future Enhancements
- [ ] API key management in Web UI
- [ ] Scheduled sync (cron-style)
- [ ] Sync history and audit log
- [ ] Conflict resolution UI
- [ ] Retry logic with exponential backoff
- [ ] Webhook notifications
- [ ] Differential sync (only changed models)
- [ ] Provider-specific transformers registry


@ -0,0 +1,273 @@
# Merge Strategies Documentation
## Overview
The merge utilities provide smart merging capabilities for importing and updating model/provider data while preserving manually curated information.
## Merge Strategies
### 1. FILL_UNDEFINED (Default)
Only fills in `undefined` values in the existing data. Best for initial imports or filling missing data.
```typescript
import { mergeModelsList, MergeStrategies } from './src/utils/merge-utils'
const merged = mergeModelsList(
existingModels,
incomingModels,
MergeStrategies.FILL_UNDEFINED
)
```
**Example:**
```typescript
// Existing model
{
id: 'gpt-4',
description: 'Manually curated description',
output_modalities: undefined,
pricing: { input: 30, output: 60 }
}
// Incoming model
{
id: 'gpt-4',
description: 'Auto-generated description',
output_modalities: ['TEXT'],
pricing: { input: 30, output: 60 }
}
// Result
{
id: 'gpt-4',
description: 'Manually curated description', // Preserved (not undefined)
output_modalities: ['TEXT'], // Filled (was undefined)
pricing: { input: 30, output: 60 } // Preserved
}
```
### 2. UPDATE_DYNAMIC
Updates dynamic fields (pricing, metadata) while preserving manually curated content.
```typescript
const merged = mergeModelsList(
existingModels,
incomingModels,
MergeStrategies.UPDATE_DYNAMIC
)
```
**Example:**
```typescript
// Existing model
{
id: 'claude-3',
description: 'Custom description',
capabilities: ['FUNCTION_CALL'],
pricing: { input: 3, output: 15 }
}
// Incoming model
{
id: 'claude-3',
description: 'New description',
capabilities: ['FUNCTION_CALL', 'REASONING'],
pricing: { input: 3, output: 15 }
}
// Result
{
id: 'claude-3',
description: 'Custom description', // Preserved (neverOverwrite)
capabilities: ['FUNCTION_CALL'], // Preserved (neverOverwrite)
pricing: { input: 3, output: 15 } // Updated (alwaysOverwrite)
}
```
### 3. FULL_REPLACE
Completely replaces all existing data with incoming data.
```typescript
const merged = mergeModelsList(
existingModels,
incomingModels,
MergeStrategies.FULL_REPLACE
)
```
**Use case:** Complete re-import from authoritative source.
### 4. PRESERVE_MANUAL
Preserves all manually edited fields, only updates system-maintained fields.
```typescript
const merged = mergeModelsList(
existingModels,
incomingModels,
MergeStrategies.PRESERVE_MANUAL
)
```
**Example:**
```typescript
// Existing model (manually edited)
{
id: 'gemini-pro',
description: 'Carefully curated description',
capabilities: ['FUNCTION_CALL', 'REASONING'],
pricing: { input: 0.5, output: 1.5 },
context_window: 128000
}
// Incoming model (new pricing)
{
id: 'gemini-pro',
description: 'Auto description',
capabilities: ['FUNCTION_CALL'],
pricing: { input: 0.125, output: 0.375 },
context_window: 2000000
}
// Result
{
id: 'gemini-pro',
description: 'Carefully curated description', // Preserved
capabilities: ['FUNCTION_CALL', 'REASONING'], // Preserved
pricing: { input: 0.125, output: 0.375 }, // Updated (alwaysOverwrite)
context_window: 2000000 // Updated (alwaysOverwrite)
}
```
## Custom Merge Options
Create your own merge strategy:
```typescript
import { mergeModelsList, type MergeOptions } from './src/utils/merge-utils'
const customStrategy: MergeOptions = {
preserveExisting: true,
alwaysOverwrite: ['pricing', 'metadata'],
neverOverwrite: ['description', 'capabilities']
}
const merged = mergeModelsList(
existingModels,
incomingModels,
customStrategy
)
```
## Usage in Scripts
### Import Script Example
```typescript
#!/usr/bin/env tsx
import * as fs from 'fs'
import { mergeModelsList, MergeStrategies } from '../src/utils/merge-utils'
async function importModels() {
// Fetch new models
const newModels = await fetchFromAPI()
// Load existing models
const existingData = JSON.parse(fs.readFileSync('data/models.json', 'utf-8'))
// Merge with FILL_UNDEFINED strategy
const merged = mergeModelsList(
existingData.models,
newModels,
MergeStrategies.FILL_UNDEFINED
)
// Save
existingData.models = merged
fs.writeFileSync('data/models.json', JSON.stringify(existingData, null, 2))
console.log('✓ Import complete with smart merge')
}
```
## API Reference
### `mergeObjects<T>(existing, incoming, options)`
Deep merge two objects with configurable strategy.
**Parameters:**
- `existing: T` - Existing object
- `incoming: Partial<T>` - New object to merge
- `options: MergeOptions` - Merge configuration
**Returns:** `T` - Merged object
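Illustrative usage, with a hypothetical `ModelLike` shape standing in for a real model object:
```typescript
import { mergeObjects, MergeStrategies } from './src/utils/merge-utils'

// Hypothetical shape used only for this example
interface ModelLike {
  id: string
  description?: string
  context_window?: number
}

const existing: ModelLike = { id: 'gpt-4', description: 'Curated description' }
const incoming: ModelLike = { id: 'gpt-4', description: 'Auto description', context_window: 128000 }

// With FILL_UNDEFINED, only the missing context_window is taken from `incoming`
const merged = mergeObjects(existing, incoming, MergeStrategies.FILL_UNDEFINED)
// → { id: 'gpt-4', description: 'Curated description', context_window: 128000 }
```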
### `mergeModelsList(existingModels, incomingModels, options)`
Merge model arrays by ID.
**Parameters:**
- `existingModels: ModelConfig[]` - Current models
- `incomingModels: ModelConfig[]` - New models
- `options: MergeOptions` - Merge strategy
**Returns:** `ModelConfig[]` - Merged models array
### `mergeProvidersList(existingProviders, incomingProviders, options)`
Merge provider arrays by ID.
**Parameters:**
- `existingProviders: ProviderConfig[]` - Current providers
- `incomingProviders: ProviderConfig[]` - New providers
- `options: MergeOptions` - Merge strategy
**Returns:** `ProviderConfig[]` - Merged providers array
## Best Practices
1. **Use FILL_UNDEFINED for first import** - Safest option for initial data population
2. **Use UPDATE_DYNAMIC for regular updates** - Keeps pricing fresh while preserving curation
3. **Use PRESERVE_MANUAL after manual edits** - Protects your work while updating system fields
4. **Test merge before commit** - Preview changes before overwriting production data
5. **Document custom strategies** - Add comments explaining why specific fields are preserved
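For point 4, one way to preview a merge before writing anything (a sketch assuming the data layout described in this package):
```typescript
import * as fs from 'fs'
import { mergeModelsList, MergeStrategies } from './src/utils/merge-utils'

// Preview a merge without touching models.json
const data = JSON.parse(fs.readFileSync('data/models.json', 'utf-8'))
const incomingModels: any[] = [] // models fetched from a provider API (placeholder)

const merged = mergeModelsList(data.models, incomingModels, MergeStrategies.UPDATE_DYNAMIC)

// Write to a scratch file and inspect the diff before committing anything
fs.writeFileSync('data/models.preview.json', JSON.stringify({ ...data, models: merged }, null, 2))
// Then: git diff --no-index data/models.json data/models.preview.json
```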
## Migration Guide
### From Simple Replace
**Before:**
```typescript
data.models = newModels // Loses all existing data!
```
**After:**
```typescript
data.models = mergeModelsList(data.models, newModels, MergeStrategies.FILL_UNDEFINED)
```
### From Manual Merge Logic
**Before:**
```typescript
for (const newModel of newModels) {
const existing = data.models.find(m => m.id === newModel.id)
if (existing && existing.description) {
newModel.description = existing.description
}
// ... lots of manual field checking
}
```
**After:**
```typescript
data.models = mergeModelsList(data.models, newModels, {
preserveExisting: true,
neverOverwrite: ['description']
})
```


@ -0,0 +1,193 @@
# Provider Endpoint Schema Design ✅ IMPLEMENTED
## Problem Analysis
### Previous Issues (SOLVED)
1. ❌ **provider_type** was semantically unclear - represented API format/protocol, not provider type
2. ❌ **api_host** was in metadata but is a core configuration field
3. ❌ **anthropic_api_host** existed as a separate field for dual-protocol providers
4. ❌ **supported_endpoints** was too coarse-grained (all were "CHAT_COMPLETIONS")
5. ❌ No clear mapping between endpoint types, their API hosts, and request formats
### Real-World Patterns
Different LLM providers use different API formats:
- **OpenAI**: Covers `/v1/chat/completions`, `/v1/embeddings`, `/v1/images/generations`, etc.
- **Anthropic**: `/v1/messages` (Claude API)
- **Gemini**: Custom Google API format
- **DeepSeek**: Supports both OpenAI format AND Anthropic format at different base URLs
### Key Insight
Most providers share the same **base_url** for all their endpoints - only the API **format** and endpoint **path** differ.
## Final Schema Design (IMPLEMENTED)
### Two-Layer Abstraction
1. **Endpoint Type** - What functionality (chat, embeddings, images, etc.)
2. **API Format** - What protocol (OpenAI, Anthropic, Gemini, etc.)
```typescript
// Endpoint types - represents the API functionality
export const EndpointTypeSchema = z.enum([
// LLM endpoints
'CHAT_COMPLETIONS',
'TEXT_COMPLETIONS',
// Embedding endpoints
'EMBEDDINGS',
'RERANK',
// Image endpoints
'IMAGE_GENERATION',
'IMAGE_EDIT',
'IMAGE_VARIATION',
// Audio endpoints
'AUDIO_TRANSCRIPTION',
'AUDIO_TRANSLATION',
'TEXT_TO_SPEECH',
// Video endpoints
'VIDEO_GENERATION'
])
// API format types - represents the protocol/format of the API
export const ApiFormatSchema = z.enum([
'OPENAI', // OpenAI standard format (covers chat, embeddings, images, etc.)
'ANTHROPIC', // Anthropic format
'GEMINI', // Google Gemini API format
'CUSTOM' // Custom/proprietary format
])
// Format configuration - maps API format to base URL
export const FormatConfigSchema = z.object({
format: ApiFormatSchema,
base_url: z.string().url(),
default: z.boolean().default(false)
})
// Provider schema with format configurations
export const ProviderConfigSchema = z.object({
id: ProviderIdSchema,
name: z.string(),
description: z.string().optional(),
authentication: AuthenticationSchema.default('API_KEY'),
// API format configurations
// Each provider can support multiple API formats (e.g., OpenAI + Anthropic)
formats: z.array(FormatConfigSchema).min(1)
.refine((formats) => formats.filter(f => f.default).length <= 1, {
message: 'Only one format can be marked as default'
}),
// Supported endpoint types (optional, for documentation)
supported_endpoints: z.array(EndpointTypeSchema).optional(),
// API compatibility - kept for online updates
api_compatibility: ApiCompatibilitySchema.optional(),
documentation: z.string().url().optional(),
website: z.string().url().optional(),
deprecated: z.boolean().default(false),
// Additional metadata (only truly extra fields go here)
metadata: MetadataSchema
})
```
### Example Data
#### Single Format Provider (OpenAI)
```json
{
"id": "openai",
"name": "OpenAI",
"formats": [
{
"format": "OPENAI",
"base_url": "https://api.openai.com",
"default": true
}
],
"supported_endpoints": [
"CHAT_COMPLETIONS",
"EMBEDDINGS",
"IMAGE_GENERATION",
"TEXT_TO_SPEECH",
"AUDIO_TRANSCRIPTION"
]
}
```
#### Multi-Format Provider (DeepSeek)
```json
{
"id": "deepseek",
"name": "DeepSeek",
"formats": [
{
"format": "OPENAI",
"base_url": "https://api.deepseek.com",
"default": true
},
{
"format": "ANTHROPIC",
"base_url": "https://api.deepseek.com/anthropic"
}
],
"supported_endpoints": ["CHAT_COMPLETIONS"]
}
```
#### Custom Format Provider (Anthropic)
```json
{
"id": "anthropic",
"name": "Anthropic",
"formats": [
{
"format": "ANTHROPIC",
"base_url": "https://api.anthropic.com",
"default": true
}
],
"supported_endpoints": ["CHAT_COMPLETIONS"]
}
```
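These examples can be validated with the schema above. A minimal sketch; the import path and the exact required fields (e.g. the `MetadataSchema` shape) are assumptions:
```typescript
import { ProviderConfigSchema } from './src/schemas' // assumed export path

// Validate the DeepSeek example above; parse() throws if, for instance,
// more than one format were marked as default.
const deepseek = ProviderConfigSchema.parse({
  id: 'deepseek',
  name: 'DeepSeek',
  formats: [
    { format: 'OPENAI', base_url: 'https://api.deepseek.com', default: true },
    { format: 'ANTHROPIC', base_url: 'https://api.deepseek.com/anthropic' }
  ],
  supported_endpoints: ['CHAT_COMPLETIONS'],
  metadata: { tags: ['official'] } // MetadataSchema shape is assumed here
})
```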
## Benefits
1. ✅ **Clear Semantics**: `format` clearly indicates the API protocol, `endpoint_type` indicates functionality
2. ✅ **Simplified Structure**: Same base_url for most providers, only format differs
3. ✅ **Multi-Protocol Support**: Providers can support multiple formats naturally (e.g., DeepSeek)
4. ✅ **Default Selection**: Client knows which format to use by default
5. ✅ **No Metadata Pollution**: Core config fields are top-level, not in metadata
6. ✅ **Extensible**: Easy to add new endpoint types or formats
7. ✅ **Business Logic Separation**: Schema doesn't encode priority/selection logic - that's for client code
## Migration Completed ✅
Migration script: `scripts/migrate-providers-to-formats.ts`
Transformations applied:
- `metadata.provider_type` → `formats[0].format` (mapped to OPENAI/ANTHROPIC/GEMINI)
- `metadata.api_host` → `formats[0].base_url`
- `metadata.anthropic_api_host` → `formats[1]` with format: ANTHROPIC
- `supported_endpoints` → set to ["CHAT_COMPLETIONS"] as default
- Cleaned metadata to remove migrated fields
## Special Cases
### Replicate (per-model endpoints)
For providers where each model has a unique endpoint URL:
- Provider defines `formats: [{ format: "CUSTOM", base_url: "https://api.replicate.com", default: true }]`
- Model stores custom endpoint in `metadata.custom_endpoint` or similar field
- Client code handles CUSTOM format by checking model metadata
### Future: Multiple Endpoint Types
When providers add support for embeddings, images, etc.:
- Simply update `supported_endpoints` array
- Client code maps `endpoint_type + format` to correct API path
- Example: `EMBEDDINGS + OPENAI` → `{base_url}/v1/embeddings`
- Example: `CHAT_COMPLETIONS + ANTHROPIC` → `{base_url}/v1/messages`
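A hedged sketch of such a mapping in client code (the path table is an assumption based on the examples in this document):
```typescript
// Sketch of mapping endpoint_type + format to a concrete API URL.
type ApiFormat = 'OPENAI' | 'ANTHROPIC' | 'GEMINI' | 'CUSTOM'
type EndpointType = 'CHAT_COMPLETIONS' | 'EMBEDDINGS'

const PATHS: Record<ApiFormat, Partial<Record<EndpointType, string>>> = {
  OPENAI: { CHAT_COMPLETIONS: '/v1/chat/completions', EMBEDDINGS: '/v1/embeddings' },
  ANTHROPIC: { CHAT_COMPLETIONS: '/v1/messages' },
  GEMINI: {},  // Gemini paths are model-specific (generateContent), resolved elsewhere
  CUSTOM: {}   // resolved from model metadata (e.g. Replicate)
}

function buildEndpointUrl(baseUrl: string, format: ApiFormat, endpoint: EndpointType): string | undefined {
  const path = PATHS[format][endpoint]
  return path ? `${baseUrl.replace(/\/$/, '')}${path}` : undefined
}

// buildEndpointUrl('https://api.deepseek.com', 'OPENAI', 'CHAT_COMPLETIONS')
// → 'https://api.deepseek.com/v1/chat/completions'
```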


@ -12,7 +12,9 @@
"clean": "rm -rf dist",
"test": "vitest run",
"test:watch": "vitest",
"import:aihubmix": "tsx scripts/import-aihubmix.ts"
"import:aihubmix": "tsx scripts/import-aihubmix.ts",
"import:openrouter": "tsx scripts/import-openrouter.ts",
"sync:all": "tsx scripts/sync-all-providers.ts"
},
"author": "Cherry Studio",
"license": "MIT",
@ -38,6 +40,8 @@
},
"devDependencies": {
"@types/json-schema": "^7.0.15",
"@types/node": "^24.10.2",
"dotenv": "^17.2.3",
"tsdown": "^0.16.6",
"typescript": "^5.9.3",
"vitest": "^4.0.13",


@ -0,0 +1,41 @@
#!/usr/bin/env tsx
/**
* Clean up models with invalid pricing (null values)
*/
import fs from 'fs/promises'
import path from 'path'
const DATA_DIR = path.join(__dirname, '../data')
async function cleanupInvalidPricing() {
console.log('Cleaning up models with invalid pricing...\n')
const modelsPath = path.join(DATA_DIR, 'models.json')
const modelsData = JSON.parse(await fs.readFile(modelsPath, 'utf-8'))
let fixed = 0
for (const model of modelsData.models) {
if (model.pricing) {
const hasNullInput = model.pricing.input?.per_million_tokens == null
const hasNullOutput = model.pricing.output?.per_million_tokens == null
if (hasNullInput || hasNullOutput) {
console.log(`Removing invalid pricing from: ${model.id}`)
delete model.pricing
fixed++
}
}
}
if (fixed > 0) {
await fs.writeFile(modelsPath, JSON.stringify(modelsData, null, 2) + '\n', 'utf-8')
console.log(`\n✓ Fixed ${fixed} models with invalid pricing`)
} else {
console.log('✓ No invalid pricing found')
}
}
cleanupInvalidPricing().catch(console.error)


@ -1,76 +0,0 @@
#!/usr/bin/env tsx
/**
* Cleanup script for data/overrides.json
* Removes deprecated tracking fields: last_updated, updated_by
* These fields will be removed from override schema as git provides better tracking
*/
import * as fs from 'fs/promises'
import * as path from 'path'
interface Override {
provider_id: string
model_id: string
disabled?: boolean
reason?: string
priority?: number
last_updated?: string // To be removed
updated_by?: string // To be removed
limits?: unknown
pricing?: unknown
capabilities?: unknown
reasoning?: unknown
parameters?: unknown
replace_with?: string
[key: string]: unknown
}
interface OverrideFile {
version: string
overrides: Override[]
}
async function cleanupOverrides(): Promise<void> {
const overridesPath = path.join(process.cwd(), 'data', 'overrides.json')
console.log('Reading overrides file...')
const content = await fs.readFile(overridesPath, 'utf-8')
const data: OverrideFile = JSON.parse(content)
console.log(`Found ${data.overrides.length} override entries`)
let removedCount = 0
const cleanedOverrides = data.overrides.map((override) => {
const cleaned: Override = { ...override }
// Remove deprecated tracking fields
if ('last_updated' in cleaned) {
delete cleaned.last_updated
removedCount++
}
if ('updated_by' in cleaned) {
delete cleaned.updated_by
}
return cleaned
})
const cleanedData: OverrideFile = {
version: data.version,
overrides: cleanedOverrides
}
// Write cleaned data back
console.log(`Removing last_updated and updated_by from ${removedCount} entries...`)
await fs.writeFile(overridesPath, JSON.stringify(cleanedData, null, 2) + '\n', 'utf-8')
console.log('✓ Cleanup completed successfully')
console.log(`✓ Saved ${cleanedOverrides.length} cleaned override entries`)
}
// Run the cleanup
cleanupOverrides().catch((error) => {
console.error('Cleanup failed:', error)
process.exit(1)
})


@ -1,273 +0,0 @@
#!/usr/bin/env tsx
import * as fs from 'fs'
import * as path from 'path'
// Types based on AIHubMix API structure
interface AiHubMixModel {
model_id: string
desc: string
pricing: {
cache_read?: number
cache_write?: number
input: number
output: number
}
types: string
features: string
input_modalities: string
max_output: number
context_length: number
}
interface AiHubMixResponse {
data: AiHubMixModel[]
}
// Transformer function (simplified version of the transformer class)
function transformModel(apiModel: AiHubMixModel) {
const capabilities = mapCapabilities(apiModel.types, apiModel.features)
const inputModalities = mapModalities(apiModel.input_modalities)
const outputModalities = inferOutputModalities(apiModel.types)
const tags = extractTags(apiModel)
const category = inferCategory(apiModel.types)
const transformed: any = {
id: apiModel.model_id,
description: apiModel.desc || undefined,
capabilities: capabilities.length > 0 ? capabilities : undefined,
input_modalities: inputModalities.length > 0 ? inputModalities : undefined,
output_modalities: outputModalities.length > 0 ? outputModalities : undefined,
context_window: apiModel.context_length || undefined,
max_output_tokens: apiModel.max_output || undefined,
pricing: {
input: {
per_million_tokens: apiModel.pricing.input,
currency: 'USD'
},
output: {
per_million_tokens: apiModel.pricing.output,
currency: 'USD'
}
},
metadata: {
source: 'aihubmix',
tags: tags.length > 0 ? tags : undefined,
category: category || undefined,
original_types: apiModel.types || undefined,
original_features: apiModel.features || undefined
}
}
// Add optional pricing fields only if they exist
if (apiModel.pricing.cache_read !== undefined) {
transformed.pricing.cache_read = {
per_million_tokens: apiModel.pricing.cache_read,
currency: 'USD'
}
}
if (apiModel.pricing.cache_write !== undefined) {
transformed.pricing.cache_write = {
per_million_tokens: apiModel.pricing.cache_write,
currency: 'USD'
}
}
// Remove undefined description
if (!apiModel.desc) {
delete transformed.description
}
return transformed
}
function mapCapabilities(types: string, features: string): string[] {
const caps = new Set<string>()
if (features) {
const featureList = features
.split(',')
.map((f) => f.trim().toLowerCase())
.filter(Boolean)
featureList.forEach((feature) => {
switch (feature) {
case 'thinking':
caps.add('REASONING')
break
case 'function_calling':
case 'tools':
caps.add('FUNCTION_CALL')
break
case 'structured_outputs':
caps.add('STRUCTURED_OUTPUT')
break
case 'web':
case 'deepsearch':
caps.add('WEB_SEARCH')
break
}
})
}
if (types) {
const typeList = types
.split(',')
.map((t) => t.trim().toLowerCase())
.filter(Boolean)
typeList.forEach((type) => {
switch (type) {
case 'image_generation':
caps.add('IMAGE_GENERATION')
break
case 'video':
caps.add('VIDEO_GENERATION')
break
}
})
}
return Array.from(caps)
}
function mapModalities(modalitiesCSV: string): string[] {
if (!modalitiesCSV) {
return []
}
const modalities = new Set<string>()
const modalityList = modalitiesCSV
.split(',')
.map((m) => m.trim().toUpperCase())
.filter(Boolean)
modalityList.forEach((m) => {
switch (m) {
case 'TEXT':
modalities.add('TEXT')
break
case 'IMAGE':
modalities.add('VISION')
break
case 'AUDIO':
modalities.add('AUDIO')
break
case 'VIDEO':
modalities.add('VIDEO')
break
}
})
return Array.from(modalities)
}
function inferOutputModalities(types: string): string[] {
if (!types) {
return []
}
const typeList = types
.split(',')
.map((t) => t.trim().toLowerCase())
.filter(Boolean)
if (typeList.includes('image_generation')) {
return ['VISION']
}
if (typeList.includes('video')) {
return ['VIDEO']
}
return []
}
function extractTags(apiModel: AiHubMixModel): string[] {
const tags: string[] = []
if (apiModel.types) {
const types = apiModel.types.split(',').map((t) => t.trim()).filter(Boolean)
tags.push(...types)
}
if (apiModel.features) {
const features = apiModel.features.split(',').map((f) => f.trim()).filter(Boolean)
tags.push(...features)
}
return Array.from(new Set(tags))
}
function inferCategory(types: string): string {
if (!types) {
return ''
}
const typeList = types
.split(',')
.map((t) => t.trim().toLowerCase())
.filter(Boolean)
if (typeList.includes('image_generation')) {
return 'image-generation'
}
if (typeList.includes('video')) {
return 'video-generation'
}
return ''
}
// Main function
async function generateAiHubMixModels() {
console.log('Fetching models from AIHubMix API...')
const apiUrl = 'https://aihubmix.com/api/v1/models'
try {
const response = await fetch(apiUrl)
if (!response.ok) {
throw new Error(`API error: ${response.status} ${response.statusText}`)
}
const json: AiHubMixResponse = await response.json()
console.log(`✓ Fetched ${json.data.length} models from AIHubMix`)
// Transform to internal format
console.log('Transforming models...')
const models = json.data.map((m) => transformModel(m))
console.log(`✓ Transformed ${models.length} models`)
// Prepare output
const output = {
version: new Date().toISOString().split('T')[0].replace(/-/g, '.'),
models
}
// Write to aihubmix_models.json
const outputPath = path.join(__dirname, '../data/aihubmix_models.json')
fs.writeFileSync(outputPath, JSON.stringify(output, null, 2) + '\n', 'utf-8')
console.log(`✓ Saved ${models.length} models to ${outputPath}`)
// Also update the main models.json by replacing the models array
const mainModelsPath = path.join(__dirname, '../data/models.json')
const mainModelsData = JSON.parse(fs.readFileSync(mainModelsPath, 'utf-8'))
mainModelsData.models = output.models
fs.writeFileSync(mainModelsPath, JSON.stringify(mainModelsData, null, 2) + '\n', 'utf-8')
console.log(`✓ Updated main models.json with ${models.length} models`)
} catch (error) {
console.error('✗ Failed to generate AIHubMix models:', error)
process.exit(1)
}
}
// Run the script
generateAiHubMixModels().catch(console.error)


@ -0,0 +1,431 @@
#!/usr/bin/env tsx
/**
* Generate providers.json from Cherry Studio provider configuration
* This script parses the Cherry Studio providers.ts file and converts it to catalog format v2
* With automatic models_api configuration for OpenAI-compatible providers
*/
import fs from 'fs'
import path from 'path'
// Endpoint types (must match schema)
type EndpointType =
| 'CHAT_COMPLETIONS'
| 'TEXT_COMPLETIONS'
| 'MESSAGES'
| 'RESPONSES'
| 'GENERATE_CONTENT'
| 'EMBEDDINGS'
| 'RERANK'
| 'IMAGE_GENERATION'
| 'IMAGE_EDIT'
| 'IMAGE_VARIATION'
| 'AUDIO_TRANSCRIPTION'
| 'AUDIO_TRANSLATION'
| 'TEXT_TO_SPEECH'
| 'VIDEO_GENERATION'
// V2 Provider data structure with formats and models_api
interface ProviderConfig {
id: string
name: string
description: string
authentication: string
formats: Array<{
format: string
base_url: string
default?: boolean
}>
supported_endpoints: EndpointType[]
api_compatibility: {
supports_array_content: boolean
supports_stream_options: boolean
supports_developer_role: boolean
supports_service_tier: boolean
supports_thinking_control: boolean
supports_api_version: boolean
}
documentation?: string
website?: string
deprecated: boolean
metadata: {
tags: string[]
}
models_api?: {
endpoints: Array<{
url: string
endpoint_type: EndpointType
format: string
transformer?: string
}>
enabled: boolean
update_frequency: string
}
}
// Simple Cherry Studio provider structure (what we parse from the file)
interface CherryStudioProvider {
id: string
name: string
type: string
apiHost: string
anthropicApiHost?: string
docs?: string
website?: string
}
// Providers to skip (local deployments or special cases)
const SKIP_PROVIDERS = new Set([
'ollama',
'lmstudio',
'new-api',
'ovms',
'xinference',
'vllm',
'cherryai', // Skip CherryAI as it's a special system provider
'azure-openai', // Requires special handling
'vertexai', // Requires special handling
'aws-bedrock', // Requires special handling
'ai-gateway', // Requires special handling
'gpustack' // No API host
])
// Providers without /models API endpoint (no model listing available)
const NO_MODELS_API = new Set(['perplexity', 'cephalon', 'minimax', 'longcat', 'voyageai', 'jina'])
// Providers with custom transformers
const CUSTOM_TRANSFORMERS: Record<string, string> = {
openrouter: 'openrouter',
aihubmix: 'aihubmix'
}
// Providers with custom models endpoint
const CUSTOM_ENDPOINTS: Record<string, string> = {
github: 'https://models.github.ai/inference/models',
copilot: 'https://api.githubcopilot.com/models'
}
// Compatibility arrays from Cherry Studio
const NOT_SUPPORT_ARRAY_CONTENT = [
'deepseek',
'baichuan',
'minimax',
'xirang',
'poe',
'cephalon'
]
const NOT_SUPPORT_STREAM_OPTIONS = ['mistral']
const NOT_SUPPORT_DEVELOPER_ROLE = ['poe', 'qiniu']
const NOT_SUPPORT_THINKING_CONTROL = ['ollama', 'lmstudio', 'nvidia']
const NOT_SUPPORT_API_VERSION = ['github', 'copilot', 'perplexity']
const NOT_SUPPORT_SERVICE_TIER = ['github', 'copilot', 'cerebras']
/**
* Parse Cherry Studio providers.ts file to extract provider configurations
*/
function parseCherryStudioProviders(filePath: string): Record<string, CherryStudioProvider> {
const content = fs.readFileSync(filePath, 'utf-8')
const providers: Record<string, CherryStudioProvider> = {}
// Extract PROVIDER_URLS for documentation/website info
const urlsMatch = content.match(/export const PROVIDER_URLS.*?=\s*{([^}]+(?:{[^}]*}[^}]*)*)\s*}/s)
const urlsData: Record<string, { docs?: string; website?: string }> = {}
if (urlsMatch) {
const urlsContent = urlsMatch[1]
const providerUrlMatches = urlsContent.matchAll(/['"]?(\w[\w-]*)['"]?\s*:\s*{([^}]+(?:{[^}]*}[^}]*)*?)}/g)
for (const match of providerUrlMatches) {
const providerId = match[1]
const urlConfig = match[2]
const docsMatch = urlConfig.match(/docs:\s*['"]([^'"]+)['"]/)?.[1]
const websiteMatch = urlConfig.match(/official:\s*['"]([^'"]+)['"]/)?.[1]
urlsData[providerId] = {
docs: docsMatch,
website: websiteMatch
}
}
}
// Extract SYSTEM_PROVIDERS_CONFIG
const configMatch = content.match(/export const SYSTEM_PROVIDERS_CONFIG.*?=\s*{([^]*?)\n}\s+as const/s)
if (!configMatch) {
throw new Error('Could not find SYSTEM_PROVIDERS_CONFIG in providers.ts')
}
const configContent = configMatch[1]
// Match each provider block
const providerMatches = configContent.matchAll(/['"]?([\w-]+)['"]?:\s*{([^}]+(?:{[^}]*}[^}]*)*?)}/gs)
for (const match of providerMatches) {
const providerId = match[1]
const providerConfig = match[2]
// Skip if in SKIP_PROVIDERS
if (SKIP_PROVIDERS.has(providerId)) {
continue
}
// Extract fields
const idMatch = providerConfig.match(/id:\s*['"]([^'"]+)['"]/)?.[1]
const nameMatch = providerConfig.match(/name:\s*['"]([^'"]+)['"]/)?.[1]
const typeMatch = providerConfig.match(/type:\s*['"]([^'"]+)['"]/)?.[1]
const apiHostMatch = providerConfig.match(/apiHost:\s*['"]([^'"]+)['"]/)?.[1]
const anthropicApiHostMatch = providerConfig.match(/anthropicApiHost:\s*['"]([^'"]+)['"]/)?.[1]
if (!idMatch || !nameMatch || !typeMatch || !apiHostMatch) {
continue
}
// Only process providers with actual API hosts (not empty or localhost for non-supported ones)
if (!apiHostMatch || apiHostMatch === '') {
continue
}
providers[providerId] = {
id: idMatch,
name: nameMatch,
type: typeMatch,
apiHost: apiHostMatch,
anthropicApiHost: anthropicApiHostMatch,
docs: urlsData[providerId]?.docs,
website: urlsData[providerId]?.website
}
}
return providers
}
/**
* Generate models_api configuration for OpenAI-compatible providers
*/
function generateModelsApiConfig(cherryProvider: CherryStudioProvider): ProviderConfig['models_api'] | undefined {
// Skip non-OpenAI types
if (!['openai', 'openai-response'].includes(cherryProvider.type)) {
return undefined
}
// Skip providers without /models API endpoint
if (NO_MODELS_API.has(cherryProvider.id)) {
return undefined
}
const baseUrl = cherryProvider.apiHost.replace(/\/$/, '')
const endpoints: ProviderConfig['models_api']['endpoints'] = []
// Build models endpoint URL for chat completions
let modelsUrl: string
if (CUSTOM_ENDPOINTS[cherryProvider.id]) {
modelsUrl = CUSTOM_ENDPOINTS[cherryProvider.id]
} else {
// If base_url already contains version path (/v1, /v2, /v1beta, /v1alpha, etc.), just append /models
// Otherwise check if provider supports API versioning
// Matches: /v1, /v2, /v1beta, /v1alpha, /v2beta2, etc.
if (/\/v\d+(alpha|beta)?(\d+)?/.test(baseUrl)) {
modelsUrl = `${baseUrl}/models`
} else if (NOT_SUPPORT_API_VERSION.includes(cherryProvider.id)) {
// Providers that don't support /v1/ prefix, use /models directly
modelsUrl = `${baseUrl}/models`
} else {
modelsUrl = `${baseUrl}/v1/models`
}
}
// Chat completions endpoint (most common)
endpoints.push({
url: modelsUrl,
endpoint_type: 'CHAT_COMPLETIONS',
format: 'OPENAI',
...(CUSTOM_TRANSFORMERS[cherryProvider.id] && {
transformer: CUSTOM_TRANSFORMERS[cherryProvider.id]
})
})
// Determine update frequency based on provider type
const updateFrequency = ['openrouter', 'aihubmix'].includes(cherryProvider.id)
? 'realtime' // Aggregators change frequently
: 'daily' // Official providers change less often
return {
endpoints,
enabled: true,
update_frequency: updateFrequency
}
}
/**
* Generate supported endpoints based on provider type
*/
function generateSupportedEndpoints(cherryProvider: CherryStudioProvider): EndpointType[] {
const endpoints: EndpointType[] = []
switch (cherryProvider.type) {
case 'openai':
case 'openai-response':
// OpenAI-compatible providers support chat completions
endpoints.push('CHAT_COMPLETIONS')
// OpenAI official and some aggregators support more endpoints
if (['openai', 'openrouter', 'together'].includes(cherryProvider.id)) {
endpoints.push('EMBEDDINGS')
}
// OpenAI official supports images, audio, and responses API
if (cherryProvider.id === 'openai') {
endpoints.push('RESPONSES', 'IMAGE_GENERATION', 'AUDIO_TRANSCRIPTION', 'TEXT_TO_SPEECH')
}
// If provider has anthropicApiHost, it also supports MESSAGES
if (cherryProvider.anthropicApiHost) {
endpoints.push('MESSAGES')
}
break
case 'anthropic':
// Anthropic uses Messages API
endpoints.push('MESSAGES')
break
case 'gemini':
// Gemini uses generateContent API
endpoints.push('GENERATE_CONTENT')
// Gemini also supports embeddings
endpoints.push('EMBEDDINGS')
break
default:
// Default to chat completions for unknown types
endpoints.push('CHAT_COMPLETIONS')
}
return endpoints
}
/**
* Create catalog provider config from Cherry Studio config
*/
function createProviderConfig(cherryProvider: CherryStudioProvider): ProviderConfig {
const formats: ProviderConfig['formats'] = []
// Add OpenAI format for openai-type providers
if (cherryProvider.type === 'openai' || cherryProvider.type === 'openai-response') {
formats.push({
format: 'OPENAI',
base_url: cherryProvider.apiHost,
default: true
})
}
// Add Anthropic format if anthropicApiHost is present
if (cherryProvider.anthropicApiHost) {
formats.push({
format: 'ANTHROPIC',
base_url: cherryProvider.anthropicApiHost
})
}
// For native Anthropic/Gemini providers
if (cherryProvider.type === 'anthropic') {
formats.push({
format: 'ANTHROPIC',
base_url: cherryProvider.apiHost,
default: true
})
}
if (cherryProvider.type === 'gemini') {
formats.push({
format: 'GEMINI',
base_url: cherryProvider.apiHost,
default: true
})
}
const provider: ProviderConfig = {
id: cherryProvider.id,
name: cherryProvider.name,
description: `${cherryProvider.name} - AI model provider`,
authentication: 'API_KEY',
formats,
supported_endpoints: generateSupportedEndpoints(cherryProvider),
api_compatibility: {
supports_array_content: !NOT_SUPPORT_ARRAY_CONTENT.includes(cherryProvider.id),
supports_stream_options: !NOT_SUPPORT_STREAM_OPTIONS.includes(cherryProvider.id),
supports_developer_role: !NOT_SUPPORT_DEVELOPER_ROLE.includes(cherryProvider.id),
supports_thinking_control: !NOT_SUPPORT_THINKING_CONTROL.includes(cherryProvider.id),
supports_api_version: !NOT_SUPPORT_API_VERSION.includes(cherryProvider.id),
supports_service_tier: !NOT_SUPPORT_SERVICE_TIER.includes(cherryProvider.id)
},
documentation: cherryProvider.docs,
website: cherryProvider.website,
deprecated: false,
metadata: {
tags: cherryProvider.type === 'openai' ? ['aggregator'] : ['official']
}
}
// Add models_api config
const modelsApi = generateModelsApiConfig(cherryProvider)
if (modelsApi) {
provider.models_api = modelsApi
}
return provider
}
async function generateProvidersJson() {
console.log('Generating providers.json from Cherry Studio configuration...\n')
// Path to Cherry Studio providers.ts
const cherryStudioPath = path.resolve(__dirname, '../../../src/renderer/src/config/providers.ts')
if (!fs.existsSync(cherryStudioPath)) {
throw new Error(`Cherry Studio providers.ts not found at: ${cherryStudioPath}`)
}
// Parse Cherry Studio providers
const cherryProviders = parseCherryStudioProviders(cherryStudioPath)
console.log(`Found ${Object.keys(cherryProviders).length} providers in Cherry Studio config`)
// Convert to catalog format
const providers = Object.values(cherryProviders).map(createProviderConfig)
const withModelsApi = providers.filter((p) => p.models_api)
console.log(`Generated ${providers.length} providers`)
console.log(` - With models_api: ${withModelsApi.length}`)
console.log(` - Without models_api: ${providers.length - withModelsApi.length}\n`)
const output = {
version: new Date().toISOString().split('T')[0].replace(/-/g, '.'),
providers
}
const outputPath = path.join(__dirname, '../data/providers.json')
await fs.promises.writeFile(outputPath, JSON.stringify(output, null, 2) + '\n', 'utf-8')
console.log(`✓ Saved to ${outputPath}`)
// List providers with models_api
console.log('\nProviders with models_api:')
withModelsApi.forEach((p) => {
const endpoint = p.models_api!.endpoints[0]
console.log(
` - ${p.id.padEnd(20)} ${endpoint.url}${endpoint.transformer ? ` (transformer: ${endpoint.transformer})` : ''}`
)
})
// List skipped providers
console.log(`\nSkipped ${SKIP_PROVIDERS.size} providers: ${Array.from(SKIP_PROVIDERS).join(', ')}`)
}
generateProvidersJson().catch(console.error)


@ -0,0 +1,84 @@
#!/usr/bin/env tsx
import * as fs from 'fs'
import * as path from 'path'
import { OpenRouterTransformer } from '../src/utils/importers/openrouter/transformer'
import { mergeModelsList, MergeStrategies } from '../src/utils/merge-utils'
import type { OpenRouterResponse } from '../src/utils/importers/openrouter/types'
async function importOpenRouterModels() {
console.log('Fetching models from OpenRouter API...')
const modelsApiUrl = 'https://openrouter.ai/api/v1/models'
const embeddingsApiUrl = 'https://openrouter.ai/api/v1/embeddings/models'
try {
// Fetch from both APIs in parallel
console.log(' - Fetching chat models...')
const [modelsResponse, embeddingsResponse] = await Promise.all([
fetch(modelsApiUrl),
fetch(embeddingsApiUrl)
])
if (!modelsResponse.ok) {
throw new Error(`Models API error: ${modelsResponse.status} ${modelsResponse.statusText}`)
}
if (!embeddingsResponse.ok) {
throw new Error(`Embeddings API error: ${embeddingsResponse.status} ${embeddingsResponse.statusText}`)
}
const modelsJson: OpenRouterResponse = await modelsResponse.json()
const embeddingsJson: OpenRouterResponse = await embeddingsResponse.json()
console.log(`✓ Fetched ${modelsJson.data.length} chat models from OpenRouter`)
console.log(`✓ Fetched ${embeddingsJson.data.length} embedding models from OpenRouter`)
// Combine both arrays
const json: OpenRouterResponse = {
data: [...modelsJson.data, ...embeddingsJson.data]
}
console.log(`✓ Total: ${json.data.length} models from OpenRouter`)
// Transform models
console.log('Transforming models...')
const transformer = new OpenRouterTransformer()
const models = json.data.map((m) => transformer.transform(m))
console.log(`✓ Transformed ${models.length} models`)
// Optional: Save raw OpenRouter data for review
const openrouterOutputPath = path.join(__dirname, '../data/openrouter-models.json')
const openrouterOutput = {
version: new Date().toISOString().split('T')[0].replace(/-/g, '.'),
models
}
fs.writeFileSync(openrouterOutputPath, JSON.stringify(openrouterOutput, null, 2) + '\n', 'utf-8')
console.log(`✓ Saved OpenRouter models to ${openrouterOutputPath}`)
// Load existing models.json
const mainModelsPath = path.join(__dirname, '../data/models.json')
const mainModelsData = JSON.parse(fs.readFileSync(mainModelsPath, 'utf-8'))
// Smart merge - only fill undefined values
console.log('Merging with existing models (preserving non-undefined values)...')
const mergedModels = mergeModelsList(
mainModelsData.models || [],
models,
MergeStrategies.FILL_UNDEFINED
)
// Save
mainModelsData.models = mergedModels
fs.writeFileSync(mainModelsPath, JSON.stringify(mainModelsData, null, 2) + '\n', 'utf-8')
console.log(`✓ Merged models.json: ${mergedModels.length} total models`)
console.log(` - Preserved existing non-undefined values`)
console.log(` - Filled in undefined values from OpenRouter`)
console.log(`\n✓ Import complete!`)
} catch (error) {
console.error('✗ Failed to import OpenRouter models:', error)
process.exit(1)
}
}
// Run the script
importOpenRouterModels().catch(console.error)


@ -1,39 +0,0 @@
#!/usr/bin/env tsx
/**
* Migration Script - Phase 2 Implementation
* Usage: npx tsx migrate.ts
*/
import * as path from 'path'
import { MigrationTool } from '../src/utils/migration'
async function main() {
const packageRoot = path.resolve(__dirname, '..')
const sourceDir = packageRoot
const outputDir = path.join(packageRoot, 'data')
console.log('🔧 Cherry Studio Catalog Migration - Phase 2')
console.log('==========================================')
console.log(`📁 Source: ${sourceDir}`)
console.log(`📁 Output: ${outputDir}`)
console.log('')
const tool = new MigrationTool(
path.join(sourceDir, 'provider_endpoints_support.json'),
path.join(sourceDir, 'model_prices_and_context_window.json'),
outputDir
)
try {
await tool.migrate()
console.log('')
console.log('🎉 Migration completed! Check the src/data/ directory for results.')
} catch (error) {
console.error('❌ Migration failed:', error)
process.exit(1)
}
}
main()


@ -1,38 +0,0 @@
#!/usr/bin/env tsx
import fs from 'fs'
import path from 'path'
// Read the models.json file
const modelsPath = path.join(__dirname, '../data/models.json')
const catalogData = JSON.parse(fs.readFileSync(modelsPath, 'utf8'))
console.log('Total models before filtering:', catalogData.models?.length || 0)
// Check if models array exists
if (!catalogData.models || !Array.isArray(catalogData.models)) {
console.error('❌ No models array found in the file')
process.exit(1)
}
// Filter out models ending with 'search'
const filteredModels = catalogData.models.filter((model: any) => {
if (model.id && model.id.endsWith('search')) {
console.log('Removing model:', model.id)
return false
}
return true
})
console.log('Total models after filtering:', filteredModels.length)
// Update the data with filtered models
const updatedData = {
...catalogData,
models: filteredModels
}
// Write the filtered data back to the file
fs.writeFileSync(modelsPath, JSON.stringify(updatedData, null, 2), 'utf8')
console.log('✅ Successfully removed models ending with "search"')


@ -0,0 +1,368 @@
#!/usr/bin/env tsx
/**
* Batch sync all provider models
* Fetches models from all providers with models_api configured (except OpenRouter and AIHubMix)
* OpenRouter and AIHubMix should be synced manually using import scripts as they are authoritative sources
*/
import { config } from 'dotenv'
import fs from 'fs/promises'
import path from 'path'
import type { ModelConfig, ModelsDataFile, OverridesDataFile, ProvidersDataFile } from '../src/schemas'
import { BaseImporter } from '../src/utils/importers/base/base-importer'
import { OpenAICompatibleTransformer } from '../src/utils/importers/base/base-transformer'
import { deduplicateOverrides, generateOverride, mergeOverrides } from '../src/utils/override-utils'
// Load environment variables
config({ path: path.join(__dirname, '../.env') })
const DATA_DIR = path.join(__dirname, '../data')
// Providers to skip (authoritative sources handled separately)
const SKIP_PROVIDERS = new Set(['openrouter', 'aihubmix'])
// Map provider IDs to environment variable names
const PROVIDER_ENV_MAP: Record<string, string> = {
cherryin: 'CHERRYIN_API_KEY',
silicon: 'SILICON_API_KEY',
ocoolai: 'OCOOLAI_API_KEY',
zhipu: 'ZHIPU_API_KEY',
deepseek: 'DEEPSEEK_API_KEY',
alayanew: 'ALAYANEW_API_KEY',
dmxapi: 'DMXAPI_API_KEY',
aionly: 'AIONLY_API_KEY',
burncloud: 'BURNCLOUD_API_KEY',
tokenflux: 'TOKENFLUX_API_KEY',
'302ai': 'AI_302_API_KEY',
cephalon: 'CEPHALON_API_KEY',
lanyun: 'LANYUN_API_KEY',
ph8: 'PH8_API_KEY',
sophnet: 'SOPHNET_API_KEY',
ppio: 'PPIO_API_KEY',
qiniu: 'QINIU_API_KEY',
openai: 'OPENAI_API_KEY',
github: 'GITHUB_API_KEY',
copilot: 'COPILOT_API_KEY',
yi: 'YI_API_KEY',
moonshot: 'MOONSHOT_API_KEY',
baichuan: 'BAICHUAN_API_KEY',
dashscope: 'DASHSCOPE_API_KEY',
stepfun: 'STEPFUN_API_KEY',
doubao: 'DOUBAO_API_KEY',
infini: 'INFINI_API_KEY',
minimax: 'MINIMAX_API_KEY',
groq: 'GROQ_API_KEY',
together: 'TOGETHER_API_KEY',
fireworks: 'FIREWORKS_API_KEY',
nvidia: 'NVIDIA_API_KEY',
grok: 'GROK_API_KEY',
hyperbolic: 'HYPERBOLIC_API_KEY',
mistral: 'MISTRAL_API_KEY',
jina: 'JINA_API_KEY',
perplexity: 'PERPLEXITY_API_KEY',
modelscope: 'MODELSCOPE_API_KEY',
xirang: 'XIRANG_API_KEY',
hunyuan: 'HUNYUAN_API_KEY',
'tencent-cloud-ti': 'TENCENT_CLOUD_TI_API_KEY',
'baidu-cloud': 'BAIDU_CLOUD_API_KEY',
voyageai: 'VOYAGEAI_API_KEY',
poe: 'POE_API_KEY',
longcat: 'LONGCAT_API_KEY',
huggingface: 'HUGGINGFACE_API_KEY',
cerebras: 'CEREBRAS_API_KEY'
}
/**
* Get API key for a provider from environment variables
*/
function getApiKey(providerId: string): string | undefined {
const envVarName = PROVIDER_ENV_MAP[providerId]
if (!envVarName) return undefined
return process.env[envVarName]
}
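// e.g. getApiKey('groq') looks up PROVIDER_ENV_MAP['groq'] and returns process.env.GROQ_API_KEY
// (undefined if that variable is not set in .env).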
interface SyncResult {
providerId: string
status: 'success' | 'skipped' | 'error'
fetched?: number
newModels?: number
overridesGenerated?: number
overridesMerged?: number
error?: string
}
/**
* Sync models from a single provider
*/
async function syncProvider(
providerId: string,
provider: any,
baseModels: ModelConfig[],
existingOverrides: any[]
): Promise<SyncResult> {
try {
console.log(`\n[${providerId}] Syncing models...`)
// Get API key from environment
const apiKey = getApiKey(providerId)
if (!apiKey) {
console.warn(` ⚠ No API key found for ${providerId} (env: ${PROVIDER_ENV_MAP[providerId]})`)
console.warn(` Set ${PROVIDER_ENV_MAP[providerId]} in .env file`)
}
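// Note: the fetch is still attempted without a key; endpoints that require
// authentication will typically fail and are reported per endpoint below.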
// Initialize importer with default OpenAI-compatible transformer
const importer = new BaseImporter()
const transformer = new OpenAICompatibleTransformer()
// Fetch from all endpoints
const allProviderModels: ModelConfig[] = []
for (const endpoint of provider.models_api.endpoints) {
try {
console.log(` - Fetching from ${endpoint.url}`)
const result = await importer.importFromEndpoint(providerId, endpoint, transformer, apiKey)
allProviderModels.push(...result.models)
console.log(` ✓ Fetched ${result.models.length} models`)
} catch (error) {
console.error(` ✗ Failed to fetch from ${endpoint.url}:`, error instanceof Error ? error.message : error)
}
}
if (allProviderModels.length === 0) {
return {
providerId,
status: 'error',
error: 'No models fetched from any endpoint'
}
}
// Statistics
const stats = {
fetched: allProviderModels.length,
newModels: 0,
overridesGenerated: 0,
overridesMerged: 0
}
// Check for new models (not in base models.json)
const baseModelIds = new Set(baseModels.map((m) => m.id.toLowerCase()))
const newModels = allProviderModels.filter((m) => !baseModelIds.has(m.id.toLowerCase()))
stats.newModels = newModels.length
if (newModels.length > 0) {
console.log(` + Adding ${newModels.length} new models to models.json`)
baseModels.push(...newModels)
}
// Generate or update overrides for existing models
const newOverrides = []
for (const providerModel of allProviderModels) {
const baseModel = baseModels.find((m) => m.id.toLowerCase() === providerModel.id.toLowerCase())
if (!baseModel) continue // Skip new models (already added)
// Always generate override to mark provider support (even if identical)
const generatedOverride = generateOverride(baseModel, providerModel, providerId, {
priority: 0,
alwaysCreate: true // Always create override to mark provider support
})
if (generatedOverride) {
// Check if manual override exists (priority >= 100)
const existingOverride = existingOverrides.find(
(o: any) => o.provider_id === providerId && o.model_id.toLowerCase() === providerModel.id.toLowerCase()
)
if (existingOverride) {
// Merge with existing override (preserve manual edits)
const mergedOverride = mergeOverrides(existingOverride, generatedOverride, {
preserveManual: true,
manualPriorityThreshold: 100
})
newOverrides.push(mergedOverride)
stats.overridesMerged++
} else {
// Add new override
newOverrides.push(generatedOverride)
stats.overridesGenerated++
}
}
}
// Update existingOverrides array
if (newOverrides.length > 0) {
// Remove old auto-generated overrides for this provider (priority < 100)
const filteredOverrides = existingOverrides.filter(
(o: any) => !(o.provider_id === providerId && o.priority < 100)
)
// Add new overrides
existingOverrides.length = 0
existingOverrides.push(...filteredOverrides, ...newOverrides)
console.log(` + Generated ${stats.overridesGenerated} new overrides, merged ${stats.overridesMerged} existing`)
}
return {
providerId,
status: 'success',
...stats
}
} catch (error) {
console.error(`[${providerId}] Error:`, error instanceof Error ? error.message : error)
return {
providerId,
status: 'error',
error: error instanceof Error ? error.message : 'Unknown error'
}
}
}
/**
* Main sync function
*/
async function syncAllProviders() {
console.log('='.repeat(60))
console.log('Batch Provider Model Sync')
console.log('='.repeat(60))
console.log('\nLoading data files...\n')
try {
// Load providers
const providersPath = path.join(DATA_DIR, 'providers.json')
const providersData: ProvidersDataFile = JSON.parse(await fs.readFile(providersPath, 'utf-8'))
// Load models
const modelsPath = path.join(DATA_DIR, 'models.json')
const modelsData: ModelsDataFile = JSON.parse(await fs.readFile(modelsPath, 'utf-8'))
// Load overrides
const overridesPath = path.join(DATA_DIR, 'overrides.json')
let overridesData: OverridesDataFile
try {
overridesData = JSON.parse(await fs.readFile(overridesPath, 'utf-8'))
} catch {
overridesData = {
version: new Date().toISOString().split('T')[0].replace(/-/g, '.'),
overrides: []
}
}
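// If overrides.json was missing, the fallback above seeds a date-based version string, e.g. '2025.06.30'.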
console.log(`Loaded:`)
console.log(` - ${providersData.providers.length} providers`)
console.log(` - ${modelsData.models.length} models`)
console.log(` - ${overridesData.overrides.length} overrides`)
// Filter providers with models_api enabled (excluding skip list)
const providersToSync = providersData.providers.filter(
(p) => p.models_api && p.models_api.enabled && !SKIP_PROVIDERS.has(p.id)
)
console.log(`\nProviders to sync: ${providersToSync.length}`)
console.log(
`Skipping: ${Array.from(SKIP_PROVIDERS).join(', ')} (authoritative sources, use import scripts instead)\n`
)
if (providersToSync.length === 0) {
console.log('No providers to sync.')
return
}
// Check API keys availability
const providersWithKeys = providersToSync.filter((p) => getApiKey(p.id))
const providersWithoutKeys = providersToSync.filter((p) => !getApiKey(p.id))
console.log(`API Keys Status:`)
console.log(` ✓ Found: ${providersWithKeys.length}`)
console.log(` ✗ Missing: ${providersWithoutKeys.length}`)
if (providersWithoutKeys.length > 0) {
console.log(`\nProviders without API keys (will likely fail):`)
providersWithoutKeys.forEach((p) => {
console.log(` - ${p.id.padEnd(20)} (env: ${PROVIDER_ENV_MAP[p.id]})`)
})
console.log(`\nTo configure API keys:`)
console.log(` 1. Copy .env.example to .env`)
console.log(` 2. Fill in your API keys`)
console.log(` 3. Re-run this script\n`)
}
// Sync each provider
const results: SyncResult[] = []
for (const provider of providersToSync) {
const result = await syncProvider(provider.id, provider, modelsData.models, overridesData.overrides)
results.push(result)
// Update last_synced timestamp
if (result.status === 'success' && provider.models_api) {
provider.models_api.last_synced = new Date().toISOString()
}
// Small delay to avoid rate limiting
await new Promise((resolve) => setTimeout(resolve, 1000))
}
// Deduplicate overrides
console.log('\nDeduplicating overrides...')
const beforeCount = overridesData.overrides.length
overridesData.overrides = deduplicateOverrides(overridesData.overrides)
const afterCount = overridesData.overrides.length
if (beforeCount !== afterCount) {
console.log(` Removed ${beforeCount - afterCount} duplicate overrides`)
}
// Save all data files
console.log('\nSaving data files...')
await fs.writeFile(providersPath, JSON.stringify(providersData, null, 2) + '\n', 'utf-8')
await fs.writeFile(modelsPath, JSON.stringify(modelsData, null, 2) + '\n', 'utf-8')
await fs.writeFile(overridesPath, JSON.stringify(overridesData, null, 2) + '\n', 'utf-8')
// Print summary
console.log('\n' + '='.repeat(60))
console.log('Sync Summary')
console.log('='.repeat(60))
const successful = results.filter((r) => r.status === 'success')
const failed = results.filter((r) => r.status === 'error')
console.log(`\nTotal providers: ${results.length}`)
console.log(` ✓ Successful: ${successful.length}`)
console.log(` ✗ Failed: ${failed.length}`)
if (successful.length > 0) {
const totalFetched = successful.reduce((sum, r) => sum + (r.fetched || 0), 0)
const totalNew = successful.reduce((sum, r) => sum + (r.newModels || 0), 0)
const totalOverrides = successful.reduce((sum, r) => sum + (r.overridesGenerated || 0), 0)
const totalMerged = successful.reduce((sum, r) => sum + (r.overridesMerged || 0), 0)
console.log(`\nStatistics:`)
console.log(` - Total models fetched: ${totalFetched}`)
console.log(` - New models added: ${totalNew}`)
console.log(` - Overrides generated: ${totalOverrides}`)
console.log(` - Overrides merged: ${totalMerged}`)
}
if (failed.length > 0) {
console.log(`\nFailed providers:`)
failed.forEach((r) => {
console.log(`  - ${r.providerId}: ${r.error}`)
})
}
console.log('\n' + '='.repeat(60))
console.log('✓ Batch sync completed')
console.log('='.repeat(60))
} catch (error) {
console.error('\n✗ Fatal error:', error)
throw error
}
}
// Run the sync
syncAllProviders().catch((error) => {
console.error('Script failed:', error)
process.exit(1)
})

View File

@ -109,51 +109,28 @@ exports[`Config & Schema > Snapshot Tests > should snapshot provider configurati
"supports_api_version": false,
"supports_array_content": true,
"supports_developer_role": false,
"supports_multimodal": false,
"supports_parallel_tools": false,
"supports_service_tier": false,
"supports_stream_options": false,
"supports_thinking_control": false,
},
"authentication": "API_KEY",
"behaviors": {
"has_auto_retry": false,
"has_real_time_metrics": false,
"provides_fallback_routing": false,
"provides_model_mapping": false,
"provides_usage_analytics": false,
"provides_usage_limits": false,
"requires_api_key_validation": true,
"supports_batch_processing": false,
"supports_custom_models": false,
"supports_health_check": false,
"supports_model_fine_tuning": false,
"supports_model_versioning": false,
"supports_rate_limiting": false,
"supports_streaming": true,
"supports_webhook_events": false,
},
"config_version": "1.0.0",
"deprecated": false,
"description": "A test provider for unit testing",
"documentation": "https://docs.test.com",
"formats": [
{
"base_url": "https://api.test.com/v1",
"default": true,
"format": "OPENAI",
},
],
"id": "test-provider",
"maintenance_mode": false,
"metadata": {
"category": "ai-provider",
"reliability": "high",
"source": "test",
"supportedLanguages": [
"en",
],
"tags": [
"test",
],
},
"model_routing": "DIRECT",
"name": "Test Provider",
"pricing_model": "PER_MODEL",
"special_config": {},
"supported_endpoints": [
"CHAT_COMPLETIONS",
],

View File

@ -74,25 +74,13 @@ describe('Config & Schema', () => {
name: 'Test Provider',
description: 'A test provider for unit testing',
authentication: 'API_KEY',
pricing_model: 'PER_MODEL',
model_routing: 'DIRECT',
behaviors: {
supports_custom_models: false,
provides_model_mapping: false,
supports_model_versioning: false,
provides_fallback_routing: false,
has_auto_retry: false,
supports_health_check: false,
has_real_time_metrics: false,
provides_usage_analytics: false,
supports_webhook_events: false,
requires_api_key_validation: true,
supports_rate_limiting: false,
provides_usage_limits: false,
supports_streaming: true,
supports_batch_processing: false,
supports_model_fine_tuning: false
},
formats: [
{
format: 'OPENAI',
base_url: 'https://api.test.com/v1',
default: true
}
],
supported_endpoints: ['CHAT_COMPLETIONS'],
api_compatibility: {
supports_array_content: true,
@ -100,22 +88,13 @@ describe('Config & Schema', () => {
supports_developer_role: false,
supports_thinking_control: false,
supports_api_version: false,
supports_parallel_tools: false,
supports_multimodal: false,
supports_service_tier: false
},
special_config: {},
documentation: 'https://docs.test.com',
website: 'https://test.com',
deprecated: false,
maintenance_mode: false,
config_version: '1.0.0',
metadata: {
tags: ['test'],
category: 'ai-provider',
source: 'test',
reliability: 'high',
supportedLanguages: ['en']
tags: ['test']
}
})
})

View File

@ -6,25 +6,13 @@
"name": "Test Provider",
"description": "A test provider for unit testing",
"authentication": "API_KEY",
"pricing_model": "PER_MODEL",
"model_routing": "DIRECT",
"behaviors": {
"supports_custom_models": false,
"provides_model_mapping": false,
"supports_model_versioning": false,
"provides_fallback_routing": false,
"has_auto_retry": false,
"supports_health_check": false,
"has_real_time_metrics": false,
"provides_usage_analytics": false,
"supports_webhook_events": false,
"requires_api_key_validation": true,
"supports_rate_limiting": false,
"provides_usage_limits": false,
"supports_streaming": true,
"supports_batch_processing": false,
"supports_model_fine_tuning": false
},
"formats": [
{
"format": "OPENAI",
"base_url": "https://api.test.com/v1",
"default": true
}
],
"supported_endpoints": ["CHAT_COMPLETIONS"],
"api_compatibility": {
"supports_array_content": true,
@ -32,22 +20,13 @@
"supports_developer_role": false,
"supports_thinking_control": false,
"supports_api_version": false,
"supports_parallel_tools": false,
"supports_multimodal": false,
"supports_service_tier": false
},
"special_config": {},
"documentation": "https://docs.test.com",
"website": "https://test.com",
"deprecated": false,
"maintenance_mode": false,
"config_version": "1.0.0",
"metadata": {
"tags": ["test"],
"category": "ai-provider",
"source": "test",
"reliability": "high",
"supportedLanguages": ["en"]
"tags": ["test"]
}
}
]

View File

@ -0,0 +1,205 @@
/**
* Test merge utilities
*/
import { describe, expect, it } from 'vitest'
import type { ModelConfig } from '../schemas'
import { mergeModelsList, MergeStrategies } from '../utils/merge-utils'
describe('Merge Utilities', () => {
describe('mergeModelsList', () => {
it('should merge models with case-insensitive ID matching', () => {
const existing: ModelConfig[] = [
{
id: 'GPT-4',
name: 'GPT-4',
description: 'Existing description',
owned_by: 'openai',
capabilities: ['FUNCTION_CALL'],
input_modalities: ['TEXT'],
output_modalities: ['TEXT'],
context_window: 8000
},
{
id: 'claude-3-opus',
name: 'Claude 3 Opus',
owned_by: 'anthropic',
input_modalities: ['TEXT'],
output_modalities: ['TEXT'],
context_window: 200000
}
]
const incoming: ModelConfig[] = [
{
id: 'gpt-4',
name: 'GPT-4 Updated',
description: 'New description',
owned_by: 'openai',
capabilities: ['FUNCTION_CALL', 'REASONING'],
input_modalities: ['TEXT', 'VISION'],
output_modalities: ['TEXT'],
context_window: 128000
}
]
const result = mergeModelsList(existing, incoming, MergeStrategies.FILL_UNDEFINED)
// Should have 2 models total
expect(result).toHaveLength(2)
// Find the merged gpt-4 model
const gpt4 = result.find((m) => m.id === 'gpt-4')
expect(gpt4).toBeDefined()
expect(gpt4!.id).toBe('gpt-4') // ID should be lowercase
expect(gpt4!.description).toBe('Existing description') // Preserved from existing
expect(gpt4!.context_window).toBe(8000) // Preserved from existing
// Claude model should remain with lowercase ID
const claude = result.find((m) => m.id === 'claude-3-opus')
expect(claude).toBeDefined()
expect(claude!.id).toBe('claude-3-opus')
})
it('should normalize all model IDs to lowercase', () => {
const models: ModelConfig[] = [
{
id: 'GPT-4',
name: 'GPT-4',
owned_by: 'openai',
input_modalities: ['TEXT'],
output_modalities: ['TEXT'],
context_window: 8000
},
{
id: 'Claude-3-Opus',
name: 'Claude 3 Opus',
owned_by: 'anthropic',
input_modalities: ['TEXT'],
output_modalities: ['TEXT'],
context_window: 200000
},
{
id: 'Gemini-Pro',
name: 'Gemini Pro',
owned_by: 'google',
input_modalities: ['TEXT'],
output_modalities: ['TEXT'],
context_window: 32000
}
]
const result = mergeModelsList(models, [], MergeStrategies.FILL_UNDEFINED)
// All IDs should be lowercase
expect(result.every((m) => m.id === m.id.toLowerCase())).toBe(true)
expect(result.find((m) => m.id === 'gpt-4')).toBeDefined()
expect(result.find((m) => m.id === 'claude-3-opus')).toBeDefined()
expect(result.find((m) => m.id === 'gemini-pro')).toBeDefined()
})
it('should merge models with mixed case IDs from different sources', () => {
const existing: ModelConfig[] = [
{
id: 'gpt-4-turbo',
name: 'GPT-4 Turbo',
owned_by: 'openai',
input_modalities: ['TEXT'],
output_modalities: ['TEXT'],
context_window: 128000,
pricing: {
input: { per_million_tokens: 10, currency: 'USD' },
output: { per_million_tokens: 30, currency: 'USD' }
}
}
]
const incoming: ModelConfig[] = [
{
id: 'GPT-4-Turbo',
name: 'GPT-4 Turbo Updated',
owned_by: 'openai',
input_modalities: ['TEXT', 'VISION'],
output_modalities: ['TEXT'],
context_window: 128000,
pricing: {
input: { per_million_tokens: 5, currency: 'USD' },
output: { per_million_tokens: 15, currency: 'USD' }
}
}
]
const result = mergeModelsList(existing, incoming, {
preserveExisting: true,
alwaysOverwrite: ['pricing'] // Always update pricing
})
expect(result).toHaveLength(1)
const model = result[0]
expect(model.id).toBe('gpt-4-turbo') // Lowercase
expect(model.name).toBe('GPT-4 Turbo') // From existing (preserved)
expect(model.pricing?.input.per_million_tokens).toBe(5) // From incoming (alwaysOverwrite)
})
it('should handle new models with uppercase IDs', () => {
const existing: ModelConfig[] = []
const incoming: ModelConfig[] = [
{
id: 'NEW-MODEL',
name: 'New Model',
owned_by: 'test',
input_modalities: ['TEXT'],
output_modalities: ['TEXT'],
context_window: 4000
}
]
const result = mergeModelsList(existing, incoming, MergeStrategies.FILL_UNDEFINED)
expect(result).toHaveLength(1)
expect(result[0].id).toBe('new-model') // Should be lowercase
})
it('should deduplicate models with different case variations when merging', () => {
const existing: ModelConfig[] = [
{
id: 'GPT-4',
name: 'GPT-4 Existing',
owned_by: 'openai',
input_modalities: ['TEXT'],
output_modalities: ['TEXT'],
context_window: 8000,
description: 'Existing model'
}
]
const incoming: ModelConfig[] = [
{
id: 'gpt-4',
name: 'GPT-4 Incoming',
owned_by: 'openai',
input_modalities: ['TEXT'],
output_modalities: ['TEXT'],
context_window: 8000,
max_output_tokens: 4096
},
{
id: 'Gpt-4',
name: 'GPT-4 Another',
owned_by: 'openai',
input_modalities: ['TEXT'],
output_modalities: ['TEXT'],
context_window: 8000
}
]
const result = mergeModelsList(existing, incoming, MergeStrategies.FILL_UNDEFINED)
// Should only have 1 model after merging (all case variations treated as same model)
expect(result).toHaveLength(1)
expect(result[0].id).toBe('gpt-4') // Lowercase
expect(result[0].description).toBe('Existing model') // Preserved from existing
expect(result[0].max_output_tokens).toBe(4096) // Filled from incoming
})
})
})

View File

@ -0,0 +1,430 @@
/**
* Tests for override cleanup and deduplication logic
* Tests deduplicateOverrides(), cleanupRedundantOverrides()
*/
import { describe, expect, it } from 'vitest'
import type { ModelConfig, ProviderModelOverride } from '../schemas'
import { cleanupRedundantOverrides, deduplicateOverrides } from '../utils/override-utils'
describe('deduplicateOverrides', () => {
it('should keep unique overrides unchanged', () => {
const overrides: ProviderModelOverride[] = [
{
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { context_window: 128000 },
priority: 0
},
{
provider_id: 'openrouter',
model_id: 'claude-3-opus',
limits: { context_window: 200000 },
priority: 0
},
{
provider_id: 'aihubmix',
model_id: 'gpt-4',
limits: { context_window: 8192 },
priority: 0
}
]
const result = deduplicateOverrides(overrides)
expect(result).toHaveLength(3)
expect(result).toEqual(expect.arrayContaining(overrides))
})
it('should remove exact duplicates and keep highest priority', () => {
const overrides: ProviderModelOverride[] = [
{
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { context_window: 128000 },
priority: 0
},
{
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { context_window: 8192 },
priority: 100 // Higher priority
}
]
const result = deduplicateOverrides(overrides)
expect(result).toHaveLength(1)
expect(result[0].priority).toBe(100)
expect(result[0].limits?.context_window).toBe(8192)
})
it('should keep first when priorities are equal', () => {
const overrides: ProviderModelOverride[] = [
{
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { context_window: 128000 },
priority: 50
},
{
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { context_window: 8192 },
priority: 50
}
]
const result = deduplicateOverrides(overrides)
expect(result).toHaveLength(1)
expect(result[0].limits?.context_window).toBe(128000)
})
it('should handle multiple duplicates with different priorities', () => {
const overrides: ProviderModelOverride[] = [
{
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { context_window: 8192 },
priority: 0
},
{
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { context_window: 16384 },
priority: 50
},
{
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { context_window: 128000 },
priority: 100
},
{
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { context_window: 4096 },
priority: 25
}
]
const result = deduplicateOverrides(overrides)
expect(result).toHaveLength(1)
expect(result[0].priority).toBe(100)
expect(result[0].limits?.context_window).toBe(128000)
})
it('should handle empty array', () => {
const result = deduplicateOverrides([])
expect(result).toHaveLength(0)
})
it('should treat different providers as different keys', () => {
const overrides: ProviderModelOverride[] = [
{
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { context_window: 128000 },
priority: 50
},
{
provider_id: 'aihubmix',
model_id: 'gpt-4',
limits: { context_window: 8192 },
priority: 100
}
]
const result = deduplicateOverrides(overrides)
expect(result).toHaveLength(2)
})
it('should treat different models as different keys', () => {
const overrides: ProviderModelOverride[] = [
{
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { context_window: 128000 },
priority: 0
},
{
provider_id: 'openrouter',
model_id: 'gpt-4-turbo',
limits: { context_window: 128000 },
priority: 0
}
]
const result = deduplicateOverrides(overrides)
expect(result).toHaveLength(2)
})
})
describe('cleanupRedundantOverrides', () => {
const baseModels: ModelConfig[] = [
{
id: 'gpt-4',
name: 'GPT-4',
provider: 'openai',
endpoint_type: 'CHAT_COMPLETIONS',
capabilities: ['FUNCTION_CALL', 'REASONING'],
context_window: 8192,
max_output_tokens: 4096,
pricing: {
currency: 'USD',
input: { per_million_tokens: 30 },
output: { per_million_tokens: 60 }
}
},
{
id: 'claude-3-opus',
name: 'Claude 3 Opus',
provider: 'anthropic',
endpoint_type: 'CHAT_COMPLETIONS',
capabilities: ['FUNCTION_CALL', 'REASONING', 'IMAGE_RECOGNITION'],
context_window: 200000,
max_output_tokens: 4096
}
]
it('should remove override that matches base model exactly', () => {
const overrides: ProviderModelOverride[] = [
{
provider_id: 'openrouter',
model_id: 'gpt-4',
// No actual differences from base
priority: 0
}
]
const result = cleanupRedundantOverrides(overrides, baseModels)
expect(result.kept).toHaveLength(0)
expect(result.removed).toHaveLength(1)
expect(result.reasons['openrouter:gpt-4']).toBe('Override matches base model')
})
it('should keep override with different limits', () => {
const overrides: ProviderModelOverride[] = [
{
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { context_window: 128000 },
priority: 0
}
]
const result = cleanupRedundantOverrides(overrides, baseModels)
expect(result.kept).toHaveLength(1)
expect(result.removed).toHaveLength(0)
})
it('should keep override with different pricing', () => {
const overrides: ProviderModelOverride[] = [
{
provider_id: 'openrouter',
model_id: 'gpt-4',
pricing: {
currency: 'USD',
input: { per_million_tokens: 25 },
output: { per_million_tokens: 50 }
},
priority: 0
}
]
const result = cleanupRedundantOverrides(overrides, baseModels)
expect(result.kept).toHaveLength(1)
expect(result.removed).toHaveLength(0)
})
it('should keep override with capability changes', () => {
const overrides: ProviderModelOverride[] = [
{
provider_id: 'openrouter',
model_id: 'gpt-4',
capabilities: {
add: ['IMAGE_RECOGNITION']
},
priority: 0
}
]
const result = cleanupRedundantOverrides(overrides, baseModels)
expect(result.kept).toHaveLength(1)
expect(result.removed).toHaveLength(0)
})
it('should keep override with disabled flag', () => {
const overrides: ProviderModelOverride[] = [
{
provider_id: 'openrouter',
model_id: 'gpt-4',
disabled: true,
reason: 'Not available',
priority: 0
}
]
const result = cleanupRedundantOverrides(overrides, baseModels)
expect(result.kept).toHaveLength(1)
expect(result.removed).toHaveLength(0)
})
it('should keep override with replace_with', () => {
const overrides: ProviderModelOverride[] = [
{
provider_id: 'openrouter',
model_id: 'gpt-4',
replace_with: 'gpt-4-turbo',
priority: 0
}
]
const result = cleanupRedundantOverrides(overrides, baseModels)
expect(result.kept).toHaveLength(1)
expect(result.removed).toHaveLength(0)
})
it('should keep override with reasoning configuration', () => {
const overrides: ProviderModelOverride[] = [
{
provider_id: 'openrouter',
model_id: 'gpt-4',
reasoning: {
type: 'anthropic',
params: { type: 'enabled', budgetTokens: 10000 }
},
priority: 0
}
]
const result = cleanupRedundantOverrides(overrides, baseModels)
expect(result.kept).toHaveLength(1)
expect(result.removed).toHaveLength(0)
})
it('should keep override with parameter changes', () => {
const overrides: ProviderModelOverride[] = [
{
provider_id: 'openrouter',
model_id: 'gpt-4',
parameters: {
temperature: { min: 0, max: 1, default: 0.7 }
},
priority: 0
}
]
const result = cleanupRedundantOverrides(overrides, baseModels)
expect(result.kept).toHaveLength(1)
expect(result.removed).toHaveLength(0)
})
it('should keep override for non-existent base model', () => {
const overrides: ProviderModelOverride[] = [
{
provider_id: 'openrouter',
model_id: 'non-existent-model',
limits: { context_window: 128000 },
priority: 0
}
]
const result = cleanupRedundantOverrides(overrides, baseModels)
expect(result.kept).toHaveLength(1)
expect(result.removed).toHaveLength(0)
})
it('should handle mixed redundant and non-redundant overrides', () => {
const overrides: ProviderModelOverride[] = [
{
provider_id: 'openrouter',
model_id: 'gpt-4',
// Redundant
priority: 0
},
{
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { context_window: 128000 }, // Non-redundant
priority: 50
},
{
provider_id: 'openrouter',
model_id: 'claude-3-opus',
// Redundant
priority: 0
}
]
const result = cleanupRedundantOverrides(overrides, baseModels)
expect(result.kept).toHaveLength(1)
expect(result.removed).toHaveLength(2)
})
it('should handle empty arrays', () => {
const result = cleanupRedundantOverrides([], baseModels)
expect(result.kept).toHaveLength(0)
expect(result.removed).toHaveLength(0)
expect(Object.keys(result.reasons)).toHaveLength(0)
})
it('should keep override if limits match but pricing differs', () => {
const overrides: ProviderModelOverride[] = [
{
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: {
context_window: 8192,
max_output_tokens: 4096
},
pricing: {
currency: 'USD',
input: { per_million_tokens: 20 },
output: { per_million_tokens: 40 }
},
priority: 0
}
]
const result = cleanupRedundantOverrides(overrides, baseModels)
expect(result.kept).toHaveLength(1)
expect(result.removed).toHaveLength(0)
})
it('should remove override if limits match base exactly', () => {
const overrides: ProviderModelOverride[] = [
{
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: {
context_window: 8192,
max_output_tokens: 4096
},
priority: 0
}
]
const result = cleanupRedundantOverrides(overrides, baseModels)
expect(result.kept).toHaveLength(0)
expect(result.removed).toHaveLength(1)
})
})

View File

@ -0,0 +1,430 @@
/**
* Tests for override generation logic
* Tests generateOverride(), validateOverrideEnhanced()
*/
import { describe, expect, it } from 'vitest'
import type { ModelConfig, ProviderModelOverride } from '../schemas'
import { generateOverride, validateOverrideEnhanced } from '../utils/override-utils'
describe('generateOverride', () => {
const baseModel: ModelConfig = {
id: 'gpt-4',
name: 'GPT-4',
provider: 'openai',
endpoint_type: 'CHAT_COMPLETIONS',
capabilities: ['FUNCTION_CALL', 'REASONING'],
context_window: 8192,
max_output_tokens: 4096,
pricing: {
currency: 'USD',
input: { per_million_tokens: 30 },
output: { per_million_tokens: 60 }
}
}
it('should return null when models are identical', () => {
const providerModel = { ...baseModel }
const result = generateOverride(baseModel, providerModel, 'openrouter')
expect(result).toBeNull()
})
it('should generate override for pricing difference', () => {
const providerModel: ModelConfig = {
...baseModel,
pricing: {
currency: 'USD',
input: { per_million_tokens: 25 },
output: { per_million_tokens: 50 }
}
}
const result = generateOverride(baseModel, providerModel, 'openrouter')
expect(result).toBeDefined()
expect(result?.provider_id).toBe('openrouter')
expect(result?.model_id).toBe('gpt-4')
expect(result?.pricing).toEqual(providerModel.pricing)
expect(result?.capabilities).toBeUndefined()
expect(result?.limits).toBeUndefined()
})
it('should generate override for capability additions', () => {
const providerModel: ModelConfig = {
...baseModel,
capabilities: ['FUNCTION_CALL', 'REASONING', 'IMAGE_RECOGNITION']
}
const result = generateOverride(baseModel, providerModel, 'openrouter')
expect(result).toBeDefined()
expect(result?.capabilities).toEqual({
add: ['IMAGE_RECOGNITION']
})
})
it('should generate override for capability removals', () => {
const providerModel: ModelConfig = {
...baseModel,
capabilities: ['FUNCTION_CALL']
}
const result = generateOverride(baseModel, providerModel, 'openrouter')
expect(result).toBeDefined()
expect(result?.capabilities).toEqual({
remove: ['REASONING']
})
})
it('should generate override for capability add and remove', () => {
const providerModel: ModelConfig = {
...baseModel,
capabilities: ['FUNCTION_CALL', 'IMAGE_RECOGNITION']
}
const result = generateOverride(baseModel, providerModel, 'openrouter')
expect(result).toBeDefined()
expect(result?.capabilities).toEqual({
add: ['IMAGE_RECOGNITION'],
remove: ['REASONING']
})
})
it('should generate override for context_window change', () => {
const providerModel: ModelConfig = {
...baseModel,
context_window: 128000
}
const result = generateOverride(baseModel, providerModel, 'openrouter')
expect(result).toBeDefined()
expect(result?.limits).toEqual({
context_window: 128000
})
})
it('should generate override for max_output_tokens change', () => {
const providerModel: ModelConfig = {
...baseModel,
max_output_tokens: 16384
}
const result = generateOverride(baseModel, providerModel, 'openrouter')
expect(result).toBeDefined()
expect(result?.limits).toEqual({
max_output_tokens: 16384
})
})
it('should generate override for multiple limit changes', () => {
const providerModel: ModelConfig = {
...baseModel,
context_window: 128000,
max_output_tokens: 16384,
max_input_tokens: 120000
}
const result = generateOverride(baseModel, providerModel, 'openrouter')
expect(result).toBeDefined()
expect(result?.limits).toEqual({
context_window: 128000,
max_output_tokens: 16384,
max_input_tokens: 120000
})
})
it('should generate override for reasoning configuration', () => {
const providerModel: ModelConfig = {
...baseModel,
reasoning: {
type: 'anthropic',
params: {
type: 'enabled',
budgetTokens: 10000
}
}
}
const result = generateOverride(baseModel, providerModel, 'openrouter')
expect(result).toBeDefined()
expect(result?.reasoning).toEqual(providerModel.reasoning)
})
it('should generate override for parameter support changes', () => {
const baseModelWithParams: ModelConfig = {
...baseModel,
parameters: {
temperature: { min: 0, max: 2, default: 1 }
}
}
const providerModel: ModelConfig = {
...baseModelWithParams,
parameters: {
temperature: { min: 0, max: 1, default: 0.7 },
top_p: { min: 0, max: 1, default: 0.9 }
}
}
const result = generateOverride(baseModelWithParams, providerModel, 'openrouter')
expect(result).toBeDefined()
expect(result?.parameters).toEqual({
temperature: { min: 0, max: 1, default: 0.7 },
top_p: { min: 0, max: 1, default: 0.9 }
})
})
it('should generate override with multiple differences', () => {
const providerModel: ModelConfig = {
...baseModel,
capabilities: ['FUNCTION_CALL', 'REASONING', 'IMAGE_RECOGNITION'],
context_window: 128000,
pricing: {
currency: 'USD',
input: { per_million_tokens: 25 },
output: { per_million_tokens: 50 }
}
}
const result = generateOverride(baseModel, providerModel, 'openrouter')
expect(result).toBeDefined()
expect(result?.capabilities).toEqual({ add: ['IMAGE_RECOGNITION'] })
expect(result?.limits).toEqual({ context_window: 128000 })
expect(result?.pricing).toEqual(providerModel.pricing)
})
it('should set custom priority if provided', () => {
const providerModel: ModelConfig = {
...baseModel,
context_window: 128000
}
const result = generateOverride(baseModel, providerModel, 'openrouter', { priority: 50 })
expect(result).toBeDefined()
expect(result?.priority).toBe(50)
})
it('should default priority to 0', () => {
const providerModel: ModelConfig = {
...baseModel,
context_window: 128000
}
const result = generateOverride(baseModel, providerModel, 'openrouter')
expect(result).toBeDefined()
expect(result?.priority).toBe(0)
})
it('should handle models with no capabilities', () => {
const baseModelNoCapabilities: ModelConfig = {
...baseModel,
capabilities: undefined
}
const providerModel: ModelConfig = {
...baseModelNoCapabilities,
capabilities: ['FUNCTION_CALL']
}
const result = generateOverride(baseModelNoCapabilities, providerModel, 'openrouter')
expect(result).toBeDefined()
expect(result?.capabilities).toEqual({ add: ['FUNCTION_CALL'] })
})
it('should handle provider model with no capabilities', () => {
const providerModel: ModelConfig = {
...baseModel,
capabilities: undefined
}
const result = generateOverride(baseModel, providerModel, 'openrouter')
expect(result).toBeDefined()
expect(result?.capabilities).toEqual({
remove: ['FUNCTION_CALL', 'REASONING']
})
})
})
describe('validateOverrideEnhanced', () => {
const baseModel: ModelConfig = {
id: 'gpt-4',
name: 'GPT-4',
provider: 'openai',
endpoint_type: 'CHAT_COMPLETIONS',
capabilities: ['FUNCTION_CALL', 'REASONING'],
context_window: 8192,
max_output_tokens: 4096,
pricing: {
currency: 'USD',
input: { per_million_tokens: 30 },
output: { per_million_tokens: 60 }
}
}
it('should pass validation for valid override', () => {
const override: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { context_window: 128000 },
priority: 0
}
const result = validateOverrideEnhanced(override, baseModel)
expect(result.valid).toBe(true)
expect(result.errors).toHaveLength(0)
})
it('should error on incomplete pricing', () => {
const override: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
pricing: {
currency: 'USD',
input: { per_million_tokens: 30 }
// Missing output
} as any,
priority: 0
}
const result = validateOverrideEnhanced(override)
expect(result.valid).toBe(false)
expect(result.errors).toContain('Pricing must include both input and output')
})
it('should error on negative pricing', () => {
const override: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
pricing: {
currency: 'USD',
input: { per_million_tokens: -10 },
output: { per_million_tokens: 20 }
},
priority: 0
}
const result = validateOverrideEnhanced(override)
expect(result.valid).toBe(false)
expect(result.errors).toContain('Input pricing cannot be negative')
})
it('should error on capability conflict', () => {
const override: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
capabilities: {
add: ['FUNCTION_CALL', 'IMAGE_RECOGNITION'],
remove: ['FUNCTION_CALL'] // Conflict: in both add and remove
},
priority: 0
}
const result = validateOverrideEnhanced(override, baseModel)
expect(result.valid).toBe(false)
expect(result.errors.some((e) => e.includes('Capability conflict'))).toBe(true)
})
it('should error on non-positive context_window', () => {
const override: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { context_window: 0 },
priority: 0
}
const result = validateOverrideEnhanced(override)
expect(result.valid).toBe(false)
expect(result.errors).toContain('context_window must be positive')
})
it('should error on non-positive max_output_tokens', () => {
const override: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { max_output_tokens: -100 },
priority: 0
}
const result = validateOverrideEnhanced(override)
expect(result.valid).toBe(false)
expect(result.errors).toContain('max_output_tokens must be positive')
})
it('should warn when max_output_tokens exceeds context_window', () => {
const override: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: {
context_window: 8192,
max_output_tokens: 10000
},
priority: 0
}
const result = validateOverrideEnhanced(override)
expect(result.valid).toBe(true)
expect(result.warnings).toContain('max_output_tokens exceeds context_window')
})
it('should warn when disabled without reason', () => {
const override: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
disabled: true,
priority: 0
}
const result = validateOverrideEnhanced(override, baseModel)
expect(result.valid).toBe(true)
expect(result.warnings).toContain('Disabled override should include a reason')
})
it('should pass when disabled with reason', () => {
const override: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
disabled: true,
reason: 'Deprecated model',
priority: 0
}
const result = validateOverrideEnhanced(override, baseModel)
expect(result.valid).toBe(true)
expect(result.warnings).not.toContain('Disabled override should include a reason')
})
it('should warn when reducing context_window', () => {
const override: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { context_window: 4096 },
priority: 0
}
const result = validateOverrideEnhanced(override, baseModel)
expect(result.valid).toBe(true)
expect(result.warnings.some((w) => w.includes('Context window reduced'))).toBe(true)
})
})

View File

@ -0,0 +1,412 @@
/**
* Tests for override merging logic
* Tests mergeOverrides()
*/
import { describe, expect, it } from 'vitest'
import type { ProviderModelOverride } from '../schemas'
import { mergeOverrides } from '../utils/override-utils'
describe('mergeOverrides', () => {
it('should preserve manual override completely when preserveManual is true', () => {
const existing: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { context_window: 8192 },
pricing: {
currency: 'USD',
input: { per_million_tokens: 25 },
output: { per_million_tokens: 50 }
},
priority: 100 // Manual
}
const generated: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { context_window: 128000 },
pricing: {
currency: 'USD',
input: { per_million_tokens: 30 },
output: { per_million_tokens: 60 }
},
priority: 0
}
const result = mergeOverrides(existing, generated, { preserveManual: true })
expect(result).toEqual(existing)
})
it('should not preserve when preserveManual is false', () => {
const existing: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { context_window: 8192 },
pricing: {
currency: 'USD',
input: { per_million_tokens: 25 },
output: { per_million_tokens: 50 }
},
priority: 100 // Manual
}
const generated: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { context_window: 128000 },
pricing: {
currency: 'USD',
input: { per_million_tokens: 30 },
output: { per_million_tokens: 60 }
},
priority: 0
}
const result = mergeOverrides(existing, generated, { preserveManual: false })
// Should merge, not preserve completely
expect(result).not.toEqual(existing)
expect(result.limits).toEqual(existing.limits) // Manual limits take precedence
expect(result.pricing).toEqual(existing.pricing) // Manual pricing takes precedence (isManual=true)
})
it('should preserve manual limits but update auto pricing', () => {
const existing: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { context_window: 8192 },
pricing: {
currency: 'USD',
input: { per_million_tokens: 25 },
output: { per_million_tokens: 50 }
},
priority: 100 // Manual
}
const generated: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
pricing: {
currency: 'USD',
input: { per_million_tokens: 30 },
output: { per_million_tokens: 60 }
},
priority: 0
}
const result = mergeOverrides(existing, generated)
expect(result.limits).toEqual(existing.limits)
expect(result.pricing).toEqual(existing.pricing) // Manual pricing preserved (priority >= 100)
})
it('should update pricing for auto-generated overrides', () => {
const existing: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
pricing: {
currency: 'USD',
input: { per_million_tokens: 25 },
output: { per_million_tokens: 50 }
},
priority: 0 // Auto-generated
}
const generated: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
pricing: {
currency: 'USD',
input: { per_million_tokens: 30 },
output: { per_million_tokens: 60 }
},
priority: 0
}
const result = mergeOverrides(existing, generated)
expect(result.pricing).toEqual(generated.pricing) // Updated from generated
})
it('should merge capabilities (union of add, remove)', () => {
const existing: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
capabilities: {
add: ['FUNCTION_CALL'],
remove: ['REASONING']
},
priority: 50
}
const generated: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
capabilities: {
add: ['IMAGE_RECOGNITION'],
remove: ['AUDIO_RECOGNITION']
},
priority: 0
}
const result = mergeOverrides(existing, generated)
expect(result.capabilities?.add).toEqual(
expect.arrayContaining(['FUNCTION_CALL', 'IMAGE_RECOGNITION'])
)
expect(result.capabilities?.remove).toEqual(
expect.arrayContaining(['REASONING', 'AUDIO_RECOGNITION'])
)
})
it('should deduplicate merged capabilities', () => {
const existing: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
capabilities: {
add: ['FUNCTION_CALL', 'IMAGE_RECOGNITION']
},
priority: 50
}
const generated: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
capabilities: {
add: ['FUNCTION_CALL', 'AUDIO_RECOGNITION']
},
priority: 0
}
const result = mergeOverrides(existing, generated)
expect(result.capabilities?.add).toHaveLength(3)
expect(result.capabilities?.add).toEqual(
expect.arrayContaining(['FUNCTION_CALL', 'IMAGE_RECOGNITION', 'AUDIO_RECOGNITION'])
)
})
it('should prefer existing force capabilities', () => {
const existing: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
capabilities: {
force: ['FUNCTION_CALL']
},
priority: 50
}
const generated: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
capabilities: {
force: ['IMAGE_RECOGNITION']
},
priority: 0
}
const result = mergeOverrides(existing, generated)
expect(result.capabilities?.force).toEqual(['FUNCTION_CALL'])
})
it('should preserve existing reasoning if present', () => {
const existing: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
reasoning: {
type: 'anthropic',
params: { type: 'enabled', budgetTokens: 10000 }
},
priority: 50
}
const generated: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
reasoning: {
type: 'openai-chat',
params: { reasoning_effort: 'high' }
},
priority: 0
}
const result = mergeOverrides(existing, generated)
expect(result.reasoning).toEqual(existing.reasoning)
})
it('should use generated reasoning if existing has none', () => {
const existing: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
priority: 50
}
const generated: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
reasoning: {
type: 'openai-chat',
params: { reasoning_effort: 'high' }
},
priority: 0
}
const result = mergeOverrides(existing, generated)
expect(result.reasoning).toEqual(generated.reasoning)
})
it('should merge parameters with existing taking precedence', () => {
const existing: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
parameters: {
temperature: { min: 0, max: 1, default: 0.7 }
},
priority: 50
}
const generated: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
parameters: {
temperature: { min: 0, max: 2, default: 1 },
top_p: { min: 0, max: 1, default: 0.9 }
},
priority: 0
}
const result = mergeOverrides(existing, generated)
expect(result.parameters?.temperature).toEqual(existing.parameters?.temperature)
expect(result.parameters?.top_p).toEqual(generated.parameters?.top_p)
})
it('should preserve disabled and replace_with status', () => {
const existing: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
disabled: true,
replace_with: 'gpt-4-turbo',
reason: 'Deprecated',
priority: 100
}
const generated: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { context_window: 128000 },
priority: 0
}
const result = mergeOverrides(existing, generated)
expect(result.disabled).toBe(true)
expect(result.replace_with).toBe('gpt-4-turbo')
expect(result.reason).toBe('Deprecated')
})
it('should maintain existing priority', () => {
const existing: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { context_window: 8192 },
priority: 150
}
const generated: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { context_window: 128000 },
priority: 0
}
const result = mergeOverrides(existing, generated)
expect(result.priority).toBe(150)
})
it('should use custom manual priority threshold', () => {
const existing: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { context_window: 8192 },
pricing: {
currency: 'USD',
input: { per_million_tokens: 25 },
output: { per_million_tokens: 50 }
},
priority: 50
}
const generated: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
pricing: {
currency: 'USD',
input: { per_million_tokens: 30 },
output: { per_million_tokens: 60 }
},
priority: 0
}
// With threshold 50, existing is considered manual
const result = mergeOverrides(existing, generated, {
preserveManual: true,
manualPriorityThreshold: 50
})
expect(result).toEqual(existing)
})
it('should handle merging when only one has capabilities', () => {
const existing: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
limits: { context_window: 8192 },
priority: 50
}
const generated: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
capabilities: {
add: ['IMAGE_RECOGNITION']
},
priority: 0
}
const result = mergeOverrides(existing, generated)
expect(result.capabilities).toEqual(generated.capabilities)
})
it('should handle empty capabilities arrays', () => {
const existing: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
capabilities: {
add: [],
remove: []
},
priority: 50
}
const generated: ProviderModelOverride = {
provider_id: 'openrouter',
model_id: 'gpt-4',
capabilities: {
add: ['IMAGE_RECOGNITION']
},
priority: 0
}
const result = mergeOverrides(existing, generated)
expect(result.capabilities?.add).toEqual(['IMAGE_RECOGNITION'])
})
})

View File

@ -0,0 +1,90 @@
/**
* Test provider configs with models_api
*/
import { readFileSync } from 'node:fs'
import { describe, expect, it } from 'vitest'
import { ProviderConfigSchema } from '../schemas/provider'
describe('Provider configs with models_api', () => {
const providersData = JSON.parse(readFileSync('./data/providers.json', 'utf8'))
it('should have valid OpenRouter config with models_api', () => {
const openrouter = providersData.providers.find((p: any) => p.id === 'openrouter')
expect(openrouter).toBeDefined()
// Validate schema
const result = ProviderConfigSchema.safeParse(openrouter)
if (!result.success) {
console.error('Validation errors:', result.error.errors)
}
expect(result.success).toBe(true)
// Check models_api config
expect(openrouter.models_api).toBeDefined()
expect(openrouter.models_api.enabled).toBe(true)
expect(openrouter.models_api.endpoints).toHaveLength(1)
expect(openrouter.models_api.endpoints[0].url).toBe('https://openrouter.ai/api/v1/models')
expect(openrouter.models_api.endpoints[0].transformer).toBe('openrouter')
})
it('should have valid AiHubMix config with models_api', () => {
const aihubmix = providersData.providers.find((p: any) => p.id === 'aihubmix')
expect(aihubmix).toBeDefined()
// Validate schema
const result = ProviderConfigSchema.safeParse(aihubmix)
if (!result.success) {
console.error('Validation errors:', result.error.errors)
}
expect(result.success).toBe(true)
// Check models_api config
expect(aihubmix.models_api).toBeDefined()
expect(aihubmix.models_api.enabled).toBe(true)
expect(aihubmix.models_api.endpoints).toHaveLength(1)
expect(aihubmix.models_api.endpoints[0].url).toBe('https://aihubmix.com/v1/models')
expect(aihubmix.models_api.endpoints[0].transformer).toBe('aihubmix')
})
it('should have 14 providers with models_api configured', () => {
const withModelsApi = providersData.providers.filter((p: any) => p.models_api)
expect(withModelsApi.length).toBe(14)
})
it('should validate all providers with models_api', () => {
const withModelsApi = providersData.providers.filter((p: any) => p.models_api)
const failures: string[] = []
for (const provider of withModelsApi) {
const result = ProviderConfigSchema.safeParse(provider)
if (!result.success) {
failures.push(`${provider.id}: ${result.error.errors.map((e) => e.message).join(', ')}`)
}
}
if (failures.length > 0) {
console.error('Validation failures:\n', failures.join('\n'))
}
expect(failures).toHaveLength(0)
})
it('should have correct endpoint structure for all models_api configs', () => {
const withModelsApi = providersData.providers.filter((p: any) => p.models_api)
for (const provider of withModelsApi) {
expect(provider.models_api.endpoints).toBeInstanceOf(Array)
expect(provider.models_api.endpoints.length).toBeGreaterThan(0)
expect(provider.models_api.enabled).toBe(true)
for (const endpoint of provider.models_api.endpoints) {
expect(endpoint.url).toBeDefined()
expect(endpoint.url).toMatch(/^https?:\/\//)
expect(endpoint.endpoint_type).toBeDefined()
expect(endpoint.format).toBeDefined()
}
}
})
})

View File

@ -34,8 +34,9 @@ export const StringRangeSchema = z.object({
})
// Price per token schema (snake_case)
// Allow null for per_million_tokens to handle incomplete pricing data from APIs
export const PricePerTokenSchema = z.object({
per_million_tokens: z.number().nonnegative(),
per_million_tokens: z.number().nonnegative().nullable(),
currency: CurrencySchema.default('USD')
})
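// Illustrative only: both shapes now parse — a concrete price, and a null placeholder for
// providers whose APIs return incomplete pricing data.
// PricePerTokenSchema.parse({ per_million_tokens: 2.5 })                     // currency defaults to 'USD'
// PricePerTokenSchema.parse({ per_million_tokens: null, currency: 'USD' })   // accepted via .nullable()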

View File

@ -28,11 +28,10 @@ export type {
ProviderModelOverride
} from './override'
export type {
ApiFormat,
Authentication,
EndpointType,
McpSupport,
PricingModel,
ProviderBehaviors,
FormatConfig,
ProviderConfig
} from './provider'

View File

@ -1,29 +1,7 @@
/**
* Provider configuration schema definitions
* Defines the structure for AI service provider metadata and capabilities
*/
import * as z from 'zod'
import { MetadataSchema, ProviderIdSchema, VersionSchema } from './common'
// Endpoint types supported by providers
export const EndpointTypeSchema = z.enum([
'CHAT_COMPLETIONS', // /chat/completions
'COMPLETIONS', // /completions
'EMBEDDINGS', // /embeddings
'IMAGE_GENERATION', // /images/generations
'IMAGE_EDIT', // /images/edits
'AUDIO_SPEECH', // /audio/speech (TTS)
'AUDIO_TRANSCRIPTIONS', // /audio/transcriptions (STT)
'MESSAGES', // /messages
'RESPONSES', // /responses
'GENERATE_CONTENT', // :generateContent
'STREAM_GENERATE_CONTENT', // :streamGenerateContent
'RERANK', // /rerank
'MODERATIONS' // /moderations
])
// Authentication methods
export const AuthenticationSchema = z.enum([
'API_KEY', // Standard API Key authentication
@ -31,76 +9,84 @@ export const AuthenticationSchema = z.enum([
'CLOUD_CREDENTIALS' // Cloud service credentials (AWS, GCP, Azure)
])
// Pricing models that affect UI and behavior
export const PricingModelSchema = z.enum([
'UNIFIED', // Unified pricing (like OpenRouter)
'PER_MODEL', // Per-model independent pricing (like OpenAI official)
'TRANSPARENT', // Transparent pricing (like New-API)
'USAGE_BASED', // Dynamic usage-based pricing
'SUBSCRIPTION' // Subscription-based pricing
// Endpoint types - represents the API functionality
export const EndpointTypeSchema = z.enum([
// LLM endpoints
'CHAT_COMPLETIONS', // OpenAI chat completions
'TEXT_COMPLETIONS', // OpenAI text completions
'MESSAGES', // Anthropic messages API
'RESPONSES', // OpenAI responses API (new format with reasoning)
'GENERATE_CONTENT', // Gemini generateContent API
// Embedding endpoints
'EMBEDDINGS',
'RERANK',
// Image endpoints
'IMAGE_GENERATION',
'IMAGE_EDIT',
'IMAGE_VARIATION',
// Audio endpoints
'AUDIO_TRANSCRIPTION',
'AUDIO_TRANSLATION',
'TEXT_TO_SPEECH',
// Video endpoints
'VIDEO_GENERATION'
])
// Model routing strategies affecting performance and reliability
export const ModelRoutingSchema = z.enum([
'INTELLIGENT', // Intelligent routing, auto-select optimal instance
'DIRECT', // Direct routing to specified model
'LOAD_BALANCED', // Load balanced across multiple instances
'GEO_ROUTED', // Geographic location routing
'COST_OPTIMIZED' // Cost-optimized routing
// API format types - represents the protocol/format of the API
export const ApiFormatSchema = z.enum([
'OPENAI', // OpenAI standard format (covers chat, embeddings, images, etc.)
'ANTHROPIC', // Anthropic format
'GEMINI', // Google Gemini API format
'CUSTOM' // Custom/proprietary format
])
// Server-side MCP support configuration
export const McpSupportSchema = z.object({
supported: z.boolean().default(false),
configuration: z
.object({
supports_url_pass_through: z.boolean().default(false),
supported_servers: z.array(z.string()).optional(),
max_concurrent_servers: z.number().optional()
})
.optional()
// Format configuration - maps API format to base URL
export const FormatConfigSchema = z.object({
format: ApiFormatSchema,
base_url: z.string().url(),
default: z.boolean().default(false)
})
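// Illustrative only (the URLs are hypothetical): a provider exposing two formats,
// with the OpenAI-compatible one marked as default.
// const formats: z.input<typeof FormatConfigSchema>[] = [
//   { format: 'OPENAI', base_url: 'https://api.example.com/v1', default: true },
//   { format: 'ANTHROPIC', base_url: 'https://api.example.com/anthropic' }
// ]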
// API compatibility configuration
export const ApiCompatibilitySchema = z.object({
supports_array_content: z.boolean().default(true),
supports_stream_options: z.boolean().default(true),
supports_developer_role: z.boolean().default(false),
supports_developer_role: z.boolean().default(true),
supports_service_tier: z.boolean().default(false),
supports_thinking_control: z.boolean().default(false),
supports_api_version: z.boolean().default(false),
supports_parallel_tools: z.boolean().default(false),
supports_multimodal: z.boolean().default(false),
max_file_upload_size: z.number().optional(), // bytes
supported_file_types: z.array(z.string()).optional()
supports_thinking_control: z.boolean().default(true),
supports_api_version: z.boolean().default(true)
})
// Behavior characteristics configuration - replaces categorization, describes actual behavior
export const ProviderBehaviorsSchema = z.object({
// Model management
supports_custom_models: z.boolean().default(false), // Supports user custom models
provides_model_mapping: z.boolean().default(false), // Provides model name mapping
supports_model_versioning: z.boolean().default(false), // Supports model version control
// Models API endpoint configuration
export const ModelsApiEndpointSchema = z.object({
// API endpoint URL
url: z.string().url(),
// Endpoint type (CHAT_COMPLETIONS, EMBEDDINGS, etc.)
endpoint_type: EndpointTypeSchema,
// API format for this endpoint
format: ApiFormatSchema,
// Optional authentication override (if different from provider default)
auth: z
.object({
header_name: z.string().optional(), // e.g., "Authorization", "X-API-Key"
prefix: z.string().optional() // e.g., "Bearer ", "sk-"
})
.optional(),
// Optional custom transformer name if not OpenAI-compatible
transformer: z.string().optional() // e.g., "openrouter", "aihubmix", "custom"
})
// Reliability and fault tolerance
provides_fallback_routing: z.boolean().default(false), // Provides fallback routing
has_auto_retry: z.boolean().default(false), // Has automatic retry mechanism
supports_health_check: z.boolean().default(false), // Supports health checks
// Monitoring and metrics
has_real_time_metrics: z.boolean().default(false), // Has real-time metrics
provides_usage_analytics: z.boolean().default(false), // Provides usage analytics
supports_webhook_events: z.boolean().default(false), // Supports webhook events
// Configuration and management
requires_api_key_validation: z.boolean().default(true), // Requires API key validation
supports_rate_limiting: z.boolean().default(false), // Supports rate limiting
provides_usage_limits: z.boolean().default(false), // Provides usage limit configuration
// Advanced features
supports_streaming: z.boolean().default(true), // Supports streaming responses
supports_batch_processing: z.boolean().default(false), // Supports batch processing
supports_model_fine_tuning: z.boolean().default(false) // Provides model fine-tuning
// Models API configuration
export const ModelsApiConfigSchema = z.object({
// List of endpoints (most providers have one, some have multiple)
endpoints: z.array(ModelsApiEndpointSchema).min(1),
// Enable/disable auto-sync for this provider
enabled: z.boolean().default(true),
// Last successful sync timestamp
last_synced: z.string().optional()
})
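// Example sketch (illustrative endpoint URL, not taken from the catalog data): a minimal
// value that ModelsApiConfigSchema accepts; `enabled` falls back to its default of true.
const exampleModelsApi = ModelsApiConfigSchema.parse({
  endpoints: [
    { url: 'https://api.groq.com/openai/v1/models', endpoint_type: 'CHAT_COMPLETIONS', format: 'OPENAI' }
  ]
})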
// Provider configuration schema
@ -110,51 +96,35 @@ export const ProviderConfigSchema = z.object({
name: z.string(),
description: z.string().optional(),
// Behavior-related configuration
authentication: AuthenticationSchema,
pricing_model: PricingModelSchema,
model_routing: ModelRoutingSchema,
behaviors: ProviderBehaviorsSchema,
// Authentication
authentication: AuthenticationSchema.default('API_KEY'),
// Feature support
supported_endpoints: z
.array(EndpointTypeSchema)
.min(1, 'At least one endpoint must be supported')
.refine((arr) => new Set(arr).size === arr.length, {
message: 'Supported endpoints must be unique'
// API format configurations
// Each provider can support multiple API formats (e.g., OpenAI + Anthropic)
formats: z
.array(FormatConfigSchema)
.min(1)
.refine((formats) => formats.filter((f) => f.default).length <= 1, {
message: 'Only one format can be marked as default'
}),
mcp_support: McpSupportSchema.optional(),
// Supported endpoint types (optional, for documentation)
supported_endpoints: z.array(EndpointTypeSchema).optional(),
// API compatibility - kept to support online updates
api_compatibility: ApiCompatibilitySchema.optional(),
// Default configuration
default_api_host: z.string().optional(),
default_rate_limit: z.number().optional(), // requests per minute
// Model matching assistance
model_id_patterns: z.array(z.string()).optional(),
alias_model_ids: z.record(z.string(), z.string()).optional(), // Model alias mapping
// Special configuration
special_config: MetadataSchema,
// Metadata and links
// Documentation links
documentation: z.string().url().optional(),
status_page: z.string().url().optional(),
pricing_page: z.string().url().optional(),
support_email: z.string().email().optional(),
website: z.string().url().optional(),
// Status management
deprecated: z.boolean().default(false),
deprecation_date: z.iso.datetime().optional(),
maintenance_mode: z.boolean().default(false),
// Version and compatibility
min_app_version: VersionSchema.optional(), // Minimum supported app version
max_app_version: VersionSchema.optional(), // Maximum supported app version
config_version: VersionSchema.default('1.0.0'), // Configuration file version
// Models API configuration (optional)
models_api: ModelsApiConfigSchema.optional(),
// Additional metadata
// Additional metadata (tags, etc.)
metadata: MetadataSchema
})
@ -165,12 +135,12 @@ export const ProviderListSchema = z.object({
})
// Type exports
export type EndpointType = z.infer<typeof EndpointTypeSchema>
export type Authentication = z.infer<typeof AuthenticationSchema>
export type PricingModel = z.infer<typeof PricingModelSchema>
export type ModelRouting = z.infer<typeof ModelRoutingSchema>
export type McpSupport = z.infer<typeof McpSupportSchema>
export type EndpointType = z.infer<typeof EndpointTypeSchema>
export type ApiFormat = z.infer<typeof ApiFormatSchema>
export type FormatConfig = z.infer<typeof FormatConfigSchema>
export type ApiCompatibility = z.infer<typeof ApiCompatibilitySchema>
export type ProviderBehaviors = z.infer<typeof ProviderBehaviorsSchema>
export type ModelsApiEndpoint = z.infer<typeof ModelsApiEndpointSchema>
export type ModelsApiConfig = z.infer<typeof ModelsApiConfigSchema>
export type ProviderConfig = z.infer<typeof ProviderConfigSchema>
export type ProviderList = z.infer<typeof ProviderListSchema>

View File

@ -7,6 +7,17 @@ import type { Modality, ModelCapabilityType, ModelConfig } from '../../../schemas'
import type { AiHubMixModel } from './types'
export class AiHubMixTransformer {
/**
* Normalize model ID by extracting the model name from provider/model format and converting to lowercase
* @param modelId - Original model ID (e.g., "openai/GPT-4" or "Claude-3-Opus")
* @returns Normalized lowercase model ID (e.g., "gpt-4" or "claude-3-opus")
*/
private normalizeModelId(modelId: string): string {
// Split by '/' and take the last part, then convert to lowercase
const parts = modelId.split('/')
return parts[parts.length - 1].toLowerCase()
}
/**
* Transform AIHubMix model to internal ModelConfig
* @param apiModel - Model data from AIHubMix API
@ -14,8 +25,9 @@ export class AiHubMixTransformer {
*/
transform(apiModel: AiHubMixModel): ModelConfig {
return {
id: apiModel.model_id,
id: this.normalizeModelId(apiModel.model_id),
description: apiModel.desc || undefined,
owned_by: this.extractProvider(apiModel.model_id) || 'aihubmix',
capabilities: this.mapCapabilities(apiModel.types, apiModel.features),
input_modalities: this.mapModalities(apiModel.input_modalities),
@ -57,6 +69,19 @@ export class AiHubMixTransformer {
}
}
/**
* Extract provider name from model_id
* @param modelId - Model ID (e.g., "openai/gpt-4" or "anthropic/claude-3")
* @returns Provider name (e.g., "openai" or "anthropic"), or undefined if not in format
*/
private extractProvider(modelId: string): string | undefined {
const parts = modelId.split('/')
if (parts.length >= 2) {
return parts[0].toLowerCase()
}
return undefined
}
/**
* Map AIHubMix types and features to internal capabilities
*/

View File

@ -0,0 +1,37 @@
/**
* Generic API fetcher for OpenAI-compatible endpoints
* Handles HTTP requests with timeout and error handling
*/
export interface FetchOptions {
url: string
headers?: Record<string, string>
timeout?: number
}
export class BaseFetcher<TResponse = any> {
/**
* Fetch data from an API endpoint
* @param options Fetch configuration
* @returns Parsed JSON response
*/
async fetch(options: FetchOptions): Promise<TResponse> {
const controller = new AbortController()
const timeout = setTimeout(() => controller.abort(), options.timeout || 30000)
try {
const response = await fetch(options.url, {
headers: options.headers,
signal: controller.signal
})
if (!response.ok) {
throw new Error(`API error: ${response.status} ${response.statusText}`)
}
return (await response.json()) as TResponse
} finally {
clearTimeout(timeout)
}
}
}
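// Usage sketch (illustrative, not part of the diff above): listing models from an
// OpenAI-compatible endpoint. The URL, env var, and response shape are assumptions.
async function exampleListModels() {
  const fetcher = new BaseFetcher<{ data: Array<{ id: string }> }>()
  const listing = await fetcher.fetch({
    url: 'https://api.openai.com/v1/models',
    headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
    timeout: 15_000 // abort after 15s instead of the 30s default
  })
  return listing.data.map((m) => m.id)
}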

View File

@ -0,0 +1,109 @@
/**
* Generic importer that coordinates fetching and transformation
* Orchestrates the import process for provider model APIs
*/
import type { ModelConfig, ModelsApiEndpoint, ProviderConfig } from '../../../schemas'
import { BaseFetcher } from './base-fetcher'
import type { ITransformer } from './base-transformer'
export interface ImportResult {
providerId: string
endpointType: string
models: ModelConfig[]
fetchedAt: string
count: number
}
export class BaseImporter {
private fetcher: BaseFetcher
constructor() {
this.fetcher = new BaseFetcher()
}
/**
* Import models from a single endpoint
* @param providerId Provider identifier
* @param endpoint Endpoint configuration
* @param transformer Transformer instance
* @param apiKey Optional API key for authentication
* @returns Import result with models
*/
async importFromEndpoint(
providerId: string,
endpoint: ModelsApiEndpoint,
transformer: ITransformer,
apiKey?: string
): Promise<ImportResult> {
// Build headers
const headers: Record<string, string> = {
'Content-Type': 'application/json'
}
// Add API key to headers if provided
if (apiKey) {
if (endpoint.auth) {
// Use custom auth configuration if specified
const headerName = endpoint.auth.header_name || 'Authorization'
const prefix = endpoint.auth.prefix || 'Bearer '
headers[headerName] = `${prefix}${apiKey}`
} else {
// Default to standard Bearer token authentication
headers['Authorization'] = `Bearer ${apiKey}`
}
}
// Fetch raw data
const response = await this.fetcher.fetch({
url: endpoint.url,
headers
})
// Extract models array
const rawModels = transformer.extractModels?.(response) || response.data || response
// Transform to internal format
const models = rawModels.map((m) => transformer.transform(m))
return {
providerId,
endpointType: endpoint.endpoint_type,
models,
fetchedAt: new Date().toISOString(),
count: models.length
}
}
/**
* Import models from all endpoints of a provider
* @param provider Provider configuration
* @param transformerRegistry Transformer registry function
* @param apiKey Optional API key for authentication
* @returns Array of import results
*/
async importFromProvider(
provider: ProviderConfig,
transformerRegistry: (name: string) => ITransformer,
apiKey?: string
): Promise<ImportResult[]> {
if (!provider.models_api?.enabled) {
throw new Error(`Models API not enabled for provider ${provider.id}`)
}
const results: ImportResult[] = []
for (const endpoint of provider.models_api.endpoints) {
// Get transformer
const transformerName = endpoint.transformer || provider.id
const transformer = transformerRegistry(transformerName)
// Import from endpoint
const result = await this.importFromEndpoint(provider.id, endpoint, transformer, apiKey)
results.push(result)
}
return results
}
}
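// Usage sketch: importing from a single OpenAI-compatible endpoint. The provider id,
// endpoint object, and env var are assumptions for illustration; the transformer comes
// from the sibling base-transformer module in this diff.
import { OpenAICompatibleTransformer } from './base-transformer'

async function exampleImport() {
  const importer = new BaseImporter()
  const result = await importer.importFromEndpoint(
    'groq',
    { url: 'https://api.groq.com/openai/v1/models', endpoint_type: 'CHAT_COMPLETIONS', format: 'OPENAI' },
    new OpenAICompatibleTransformer(),
    process.env.GROQ_API_KEY
  )
  console.log(`${result.providerId}/${result.endpointType}: ${result.count} models at ${result.fetchedAt}`)
  return result.models
}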

View File

@ -0,0 +1,150 @@
/**
* Base transformer interface and OpenAI-compatible base class
* Provides structure for transforming provider API responses to internal ModelConfig
*/
import type { ModelConfig } from '../../../schemas'
/**
* Generic transformer interface
*/
export interface ITransformer<TInput = any> {
/**
* Transform API model to internal ModelConfig
*/
transform(apiModel: TInput): ModelConfig
/**
* Optional: Validate API response structure
*/
validate?(response: any): boolean
/**
* Optional: Extract models array from response
*/
extractModels?(response: any): TInput[]
}
/**
* Base class for OpenAI-compatible transformers
* Handles common patterns like extracting { data: [...] } responses
*/
export class OpenAICompatibleTransformer implements ITransformer {
/**
* Default implementation extracts from { data: [...] } or direct array
*/
extractModels(response: any): any[] {
if (Array.isArray(response.data)) {
return response.data
}
if (Array.isArray(response)) {
return response
}
throw new Error('Invalid API response structure: expected { data: [] } or []')
}
/**
* Default transformation for OpenAI-compatible model responses
* Minimal transformation - most fields are optional
*/
transform(apiModel: any): ModelConfig {
// Normalize model ID to lowercase
const modelId = (apiModel.id || apiModel.model || '').toLowerCase()
if (!modelId) {
throw new Error('Model ID is required')
}
return {
id: modelId,
name: apiModel.name || modelId,
description: apiModel.description,
owned_by: apiModel.owned_by || 'unknown',
capabilities: this.inferCapabilities(apiModel),
input_modalities: ['TEXT'], // Default to text
output_modalities: ['TEXT'], // Default to text
context_window: apiModel.context_length || apiModel.context_window || 0,
max_output_tokens: apiModel.max_tokens || apiModel.max_output_tokens,
pricing: this.extractPricing(apiModel),
metadata: {
source: 'api',
tags: apiModel.tags || [],
created: apiModel.created,
updated: apiModel.updated
}
}
}
/**
* Infer basic capabilities from model data
*/
protected inferCapabilities(apiModel: any): string[] | undefined {
const capabilities: string[] = []
// Check for common capability indicators
if (apiModel.supports_tools || apiModel.function_calling) {
capabilities.push('FUNCTION_CALL')
}
if (apiModel.supports_vision || apiModel.vision) {
capabilities.push('IMAGE_RECOGNITION')
}
if (apiModel.supports_json_output || apiModel.response_format) {
capabilities.push('STRUCTURED_OUTPUT')
}
return capabilities.length > 0 ? capabilities : undefined
}
/**
* Extract pricing if available
*/
protected extractPricing(apiModel: any): ModelConfig['pricing'] {
if (!apiModel.pricing) return undefined
const pricing = apiModel.pricing
// Handle per-token pricing (convert to per-million)
if (pricing.prompt !== undefined && pricing.completion !== undefined) {
const inputCost = parseFloat(pricing.prompt)
const outputCost = parseFloat(pricing.completion)
if (isNaN(inputCost) || isNaN(outputCost) || inputCost <= 0 || outputCost <= 0) return undefined
return {
input: {
per_million_tokens: inputCost * 1_000_000,
currency: 'USD'
},
output: {
per_million_tokens: outputCost * 1_000_000,
currency: 'USD'
}
}
}
// Handle direct per-million pricing
if (
pricing.input?.per_million_tokens != null &&
pricing.output?.per_million_tokens != null &&
!isNaN(pricing.input.per_million_tokens) &&
!isNaN(pricing.output.per_million_tokens)
) {
return {
input: {
per_million_tokens: pricing.input.per_million_tokens,
currency: pricing.input.currency || 'USD'
},
output: {
per_million_tokens: pricing.output.per_million_tokens,
currency: pricing.output.currency || 'USD'
}
}
}
return undefined
}
}
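// Quick sanity check (fabricated payload): per-token strings of "0.000003" prompt and
// "0.000015" completion should come out at roughly $3 / $15 per million tokens, and the
// mixed-case id should be lowercased.
const exampleTransformed = new OpenAICompatibleTransformer().transform({
  id: 'Example-Model',
  context_length: 128000,
  supports_tools: true,
  pricing: { prompt: '0.000003', completion: '0.000015' }
})
// exampleTransformed.id === 'example-model'
// exampleTransformed.capabilities includes 'FUNCTION_CALL'
// exampleTransformed.pricing?.input.per_million_tokens ≈ 3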

View File

@ -3,6 +3,16 @@
* One-time import utilities for various AI provider catalogs
*/
// Base importer framework
export * from './base/base-fetcher'
export * from './base/base-transformer'
export * from './base/base-importer'
// Provider-specific importers
export * from './aihubmix/importer'
export * from './aihubmix/transformer'
export * from './aihubmix/types'
export * from './openrouter/importer'
export * from './openrouter/transformer'
export * from './openrouter/types'

View File

@ -0,0 +1,74 @@
/**
* OpenRouter model importer
* Fetches and transforms model data from OpenRouter API
*/
import * as fs from 'fs'
import * as path from 'path'
import type { ModelConfig } from '../../../schemas'
import { OpenRouterTransformer } from './transformer'
import type { OpenRouterResponse } from './types'
export class OpenRouterImporter {
private transformer: OpenRouterTransformer
private apiUrl: string
constructor(apiUrl: string = 'https://openrouter.ai/api/v1') {
this.apiUrl = apiUrl
this.transformer = new OpenRouterTransformer()
}
/**
* Import models from OpenRouter API
* @param outputPath - Optional path to save the raw data
* @returns Array of transformed ModelConfig objects
*/
async importModels(outputPath?: string): Promise<ModelConfig[]> {
console.log('Fetching models from OpenRouter API...')
// Fetch from API
const response = await fetch(`${this.apiUrl}/models`)
if (!response.ok) {
throw new Error(`OpenRouter API error: ${response.status} ${response.statusText}`)
}
const data: OpenRouterResponse = await response.json()
console.log(`✓ Fetched ${data.data.length} models from OpenRouter`)
// Transform models
console.log('Transforming models...')
const models = data.data.map((model) => this.transformer.transform(model))
console.log(`✓ Transformed ${models.length} models`)
// Optionally write to file
if (outputPath) {
const output = {
version: new Date().toISOString().split('T')[0].replace(/-/g, '.'),
models
}
fs.writeFileSync(outputPath, JSON.stringify(output, null, 2) + '\n', 'utf-8')
console.log(`✓ Saved to ${outputPath}`)
}
return models
}
/**
* Static method to run importer from CLI
*/
static async run() {
const importer = new OpenRouterImporter()
const outputPath = path.join(process.cwd(), 'data', 'openrouter-models.json')
try {
await importer.importModels(outputPath)
console.log('✓ Import complete')
process.exit(0)
} catch (error) {
console.error('✗ Import failed:', error)
process.exit(1)
}
}
}
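// Programmatic usage sketch: import without writing to disk (outputPath omitted), e.g.
// from another script or a test. Error handling is intentionally left to the caller here.
async function exampleOpenRouterImport() {
  const models = await new OpenRouterImporter().importModels()
  console.log(`First model id: ${models[0]?.id ?? 'none'}`)
  return models
}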

View File

@ -0,0 +1,257 @@
/**
* OpenRouter data transformer
* Converts OpenRouter API format to internal ModelConfig schema
*/
import type { Modality, ModelCapabilityType, ModelConfig } from '../../../schemas'
import type { OpenRouterModel } from './types'
export class OpenRouterTransformer {
/**
* Normalize model ID by extracting the model name from provider/model format and converting to lowercase
* @param modelId - Original model ID (e.g., "openrouter/GPT-4" or "anthropic/Claude-3-Opus")
* @returns Normalized lowercase model ID (e.g., "gpt-4" or "claude-3-opus")
*/
private normalizeModelId(modelId: string): string {
// Split by '/' and take the last part, then convert to lowercase
const parts = modelId.split('/')
return parts[parts.length - 1].toLowerCase()
}
/**
* Transform OpenRouter model to internal ModelConfig
* @param apiModel - Model data from OpenRouter API
* @returns Internal model configuration
*/
transform(apiModel: OpenRouterModel): ModelConfig {
const capabilities = this.inferCapabilities(apiModel)
const inputModalities = this.mapModalities(apiModel.architecture.input_modalities)
const outputModalities = this.mapModalities(apiModel.architecture.output_modalities)
const pricing = this.convertPricing(apiModel.pricing)
return {
id: this.normalizeModelId(apiModel.id),
name: apiModel.name,
description: apiModel.description || undefined,
owned_by: 'openrouter',
capabilities: capabilities.length > 0 ? capabilities : undefined,
input_modalities: inputModalities,
output_modalities: outputModalities,
context_window: apiModel.context_length || apiModel.top_provider.context_length,
max_output_tokens: apiModel.top_provider.max_completion_tokens || undefined,
pricing,
metadata: {
source: 'openrouter',
tags: this.extractTags(apiModel),
category: this.inferCategory(apiModel),
original_architecture: apiModel.architecture.modality,
canonical_slug: apiModel.canonical_slug,
created: apiModel.created
}
}
}
/**
* Infer capabilities from supported parameters and architecture
*/
private inferCapabilities(apiModel: OpenRouterModel): ModelCapabilityType[] {
const caps = new Set<ModelCapabilityType>()
// Check architecture modality for embeddings
const outputMods = apiModel.architecture.output_modalities.map((m) => m.toLowerCase())
// Embedding models
if (outputMods.includes('embeddings')) {
caps.add('EMBEDDING')
// Embedding models don't have other capabilities; return early
return Array.from(caps)
}
// Check supported parameters
const params = apiModel.supported_parameters || []
// Function calling support
if (params.includes('tools') || params.includes('tool_choice')) {
caps.add('FUNCTION_CALL')
}
// Structured output support
if (params.includes('response_format') || params.includes('structured_outputs')) {
caps.add('STRUCTURED_OUTPUT')
}
// Reasoning support
if (params.includes('reasoning') || params.includes('include_reasoning')) {
caps.add('REASONING')
}
// Web search (check if pricing > 0)
if (parseFloat(apiModel.pricing.web_search || '0') > 0) {
caps.add('WEB_SEARCH')
}
// Check architecture modality for media capabilities
const inputMods = apiModel.architecture.input_modalities.map((m) => m.toLowerCase())
// Image capabilities
if (inputMods.includes('image')) {
caps.add('IMAGE_RECOGNITION')
}
if (outputMods.includes('image')) {
caps.add('IMAGE_GENERATION')
}
// Audio capabilities
if (inputMods.includes('audio')) {
caps.add('AUDIO_RECOGNITION')
}
if (outputMods.includes('audio')) {
caps.add('AUDIO_GENERATION')
}
// Video capabilities
if (inputMods.includes('video')) {
caps.add('VIDEO_RECOGNITION')
}
if (outputMods.includes('video')) {
caps.add('VIDEO_GENERATION')
}
return Array.from(caps)
}
/**
* Map OpenRouter modalities to internal Modality types
*/
private mapModalities(modalityList: string[]): Modality[] {
const modalities = new Set<Modality>()
modalityList.forEach((m) => {
const normalized = m.toLowerCase()
switch (normalized) {
case 'text':
modalities.add('TEXT')
break
case 'image':
modalities.add('VISION')
break
case 'audio':
modalities.add('AUDIO')
break
case 'video':
modalities.add('VIDEO')
break
case 'embeddings':
// Embeddings is an output-only modality, treat input as TEXT
modalities.add('TEXT')
break
}
})
const result = Array.from(modalities)
// Default to TEXT if no modalities found
if (result.length === 0) {
return ['TEXT']
}
return result
}
/**
* Convert OpenRouter pricing to internal format
* OpenRouter reports per-token prices as strings; we need per-million-token values as numbers
*/
private convertPricing(pricing: OpenRouterModel['pricing']): ModelConfig['pricing'] {
const promptCost = parseFloat(pricing.prompt || '0')
const completionCost = parseFloat(pricing.completion || '0')
const cacheReadCost = parseFloat(pricing.input_cache_read || '0')
// If all costs are 0 or negative (OpenRouter uses -1 for unknown/dynamic pricing), return undefined
if (promptCost <= 0 && completionCost <= 0) {
return undefined
}
// If either cost is negative, return undefined (invalid pricing)
if (promptCost < 0 || completionCost < 0) {
return undefined
}
const result: ModelConfig['pricing'] = {
input: {
per_million_tokens: promptCost * 1_000_000,
currency: 'USD'
},
output: {
per_million_tokens: completionCost * 1_000_000,
currency: 'USD'
}
}
// Add cache pricing if available
if (cacheReadCost > 0) {
result.cache_read = {
per_million_tokens: cacheReadCost * 1_000_000,
currency: 'USD'
}
}
return result
}
/**
* Extract tags from supported parameters
*/
private extractTags(apiModel: OpenRouterModel): string[] {
const tags: string[] = []
// Add modality as tag
tags.push(apiModel.architecture.modality)
// Add some key supported parameters as tags
const interestingParams = [
'tools',
'function_calling',
'reasoning',
'web_search',
'structured_outputs',
'vision'
]
apiModel.supported_parameters.forEach((param) => {
if (interestingParams.some((ip) => param.includes(ip))) {
tags.push(param)
}
})
// Add tokenizer type if not "Other"
if (apiModel.architecture.tokenizer && apiModel.architecture.tokenizer !== 'Other') {
tags.push(apiModel.architecture.tokenizer)
}
return Array.from(new Set(tags)).filter(Boolean)
}
/**
* Infer category from architecture modality
*/
private inferCategory(apiModel: OpenRouterModel): string {
const modality = apiModel.architecture.modality.toLowerCase()
if (modality.includes('image') && modality.includes('->image')) {
return 'image-generation'
}
if (modality.includes('video') && modality.includes('->video')) {
return 'video-generation'
}
if (modality.includes('audio') && modality.includes('->audio')) {
return 'audio-generation'
}
return 'language-model'
}
}
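// Worked example (fabricated sample payload, not real OpenRouter data): a text+image chat
// model with tool support should yield function-calling and image-recognition capabilities,
// a VISION input modality, and roughly $3 / $15 per-million pricing after conversion.
const sample: OpenRouterModel = {
  id: 'example/sample-chat-v1',
  canonical_slug: 'example/sample-chat-v1-20250101',
  hugging_face_id: null,
  name: 'Sample Chat v1',
  created: 1735689600,
  description: 'Illustrative model entry',
  context_length: 128000,
  architecture: {
    modality: 'text+image->text',
    input_modalities: ['text', 'image'],
    output_modalities: ['text'],
    tokenizer: 'Other',
    instruct_type: null
  },
  pricing: {
    prompt: '0.000003',
    completion: '0.000015',
    request: '0',
    image: '0',
    web_search: '0',
    internal_reasoning: '0',
    input_cache_read: '0'
  },
  top_provider: { context_length: 128000, max_completion_tokens: 8192, is_moderated: false },
  per_request_limits: null,
  supported_parameters: ['tools', 'tool_choice', 'response_format'],
  default_parameters: { temperature: null, top_p: null, frequency_penalty: null }
}
const transformed = new OpenRouterTransformer().transform(sample)
// transformed.id === 'sample-chat-v1'; transformed.input_modalities contains 'VISION';
// transformed.capabilities includes 'FUNCTION_CALL' and 'IMAGE_RECOGNITION'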

View File

@ -0,0 +1,99 @@
/**
* OpenRouter API types
* Based on https://openrouter.ai/api/v1/models
*/
export interface OpenRouterModel {
/** Model identifier (e.g., "anthropic/claude-3-opus") */
id: string
/** Canonical slug with version (e.g., "anthropic/claude-3-opus-20240229") */
canonical_slug: string
/** Hugging Face model ID if available */
hugging_face_id: string | null
/** Display name */
name: string
/** Unix timestamp of model creation */
created: number
/** Model description/documentation */
description: string
/** Maximum context length in tokens */
context_length: number
/** Architecture and modality information */
architecture: {
/** Modality string (e.g., "text->text", "text+image->text") */
modality: string
/** Input modality types */
input_modalities: string[]
/** Output modality types */
output_modalities: string[]
/** Tokenizer type */
tokenizer: string
/** Instruction type if applicable */
instruct_type: string | null
}
/** Pricing information (per token as strings) */
pricing: {
/** Cost per prompt token */
prompt: string
/** Cost per completion token */
completion: string
/** Cost per request (base fee) */
request: string
/** Cost per image in request */
image: string
/** Cost for web search feature */
web_search: string
/** Cost for internal reasoning tokens */
internal_reasoning: string
/** Cost for reading cached inputs */
input_cache_read: string
}
/** Top provider configuration */
top_provider: {
/** Context length from top provider */
context_length: number
/** Maximum completion tokens */
max_completion_tokens: number | null
/** Whether content is moderated */
is_moderated: boolean
}
/** Per-request limits if any */
per_request_limits: Record<string, any> | null
/** Supported API parameters */
supported_parameters: string[]
/** Default parameter values */
default_parameters: {
temperature: number | null
top_p: number | null
frequency_penalty: number | null
}
}
export interface OpenRouterResponse {
/** Array of model data */
data: OpenRouterModel[]
}

View File

@ -0,0 +1,9 @@
/**
* Utility functions export
*/
export * from './merge-utils'
export * from './override-utils'
export * from './migration'
export * from './schema'
export * from './validate-type'

View File

@ -0,0 +1,260 @@
/**
* Merge utilities for smart data merging
* Only overwrites undefined values in existing data
*/
import type { ModelConfig, ProviderConfig } from '../schemas'
/**
* Smart merge options
*/
export interface MergeOptions {
/**
* If true, only overwrite undefined values in existing object
* If false, overwrite all values from new object
* @default true
*/
preserveExisting?: boolean
/**
* Fields to always overwrite regardless of preserveExisting setting
* Useful for fields that should always be updated (e.g., pricing)
*/
alwaysOverwrite?: string[]
/**
* Fields to never overwrite regardless of preserveExisting setting
* Useful for manually curated fields
*/
neverOverwrite?: string[]
}
/**
* Deep merge two objects, only overwriting undefined values in existing object
*
* @param existing - The existing object with potentially undefined values
* @param incoming - The new object with updated values
* @param options - Merge options
* @returns Merged object
*
* @example
* ```ts
* const existing = { id: 'model-1', description: undefined, pricing: { input: 1 } }
* const incoming = { id: 'model-1', description: 'New desc', pricing: { input: 2 } }
* const result = mergeObjects(existing, incoming)
* // Result: { id: 'model-1', description: 'New desc', pricing: { input: 1 } }
* ```
*/
export function mergeObjects<T extends Record<string, any>>(
existing: T,
incoming: Partial<T>,
options: MergeOptions = {}
): T {
const {
preserveExisting = true,
alwaysOverwrite = [],
neverOverwrite = []
} = options
const result = { ...existing }
for (const key in incoming) {
// Skip if field should never be overwritten
if (neverOverwrite.includes(key)) {
continue
}
const incomingValue = incoming[key]
const existingValue = existing[key]
// Always overwrite if field is in alwaysOverwrite list
if (alwaysOverwrite.includes(key)) {
result[key] = incomingValue as any
continue
}
// If not preserving existing, just overwrite
if (!preserveExisting) {
result[key] = incomingValue as any
continue
}
// Only overwrite if existing value is undefined
if (existingValue === undefined && incomingValue !== undefined) {
result[key] = incomingValue as any
} else if (
typeof existingValue === 'object' &&
existingValue !== null &&
!Array.isArray(existingValue) &&
typeof incomingValue === 'object' &&
incomingValue !== null &&
!Array.isArray(incomingValue)
) {
// Recursively merge nested objects
result[key] = mergeObjects(existingValue, incomingValue, options) as any
}
// Otherwise, keep existing value (including arrays)
}
return result
}
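// Option usage sketch (illustrative values): force pricing to refresh while protecting a
// hand-written description, regardless of whether the existing values are defined.
const refreshed = mergeObjects(
  { id: 'm1', description: 'curated text', pricing: { input: 1 } },
  { id: 'm1', description: 'scraped text', pricing: { input: 2 } },
  { alwaysOverwrite: ['pricing'], neverOverwrite: ['description'] }
)
// refreshed.pricing.input === 2 (always overwritten), refreshed.description === 'curated text'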
/**
* Merge a list of models, matching by ID (case-insensitive)
*
* @param existingModels - Current models array
* @param incomingModels - New models array to merge
* @param options - Merge options
* @returns Merged models array
*
* @example
* ```ts
* const existing = [{ id: 'GPT-4', description: 'Old' }, { id: 'm2', description: undefined }]
* const incoming = [{ id: 'gpt-4', description: 'New' }, { id: 'm2', description: 'New2' }]
* const result = mergeModelsList(existing, incoming)
* // gpt-4: matches GPT-4, merges and uses lowercase ID
* // m2: gets 'New2' description (was undefined)
* ```
*/
export function mergeModelsList(
existingModels: ModelConfig[],
incomingModels: ModelConfig[],
options: MergeOptions = {}
): ModelConfig[] {
// Create a map of existing models by lowercase ID
// Store both the normalized ID and original model
const existingMap = new Map<string, ModelConfig>()
for (const model of existingModels) {
const normalizedId = model.id.toLowerCase()
existingMap.set(normalizedId, model)
}
// Merge incoming models with existing
const mergedModels: ModelConfig[] = []
const processedIds = new Set<string>()
for (const incomingModel of incomingModels) {
const normalizedId = incomingModel.id.toLowerCase()
// Skip if we already processed this ID (deduplication within incoming list)
if (processedIds.has(normalizedId)) {
continue
}
const existing = existingMap.get(normalizedId)
if (existing) {
// Merge with existing, use incoming ID (should already be lowercase)
const merged = mergeObjects(existing, incomingModel, options)
// Ensure merged model uses lowercase ID
merged.id = normalizedId
mergedModels.push(merged)
} else {
// Add new model with lowercase ID
const newModel = { ...incomingModel, id: normalizedId }
mergedModels.push(newModel)
}
processedIds.add(normalizedId)
}
// Add any existing models that weren't in incoming list
for (const existing of existingModels) {
const normalizedId = existing.id.toLowerCase()
if (!processedIds.has(normalizedId)) {
// Ensure existing model uses lowercase ID
mergedModels.push({ ...existing, id: normalizedId })
}
}
return mergedModels
}
/**
* Merge a list of providers, matching by ID
*
* @param existingProviders - Current providers array
* @param incomingProviders - New providers array to merge
* @param options - Merge options
* @returns Merged providers array
*/
export function mergeProvidersList(
existingProviders: ProviderConfig[],
incomingProviders: ProviderConfig[],
options: MergeOptions = {}
): ProviderConfig[] {
// Create a map of existing providers by ID
const existingMap = new Map<string, ProviderConfig>()
for (const provider of existingProviders) {
existingMap.set(provider.id, provider)
}
// Merge incoming providers with existing
const mergedProviders: ProviderConfig[] = []
const processedIds = new Set<string>()
for (const incomingProvider of incomingProviders) {
const existing = existingMap.get(incomingProvider.id)
if (existing) {
// Merge with existing
const merged = mergeObjects(existing, incomingProvider, options)
mergedProviders.push(merged)
} else {
// Add new provider
mergedProviders.push(incomingProvider)
}
processedIds.add(incomingProvider.id)
}
// Add any existing providers that weren't in incoming list
for (const existing of existingProviders) {
if (!processedIds.has(existing.id)) {
mergedProviders.push(existing)
}
}
return mergedProviders
}
/**
* Preset merge strategies
*/
export const MergeStrategies = {
/**
* Only fill in undefined values, preserve all existing data
*/
FILL_UNDEFINED: {
preserveExisting: true,
alwaysOverwrite: [],
neverOverwrite: []
} as MergeOptions,
/**
* Update pricing and metadata, but preserve manually curated fields
*/
UPDATE_DYNAMIC: {
preserveExisting: true,
alwaysOverwrite: ['pricing', 'metadata'],
neverOverwrite: ['description', 'capabilities']
} as MergeOptions,
/**
* Full overwrite (replace everything)
*/
FULL_REPLACE: {
preserveExisting: false,
alwaysOverwrite: [],
neverOverwrite: []
} as MergeOptions,
/**
* Preserve manual edits, only update system fields
*/
PRESERVE_MANUAL: {
preserveExisting: true,
alwaysOverwrite: ['pricing', 'context_window', 'max_output_tokens'],
neverOverwrite: ['description', 'capabilities', 'input_modalities', 'output_modalities']
} as MergeOptions
}
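// Putting the presets to work (sketch): refresh a catalog from an importer run while keeping
// curated descriptions, capabilities, and modalities. The two arrays are assumed to come from
// models.json and a provider sync, respectively.
declare const existingCatalog: ModelConfig[]
declare const freshlyImported: ModelConfig[]
const refreshedCatalog = mergeModelsList(existingCatalog, freshlyImported, MergeStrategies.PRESERVE_MANUAL)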

View File

@ -3,7 +3,7 @@
* Provides centralized logic for applying provider-specific model overrides
*/
import type { ModelConfig, ProviderModelOverride } from '../schemas'
import type { CapabilityOverride, ModelConfig, ProviderModelOverride } from '../schemas'
/**
* Error thrown when an override cannot be applied
@ -39,11 +39,11 @@ export function applyOverrides(
}
// Apply capability modifications
let capabilities = [...baseModel.capabilities]
let capabilities = baseModel.capabilities ? [...baseModel.capabilities] : []
if (override.capabilities) {
if (override.capabilities.force) {
// Force: completely replace capabilities
capabilities = override.capabilities.force
capabilities = [...override.capabilities.force]
} else {
// Add new capabilities
if (override.capabilities.add) {
@ -91,7 +91,7 @@ export function validateOverride(baseModel: ModelConfig, override: ProviderModel
const warnings: string[] = []
// Check if removing all capabilities
if (override.capabilities?.remove) {
if (override.capabilities?.remove && baseModel.capabilities) {
const remainingCaps = baseModel.capabilities.filter(
(cap) => !override.capabilities!.remove!.includes(cap)
)
@ -104,6 +104,7 @@ export function validateOverride(baseModel: ModelConfig, override: ProviderModel
if (override.limits) {
if (
override.limits.context_window &&
baseModel.context_window &&
override.limits.context_window < baseModel.context_window
) {
warnings.push(
@ -112,6 +113,7 @@ export function validateOverride(baseModel: ModelConfig, override: ProviderModel
}
if (
override.limits.max_output_tokens &&
baseModel.max_output_tokens &&
override.limits.max_output_tokens < baseModel.max_output_tokens
) {
warnings.push(
@ -127,3 +129,365 @@ export function validateOverride(baseModel: ModelConfig, override: ProviderModel
return warnings
}
/**
* Deep equality check for comparing objects
*/
function deepEqual(a: any, b: any): boolean {
return JSON.stringify(a) === JSON.stringify(b)
}
/**
* Compare two model configurations and generate an override
* Only creates override fields where provider model differs from base
* @param baseModel The base model configuration
* @param providerModel The provider-specific model configuration
* @param providerId Provider identifier
* @param options Generation options
* @param options.priority Priority level (default: 0)
* @param options.alwaysCreate If true, creates override even when identical to mark provider support (default: false)
* @returns Generated override or null if no differences and alwaysCreate is false
*/
export function generateOverride(
baseModel: ModelConfig,
providerModel: ModelConfig,
providerId: string,
options: { priority?: number; alwaysCreate?: boolean } = {}
): ProviderModelOverride | null {
const override: Partial<ProviderModelOverride> = {
provider_id: providerId,
model_id: baseModel.id,
priority: options.priority ?? 0
}
let hasChanges = false
// Compare capabilities
const capDiff = compareCapabilities(baseModel.capabilities || [], providerModel.capabilities || [])
if (capDiff) {
override.capabilities = capDiff
hasChanges = true
}
// Compare limits
const limitsDiff = compareLimits(baseModel, providerModel)
if (limitsDiff) {
override.limits = limitsDiff
hasChanges = true
}
// Compare pricing
if (!deepEqual(baseModel.pricing, providerModel.pricing) && providerModel.pricing) {
override.pricing = providerModel.pricing
hasChanges = true
}
// Compare reasoning
if (!deepEqual(baseModel.reasoning, providerModel.reasoning) && providerModel.reasoning) {
override.reasoning = providerModel.reasoning
hasChanges = true
}
// Compare parameters
const paramsDiff = compareParameters(baseModel.parameters, providerModel.parameters)
if (paramsDiff) {
override.parameters = paramsDiff
hasChanges = true
}
// If alwaysCreate is true, return the override even when there are no changes.
// This produces an empty override that marks the provider as supporting this model.
if (options.alwaysCreate) {
return override as ProviderModelOverride
}
return hasChanges ? (override as ProviderModelOverride) : null
}
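// Sketch (partial objects, cast for brevity): when a provider exposes a smaller context
// window than the base catalog entry, only a limits override should be generated.
const baseEntry: Partial<ModelConfig> = { id: 'gpt-4', context_window: 128000 }
const providerEntry: Partial<ModelConfig> = { id: 'gpt-4', context_window: 64000 }
const diffOverride = generateOverride(baseEntry as ModelConfig, providerEntry as ModelConfig, 'example-provider')
// diffOverride?.limits?.context_window === 64000; pricing and capabilities stay untouched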
/**
* Compare capabilities and generate add/remove operations
*/
function compareCapabilities(
base: ModelConfig['capabilities'] = [],
provider: ModelConfig['capabilities'] = []
): CapabilityOverride | null {
if (!base && !provider) return null
const baseArray = base || []
const providerArray = provider || []
const add = providerArray.filter((c) => !baseArray.includes(c))
const remove = baseArray.filter((c) => !providerArray.includes(c))
if (add.length === 0 && remove.length === 0) {
return null
}
return {
...(add.length > 0 && { add }),
...(remove.length > 0 && { remove })
}
}
/**
* Compare limits and return only differences
*/
function compareLimits(
base: ModelConfig,
provider: ModelConfig
): { context_window?: number; max_output_tokens?: number; max_input_tokens?: number } | null {
const limits: any = {}
let hasChanges = false
if (base.context_window !== provider.context_window && provider.context_window) {
limits.context_window = provider.context_window
hasChanges = true
}
if (base.max_output_tokens !== provider.max_output_tokens && provider.max_output_tokens) {
limits.max_output_tokens = provider.max_output_tokens
hasChanges = true
}
if (base.max_input_tokens !== provider.max_input_tokens && provider.max_input_tokens) {
limits.max_input_tokens = provider.max_input_tokens
hasChanges = true
}
return hasChanges ? limits : null
}
/**
* Compare parameter support
*/
function compareParameters(base?: any, provider?: any): any | null {
if (!provider || !base) {
return null
}
const diff: any = {}
let hasChanges = false
// Compare each parameter field
for (const key of Object.keys(provider)) {
if (!deepEqual(base[key], provider[key])) {
diff[key] = provider[key]
hasChanges = true
}
}
return hasChanges ? diff : null
}
/**
* Merge capability overrides from existing and generated
*/
function mergeCapabilityOverrides(
existing?: CapabilityOverride,
generated?: CapabilityOverride
): CapabilityOverride | undefined {
if (!existing && !generated) return undefined
if (!existing) return generated
if (!generated) return existing
const add = [...new Set([...(existing.add || []), ...(generated.add || [])])]
const remove = [...new Set([...(existing.remove || []), ...(generated.remove || [])])]
return {
...(add.length > 0 && { add }),
...(remove.length > 0 && { remove }),
force: existing.force || generated.force
}
}
/**
* Merge auto-generated override with existing manual override
* Manual overrides (priority >= 100) take precedence over auto-generated ones
*
* @param existing - Existing override (may be manual)
* @param generated - Auto-generated override from API sync
* @param options - Merge options
* @returns Merged override with manual fields taking precedence
*/
export function mergeOverrides(
existing: ProviderModelOverride,
generated: ProviderModelOverride,
options: {
preserveManual?: boolean
manualPriorityThreshold?: number
} = {}
): ProviderModelOverride {
const threshold = options.manualPriorityThreshold ?? 100
const isManual = existing.priority >= threshold
if (isManual && options.preserveManual) {
return existing // Keep manual completely unchanged
}
// Merge: manual fields > auto fields
return {
provider_id: existing.provider_id,
model_id: existing.model_id,
capabilities: mergeCapabilityOverrides(existing.capabilities, generated.capabilities),
limits: existing.limits || generated.limits,
pricing: isManual ? existing.pricing : generated.pricing, // Pricing always from latest unless manual
reasoning: existing.reasoning || generated.reasoning,
parameters: { ...generated.parameters, ...existing.parameters },
disabled: existing.disabled,
replace_with: existing.replace_with,
reason: existing.reason,
priority: existing.priority
}
}
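// Sketch: a hand-maintained override (priority 100) keeps its curated pricing while still
// absorbing newly detected parameter support from an auto-generated one. Both values are
// assumed to come from overrides.json and a fresh sync, respectively.
declare const manualOverride: ProviderModelOverride // priority: 100, curated pricing
declare const autoOverride: ProviderModelOverride // priority: 0, produced by generateOverride
const combined = mergeOverrides(manualOverride, autoOverride)
// combined.pricing comes from manualOverride; combined.parameters merges both objects,
// with the manual values winning on conflicting keys.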
/**
* Deduplicate overrides by provider_id + model_id
* Keeps highest priority when duplicates found
*
* @param overrides - Array of overrides that may contain duplicates
* @returns Deduplicated array with highest priority override for each provider+model pair
*/
export function deduplicateOverrides(overrides: ProviderModelOverride[]): ProviderModelOverride[] {
const map = new Map<string, ProviderModelOverride>()
for (const override of overrides) {
const key = `${override.provider_id}:${override.model_id}`
const existing = map.get(key)
if (!existing || override.priority > existing.priority) {
map.set(key, override)
}
}
return Array.from(map.values())
}
/**
* Check if override is redundant (matches base model exactly)
*/
function isOverrideRedundant(override: ProviderModelOverride, base: ModelConfig): boolean {
// Status fields (disabled, replace_with) make it non-redundant
if (override.disabled || override.replace_with) return false
// Check if all fields match base
let hasNonMatchingField = false
if (override.capabilities) hasNonMatchingField = true
if (override.limits) {
if (
(override.limits.context_window && override.limits.context_window !== base.context_window) ||
(override.limits.max_output_tokens &&
override.limits.max_output_tokens !== base.max_output_tokens) ||
(override.limits.max_input_tokens && override.limits.max_input_tokens !== base.max_input_tokens)
) {
hasNonMatchingField = true
}
}
if (override.pricing && !deepEqual(override.pricing, base.pricing)) hasNonMatchingField = true
if (override.reasoning && !deepEqual(override.reasoning, base.reasoning)) hasNonMatchingField = true
if (override.parameters) hasNonMatchingField = true
return !hasNonMatchingField
}
/**
* Remove redundant overrides that match base model exactly
*
* @param overrides - Array of overrides to clean
* @param baseModels - Array of base models to compare against
* @returns Object with kept overrides, removed overrides, and removal reasons
*/
export function cleanupRedundantOverrides(
overrides: ProviderModelOverride[],
baseModels: ModelConfig[]
): {
kept: ProviderModelOverride[]
removed: ProviderModelOverride[]
reasons: Record<string, string>
} {
const baseMap = new Map(baseModels.map((m) => [m.id, m]))
const kept: ProviderModelOverride[] = []
const removed: ProviderModelOverride[] = []
const reasons: Record<string, string> = {}
for (const override of overrides) {
const baseModel = baseMap.get(override.model_id)
if (!baseModel) {
kept.push(override)
continue
}
// Check if redundant
if (isOverrideRedundant(override, baseModel)) {
removed.push(override)
reasons[`${override.provider_id}:${override.model_id}`] = 'Override matches base model'
} else {
kept.push(override)
}
}
return { kept, removed, reasons }
}
/**
* Enhanced validation with business rules beyond schema validation
*
* @param override - Override to validate
* @param baseModel - Optional base model for additional validation
* @returns Validation result with errors and warnings
*/
export function validateOverrideEnhanced(
override: ProviderModelOverride,
baseModel?: ModelConfig
): { valid: boolean; errors: string[]; warnings: string[] } {
const errors: string[] = []
const warnings: string[] = []
// Schema validation (existing)
if (baseModel) {
warnings.push(...validateOverride(baseModel, override))
}
// Business rules
if (override.pricing) {
if (!override.pricing.input || !override.pricing.output) {
errors.push('Pricing must include both input and output')
}
if (override.pricing.input && override.pricing.input.per_million_tokens < 0) {
errors.push('Input pricing cannot be negative')
}
if (override.pricing.output && override.pricing.output.per_million_tokens < 0) {
errors.push('Output pricing cannot be negative')
}
}
if (override.capabilities) {
const { add = [], remove = [] } = override.capabilities
const overlap = add.filter((c) => remove.includes(c))
if (overlap.length) {
errors.push(`Capability conflict: ${overlap.join(', ')} appears in both add and remove`)
}
}
if (override.limits) {
if (override.limits.max_output_tokens && override.limits.context_window) {
if (override.limits.max_output_tokens > override.limits.context_window) {
warnings.push('max_output_tokens exceeds context_window')
}
}
if (override.limits.context_window !== undefined && override.limits.context_window <= 0) {
errors.push('context_window must be positive')
}
if (override.limits.max_output_tokens !== undefined && override.limits.max_output_tokens <= 0) {
errors.push('max_output_tokens must be positive')
}
}
if (override.disabled && !override.reason) {
warnings.push('Disabled override should include a reason')
}
return { valid: errors.length === 0, errors, warnings }
}
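// Sketch: the validator flagging a conflicting capability edit. The partial object is cast
// for brevity; the ids and priority are illustrative.
const conflicting = {
  provider_id: 'example-provider',
  model_id: 'gpt-4',
  priority: 0,
  capabilities: { add: ['FUNCTION_CALL'], remove: ['FUNCTION_CALL'] }
} as unknown as ProviderModelOverride
const report = validateOverrideEnhanced(conflicting)
// report.valid === false; report.errors names the FUNCTION_CALL add/remove conflict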

View File

@ -99,16 +99,29 @@ export class SchemaValidator {
}
if (includeWarnings) {
if (!config.behaviors.requiresApiKeyValidation) {
warnings.push('Provider does not require API key validation - ensure this is intentional')
// Check formats configuration
if (!config.formats || config.formats.length === 0) {
warnings.push('No API formats defined for provider')
}
if (config.endpoints.length === 0) {
warnings.push('No endpoints defined for provider')
// Check if there's a default format
if (config.formats && !config.formats.some((f: any) => f.default)) {
warnings.push('No default format specified - first format will be used as default')
}
if (config.pricingModel === 'UNIFIED' && !config.behaviors.providesModelMapping) {
warnings.push('Unified pricing model without model mapping may cause confusion')
// Check for multiple default formats
if (config.formats && config.formats.filter((f: any) => f.default).length > 1) {
warnings.push('Multiple default formats specified - only one should be marked as default')
}
// Check if documentation is provided
if (!config.documentation) {
warnings.push('No documentation URL provided')
}
// Check supported_endpoints
if (!config.supported_endpoints || config.supported_endpoints.length === 0) {
warnings.push('No supported endpoints specified')
}
}

View File

@ -12,6 +12,7 @@ import {
OverridesDataFileSchema
} from '@/lib/catalog-types'
import { safeParseWithValidation, validateString, ValidationError, createErrorResponse } from '@/lib/validation'
import { applyOverrides, OverrideApplicationError } from '../../../../src/utils/override-utils'
const DATA_DIR = path.join(process.cwd(), '../data')
@ -63,9 +64,9 @@ function detectModifications(
return modifications.pricing || modifications.limits ? modifications : null
}
export async function GET(request: NextRequest, { params }: { params: { modelId: string; providerId: string } }) {
export async function GET(request: NextRequest, { params }: { params: Promise<{ modelId: string; providerId: string }> }) {
try {
const { modelId, providerId } = params
const { modelId, providerId } = await params
// Validate parameters
const validModelId = validateString(modelId, 'modelId')
@ -140,9 +141,9 @@ const ProviderModelUpdateResponseSchema = z.object({
model: ModelSchema
})
export async function PUT(request: NextRequest, { params }: { params: { modelId: string; providerId: string } }) {
export async function PUT(request: NextRequest, { params }: { params: Promise<{ modelId: string; providerId: string }> }) {
try {
const { modelId, providerId } = params
const { modelId, providerId } = await params
// Validate parameters
const validModelId = validateString(modelId, 'modelId')

View File

@ -9,9 +9,9 @@ import { createErrorResponse, safeParseWithValidation, ValidationError } from '@
const DATA_DIR = path.join(process.cwd(), '../data')
export async function GET(request: NextRequest, { params }: { params: { modelId: string } }) {
export async function GET(request: NextRequest, { params }: { params: Promise<{ modelId: string }> }) {
try {
const { modelId } = params
const { modelId } = await params
// Read and validate models data using Zod
const modelsDataPath = path.join(DATA_DIR, 'models.json')
@ -43,24 +43,12 @@ export async function GET(request: NextRequest, { params }: { params: { modelId:
}
}
export async function PUT(request: NextRequest, { params }: { params: { modelId: string } }) {
export async function PUT(request: NextRequest, { params }: { params: Promise<{ modelId: string }> }) {
try {
const { modelId } = params
const { modelId } = await params
// Read and validate request body using Zod
const requestBody = await request.json()
const updatedModel = await safeParseWithValidation(
JSON.stringify(requestBody),
ModelSchema,
'Invalid model data in request body'
)
// Validate that the model ID matches
if (updatedModel.id !== modelId) {
return NextResponse.json(createErrorResponse('Model ID in request body must match URL parameter', 400), {
status: 400
})
}
// Read current models data using Zod
const modelsDataPath = path.join(DATA_DIR, 'models.json')
@ -71,12 +59,28 @@ export async function PUT(request: NextRequest, { params }: { params: { modelId:
'Invalid models data format in file'
)
// Find and update the model
// Find the model
const modelIndex = modelsData.models.findIndex((m) => m.id === modelId)
if (modelIndex === -1) {
return NextResponse.json(createErrorResponse('Model not found', 404), { status: 404 })
}
const existingModel = modelsData.models[modelIndex]
// Merge existing model with updates (partial update support)
const mergedModel = {
...existingModel,
...requestBody,
id: modelId // Ensure ID cannot be changed
}
// Validate the merged model
const updatedModel = await safeParseWithValidation(
JSON.stringify(mergedModel),
ModelSchema,
'Invalid model data after merge'
)
// Create updated models array (immutability)
const updatedModels = [
...modelsData.models.slice(0, modelIndex),
@ -111,3 +115,8 @@ export async function PUT(request: NextRequest, { params }: { params: { modelId:
)
}
}
export async function PATCH(request: NextRequest, { params }: { params: Promise<{ modelId: string }> }) {
// PATCH is an alias for PUT here; both support partial updates
return PUT(request, { params })
}
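// Client-side sketch (assumed route path and local dev host): with the merge-based update
// above, a PATCH body only needs the fields that changed. Illustrative client code, not part
// of the route module itself.
async function examplePatchModel() {
  return fetch('http://localhost:3000/api/catalog/models/gpt-4', {
    method: 'PATCH',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ description: 'Hand-curated description' })
  })
}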

View File

@ -3,10 +3,11 @@ import type { NextRequest } from 'next/server'
import { NextResponse } from 'next/server'
import path from 'path'
import type { Model } from '@/lib/catalog-types'
import type { Model, ProviderModelOverride } from '@/lib/catalog-types'
import {
ModelSchema,
ModelsDataFileSchema
ModelsDataFileSchema,
OverridesDataFileSchema
} from '@/lib/catalog-types'
import {
createErrorResponse,
@ -18,14 +19,101 @@ import {
const DATA_DIR = path.join(process.cwd(), '../data')
/**
* Apply provider overrides to a model
*/
function applyOverride(model: Model, override: ProviderModelOverride, providerId: string): Model {
const result = { ...model }
// Apply capabilities override
if (override.capabilities) {
let capabilities = [...(model.capabilities || [])]
if (override.capabilities.add) {
capabilities.push(...override.capabilities.add)
}
if (override.capabilities.remove) {
const capsToRemove = override.capabilities.remove
capabilities = capabilities.filter((c) => !capsToRemove.includes(c))
}
if (override.capabilities.force) {
capabilities = override.capabilities.force
}
result.capabilities = [...new Set(capabilities)] // Deduplicate
}
// Apply limits override
if (override.limits) {
if (override.limits.context_window !== undefined) {
result.context_window = override.limits.context_window
}
if (override.limits.max_output_tokens !== undefined) {
result.max_output_tokens = override.limits.max_output_tokens
}
}
// Apply pricing override
if (override.pricing) {
result.pricing = override.pricing
}
// Apply reasoning override
if (override.reasoning) {
result.reasoning = override.reasoning
}
// Apply parameters override
if (override.parameters) {
result.parameters = { ...result.parameters, ...override.parameters }
}
// Set provider (override the owned_by to show which provider this model is being accessed through)
result.owned_by = providerId
return result
}
function filterModels(
models: readonly Model[],
overrides: readonly ProviderModelOverride[],
search?: string,
capabilities?: string[],
providers?: string[]
): Model[] {
let filtered = [...models]
// Build override map for quick lookup
const overrideMap = new Map<string, Map<string, ProviderModelOverride>>()
for (const override of overrides) {
if (!overrideMap.has(override.provider_id)) {
overrideMap.set(override.provider_id, new Map())
}
overrideMap.get(override.provider_id)!.set(override.model_id.toLowerCase(), override)
}
// If providers filter is specified, apply overrides and filter
if (providers && providers.length > 0) {
const results: Model[] = []
for (const model of filtered) {
for (const providerId of providers) {
// Check if this model is available for this provider
const matchesOwnedBy = model.owned_by && model.owned_by === providerId
const matchesSource = model.metadata?.source && model.metadata.source === providerId
const override = overrideMap.get(providerId)?.get(model.id.toLowerCase())
if (matchesOwnedBy || matchesSource || override) {
// Apply override if exists, otherwise use base model
const finalModel = override
? applyOverride(model, override, providerId)
: { ...model, owned_by: providerId } // Set provider even without override
results.push(finalModel)
}
}
}
filtered = results
}
if (search) {
const searchLower = search.toLowerCase()
filtered = filtered.filter(
@ -37,11 +125,7 @@ function filterModels(
}
if (capabilities && capabilities.length > 0) {
filtered = filtered.filter((model) => capabilities.some((cap) => model.capabilities.includes(cap)))
}
if (providers && providers.length > 0) {
filtered = filtered.filter((model) => model.owned_by && providers.includes(model.owned_by))
filtered = filtered.filter((model) => capabilities.some((cap) => model.capabilities?.includes(cap)))
}
return filtered
@ -96,9 +180,19 @@ export async function GET(request: NextRequest) {
'Invalid models data format in file'
)
// Read and validate overrides data using Zod
const overridesDataPath = path.join(DATA_DIR, 'overrides.json')
const overridesDataRaw = await fs.readFile(overridesDataPath, 'utf-8')
const overridesData = await safeParseWithValidation(
overridesDataRaw,
OverridesDataFileSchema,
'Invalid overrides data format in file'
)
// Filter models with type safety
const filteredModels = filterModels(
modelsData.models,
overridesData.overrides,
validatedParams.search,
validatedParams.capabilities,
validatedParams.providers

View File

@ -9,9 +9,9 @@ import { createErrorResponse, safeParseWithValidation, ValidationError } from '@
const DATA_DIR = path.join(process.cwd(), '../data')
export async function GET(request: NextRequest, { params }: { params: { providerId: string } }) {
export async function GET(request: NextRequest, { params }: { params: Promise<{ providerId: string }> }) {
try {
const { providerId } = params
const { providerId } = await params
// Read and validate providers data using Zod
const providersDataPath = path.join(DATA_DIR, 'providers.json')
@ -43,24 +43,12 @@ export async function GET(request: NextRequest, { params }: { params: { provider
}
}
export async function PUT(request: NextRequest, { params }: { params: { providerId: string } }) {
export async function PUT(request: NextRequest, { params }: { params: Promise<{ providerId: string }> }) {
try {
const { providerId } = params
const { providerId } = await params
// Read and validate request body using Zod
const requestBody = await request.json()
const updatedProvider = await safeParseWithValidation(
JSON.stringify(requestBody),
ProviderSchema,
'Invalid provider data in request body'
)
// Validate that the provider ID matches
if (updatedProvider.id !== providerId) {
return NextResponse.json(createErrorResponse('Provider ID in request body must match URL parameter', 400), {
status: 400
})
}
// Read current providers data using Zod
const providersDataPath = path.join(DATA_DIR, 'providers.json')
@ -71,12 +59,28 @@ export async function PUT(request: NextRequest, { params }: { params: { provider
'Invalid providers data format in file'
)
// Find and update the provider
// Find the provider
const providerIndex = providersData.providers.findIndex((p) => p.id === providerId)
if (providerIndex === -1) {
return NextResponse.json(createErrorResponse('Provider not found', 404), { status: 404 })
}
const existingProvider = providersData.providers[providerIndex]
// Merge existing provider with updates (partial update support)
const mergedProvider = {
...existingProvider,
...requestBody,
id: providerId // Ensure ID cannot be changed
}
// Validate the merged provider
const updatedProvider = await safeParseWithValidation(
JSON.stringify(mergedProvider),
ProviderSchema,
'Invalid provider data after merge'
)
// Create updated providers array (immutability)
const updatedProviders = [
...providersData.providers.slice(0, providerIndex),
@ -111,3 +115,8 @@ export async function PUT(request: NextRequest, { params }: { params: { provider
)
}
}
export async function PATCH(request: NextRequest, { params }: { params: Promise<{ providerId: string }> }) {
// PATCH is just an alias for PUT here; both support partial updates
return PUT(request, { params })
}
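For reference, a partial update against this route can be as small as the sketch below; the provider id, field names, and values are illustrative, and the handler merges the body into the stored provider before re-validating it with ProviderSchema.

```ts
// Minimal sketch of a partial provider update (id and field values are illustrative).
// Only the changed fields need to be sent; the id in the URL always wins.
const res = await fetch('/api/catalog/providers/openai', {
  method: 'PATCH',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ description: 'Updated provider description', deprecated: false })
})
if (!res.ok) {
  const { error } = await res.json()
  throw new Error(error ?? 'Failed to update provider')
}
```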

View File

@ -0,0 +1,231 @@
import { promises as fs } from 'fs'
import type { NextRequest } from 'next/server'
import { NextResponse } from 'next/server'
import path from 'path'
import type { ModelsDataFile, ProvidersDataFile, OverridesDataFile } from '@/lib/catalog-types'
import { ModelsDataFileSchema, ProvidersDataFileSchema, OverridesDataFileSchema } from '@/lib/catalog-types'
import { createErrorResponse, safeParseWithValidation, ValidationError } from '@/lib/validation'
import { BaseImporter } from '../../../../../../src/utils/importers/base/base-importer'
import { OpenRouterTransformer } from '../../../../../../src/utils/importers/openrouter/transformer'
import { AiHubMixTransformer } from '../../../../../../src/utils/importers/aihubmix/transformer'
import { OpenAICompatibleTransformer } from '../../../../../../src/utils/importers/base/base-transformer'
import { mergeModelsList, MergeStrategies } from '../../../../../../src/utils/merge-utils'
import { generateOverride, mergeOverrides, deduplicateOverrides } from '../../../../../../src/utils/override-utils'
const DATA_DIR = path.join(process.cwd(), '../data')
/**
* Sync models from provider API
* POST /api/catalog/providers/[providerId]/sync
*/
export async function POST(request: NextRequest, { params }: { params: Promise<{ providerId: string }> }) {
try {
const { providerId } = await params
const body = await request.json().catch(() => ({}))
const apiKey = body.apiKey as string | undefined
// Read providers data
const providersDataPath = path.join(DATA_DIR, 'providers.json')
const providersDataRaw = await fs.readFile(providersDataPath, 'utf-8')
const providersData = await safeParseWithValidation(
providersDataRaw,
ProvidersDataFileSchema,
'Invalid providers data format'
)
// Find provider
const provider = providersData.providers.find((p) => p.id === providerId)
if (!provider) {
return NextResponse.json(createErrorResponse('Provider not found', 404), { status: 404 })
}
// Check if provider has models_api configured
if (!provider.models_api || !provider.models_api.enabled) {
return NextResponse.json(
createErrorResponse(
'Provider does not have models_api configured or it is disabled',
400,
{ providerId, has_models_api: !!provider.models_api, enabled: provider.models_api?.enabled }
),
{ status: 400 }
)
}
// Read current models data
const modelsDataPath = path.join(DATA_DIR, 'models.json')
const modelsDataRaw = await fs.readFile(modelsDataPath, 'utf-8')
const modelsData = await safeParseWithValidation(
modelsDataRaw,
ModelsDataFileSchema,
'Invalid models data format'
)
// Read current overrides data
const overridesDataPath = path.join(DATA_DIR, 'overrides.json')
let overridesData: OverridesDataFile
try {
const overridesDataRaw = await fs.readFile(overridesDataPath, 'utf-8')
overridesData = await safeParseWithValidation(
overridesDataRaw,
OverridesDataFileSchema,
'Invalid overrides data format'
)
} catch (error) {
// If overrides.json doesn't exist, create empty structure
overridesData = {
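// Version defaults to today's date formatted as YYYY.MM.DD (e.g. 2025.12.24)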
version: new Date().toISOString().split('T')[0].replace(/-/g, '.'),
overrides: []
}
}
// Initialize importer and transformer
const importer = new BaseImporter()
let transformer
// Select transformer based on provider
if (providerId === 'openrouter') {
transformer = new OpenRouterTransformer()
} else if (providerId === 'aihubmix') {
transformer = new AiHubMixTransformer()
} else {
// Use default OpenAI-compatible transformer
transformer = new OpenAICompatibleTransformer()
}
// Import models from all endpoints
const importResults = []
const allProviderModels = []
for (const endpoint of provider.models_api.endpoints) {
try {
const result = await importer.importFromEndpoint(providerId, endpoint, transformer, apiKey)
importResults.push(result)
allProviderModels.push(...result.models)
} catch (error) {
console.error(`Failed to import from endpoint ${endpoint.url}:`, error)
importResults.push({
providerId,
endpointType: endpoint.endpoint_type,
models: [],
fetchedAt: new Date().toISOString(),
count: 0,
error: error instanceof Error ? error.message : 'Unknown error'
})
}
}
// Statistics
const stats = {
fetched: allProviderModels.length,
newModels: 0,
updatedModels: 0,
overridesGenerated: 0,
overridesMerged: 0
}
// Merge with existing models.json
const existingModelIds = new Set(modelsData.models.map((m) => m.id.toLowerCase()))
const newModels = allProviderModels.filter((m) => !existingModelIds.has(m.id.toLowerCase()))
stats.newModels = newModels.length
// Add new models to models.json
if (newModels.length > 0) {
modelsData.models = [...modelsData.models, ...newModels]
stats.updatedModels += newModels.length
}
// Generate or update overrides for existing models
const newOverrides = []
for (const providerModel of allProviderModels) {
const baseModel = modelsData.models.find((m) => m.id.toLowerCase() === providerModel.id.toLowerCase())
if (!baseModel) continue // Skip new models (already added above)
// Always generate override to mark provider support (even if identical)
const generatedOverride = generateOverride(baseModel, providerModel, providerId, {
priority: 0,
alwaysCreate: true
})
if (generatedOverride) {
// Check if manual override exists (priority >= 100)
const existingOverride = overridesData.overrides.find(
(o) => o.provider_id === providerId && o.model_id.toLowerCase() === providerModel.id.toLowerCase()
)
if (existingOverride) {
// Merge with existing override
const mergedOverride = mergeOverrides(existingOverride, generatedOverride, {
preserveManual: true,
manualPriorityThreshold: 100
})
newOverrides.push(mergedOverride)
stats.overridesMerged++
} else {
// Add new override
newOverrides.push(generatedOverride)
stats.overridesGenerated++
}
}
}
// Update overrides data
if (newOverrides.length > 0) {
// Remove old auto-generated overrides for this provider (priority < 100)
const filteredOverrides = overridesData.overrides.filter(
(o) => !(o.provider_id === providerId && o.priority < 100)
)
// Add new overrides
overridesData.overrides = [...filteredOverrides, ...newOverrides]
// Deduplicate
overridesData.overrides = deduplicateOverrides(overridesData.overrides)
}
// Update last_synced timestamp in provider config
const updatedProvider = {
...provider,
models_api: {
...provider.models_api,
last_synced: new Date().toISOString()
}
}
const providerIndex = providersData.providers.findIndex((p) => p.id === providerId)
providersData.providers[providerIndex] = updatedProvider
// Save all data files
await fs.writeFile(providersDataPath, JSON.stringify(providersData, null, 2) + '\n', 'utf-8')
await fs.writeFile(modelsDataPath, JSON.stringify(modelsData, null, 2) + '\n', 'utf-8')
await fs.writeFile(overridesDataPath, JSON.stringify(overridesData, null, 2) + '\n', 'utf-8')
// Return sync report
return NextResponse.json({
success: true,
providerId,
syncedAt: new Date().toISOString(),
statistics: stats,
importResults: importResults.map((r) => ({
endpointType: r.endpointType,
count: r.count,
error: (r as any).error
}))
})
} catch (error) {
if (error instanceof ValidationError) {
console.error('Validation error:', error.message, error.details)
return NextResponse.json(createErrorResponse(error.message, 400, error.details), { status: 400 })
}
console.error('Error syncing provider models:', error)
return NextResponse.json(
createErrorResponse(
'Failed to sync provider models',
500,
error instanceof Error ? error.message : 'Unknown error'
),
{ status: 500 }
)
}
}
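As a quick illustration of the request and response shape, the route can be exercised with a plain fetch; the provider id, base URL, and statistics values below are illustrative, and apiKey can be omitted for providers that read keys from the environment or need none.

```ts
// Minimal sketch: trigger a sync for one provider and inspect the report.
// Assumes the web app is running locally on port 3000 (illustrative base URL).
const res = await fetch('http://localhost:3000/api/catalog/providers/groq/sync', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ apiKey: process.env.GROQ_API_KEY })
})
const report = await res.json()
if (!res.ok) {
  throw new Error(report.error ?? 'Sync failed')
}
// report.statistics carries the counters built above, e.g.
// { fetched: 120, newModels: 4, updatedModels: 4, overridesGenerated: 110, overridesMerged: 6 }
console.log(report.providerId, report.syncedAt, report.statistics)
```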

View File

@ -2,6 +2,7 @@ import './globals.css'
import type { Metadata } from 'next'
import { Geist, Geist_Mono } from 'next/font/google'
import { Toaster } from '@/components/ui/sonner'
const geistSans = Geist({
variable: '--font-geist-sans',
@ -14,8 +15,8 @@ const geistMono = Geist_Mono({
})
export const metadata: Metadata = {
title: 'Create Next App',
description: 'Generated by create next app'
title: 'Catalog Management',
description: 'Manage AI model and provider catalog'
}
export default function RootLayout({
@ -25,7 +26,10 @@ export default function RootLayout({
}>) {
return (
<html lang="en">
<body className={`${geistSans.variable} ${geistMono.variable} antialiased`}>{children}</body>
<body className={`${geistSans.variable} ${geistMono.variable} antialiased`}>
{children}
<Toaster />
</body>
</html>
)
}

View File

@ -19,9 +19,11 @@ import { Input } from '@/components/ui/input'
import { Separator } from '@/components/ui/separator'
import { Table, TableBody, TableCell, TableHead, TableHeader, TableRow } from '@/components/ui/table'
import { Textarea } from '@/components/ui/textarea'
import { ModelEditForm } from '@/components/model-edit-form'
// Import SWR hooks and utilities
import { getErrorMessage, useDebounce, useModels, useUpdateModel } from '@/lib/api-client'
import { getErrorMessage, useDebounce, useModels, useProviders, useUpdateModel } from '@/lib/api-client'
import type { CapabilityType, Model } from '@/lib/catalog-types'
import { toast } from 'sonner'
// Type-safe capabilities list
const CAPABILITIES: readonly CapabilityType[] = [
@ -94,6 +96,7 @@ export default function CatalogReview() {
const [currentPage, setCurrentPage] = useState(1)
const [editingModel, setEditingModel] = useState<Model | null>(null)
const [jsonContent, setJsonContent] = useState('')
const [editMode, setEditMode] = useState<'form' | 'json'>('form')
// Debounce search to avoid excessive API calls
const debouncedSearch = useDebounce(search, 300)
@ -111,6 +114,11 @@ export default function CatalogReview() {
providers: selectedProviders.length > 0 ? selectedProviders : undefined
})
// SWR hook for fetching all providers
const { data: providersData } = useProviders({
limit: 100 // Maximum allowed limit
})
// SWR mutation for updating models
const { trigger: updateModel, isMutating: isUpdating } = useUpdateModel()
@ -130,22 +138,33 @@ export default function CatalogReview() {
setJsonContent(JSON.stringify(model, null, 2))
}
const handleSave = async () => {
const handleSave = async (data?: Partial<Model>) => {
if (!editingModel) return
try {
// Validate JSON before sending
const updatedModel = JSON.parse(jsonContent) as unknown
let updatedModel: Partial<Model>
// Basic validation - the API will do thorough validation
if (!updatedModel || typeof updatedModel !== 'object') {
throw new Error('Invalid JSON format')
if (data) {
// Form submission
updatedModel = data
} else {
// JSON submission
const parsed = JSON.parse(jsonContent) as unknown
if (!parsed || typeof parsed !== 'object') {
throw new Error('Invalid JSON format')
}
updatedModel = parsed as Partial<Model>
}
// Use SWR mutation for optimistic update
await updateModel({
id: editingModel.id,
data: updatedModel as Partial<Model>
data: updatedModel
})
// Show success toast
toast.success('Model updated successfully', {
description: `${editingModel.id} has been updated`
})
// Close dialog and reset form
@ -153,16 +172,15 @@ export default function CatalogReview() {
setJsonContent('')
} catch (error) {
console.error('Error saving model:', error)
// Error will be handled by SWR and displayed in UI
// Show error toast
toast.error('Failed to update model', {
description: error instanceof Error ? error.message : 'Unknown error occurred'
})
}
}
// Type-safe function to extract unique providers
const getUniqueProviders = (): string[] => {
return [
...new Set(models.map((model) => model.owned_by).filter((provider): provider is string => Boolean(provider)))
]
}
// Get all unique providers from providers.json
const allProviders = providersData?.data?.map((p) => p.id) || []
return (
<div className="container mx-auto p-6 space-y-6">
@ -211,7 +229,7 @@ export default function CatalogReview() {
<div>
<label className="text-sm font-medium mb-2 block">Providers</label>
<div className="flex flex-wrap gap-2">
{getUniqueProviders().map((provider) => (
{allProviders.map((provider) => (
<Badge
key={provider}
variant={selectedProviders.includes(provider) ? 'default' : 'outline'}
@ -296,27 +314,62 @@ export default function CatalogReview() {
Edit
</Button>
</DialogTrigger>
<DialogContent className="max-w-4xl max-h-[80vh] overflow-auto">
<DialogContent className="max-w-4xl max-h-[90vh] overflow-hidden flex flex-col">
<DialogHeader>
<DialogTitle>Edit Model Configuration</DialogTitle>
<DialogDescription>
Modify the JSON configuration for {model.name || model.id}
</DialogDescription>
</DialogHeader>
<div className="space-y-4">
<Textarea
value={jsonContent}
onChange={(e) => setJsonContent(e.target.value)}
className="min-h-[400px] font-mono text-sm"
/>
<div className="flex gap-2 justify-end">
<Button variant="outline" onClick={() => setEditingModel(null)}>
Cancel
</Button>
<Button onClick={handleSave} disabled={isUpdating}>
{isUpdating ? 'Saving...' : 'Save Changes'}
</Button>
<div className="flex items-center justify-between">
<div>
<DialogTitle>Edit Model Configuration</DialogTitle>
<DialogDescription>
{editMode === 'form' ? 'Use the form below' : 'Edit JSON'} to modify {model.name || model.id}
</DialogDescription>
</div>
<div className="flex gap-2">
<Button
variant={editMode === 'form' ? 'default' : 'outline'}
size="sm"
onClick={() => setEditMode('form')}>
Form
</Button>
<Button
variant={editMode === 'json' ? 'default' : 'outline'}
size="sm"
onClick={() => setEditMode('json')}>
JSON
</Button>
</div>
</div>
</DialogHeader>
<div className="flex-1 overflow-auto">
{editMode === 'form' ? (
<ModelEditForm
model={model}
onSave={handleSave}
onCancel={() => setEditingModel(null)}
isSaving={isUpdating}
/>
) : (
<div className="space-y-4">
<Textarea
value={jsonContent}
onChange={(e) => setJsonContent(e.target.value)}
className="min-h-[500px] font-mono text-sm"
/>
<div className="flex gap-3 justify-end">
<Button
variant="outline"
onClick={() => setEditingModel(null)}
className="min-w-[100px]">
Cancel
</Button>
<Button
onClick={() => handleSave()}
disabled={isUpdating}
className="min-w-[140px] bg-primary hover:bg-primary/90">
{isUpdating ? 'Saving...' : 'Save Changes'}
</Button>
</div>
</div>
)}
</div>
</DialogContent>
</Dialog>

View File

@ -19,9 +19,11 @@ import { Input } from '@/components/ui/input'
import { Separator } from '@/components/ui/separator'
import { Table, TableBody, TableCell, TableHead, TableHeader, TableRow } from '@/components/ui/table'
import { Textarea } from '@/components/ui/textarea'
import { ProviderEditForm } from '@/components/provider-edit-form'
// Import SWR hooks and utilities
import { getErrorMessage, useDebounce, useProviders, useUpdateProvider } from '@/lib/api-client'
import { getErrorMessage, useDebounce, useProviders, useSyncProvider, useUpdateProvider } from '@/lib/api-client'
import type { Provider } from '@/lib/catalog-types'
import { toast } from 'sonner'
// Simple Pagination Component
function SimplePagination({
@ -71,6 +73,7 @@ export default function ProvidersPage() {
const [currentPage, setCurrentPage] = useState(1)
const [editingProvider, setEditingProvider] = useState<Provider | null>(null)
const [jsonContent, setJsonContent] = useState('')
const [editMode, setEditMode] = useState<'form' | 'json'>('form')
// Debounce search to avoid excessive API calls
const debouncedSearch = useDebounce(search, 300)
@ -90,6 +93,9 @@ export default function ProvidersPage() {
// SWR mutation for updating providers
const { trigger: updateProvider, isMutating: isUpdating } = useUpdateProvider()
// SWR mutation for syncing provider models
const { trigger: syncProvider, isMutating: isSyncing } = useSyncProvider()
// Extract data from SWR response
const providers = providersData?.data || []
const pagination = providersData?.pagination || {
@ -106,22 +112,33 @@ export default function ProvidersPage() {
setJsonContent(JSON.stringify(provider, null, 2))
}
const handleSave = async () => {
const handleSave = async (data?: Partial<Provider>) => {
if (!editingProvider) return
try {
// Validate JSON before sending
const updatedProvider = JSON.parse(jsonContent) as unknown
let updatedProvider: Partial<Provider>
// Basic validation - the API will do thorough validation
if (!updatedProvider || typeof updatedProvider !== 'object') {
throw new Error('Invalid JSON format')
if (data) {
// Form submission
updatedProvider = data
} else {
// JSON submission
const parsed = JSON.parse(jsonContent) as unknown
if (!parsed || typeof parsed !== 'object') {
throw new Error('Invalid JSON format')
}
updatedProvider = parsed as Partial<Provider>
}
// Use SWR mutation for optimistic update
await updateProvider({
id: editingProvider.id,
data: updatedProvider as Partial<Provider>
data: updatedProvider
})
// Show success toast
toast.success('Provider updated successfully', {
description: `${editingProvider.name} has been updated`
})
// Close dialog and reset form
@ -129,15 +146,55 @@ export default function ProvidersPage() {
setJsonContent('')
} catch (error) {
console.error('Error saving provider:', error)
// Error will be handled by SWR and displayed in UI
// Show error toast
toast.error('Failed to update provider', {
description: error instanceof Error ? error.message : 'Unknown error occurred'
})
}
}
// Type-safe function to extract provider capabilities
const getCapabilities = (behaviors: Record<string, unknown>): string[] => {
return Object.entries(behaviors)
.filter(([_, value]) => value === true)
.map(([key, _]) => key.replace(/_/g, ' ').replace(/\b\w/g, (letter) => letter.toUpperCase()))
const handleSync = async (provider: Provider) => {
if (!provider.models_api || !provider.models_api.enabled) {
toast.error('Sync not available', {
description: 'This provider does not have models_api configured'
})
return
}
try {
// Show loading toast
const loadingToast = toast.loading(`Syncing models from ${provider.name}...`, {
description: 'This may take a few moments'
})
// Trigger sync
const result = await syncProvider({
id: provider.id,
apiKey: undefined // TODO: Add API key input if needed
})
// Dismiss loading toast
toast.dismiss(loadingToast)
// Show success toast with statistics
const stats = result.statistics
toast.success(`Successfully synced ${provider.name}`, {
description: `Fetched: ${stats.fetched}, New: ${stats.newModels}, Overrides: ${stats.overridesGenerated + stats.overridesMerged}`
})
// Refresh provider list to show updated last_synced
refetchProviders()
} catch (error) {
console.error('Error syncing provider:', error)
toast.error('Failed to sync models', {
description: error instanceof Error ? error.message : 'Unknown error occurred'
})
}
}
// Type-safe function to extract provider formats
const getFormats = (provider: Provider): string[] => {
return provider.formats?.map((f) => f.format) || []
}
return (
@ -192,9 +249,8 @@ export default function ProvidersPage() {
<TableHead>ID</TableHead>
<TableHead>Name</TableHead>
<TableHead>Authentication</TableHead>
<TableHead>Pricing Model</TableHead>
<TableHead>Formats</TableHead>
<TableHead>Endpoints</TableHead>
<TableHead>Capabilities</TableHead>
<TableHead>Status</TableHead>
<TableHead>Actions</TableHead>
</TableRow>
@ -214,35 +270,30 @@ export default function ProvidersPage() {
<TableCell>
<Badge variant="outline">{provider.authentication}</Badge>
</TableCell>
<TableCell>
<Badge variant="secondary">{provider.pricing_model}</Badge>
</TableCell>
<TableCell>
<div className="flex flex-wrap gap-1 max-w-xs">
{provider.supported_endpoints.slice(0, 2).map((endpoint) => (
<Badge key={endpoint} variant="outline" className="text-xs">
{endpoint}
{getFormats(provider).slice(0, 2).map((format) => (
<Badge key={format} variant="secondary" className="text-xs">
{format}
</Badge>
))}
{provider.supported_endpoints.length > 2 && (
<Badge variant="outline" className="text-xs">
+{provider.supported_endpoints.length - 2}
{getFormats(provider).length > 2 && (
<Badge variant="secondary" className="text-xs">
+{getFormats(provider).length - 2}
</Badge>
)}
</div>
</TableCell>
<TableCell>
<div className="flex flex-wrap gap-1 max-w-xs">
{getCapabilities(provider.behaviors)
.slice(0, 2)
.map((capability) => (
<Badge key={capability} variant="secondary" className="text-xs">
{capability}
</Badge>
))}
{getCapabilities(provider.behaviors).length > 2 && (
<Badge variant="secondary" className="text-xs">
+{getCapabilities(provider.behaviors).length - 2}
{provider.supported_endpoints?.slice(0, 2).map((endpoint) => (
<Badge key={endpoint} variant="outline" className="text-xs">
{endpoint}
</Badge>
)) || <span className="text-muted-foreground text-xs">N/A</span>}
{(provider.supported_endpoints?.length || 0) > 2 && (
<Badge variant="outline" className="text-xs">
+{(provider.supported_endpoints?.length || 0) - 2}
</Badge>
)}
</div>
@ -254,12 +305,7 @@ export default function ProvidersPage() {
Deprecated
</Badge>
)}
{provider.maintenance_mode && (
<Badge variant="outline" className="text-xs">
Maintenance
</Badge>
)}
{!provider.deprecated && !provider.maintenance_mode && (
{!provider.deprecated && (
<Badge variant="default" className="text-xs">
Active
</Badge>
@ -267,34 +313,87 @@ export default function ProvidersPage() {
</div>
</TableCell>
<TableCell>
<Dialog>
<DialogTrigger asChild>
<Button variant="outline" size="sm" onClick={() => handleEdit(provider)}>
Edit
<div className="flex gap-2">
{provider.models_api && provider.models_api.enabled && (
<Button
variant="default"
size="sm"
onClick={() => handleSync(provider)}
disabled={isSyncing}
title={
provider.models_api.last_synced
? `Last synced: ${new Date(provider.models_api.last_synced).toLocaleString()}`
: 'Sync models from provider API'
}>
{isSyncing ? 'Syncing...' : 'Sync'}
</Button>
</DialogTrigger>
<DialogContent className="max-w-4xl max-h-[80vh] overflow-auto">
)}
<Dialog>
<DialogTrigger asChild>
<Button variant="outline" size="sm" onClick={() => handleEdit(provider)}>
Edit
</Button>
</DialogTrigger>
<DialogContent className="max-w-4xl max-h-[90vh] overflow-hidden flex flex-col">
<DialogHeader>
<DialogTitle>Edit Provider Configuration</DialogTitle>
<DialogDescription>Modify the JSON configuration for {provider.name}</DialogDescription>
</DialogHeader>
<div className="space-y-4">
<Textarea
value={jsonContent}
onChange={(e) => setJsonContent(e.target.value)}
className="min-h-[400px] font-mono text-sm"
/>
<div className="flex gap-2 justify-end">
<Button variant="outline" onClick={() => setEditingProvider(null)}>
Cancel
</Button>
<Button onClick={handleSave} disabled={isUpdating}>
{isUpdating ? 'Saving...' : 'Save Changes'}
</Button>
<div className="flex items-center justify-between">
<div>
<DialogTitle>Edit Provider Configuration</DialogTitle>
<DialogDescription>
{editMode === 'form' ? 'Use the form below' : 'Edit JSON'} to modify {provider.name}
</DialogDescription>
</div>
<div className="flex gap-2">
<Button
variant={editMode === 'form' ? 'default' : 'outline'}
size="sm"
onClick={() => setEditMode('form')}>
Form
</Button>
<Button
variant={editMode === 'json' ? 'default' : 'outline'}
size="sm"
onClick={() => setEditMode('json')}>
JSON
</Button>
</div>
</div>
</DialogHeader>
<div className="flex-1 overflow-auto">
{editMode === 'form' ? (
<ProviderEditForm
provider={provider}
onSave={handleSave}
onCancel={() => setEditingProvider(null)}
isSaving={isUpdating}
/>
) : (
<div className="space-y-4">
<Textarea
value={jsonContent}
onChange={(e) => setJsonContent(e.target.value)}
className="min-h-[500px] font-mono text-sm"
/>
<div className="flex gap-3 justify-end">
<Button
variant="outline"
onClick={() => setEditingProvider(null)}
className="min-w-[100px]">
Cancel
</Button>
<Button
onClick={() => handleSave()}
disabled={isUpdating}
className="min-w-[140px] bg-primary hover:bg-primary/90">
{isUpdating ? 'Saving...' : 'Save Changes'}
</Button>
</div>
</div>
)}
</div>
</DialogContent>
</Dialog>
</DialogContent>
</Dialog>
</div>
</TableCell>
</TableRow>
))}

View File

@ -0,0 +1,210 @@
'use client'
import { useState } from 'react'
import { Badge } from './ui/badge'
import { Button } from './ui/button'
import { Input } from './ui/input'
import { Label } from './ui/label'
import { Textarea } from './ui/textarea'
import type { Model } from '@/lib/catalog-types'
const CAPABILITIES = [
'FUNCTION_CALL',
'REASONING',
'IMAGE_RECOGNITION',
'IMAGE_GENERATION',
'AUDIO_RECOGNITION',
'AUDIO_GENERATION',
'EMBEDDING',
'RERANK',
'AUDIO_TRANSCRIPT',
'VIDEO_RECOGNITION',
'VIDEO_GENERATION',
'STRUCTURED_OUTPUT',
'FILE_INPUT',
'WEB_SEARCH',
'CODE_EXECUTION',
'FILE_SEARCH',
'COMPUTER_USE'
] as const
const MODALITIES = ['TEXT', 'VISION', 'AUDIO', 'VIDEO', 'VECTOR'] as const
interface ModelEditFormProps {
model: Model
onSave: (model: Partial<Model>) => void
onCancel: () => void
isSaving?: boolean
}
export function ModelEditForm({ model, onSave, onCancel, isSaving }: ModelEditFormProps) {
const [formData, setFormData] = useState({
id: model.id,
description: model.description || '',
capabilities: model.capabilities || [],
input_modalities: model.input_modalities || [],
output_modalities: model.output_modalities || ['TEXT'],
context_window: model.context_window?.toString() || '',
max_output_tokens: model.max_output_tokens?.toString() || ''
})
const handleSubmit = (e: React.FormEvent) => {
e.preventDefault()
const updatedModel: Partial<Model> = {
description: formData.description || undefined,
capabilities: formData.capabilities.length > 0 ? formData.capabilities : undefined,
input_modalities: formData.input_modalities.length > 0 ? formData.input_modalities : undefined,
output_modalities: formData.output_modalities.length > 0 ? formData.output_modalities : ['TEXT'],
context_window: formData.context_window ? parseInt(formData.context_window) : undefined,
max_output_tokens: formData.max_output_tokens ? parseInt(formData.max_output_tokens) : undefined
}
onSave(updatedModel)
}
const toggleCapability = (capability: string) => {
setFormData((prev) => ({
...prev,
capabilities: prev.capabilities.includes(capability)
? prev.capabilities.filter((c) => c !== capability)
: [...prev.capabilities, capability]
}))
}
const toggleModality = (modality: string, type: 'input' | 'output') => {
const field = type === 'input' ? 'input_modalities' : 'output_modalities'
setFormData((prev) => ({
...prev,
[field]: prev[field].includes(modality)
? prev[field].filter((m) => m !== modality)
: [...prev[field], modality]
}))
}
return (
<form onSubmit={handleSubmit} className="space-y-6">
{/* Model ID - Read only */}
<div className="space-y-2">
<Label htmlFor="id">Model ID</Label>
<Input id="id" value={formData.id} disabled className="font-mono" />
</div>
{/* Description */}
<div className="space-y-2">
<Label htmlFor="description">Description</Label>
<Textarea
id="description"
value={formData.description}
onChange={(e) => setFormData((prev) => ({ ...prev, description: e.target.value }))}
rows={4}
placeholder="Model description..."
/>
</div>
{/* Capabilities */}
<div className="space-y-2">
<Label>Capabilities</Label>
<div className="flex flex-wrap gap-2 p-3 rounded-md min-h-[60px] bg-muted/10">
{CAPABILITIES.map((capability) => (
<Badge
key={capability}
variant={formData.capabilities.includes(capability) ? 'default' : 'secondary'}
className={`cursor-pointer transition-all ${
formData.capabilities.includes(capability)
? 'bg-primary text-primary-foreground hover:bg-primary/90 border-2 border-primary'
: 'bg-secondary/50 text-secondary-foreground hover:bg-secondary/80 border-2 border-transparent'
}`}
onClick={() => toggleCapability(capability)}>
{capability.replace(/_/g, ' ')}
</Badge>
))}
</div>
<p className="text-sm text-muted-foreground">Click to toggle capabilities</p>
</div>
{/* Input Modalities */}
<div className="space-y-2">
<Label>Input Modalities</Label>
<div className="flex flex-wrap gap-2 p-3 rounded-md bg-muted/10">
{MODALITIES.map((modality) => (
<Badge
key={modality}
variant={formData.input_modalities.includes(modality) ? 'default' : 'secondary'}
className={`cursor-pointer transition-all ${
formData.input_modalities.includes(modality)
? 'bg-primary text-primary-foreground hover:bg-primary/90 border-2 border-primary'
: 'bg-secondary/50 text-secondary-foreground hover:bg-secondary/80 border-2 border-transparent'
}`}
onClick={() => toggleModality(modality, 'input')}>
{modality}
</Badge>
))}
</div>
</div>
{/* Output Modalities */}
<div className="space-y-2">
<Label>Output Modalities</Label>
<div className="flex flex-wrap gap-2 p-3 rounded-md bg-muted/10">
{MODALITIES.map((modality) => (
<Badge
key={modality}
variant={formData.output_modalities.includes(modality) ? 'default' : 'secondary'}
className={`cursor-pointer transition-all ${
formData.output_modalities.includes(modality)
? 'bg-primary text-primary-foreground hover:bg-primary/90 border-2 border-primary'
: 'bg-secondary/50 text-secondary-foreground hover:bg-secondary/80 border-2 border-transparent'
}`}
onClick={() => toggleModality(modality, 'output')}>
{modality}
</Badge>
))}
</div>
</div>
{/* Numeric Fields */}
<div className="grid grid-cols-2 gap-4">
<div className="space-y-2">
<Label htmlFor="context_window">Context Window</Label>
<Input
id="context_window"
type="number"
value={formData.context_window}
onChange={(e) => setFormData((prev) => ({ ...prev, context_window: e.target.value }))}
placeholder="e.g., 128000"
/>
</div>
<div className="space-y-2">
<Label htmlFor="max_output_tokens">Max Output Tokens</Label>
<Input
id="max_output_tokens"
type="number"
value={formData.max_output_tokens}
onChange={(e) => setFormData((prev) => ({ ...prev, max_output_tokens: e.target.value }))}
placeholder="e.g., 8192"
/>
</div>
</div>
{/* Actions */}
<div className="flex justify-end gap-3">
<Button
type="button"
variant="outline"
onClick={onCancel}
disabled={isSaving}
className="min-w-[100px]">
Cancel
</Button>
<Button
type="submit"
disabled={isSaving}
className="min-w-[140px] bg-primary hover:bg-primary/90">
{isSaving ? 'Saving...' : 'Save Changes'}
</Button>
</div>
</form>
)
}

View File

@ -7,8 +7,7 @@ import { cn } from '@/lib/utils'
const navigation = [
{ name: 'Models', href: '/' },
{ name: 'Providers', href: '/providers' },
{ name: 'Overrides', href: '/overrides' }
{ name: 'Providers', href: '/providers' }
]
export function Navigation() {

View File

@ -0,0 +1,300 @@
'use client'
import { useState } from 'react'
import { Badge } from './ui/badge'
import { Button } from './ui/button'
import { Input } from './ui/input'
import { Label } from './ui/label'
import { Select, SelectContent, SelectItem, SelectTrigger, SelectValue } from './ui/select'
import { Textarea } from './ui/textarea'
import type { Provider } from '@/lib/catalog-types'
import { EndpointTypeSchema, ApiFormatSchema, AuthenticationSchema } from '../../src/schemas/provider'
// Extract enum values from schemas
const ENDPOINT_TYPES = EndpointTypeSchema.options
const API_FORMATS = ApiFormatSchema.options
const AUTHENTICATION_TYPES = AuthenticationSchema.options
interface ProviderEditFormProps {
provider: Provider
onSave: (provider: Partial<Provider>) => void
onCancel: () => void
isSaving?: boolean
}
export function ProviderEditForm({ provider, onSave, onCancel, isSaving }: ProviderEditFormProps) {
const [formData, setFormData] = useState({
id: provider.id,
name: provider.name,
description: provider.description || '',
authentication: provider.authentication || 'API_KEY',
supported_endpoints: provider.supported_endpoints || ['CHAT_COMPLETIONS'],
formats: provider.formats || [],
deprecated: provider.deprecated || false,
documentation: provider.documentation || '',
website: provider.website || ''
})
const handleSubmit = (e: React.FormEvent) => {
e.preventDefault()
const updatedProvider: Partial<Provider> = {
name: formData.name,
description: formData.description || undefined,
authentication: formData.authentication as any,
supported_endpoints: formData.supported_endpoints,
formats: formData.formats,
deprecated: formData.deprecated,
documentation: formData.documentation || undefined,
website: formData.website || undefined
}
onSave(updatedProvider)
}
const toggleEndpoint = (endpoint: string) => {
setFormData((prev) => ({
...prev,
supported_endpoints: prev.supported_endpoints.includes(endpoint)
? prev.supported_endpoints.filter((e) => e !== endpoint)
: [...prev.supported_endpoints, endpoint]
}))
}
const addFormat = () => {
setFormData((prev) => ({
...prev,
formats: [
...prev.formats,
{
format: 'OPENAI' as any,
base_url: '',
default: prev.formats.length === 0
}
]
}))
}
const removeFormat = (index: number) => {
setFormData((prev) => ({
...prev,
formats: prev.formats.filter((_, i) => i !== index)
}))
}
const updateFormat = (index: number, field: string, value: any) => {
setFormData((prev) => ({
...prev,
formats: prev.formats.map((f, i) =>
i === index
? {
...f,
[field]: value
}
: f
)
}))
}
const setDefaultFormat = (index: number) => {
setFormData((prev) => ({
...prev,
formats: prev.formats.map((f, i) => ({
...f,
default: i === index
}))
}))
}
return (
<form onSubmit={handleSubmit} className="space-y-6 max-h-[70vh] overflow-y-auto px-1">
{/* Provider ID - Read only */}
<div className="space-y-2">
<Label htmlFor="id">Provider ID</Label>
<Input id="id" value={formData.id} disabled className="font-mono" />
</div>
{/* Name */}
<div className="space-y-2">
<Label htmlFor="name">Name *</Label>
<Input
id="name"
value={formData.name}
onChange={(e) => setFormData((prev) => ({ ...prev, name: e.target.value }))}
required
/>
</div>
{/* Description */}
<div className="space-y-2">
<Label htmlFor="description">Description</Label>
<Textarea
id="description"
value={formData.description}
onChange={(e) => setFormData((prev) => ({ ...prev, description: e.target.value }))}
rows={3}
placeholder="Provider description..."
/>
</div>
{/* Authentication */}
<div className="space-y-2">
<Label htmlFor="authentication">Authentication</Label>
<Select
value={formData.authentication}
onValueChange={(value) => setFormData((prev) => ({ ...prev, authentication: value }))}>
<SelectTrigger>
<SelectValue />
</SelectTrigger>
<SelectContent>
{AUTHENTICATION_TYPES.map((type) => (
<SelectItem key={type} value={type}>
{type.replace(/_/g, ' ')}
</SelectItem>
))}
</SelectContent>
</Select>
</div>
{/* Supported Endpoints */}
<div className="space-y-2">
<Label>Supported Endpoints</Label>
<div className="flex flex-wrap gap-2 p-3 rounded-md min-h-[60px] bg-muted/10">
{ENDPOINT_TYPES.map((endpoint) => (
<Badge
key={endpoint}
variant={formData.supported_endpoints.includes(endpoint) ? 'default' : 'secondary'}
className={`cursor-pointer transition-all ${
formData.supported_endpoints.includes(endpoint)
? 'bg-primary text-primary-foreground hover:bg-primary/90 border-2 border-primary'
: 'bg-secondary/50 text-secondary-foreground hover:bg-secondary/80 border-2 border-transparent'
}`}
onClick={() => toggleEndpoint(endpoint)}>
{endpoint.replace(/_/g, ' ')}
</Badge>
))}
</div>
<p className="text-sm text-muted-foreground">Click to toggle supported endpoints</p>
</div>
{/* API Formats */}
<div className="space-y-3">
<div className="flex items-center justify-between">
<Label>API Formats</Label>
<Button type="button" variant="outline" size="sm" onClick={addFormat}>
+ Add Format
</Button>
</div>
{formData.formats.map((format, index) => (
<div key={index} className="p-4 border rounded-md space-y-3 bg-muted/50">
<div className="flex items-center justify-between">
<span className="text-sm font-medium">Format {index + 1}</span>
<div className="flex items-center gap-2">
{!format.default && (
<Button
type="button"
variant="ghost"
size="sm"
onClick={() => setDefaultFormat(index)}
className="text-xs">
Set Default
</Button>
)}
{format.default && <Badge variant="secondary">Default</Badge>}
<Button type="button" variant="ghost" size="sm" onClick={() => removeFormat(index)}>
</Button>
</div>
</div>
<div className="grid grid-cols-2 gap-3">
<div className="space-y-2">
<Label className="text-xs">Format</Label>
<Select value={format.format} onValueChange={(value) => updateFormat(index, 'format', value)}>
<SelectTrigger className="h-9">
<SelectValue />
</SelectTrigger>
<SelectContent>
{API_FORMATS.map((fmt) => (
<SelectItem key={fmt} value={fmt}>
{fmt}
</SelectItem>
))}
</SelectContent>
</Select>
</div>
<div className="space-y-2">
<Label className="text-xs">Base URL</Label>
<Input
value={format.base_url}
onChange={(e) => updateFormat(index, 'base_url', e.target.value)}
placeholder="https://api.example.com"
className="h-9"
/>
</div>
</div>
</div>
))}
</div>
{/* Documentation & Website */}
<div className="grid grid-cols-2 gap-4">
<div className="space-y-2">
<Label htmlFor="documentation">Documentation URL</Label>
<Input
id="documentation"
type="url"
value={formData.documentation}
onChange={(e) => setFormData((prev) => ({ ...prev, documentation: e.target.value }))}
placeholder="https://docs.example.com"
/>
</div>
<div className="space-y-2">
<Label htmlFor="website">Website URL</Label>
<Input
id="website"
type="url"
value={formData.website}
onChange={(e) => setFormData((prev) => ({ ...prev, website: e.target.value }))}
placeholder="https://example.com"
/>
</div>
</div>
{/* Deprecated */}
<div className="flex items-center gap-2">
<input
type="checkbox"
id="deprecated"
checked={formData.deprecated}
onChange={(e) => setFormData((prev) => ({ ...prev, deprecated: e.target.checked }))}
className="w-4 h-4"
/>
<Label htmlFor="deprecated" className="cursor-pointer">
Mark as deprecated
</Label>
</div>
{/* Actions */}
<div className="flex justify-end gap-3 sticky bottom-0 bg-background pt-4 border-t">
<Button
type="button"
variant="outline"
onClick={onCancel}
disabled={isSaving}
className="min-w-[100px]">
Cancel
</Button>
<Button
type="submit"
disabled={isSaving}
className="min-w-[140px] bg-primary hover:bg-primary/90">
{isSaving ? 'Saving...' : 'Save Changes'}
</Button>
</div>
</form>
)
}
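For orientation, the formats array this form manages ends up shaped roughly as below; the 'ANTHROPIC' literal and the URLs are assumptions for illustration, while 'OPENAI' matches the default used in addFormat.

```ts
// Illustrative shape of the formats array edited by ProviderEditForm.
// setDefaultFormat keeps exactly one entry with default: true.
const formats = [
  { format: 'OPENAI', base_url: 'https://api.example.com/v1', default: true },
  { format: 'ANTHROPIC', base_url: 'https://api.example.com/anthropic', default: false } // assumed format literal
]
```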

View File

@ -0,0 +1,26 @@
"use client"
import * as React from "react"
import * as LabelPrimitive from "@radix-ui/react-label"
import { cva, type VariantProps } from "class-variance-authority"
import { cn } from "@/lib/utils"
const labelVariants = cva(
"text-sm font-medium leading-none peer-disabled:cursor-not-allowed peer-disabled:opacity-70"
)
const Label = React.forwardRef<
React.ElementRef<typeof LabelPrimitive.Root>,
React.ComponentPropsWithoutRef<typeof LabelPrimitive.Root> &
VariantProps<typeof labelVariants>
>(({ className, ...props }, ref) => (
<LabelPrimitive.Root
ref={ref}
className={cn(labelVariants(), className)}
{...props}
/>
))
Label.displayName = LabelPrimitive.Root.displayName
export { Label }

View File

@ -0,0 +1,158 @@
"use client"
import * as React from "react"
import * as SelectPrimitive from "@radix-ui/react-select"
import { cn } from "@/lib/utils"
import { CheckIcon, ChevronDownIcon, ChevronUpIcon } from "@radix-ui/react-icons"
const Select = SelectPrimitive.Root
const SelectGroup = SelectPrimitive.Group
const SelectValue = SelectPrimitive.Value
const SelectTrigger = React.forwardRef<
React.ElementRef<typeof SelectPrimitive.Trigger>,
React.ComponentPropsWithoutRef<typeof SelectPrimitive.Trigger>
>(({ className, children, ...props }, ref) => (
<SelectPrimitive.Trigger
ref={ref}
className={cn(
"flex h-9 w-full items-center justify-between whitespace-nowrap rounded-md border border-input bg-transparent px-3 py-2 text-sm shadow-sm ring-offset-background data-[placeholder]:text-muted-foreground focus:outline-none focus:ring-1 focus:ring-ring disabled:cursor-not-allowed disabled:opacity-50 [&>span]:line-clamp-1",
className
)}
{...props}
>
{children}
<SelectPrimitive.Icon asChild>
<ChevronDownIcon className="h-4 w-4 opacity-50" />
</SelectPrimitive.Icon>
</SelectPrimitive.Trigger>
))
SelectTrigger.displayName = SelectPrimitive.Trigger.displayName
const SelectScrollUpButton = React.forwardRef<
React.ElementRef<typeof SelectPrimitive.ScrollUpButton>,
React.ComponentPropsWithoutRef<typeof SelectPrimitive.ScrollUpButton>
>(({ className, ...props }, ref) => (
<SelectPrimitive.ScrollUpButton
ref={ref}
className={cn(
"flex cursor-default items-center justify-center py-1",
className
)}
{...props}
>
<ChevronUpIcon className="h-4 w-4" />
</SelectPrimitive.ScrollUpButton>
))
SelectScrollUpButton.displayName = SelectPrimitive.ScrollUpButton.displayName
const SelectScrollDownButton = React.forwardRef<
React.ElementRef<typeof SelectPrimitive.ScrollDownButton>,
React.ComponentPropsWithoutRef<typeof SelectPrimitive.ScrollDownButton>
>(({ className, ...props }, ref) => (
<SelectPrimitive.ScrollDownButton
ref={ref}
className={cn(
"flex cursor-default items-center justify-center py-1",
className
)}
{...props}
>
<ChevronDownIcon className="h-4 w-4" />
</SelectPrimitive.ScrollDownButton>
))
SelectScrollDownButton.displayName =
SelectPrimitive.ScrollDownButton.displayName
const SelectContent = React.forwardRef<
React.ElementRef<typeof SelectPrimitive.Content>,
React.ComponentPropsWithoutRef<typeof SelectPrimitive.Content>
>(({ className, children, position = "popper", ...props }, ref) => (
<SelectPrimitive.Portal>
<SelectPrimitive.Content
ref={ref}
className={cn(
"relative z-50 max-h-[--radix-select-content-available-height] min-w-[8rem] overflow-y-auto overflow-x-hidden rounded-md border bg-popover text-popover-foreground shadow-md data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0 data-[state=closed]:zoom-out-95 data-[state=open]:zoom-in-95 data-[side=bottom]:slide-in-from-top-2 data-[side=left]:slide-in-from-right-2 data-[side=right]:slide-in-from-left-2 data-[side=top]:slide-in-from-bottom-2 origin-[--radix-select-content-transform-origin]",
position === "popper" &&
"data-[side=bottom]:translate-y-1 data-[side=left]:-translate-x-1 data-[side=right]:translate-x-1 data-[side=top]:-translate-y-1",
className
)}
position={position}
{...props}
>
<SelectScrollUpButton />
<SelectPrimitive.Viewport
className={cn(
"p-1",
position === "popper" &&
"h-[var(--radix-select-trigger-height)] w-full min-w-[var(--radix-select-trigger-width)]"
)}
>
{children}
</SelectPrimitive.Viewport>
<SelectScrollDownButton />
</SelectPrimitive.Content>
</SelectPrimitive.Portal>
))
SelectContent.displayName = SelectPrimitive.Content.displayName
const SelectLabel = React.forwardRef<
React.ElementRef<typeof SelectPrimitive.Label>,
React.ComponentPropsWithoutRef<typeof SelectPrimitive.Label>
>(({ className, ...props }, ref) => (
<SelectPrimitive.Label
ref={ref}
className={cn("px-2 py-1.5 text-sm font-semibold", className)}
{...props}
/>
))
SelectLabel.displayName = SelectPrimitive.Label.displayName
const SelectItem = React.forwardRef<
React.ElementRef<typeof SelectPrimitive.Item>,
React.ComponentPropsWithoutRef<typeof SelectPrimitive.Item>
>(({ className, children, ...props }, ref) => (
<SelectPrimitive.Item
ref={ref}
className={cn(
"relative flex w-full cursor-default select-none items-center rounded-sm py-1.5 pl-2 pr-8 text-sm outline-none focus:bg-accent focus:text-accent-foreground data-[disabled]:pointer-events-none data-[disabled]:opacity-50",
className
)}
{...props}
>
<span className="absolute right-2 flex h-3.5 w-3.5 items-center justify-center">
<SelectPrimitive.ItemIndicator>
<CheckIcon className="h-4 w-4" />
</SelectPrimitive.ItemIndicator>
</span>
<SelectPrimitive.ItemText>{children}</SelectPrimitive.ItemText>
</SelectPrimitive.Item>
))
SelectItem.displayName = SelectPrimitive.Item.displayName
const SelectSeparator = React.forwardRef<
React.ElementRef<typeof SelectPrimitive.Separator>,
React.ComponentPropsWithoutRef<typeof SelectPrimitive.Separator>
>(({ className, ...props }, ref) => (
<SelectPrimitive.Separator
ref={ref}
className={cn("-mx-1 my-1 h-px bg-muted", className)}
{...props}
/>
))
SelectSeparator.displayName = SelectPrimitive.Separator.displayName
export {
Select,
SelectGroup,
SelectValue,
SelectTrigger,
SelectContent,
SelectLabel,
SelectItem,
SelectSeparator,
SelectScrollUpButton,
SelectScrollDownButton,
}

View File

@ -0,0 +1,31 @@
"use client"
import { useTheme } from "next-themes"
import { Toaster as Sonner } from "sonner"
type ToasterProps = React.ComponentProps<typeof Sonner>
const Toaster = ({ ...props }: ToasterProps) => {
const { theme = "system" } = useTheme()
return (
<Sonner
theme={theme as ToasterProps["theme"]}
className="toaster group"
toastOptions={{
classNames: {
toast:
"group toast group-[.toaster]:bg-background group-[.toaster]:text-foreground group-[.toaster]:border-border group-[.toaster]:shadow-lg",
description: "group-[.toast]:text-muted-foreground",
actionButton:
"group-[.toast]:bg-primary group-[.toast]:text-primary-foreground",
cancelButton:
"group-[.toast]:bg-muted group-[.toast]:text-muted-foreground",
},
}}
{...props}
/>
)
}
export { Toaster }

View File

@ -115,6 +115,13 @@ export class ApiClient {
body: data
}),
// Sync provider models
sync: (id: string, apiKey?: string) => ({
url: `${API_BASE}/providers/${id}/sync`,
method: 'POST',
body: { apiKey }
}),
// Delete a provider (if implemented)
delete: (id: string) => ({
url: `${API_BASE}/providers/${id}`,
@ -229,6 +236,32 @@ export function useUpdateProvider() {
)
}
// Mutation for syncing provider models
export function useSyncProvider() {
return useSWRMutation(
'/api/catalog/providers',
async (url: string, { arg }: { arg: { id: string; apiKey?: string } }) => {
const response = await fetch(`${url}/${arg.id}/sync`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ apiKey: arg.apiKey })
})
if (!response.ok) {
const errorData = await response.json()
const error: ExtendedApiError = {
error: errorData.error || 'Failed to sync provider models',
status: response.status,
info: errorData
}
throw error
}
return await response.json()
}
)
}
// Utility function for global error handling
export function handleApiError(error: unknown): ExtendedApiError {
if (error && typeof error === 'object' && 'error' in error) {

View File

@ -7,145 +7,27 @@ import * as z from 'zod'
// Import schemas from catalog package
import {
ModelConfigSchema,
ModelListSchema
ModelListSchema,
OverrideListSchema,
ProviderConfigSchema,
ProviderListSchema,
ProviderModelOverrideSchema as CatalogProviderModelOverrideSchema
} from '../../src/schemas'
// Base parameter schemas
const ParameterRangeSchema = z.object({
supported: z.literal(true),
min: z.number().positive(),
max: z.number().positive(),
default: z.number().positive()
})
const ParameterBooleanSchema = z.object({
supported: z.boolean()
})
const ParameterUnsupportedSchema = z.object({
supported: z.literal(false)
})
const ParameterValueSchema = z.union([ParameterRangeSchema, ParameterBooleanSchema, ParameterUnsupportedSchema])
// Pricing schema
const PricingInfoSchema = z.object({
input: z.object({
per_million_tokens: z.number().nonnegative(),
currency: z.string().length(3) // ISO 4217 currency codes
}),
output: z.object({
per_million_tokens: z.number().nonnegative(),
currency: z.string().length(3)
})
})
// Complete Model schema - use from catalog package
export const ModelSchema = ModelConfigSchema
// Provider behaviors schema
const ProviderBehaviorsSchema = z
.object({
supports_custom_models: z.boolean(),
provides_model_mapping: z.boolean(),
supports_model_versioning: z.boolean(),
provides_fallback_routing: z.boolean(),
has_auto_retry: z.boolean(),
supports_health_check: z.boolean(),
has_real_time_metrics: z.boolean(),
provides_usage_analytics: z.boolean(),
supports_webhook_events: z.boolean(),
requires_api_key_validation: z.boolean(),
supports_rate_limiting: z.boolean(),
provides_usage_limits: z.boolean(),
supports_streaming: z.boolean(),
supports_batch_processing: z.boolean(),
supports_model_fine_tuning: z.boolean()
})
.loose() // Allow extensions
// API compatibility schema
const ApiCompatibilitySchema = z
.object({
supports_array_content: z.boolean().optional(),
supports_stream_options: z.boolean().optional(),
supports_developer_role: z.boolean().optional(),
supports_service_tier: z.boolean().optional(),
supports_thinking_control: z.boolean().optional(),
supports_api_version: z.boolean().optional(),
supports_parallel_tools: z.boolean().optional(),
supports_multimodal: z.boolean().optional()
})
.loose()
// Special configuration schema (flexible)
const SpecialConfigSchema = z.record(z.string(), z.unknown())
// Provider metadata schema
const ProviderMetadataSchema = z
.object({
source: z.string().optional(),
tags: z.array(z.string()).optional(),
reliability: z.enum(['low', 'medium', 'high']).optional()
})
.loose()
// Complete Provider schema
export const ProviderSchema = z.object({
id: z.string().min(1),
name: z.string().min(1),
description: z.string().optional(),
authentication: z.string().min(1),
pricing_model: z.string().min(1),
model_routing: z.string().min(1),
behaviors: ProviderBehaviorsSchema,
supported_endpoints: z.array(z.string()),
api_compatibility: ApiCompatibilitySchema.optional(),
default_api_host: z.url().optional(),
default_rate_limit: z.number().positive().optional(),
model_id_patterns: z.array(z.string()).optional(),
alias_model_ids: z.record(z.string(), z.string()).optional(),
documentation: z.string().url().optional(),
website: z.string().url().optional(),
deprecated: z.boolean(),
maintenance_mode: z.boolean(),
config_version: z.string().min(1),
special_config: SpecialConfigSchema.optional(),
metadata: ProviderMetadataSchema.optional()
})
// Complete Provider schema - use from catalog package
export const ProviderSchema = ProviderConfigSchema
// Data file schemas - use from catalog package
export const ModelsDataFileSchema = ModelListSchema
export const ProvidersDataFileSchema = ProviderListSchema
export const ProvidersDataFileSchema = z.object({
version: z.string().min(1),
providers: z.array(ProviderSchema)
})
// Override schemas
const OverrideLimitsSchema = z.object({
context_window: z.number().positive().optional(),
max_output_tokens: z.number().positive().optional()
})
export const ProviderModelOverrideSchema = z.object({
provider_id: z.string().min(1),
model_id: z.string().min(1),
disabled: z.boolean().default(false),
reason: z.string().optional(),
last_updated: z.string().optional(),
updated_by: z.string().optional(),
priority: z.number().default(100),
limits: OverrideLimitsSchema.optional(),
pricing: PricingInfoSchema.optional()
})
export const OverridesDataFileSchema = z.object({
version: z.string().min(1),
overrides: z.array(ProviderModelOverrideSchema)
})
// Override schemas - use from catalog package
export const ProviderModelOverrideSchema = CatalogProviderModelOverrideSchema
export const OverridesDataFileSchema = OverrideListSchema
// Pagination schemas
export const PaginationInfoSchema = z.object({
@ -226,7 +108,27 @@ export const ModalityTypeSchema = z.enum(['TEXT', 'VISION', 'AUDIO', 'VIDEO'])
export const AuthenticationTypeSchema = z.enum(['API_KEY', 'OAUTH', 'NONE', 'CUSTOM'])
export const EndpointTypeSchema = z.enum(['CHAT_COMPLETIONS', 'MESSAGES', 'RESPONSES', 'EMBEDDINGS', 'RERANK'])
export const EndpointTypeSchema = z.enum([
// LLM endpoints
'CHAT_COMPLETIONS',
'TEXT_COMPLETIONS',
'MESSAGES',
'RESPONSES',
'GENERATE_CONTENT',
// Embedding endpoints
'EMBEDDINGS',
'RERANK',
// Image endpoints
'IMAGE_GENERATION',
'IMAGE_EDIT',
'IMAGE_VARIATION',
// Audio endpoints
'AUDIO_TRANSCRIPTION',
'AUDIO_TRANSLATION',
'TEXT_TO_SPEECH',
// Video endpoints
'VIDEO_GENERATION'
])
// Validation utilities using Zod
@ -321,12 +223,12 @@ export function validateQueryParams(params: URLSearchParams): z.infer<typeof Que
// Type-safe error response creation
export function createErrorResponse(
message: string,
status: number = 500,
details?: unknown
): z.infer<typeof ApiErrorSchema> {
const error: z.infer<typeof ApiErrorSchema> = { error: message }
if (details !== undefined) {
;(error as any).details = details
// Attach details via Object.assign; ApiErrorSchema allows an optional details field
Object.assign(error, { details })
}
return error
}

View File

@ -1,16 +1,6 @@
import type { NextConfig } from 'next'
const nextConfig: NextConfig = {
// Configure static file serving from external directory
async rewrites() {
return [
// Proxy API requests to the catalog API
{
source: '/api/catalog/:path*',
destination: 'http://localhost:3001/api/catalog/:path*'
}
]
},
// Add custom headers for static files
async headers() {
return [

View File

@ -11,11 +11,15 @@
"dependencies": {
"@radix-ui/react-dialog": "^1.1.15",
"@radix-ui/react-icons": "^1.3.2",
"@radix-ui/react-label": "^2.1.8",
"@radix-ui/react-select": "^2.2.6",
"@radix-ui/react-separator": "^1.1.8",
"@radix-ui/react-slot": "^1.2.4",
"next": "16.0.6",
"next-themes": "^0.4.6",
"react": "19.2.0",
"react-dom": "19.2.0",
"sonner": "^2.0.7",
"swr": "^2.3.7",
"zod": "^4.0.0"
},

View File

@ -2017,6 +2017,8 @@ __metadata:
dependencies:
"@radix-ui/react-dialog": "npm:^1.1.15"
"@radix-ui/react-icons": "npm:^1.3.2"
"@radix-ui/react-label": "npm:^2.1.8"
"@radix-ui/react-select": "npm:^2.2.6"
"@radix-ui/react-separator": "npm:^1.1.8"
"@radix-ui/react-slot": "npm:^1.2.4"
"@tailwindcss/postcss": "npm:^4"
@ -2026,8 +2028,10 @@ __metadata:
eslint: "npm:^9"
eslint-config-next: "npm:16.0.6"
next: "npm:16.0.6"
next-themes: "npm:^0.4.6"
react: "npm:19.2.0"
react-dom: "npm:19.2.0"
sonner: "npm:^2.0.7"
swr: "npm:^2.3.7"
tailwindcss: "npm:^4"
typescript: "npm:^5"
@ -2040,6 +2044,8 @@ __metadata:
resolution: "@cherrystudio/catalog@workspace:packages/catalog"
dependencies:
"@types/json-schema": "npm:^7.0.15"
"@types/node": "npm:^24.10.2"
dotenv: "npm:^17.2.3"
json-schema: "npm:^0.4.0"
tsdown: "npm:^0.16.6"
typescript: "npm:^5.9.3"
@ -7748,6 +7754,25 @@ __metadata:
languageName: node
linkType: hard
"@radix-ui/react-label@npm:^2.1.8":
version: 2.1.8
resolution: "@radix-ui/react-label@npm:2.1.8"
dependencies:
"@radix-ui/react-primitive": "npm:2.1.4"
peerDependencies:
"@types/react": "*"
"@types/react-dom": "*"
react: ^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc
react-dom: ^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc
peerDependenciesMeta:
"@types/react":
optional: true
"@types/react-dom":
optional: true
checksum: 10c0/8b130380bd54bafb0dc652270c8cf035ceeb78faab82f78c0a76fc33cc0718e8455ff880e0db1b6c10f203ff342bf1f941544eb258c1fd85ae5b49b53cdf1a3d
languageName: node
linkType: hard
"@radix-ui/react-menu@npm:2.1.16":
version: 2.1.16
resolution: "@radix-ui/react-menu@npm:2.1.16"
@ -13074,6 +13099,15 @@ __metadata:
languageName: node
linkType: hard
"@types/node@npm:^24.10.2":
version: 24.10.2
resolution: "@types/node@npm:24.10.2"
dependencies:
undici-types: "npm:~7.16.0"
checksum: 10c0/560c894e1a9bf7468718ceca8cd520361fd0d3fcc0b020c2f028fc722b28b5b56aecd16736a9b753d52a14837c066cf23480a8582ead59adc63a7e4333bc976c
languageName: node
linkType: hard
"@types/pako@npm:^1.0.2":
version: 1.0.7
resolution: "@types/pako@npm:1.0.7"
@ -18600,6 +18634,13 @@ __metadata:
languageName: node
linkType: hard
"dotenv@npm:^17.2.3":
version: 17.2.3
resolution: "dotenv@npm:17.2.3"
checksum: 10c0/c884403209f713214a1b64d4d1defa4934c2aa5b0002f5a670ae298a51e3c3ad3ba79dfee2f8df49f01ae74290fcd9acdb1ab1d09c7bfb42b539036108bb2ba0
languageName: node
linkType: hard
"drizzle-kit@npm:^0.31.4":
version: 0.31.4
resolution: "drizzle-kit@npm:0.31.4"
@ -25691,6 +25732,16 @@ __metadata:
languageName: node
linkType: hard
"next-themes@npm:^0.4.6":
version: 0.4.6
resolution: "next-themes@npm:0.4.6"
peerDependencies:
react: ^16.8 || ^17 || ^18 || ^19 || ^19.0.0-rc
react-dom: ^16.8 || ^17 || ^18 || ^19 || ^19.0.0-rc
checksum: 10c0/83590c11d359ce7e4ced14f6ea9dd7a691d5ce6843fe2dc520fc27e29ae1c535118478d03e7f172609c41b1ef1b8da6b8dd2d2acd6cd79cac1abbdbd5b99f2c4
languageName: node
linkType: hard
"next@npm:16.0.6":
version: 16.0.6
resolution: "next@npm:16.0.6"
@ -30332,6 +30383,16 @@ __metadata:
languageName: node
linkType: hard
"sonner@npm:^2.0.7":
version: 2.0.7
resolution: "sonner@npm:2.0.7"
peerDependencies:
react: ^18.0.0 || ^19.0.0 || ^19.0.0-rc
react-dom: ^18.0.0 || ^19.0.0 || ^19.0.0-rc
checksum: 10c0/6966ab5e892ed6aab579a175e4a24f3b48747f0fc21cb68c3e33cb41caa7a0eebeb098c210545395e47a18d585eb8734ae7dd12d2bd18c8a3294a1ee73f997d9
languageName: node
linkType: hard
"source-map-js@npm:^1.0.1, source-map-js@npm:^1.0.2, source-map-js@npm:^1.2.0, source-map-js@npm:^1.2.1":
version: 1.2.1
resolution: "source-map-js@npm:1.2.1"
@ -32198,6 +32259,13 @@ __metadata:
languageName: node
linkType: hard
"undici-types@npm:~7.16.0":
version: 7.16.0
resolution: "undici-types@npm:7.16.0"
checksum: 10c0/3033e2f2b5c9f1504bdc5934646cb54e37ecaca0f9249c983f7b1fc2e87c6d18399ebb05dc7fd5419e02b2e915f734d872a65da2e3eeed1813951c427d33cc9a
languageName: node
linkType: hard
"undici@npm:6.21.2":
version: 6.21.2
resolution: "undici@npm:6.21.2"