feat: more control for service tier (#8888)

* feat(types): add service tier parameter support and refine the Provider type

Add isSupportServiceTier and serviceTier fields to the Provider type to support the service tier parameter
Add the isOpenAIServiceTier type guard to validate service tier values
Extend the SystemProviderId enum type and add the ProviderSupportedServiceTier type
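
A minimal sketch of what this adds, with names taken from the commit message above; the real Provider type carries many more fields, and later commits in this PR rename the flag to the isNotSupport* style and widen the tier union:

type OpenAIServiceTier = 'auto' | 'default' | 'flex' // 'priority' arrives later in this PR

// Union of every tier value a supported provider accepts; at this point it
// only covers OpenAI and is later widened with the Groq tiers.
type ProviderSupportedServiceTier = OpenAIServiceTier

interface Provider {
  id: string
  // ...existing fields elided...
  isSupportServiceTier?: boolean
  serviceTier?: ProviderSupportedServiceTier
}

// New type guard: narrows an arbitrary string to a known tier.
function isOpenAIServiceTier(value: string): value is OpenAIServiceTier {
  return value === 'auto' || value === 'default' || value === 'flex'
}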

* refactor(types): move isSystemProvider into the types module and rework the system provider ID definitions

Move the isSystemProvider function from config/providers.ts to types/index.ts for better code organization
Rework the system provider IDs into a SystemProviderIds constant object and add a type-checking function
Update all import paths that reference isSystemProvider
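
A sketch of the reworked definitions, assuming the shape implied by the diffs further down (the object's keys double as the id strings; only a few providers are shown):

const SystemProviderIds = {
  openai: 'openai',
  groq: 'groq',
  silicon: 'silicon'
  // ...one entry per built-in provider
} as const

type SystemProviderId = keyof typeof SystemProviderIds

// Relocated guard; the exact implementation in types/index.ts may differ.
const isSystemProvider = (provider: { id: string }): provider is { id: SystemProviderId } =>
  Object.hasOwn(SystemProviderIds, provider.id)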

* refactor(llm): turn the system provider array into a keyed config object

Restructure the system provider data from an array into a key-value config object, which is easier to maintain and extend
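
After the refactor the config is a keyed object and the provider list is derived from it, matching the config/providers diff near the end of this page (one entry quoted, the rest elided):

const SYSTEM_PROVIDERS_CONFIG = {
  silicon: {
    id: 'silicon',
    name: 'Silicon',
    type: 'openai',
    apiKey: '',
    apiHost: 'https://api.siliconflow.cn',
    models: SYSTEM_MODELS.silicon,
    isSystem: true,
    enabled: true
  }
  // ...remaining providers keyed by id
} as const

const SYSTEM_PROVIDERS = Object.values(SYSTEM_PROVIDERS_CONFIG)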

* refactor(providers): move the system provider config into the config/providers file

* refactor: rename isSupportedFlexServiceTier to isSupportFlexServiceTierModel

Unify the function naming style to improve code readability

* refactor(types): improve the OpenAIServiceTier type definition and validation logic

Define OpenAIServiceTier through a constant enum-like object to improve type safety
Use Object.values to streamline the type validation
Standardize the service-tier support flags on the isNotSupport prefix
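
The resulting shape, sketched with the lowercase values the tiers end up with later in this PR (the casing at this commit differs, per the rename below):

const OpenAIServiceTiers = {
  auto: 'auto',
  default: 'default',
  flex: 'flex'
} as const

type OpenAIServiceTier = (typeof OpenAIServiceTiers)[keyof typeof OpenAIServiceTiers]

// Validation via Object.values, as described above (swapped for
// Object.hasOwn in a later commit):
function isOpenAIServiceTier(value: string): value is OpenAIServiceTier {
  return Object.values(OpenAIServiceTiers).some((tier) => tier === value)
}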

* feat(OpenAI): add a priority service tier option

Add the priority tier to the OpenAIServiceTiers type and the settings options

* refactor(store): remove the unused OpenAIServiceTiers and SystemProviderIds imports

* fix(OpenAISettingsGroup): add priority to the FALL_BACK_SERVICE_TIER mapping

* feat(provider): support configuring the service_tier parameter in provider settings

Move the service_tier configuration from the global settings into the provider settings, and add the supporting UI and logic

* refactor(service-tier): unify the service tier naming and add Groq support

Change the OpenAIServiceTiers constant values from uppercase to lowercase for naming consistency
Add GroqServiceTiers and the matching type guards
Rework the service tier handling in BaseApiClient to support multiple vendors
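
A hedged sketch of the Groq additions, using only names visible elsewhere in this PR (GroqServiceTiers.flex and isGroqServiceTier in the BaseApiClient diff, on_demand and performance in the new UI options); the actual member list may differ:

const GroqServiceTiers = {
  on_demand: 'on_demand',
  flex: 'flex',
  performance: 'performance'
} as const

type GroqServiceTier = keyof typeof GroqServiceTiers

function isGroqServiceTier(value: string): value is GroqServiceTier {
  return Object.hasOwn(GroqServiceTiers, value)
}

BaseApiClient then branches on the provider id: for Groq it validates the stored tier against GroqServiceTiers, for everything else against OpenAIServiceTiers, and returns undefined when the value does not apply (see the getServiceTier diff below).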

* fix(store): bump the persisted store version to 128 and add migration logic

Add a migration from version 127 to 128 that moves the openAI serviceTier setting into the provider config
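
A minimal sketch of that migration, assuming redux-persist style migrations and the state paths this PR names (settings.openAI.serviceTier moving onto the openai provider entry); the state typing is deliberately loose:

const migrations = {
  128: (state: any) => ({
    ...state,
    llm: {
      ...state.llm,
      // copy the old global setting onto the openai provider entry
      providers: state.llm.providers.map((p: any) =>
        p.id === 'openai' ? { ...p, serviceTier: state.settings?.openAI?.serviceTier } : p
      )
    }
  })
}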

* feat(settings): add Groq service tier options and update the related translations

Add Groq-specific service tier options (on_demand and performance) and update the Chinese translation files to include the new options

* feat(i18n): add multilingual support for service tiers and long-running mode

* fix(ProviderSettings): correct a variable name in the service tier options

* refactor(providers): rename PROVIDER_CONFIG to PROVIDER_URLS and update the references

* refactor(types): switch the type guards from Object.values to Object.hasOwn

Simplify the type guards by checking key existence directly with Object.hasOwn
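
The simplified guard, reusing the definitions sketched earlier; it works because the lowercase rename made each key identical to its tier string:

function isOpenAIServiceTier(value: string): value is OpenAIServiceTier {
  // key existence check instead of scanning Object.values
  return Object.hasOwn(OpenAIServiceTiers, value)
}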

* chore: update the openai dependency to 5.12.0

* fix(openai): fix the service_tier type assertion

Groq has a different service tier configuration that does not match the openai interface type, so an explicit type assertion is needed

* fix(openai): return an empty string by default when the input is empty

* fix(openai): fix the Groq service tier type mismatch

Cast service_tier to OpenAIServiceTier, since Groq's service tier configuration is incompatible with the OpenAI interface type

* fix(tests): correct the expected results in the system provider name matching tests

Change the expected result for 'SystemProvider' in the matchKeywordsInProvider and matchKeywordsInModel tests from false to true to match the intended behavior

* test(api): add SYSTEM_MODELS to the mock config

* refactor(config): update the system model config and type definitions

- Update the vertexai and dashscope model configs from empty arrays to their corresponding system models
- Change the SYSTEM_MODELS type definition to include SystemProviderId
- Remove unused model configs such as o3, gitee-ai, and zhinao

* test(match): update the system provider test cases to match on id rather than name

* test(services): update the model config mocks in the ApiService tests

Rework the model config mocks in the test files: use vi.importActual to load the original module and extend the mock implementation, and remove the now-unused SYSTEM_MODELS import
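
A sketch of that mocking pattern; the module path and override are illustrative, but vi.importActual is vitest's documented way to load the real module so the mock only replaces what the test needs:

import { vi } from 'vitest'

vi.mock('@renderer/config/models', async () => {
  const actual = await vi.importActual<typeof import('@renderer/config/models')>('@renderer/config/models')
  return {
    ...actual
    // per-test overrides go here
  }
})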

* fix(openai): update the openai dependency and fix the embedding model handling

Fix the embedding handling in the openai client so that base64 encoding is skipped when the model name contains "jina"
Remove the platform-related headers to resolve compatibility issues
Bump the openai dependency in package.json to 5.12.0

* refactor(OpenAISettingsGroup): remove the unnecessary fallback logic

* Revert "refactor(OpenAISettingsGroup): remove the unnecessary fallback logic"

This reverts commit 2837f73cf6.

* fix(OpenAISettingsGroup): fix the service tier fallback logic to support the Groq provider

When the service tier value is not among the selectable options, fall back to a provider-specific default: on_demand for the Groq provider and auto otherwise.
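
Sketched with the constants from the earlier snippets (the local name is illustrative):

const fallbackServiceTier =
  provider.id === SystemProviderIds.groq ? GroqServiceTiers.on_demand : OpenAIServiceTiers.auto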

* refactor(types): simplify the type definitions from value types to key types

Define SystemProviderId, OpenAIServiceTier, and GroqServiceTier directly as key types instead of deriving them from the value types, making the code more concise
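
The simplification is safe because each key equals its (now lowercase) value, so the key union and the value union coincide:

// Before: derive the union from the values
type OpenAIServiceTierBefore = (typeof OpenAIServiceTiers)[keyof typeof OpenAIServiceTiers]
// After: use the keys directly
type OpenAIServiceTier = keyof typeof OpenAIServiceTiers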

* chore: update the openai dependency to 5.12.0 and apply the patch

* test(naming): add test cases for getFancyProviderName

* test(utils): add matching tests for system provider names

Add a matching test for the system provider name "Alibaba" to ensure matchKeywordsInModel correctly recognizes system provider names

* test(utils): update the system provider i18n name matching tests

Add extra test cases for system provider i18n name matching that verify the non-matching case

* chore: remove the old patch

* fix(openai): add a type annotation to commonParams for stronger type safety

* fix(aiCore): return undefined instead of a default value from the service tier setting

* test(matching): update the system provider i18n name matching tests

Adjust the test cases to make explicit that system providers should not be matched via the name field
Add a matching test for 'Alibaba'
Phantom authored on 2025-08-07 17:31:08 +08:00, committed by GitHub
parent ffb23909fa
commit 9ad0dc36b7
37 changed files with 1373 additions and 1165 deletions

View File

@ -1,279 +0,0 @@
diff --git a/client.js b/client.js
index 33b4ff6309d5f29187dab4e285d07dac20340bab..8f568637ee9e4677585931fb0284c8165a933f69 100644
--- a/client.js
+++ b/client.js
@@ -433,7 +433,7 @@ class OpenAI {
'User-Agent': this.getUserAgent(),
'X-Stainless-Retry-Count': String(retryCount),
...(options.timeout ? { 'X-Stainless-Timeout': String(Math.trunc(options.timeout / 1000)) } : {}),
- ...(0, detect_platform_1.getPlatformHeaders)(),
+ // ...(0, detect_platform_1.getPlatformHeaders)(),
'OpenAI-Organization': this.organization,
'OpenAI-Project': this.project,
},
diff --git a/client.mjs b/client.mjs
index c34c18213073540ebb296ea540b1d1ad39527906..1ce1a98256d7e90e26ca963582f235b23e996e73 100644
--- a/client.mjs
+++ b/client.mjs
@@ -430,7 +430,7 @@ export class OpenAI {
'User-Agent': this.getUserAgent(),
'X-Stainless-Retry-Count': String(retryCount),
...(options.timeout ? { 'X-Stainless-Timeout': String(Math.trunc(options.timeout / 1000)) } : {}),
- ...getPlatformHeaders(),
+ // ...getPlatformHeaders(),
'OpenAI-Organization': this.organization,
'OpenAI-Project': this.project,
},
diff --git a/core/error.js b/core/error.js
index a12d9d9ccd242050161adeb0f82e1b98d9e78e20..fe3a5462480558bc426deea147f864f12b36f9bd 100644
--- a/core/error.js
+++ b/core/error.js
@@ -40,7 +40,7 @@ class APIError extends OpenAIError {
if (!status || !headers) {
return new APIConnectionError({ message, cause: (0, errors_1.castToError)(errorResponse) });
}
- const error = errorResponse?.['error'];
+ const error = errorResponse?.['error'] || errorResponse;
if (status === 400) {
return new BadRequestError(status, error, message, headers);
}
diff --git a/core/error.mjs b/core/error.mjs
index 83cefbaffeb8c657536347322d8de9516af479a2..63334b7972ec04882aa4a0800c1ead5982345045 100644
--- a/core/error.mjs
+++ b/core/error.mjs
@@ -36,7 +36,7 @@ export class APIError extends OpenAIError {
if (!status || !headers) {
return new APIConnectionError({ message, cause: castToError(errorResponse) });
}
- const error = errorResponse?.['error'];
+ const error = errorResponse?.['error'] || errorResponse;
if (status === 400) {
return new BadRequestError(status, error, message, headers);
}
diff --git a/resources/embeddings.js b/resources/embeddings.js
index 2404264d4ba0204322548945ebb7eab3bea82173..8f1bc45cc45e0797d50989d96b51147b90ae6790 100644
--- a/resources/embeddings.js
+++ b/resources/embeddings.js
@@ -5,52 +5,64 @@ exports.Embeddings = void 0;
const resource_1 = require("../core/resource.js");
const utils_1 = require("../internal/utils.js");
class Embeddings extends resource_1.APIResource {
- /**
- * Creates an embedding vector representing the input text.
- *
- * @example
- * ```ts
- * const createEmbeddingResponse =
- * await client.embeddings.create({
- * input: 'The quick brown fox jumped over the lazy dog',
- * model: 'text-embedding-3-small',
- * });
- * ```
- */
- create(body, options) {
- const hasUserProvidedEncodingFormat = !!body.encoding_format;
- // No encoding_format specified, defaulting to base64 for performance reasons
- // See https://github.com/openai/openai-node/pull/1312
- let encoding_format = hasUserProvidedEncodingFormat ? body.encoding_format : 'base64';
- if (hasUserProvidedEncodingFormat) {
- (0, utils_1.loggerFor)(this._client).debug('embeddings/user defined encoding_format:', body.encoding_format);
- }
- const response = this._client.post('/embeddings', {
- body: {
- ...body,
- encoding_format: encoding_format,
- },
- ...options,
- });
- // if the user specified an encoding_format, return the response as-is
- if (hasUserProvidedEncodingFormat) {
- return response;
- }
- // in this stage, we are sure the user did not specify an encoding_format
- // and we defaulted to base64 for performance reasons
- // we are sure then that the response is base64 encoded, let's decode it
- // the returned result will be a float32 array since this is OpenAI API's default encoding
- (0, utils_1.loggerFor)(this._client).debug('embeddings/decoding base64 embeddings from base64');
- return response._thenUnwrap((response) => {
- if (response && response.data) {
- response.data.forEach((embeddingBase64Obj) => {
- const embeddingBase64Str = embeddingBase64Obj.embedding;
- embeddingBase64Obj.embedding = (0, utils_1.toFloat32Array)(embeddingBase64Str);
- });
- }
- return response;
- });
- }
+ /**
+ * Creates an embedding vector representing the input text.
+ *
+ * @example
+ * ```ts
+ * const createEmbeddingResponse =
+ * await client.embeddings.create({
+ * input: 'The quick brown fox jumped over the lazy dog',
+ * model: 'text-embedding-3-small',
+ * });
+ * ```
+ */
+ create(body, options) {
+ const hasUserProvidedEncodingFormat = !!body.encoding_format;
+ // No encoding_format specified, defaulting to base64 for performance reasons
+ // See https://github.com/openai/openai-node/pull/1312
+ let encoding_format = hasUserProvidedEncodingFormat
+ ? body.encoding_format
+ : "base64";
+ if (body.model.includes("jina")) {
+ encoding_format = undefined;
+ }
+ if (hasUserProvidedEncodingFormat) {
+ (0, utils_1.loggerFor)(this._client).debug(
+ "embeddings/user defined encoding_format:",
+ body.encoding_format
+ );
+ }
+ const response = this._client.post("/embeddings", {
+ body: {
+ ...body,
+ encoding_format: encoding_format,
+ },
+ ...options,
+ });
+ // if the user specified an encoding_format, return the response as-is
+ if (hasUserProvidedEncodingFormat || body.model.includes("jina")) {
+ return response;
+ }
+ // in this stage, we are sure the user did not specify an encoding_format
+ // and we defaulted to base64 for performance reasons
+ // we are sure then that the response is base64 encoded, let's decode it
+ // the returned result will be a float32 array since this is OpenAI API's default encoding
+ (0, utils_1.loggerFor)(this._client).debug(
+ "embeddings/decoding base64 embeddings from base64"
+ );
+ return response._thenUnwrap((response) => {
+ if (response && response.data && typeof response.data[0]?.embedding === 'string') {
+ response.data.forEach((embeddingBase64Obj) => {
+ const embeddingBase64Str = embeddingBase64Obj.embedding;
+ embeddingBase64Obj.embedding = (0, utils_1.toFloat32Array)(
+ embeddingBase64Str
+ );
+ });
+ }
+ return response;
+ });
+ }
}
exports.Embeddings = Embeddings;
//# sourceMappingURL=embeddings.js.map
diff --git a/resources/embeddings.mjs b/resources/embeddings.mjs
index 19dcaef578c194a89759c4360073cfd4f7dd2cbf..0284e9cc615c900eff508eb595f7360a74bd9200 100644
--- a/resources/embeddings.mjs
+++ b/resources/embeddings.mjs
@@ -2,51 +2,61 @@
import { APIResource } from "../core/resource.mjs";
import { loggerFor, toFloat32Array } from "../internal/utils.mjs";
export class Embeddings extends APIResource {
- /**
- * Creates an embedding vector representing the input text.
- *
- * @example
- * ```ts
- * const createEmbeddingResponse =
- * await client.embeddings.create({
- * input: 'The quick brown fox jumped over the lazy dog',
- * model: 'text-embedding-3-small',
- * });
- * ```
- */
- create(body, options) {
- const hasUserProvidedEncodingFormat = !!body.encoding_format;
- // No encoding_format specified, defaulting to base64 for performance reasons
- // See https://github.com/openai/openai-node/pull/1312
- let encoding_format = hasUserProvidedEncodingFormat ? body.encoding_format : 'base64';
- if (hasUserProvidedEncodingFormat) {
- loggerFor(this._client).debug('embeddings/user defined encoding_format:', body.encoding_format);
- }
- const response = this._client.post('/embeddings', {
- body: {
- ...body,
- encoding_format: encoding_format,
- },
- ...options,
- });
- // if the user specified an encoding_format, return the response as-is
- if (hasUserProvidedEncodingFormat) {
- return response;
- }
- // in this stage, we are sure the user did not specify an encoding_format
- // and we defaulted to base64 for performance reasons
- // we are sure then that the response is base64 encoded, let's decode it
- // the returned result will be a float32 array since this is OpenAI API's default encoding
- loggerFor(this._client).debug('embeddings/decoding base64 embeddings from base64');
- return response._thenUnwrap((response) => {
- if (response && response.data) {
- response.data.forEach((embeddingBase64Obj) => {
- const embeddingBase64Str = embeddingBase64Obj.embedding;
- embeddingBase64Obj.embedding = toFloat32Array(embeddingBase64Str);
- });
- }
- return response;
- });
- }
+ /**
+ * Creates an embedding vector representing the input text.
+ *
+ * @example
+ * ```ts
+ * const createEmbeddingResponse =
+ * await client.embeddings.create({
+ * input: 'The quick brown fox jumped over the lazy dog',
+ * model: 'text-embedding-3-small',
+ * });
+ * ```
+ */
+ create(body, options) {
+ const hasUserProvidedEncodingFormat = !!body.encoding_format;
+ // No encoding_format specified, defaulting to base64 for performance reasons
+ // See https://github.com/openai/openai-node/pull/1312
+ let encoding_format = hasUserProvidedEncodingFormat
+ ? body.encoding_format
+ : "base64";
+ if (body.model.includes("jina")) {
+ encoding_format = undefined;
+ }
+ if (hasUserProvidedEncodingFormat) {
+ loggerFor(this._client).debug(
+ "embeddings/user defined encoding_format:",
+ body.encoding_format
+ );
+ }
+ const response = this._client.post("/embeddings", {
+ body: {
+ ...body,
+ encoding_format: encoding_format,
+ },
+ ...options,
+ });
+ // if the user specified an encoding_format, return the response as-is
+ if (hasUserProvidedEncodingFormat || body.model.includes("jina")) {
+ return response;
+ }
+ // in this stage, we are sure the user did not specify an encoding_format
+ // and we defaulted to base64 for performance reasons
+ // we are sure then that the response is base64 encoded, let's decode it
+ // the returned result will be a float32 array since this is OpenAI API's default encoding
+ loggerFor(this._client).debug(
+ "embeddings/decoding base64 embeddings from base64"
+ );
+ return response._thenUnwrap((response) => {
+ if (response && response.data && typeof response.data[0]?.embedding === 'string') {
+ response.data.forEach((embeddingBase64Obj) => {
+ const embeddingBase64Str = embeddingBase64Obj.embedding;
+ embeddingBase64Obj.embedding = toFloat32Array(embeddingBase64Str);
+ });
+ }
+ return response;
+ });
+ }
}
//# sourceMappingURL=embeddings.mjs.map

View File

@ -0,0 +1,344 @@
diff --git a/client.js b/client.js
index 22cc08d77ce849842a28f684c20dd5738152efa4..0c20f96405edbe7724b87517115fa2a61934b343 100644
--- a/client.js
+++ b/client.js
@@ -444,7 +444,7 @@ class OpenAI {
'User-Agent': this.getUserAgent(),
'X-Stainless-Retry-Count': String(retryCount),
...(options.timeout ? { 'X-Stainless-Timeout': String(Math.trunc(options.timeout / 1000)) } : {}),
- ...(0, detect_platform_1.getPlatformHeaders)(),
+ // ...(0, detect_platform_1.getPlatformHeaders)(),
'OpenAI-Organization': this.organization,
'OpenAI-Project': this.project,
},
diff --git a/client.mjs b/client.mjs
index 7f1af99fb30d2cae03eea6687b53e6c7828faceb..fd66373a5eff31a5846084387a3fd97956c9ad48 100644
--- a/client.mjs
+++ b/client.mjs
@@ -1,43 +1,41 @@
// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
var _OpenAI_instances, _a, _OpenAI_encoder, _OpenAI_baseURLOverridden;
-import { __classPrivateFieldGet, __classPrivateFieldSet } from "./internal/tslib.mjs";
-import { uuid4 } from "./internal/utils/uuid.mjs";
-import { validatePositiveInteger, isAbsoluteURL, safeJSON } from "./internal/utils/values.mjs";
-import { sleep } from "./internal/utils/sleep.mjs";
-import { castToError, isAbortError } from "./internal/errors.mjs";
-import { getPlatformHeaders } from "./internal/detect-platform.mjs";
-import * as Shims from "./internal/shims.mjs";
-import * as Opts from "./internal/request-options.mjs";
-import * as qs from "./internal/qs/index.mjs";
-import { VERSION } from "./version.mjs";
+import { APIPromise } from "./core/api-promise.mjs";
import * as Errors from "./core/error.mjs";
import * as Pagination from "./core/pagination.mjs";
import * as Uploads from "./core/uploads.mjs";
-import * as API from "./resources/index.mjs";
-import { APIPromise } from "./core/api-promise.mjs";
-import { Batches, } from "./resources/batches.mjs";
-import { Completions, } from "./resources/completions.mjs";
-import { Embeddings, } from "./resources/embeddings.mjs";
-import { Files, } from "./resources/files.mjs";
-import { Images, } from "./resources/images.mjs";
-import { Models } from "./resources/models.mjs";
-import { Moderations, } from "./resources/moderations.mjs";
-import { Webhooks } from "./resources/webhooks.mjs";
+import { isRunningInBrowser } from "./internal/detect-platform.mjs";
+import { castToError, isAbortError } from "./internal/errors.mjs";
+import { buildHeaders } from "./internal/headers.mjs";
+import * as qs from "./internal/qs/index.mjs";
+import * as Opts from "./internal/request-options.mjs";
+import * as Shims from "./internal/shims.mjs";
+import { __classPrivateFieldGet, __classPrivateFieldSet } from "./internal/tslib.mjs";
+import { readEnv } from "./internal/utils/env.mjs";
+import { formatRequestDetails, loggerFor, parseLogLevel, } from "./internal/utils/log.mjs";
+import { sleep } from "./internal/utils/sleep.mjs";
+import { uuid4 } from "./internal/utils/uuid.mjs";
+import { isAbsoluteURL, isEmptyObj, safeJSON, validatePositiveInteger } from "./internal/utils/values.mjs";
import { Audio } from "./resources/audio/audio.mjs";
+import { Batches, } from "./resources/batches.mjs";
import { Beta } from "./resources/beta/beta.mjs";
import { Chat } from "./resources/chat/chat.mjs";
+import { Completions, } from "./resources/completions.mjs";
import { Containers, } from "./resources/containers/containers.mjs";
+import { Embeddings, } from "./resources/embeddings.mjs";
import { Evals, } from "./resources/evals/evals.mjs";
+import { Files, } from "./resources/files.mjs";
import { FineTuning } from "./resources/fine-tuning/fine-tuning.mjs";
import { Graders } from "./resources/graders/graders.mjs";
+import { Images, } from "./resources/images.mjs";
+import * as API from "./resources/index.mjs";
+import { Models } from "./resources/models.mjs";
+import { Moderations, } from "./resources/moderations.mjs";
import { Responses } from "./resources/responses/responses.mjs";
import { Uploads as UploadsAPIUploads, } from "./resources/uploads/uploads.mjs";
import { VectorStores, } from "./resources/vector-stores/vector-stores.mjs";
-import { isRunningInBrowser } from "./internal/detect-platform.mjs";
-import { buildHeaders } from "./internal/headers.mjs";
-import { readEnv } from "./internal/utils/env.mjs";
-import { formatRequestDetails, loggerFor, parseLogLevel, } from "./internal/utils/log.mjs";
-import { isEmptyObj } from "./internal/utils/values.mjs";
+import { Webhooks } from "./resources/webhooks.mjs";
+import { VERSION } from "./version.mjs";
/**
* API Client for interfacing with the OpenAI API.
*/
@@ -441,7 +439,7 @@ export class OpenAI {
'User-Agent': this.getUserAgent(),
'X-Stainless-Retry-Count': String(retryCount),
...(options.timeout ? { 'X-Stainless-Timeout': String(Math.trunc(options.timeout / 1000)) } : {}),
- ...getPlatformHeaders(),
+ // ...getPlatformHeaders(),
'OpenAI-Organization': this.organization,
'OpenAI-Project': this.project,
},
diff --git a/core/error.js b/core/error.js
index c302cc356f0f24b50c3f5a0aa3ea0b79ae1e9a8d..164ee2ee31cd7eea8f70139e25d140b763e91d36 100644
--- a/core/error.js
+++ b/core/error.js
@@ -40,7 +40,7 @@ class APIError extends OpenAIError {
if (!status || !headers) {
return new APIConnectionError({ message, cause: (0, errors_1.castToError)(errorResponse) });
}
- const error = errorResponse?.['error'];
+ const error = errorResponse?.['error'] || errorResponse;
if (status === 400) {
return new BadRequestError(status, error, message, headers);
}
diff --git a/core/error.mjs b/core/error.mjs
index 75f5b0c328cc4894478f3490a00dbf6abd96fc12..269f46f96e9fad1f7a1649a3810562abc7fae37f 100644
--- a/core/error.mjs
+++ b/core/error.mjs
@@ -36,7 +36,7 @@ export class APIError extends OpenAIError {
if (!status || !headers) {
return new APIConnectionError({ message, cause: castToError(errorResponse) });
}
- const error = errorResponse?.['error'];
+ const error = errorResponse?.['error'] || errorResponse;
if (status === 400) {
return new BadRequestError(status, error, message, headers);
}
diff --git a/resources/embeddings.js b/resources/embeddings.js
index 2404264d4ba0204322548945ebb7eab3bea82173..93b9e286f62101b5aa7532e96ddba61f682ece3f 100644
--- a/resources/embeddings.js
+++ b/resources/embeddings.js
@@ -5,52 +5,64 @@ exports.Embeddings = void 0;
const resource_1 = require("../core/resource.js");
const utils_1 = require("../internal/utils.js");
class Embeddings extends resource_1.APIResource {
- /**
- * Creates an embedding vector representing the input text.
- *
- * @example
- * ```ts
- * const createEmbeddingResponse =
- * await client.embeddings.create({
- * input: 'The quick brown fox jumped over the lazy dog',
- * model: 'text-embedding-3-small',
- * });
- * ```
- */
- create(body, options) {
- const hasUserProvidedEncodingFormat = !!body.encoding_format;
- // No encoding_format specified, defaulting to base64 for performance reasons
- // See https://github.com/openai/openai-node/pull/1312
- let encoding_format = hasUserProvidedEncodingFormat ? body.encoding_format : 'base64';
- if (hasUserProvidedEncodingFormat) {
- (0, utils_1.loggerFor)(this._client).debug('embeddings/user defined encoding_format:', body.encoding_format);
- }
- const response = this._client.post('/embeddings', {
- body: {
- ...body,
- encoding_format: encoding_format,
- },
- ...options,
- });
- // if the user specified an encoding_format, return the response as-is
- if (hasUserProvidedEncodingFormat) {
- return response;
- }
- // in this stage, we are sure the user did not specify an encoding_format
- // and we defaulted to base64 for performance reasons
- // we are sure then that the response is base64 encoded, let's decode it
- // the returned result will be a float32 array since this is OpenAI API's default encoding
- (0, utils_1.loggerFor)(this._client).debug('embeddings/decoding base64 embeddings from base64');
- return response._thenUnwrap((response) => {
- if (response && response.data) {
- response.data.forEach((embeddingBase64Obj) => {
- const embeddingBase64Str = embeddingBase64Obj.embedding;
- embeddingBase64Obj.embedding = (0, utils_1.toFloat32Array)(embeddingBase64Str);
- });
- }
- return response;
- });
+ /**
+ * Creates an embedding vector representing the input text.
+ *
+ * @example
+ * ```ts
+ * const createEmbeddingResponse =
+ * await client.embeddings.create({
+ * input: 'The quick brown fox jumped over the lazy dog',
+ * model: 'text-embedding-3-small',
+ * });
+ * ```
+ */
+ create(body, options) {
+ const hasUserProvidedEncodingFormat = !!body.encoding_format;
+ // No encoding_format specified, defaulting to base64 for performance reasons
+ // See https://github.com/openai/openai-node/pull/1312
+ let encoding_format = hasUserProvidedEncodingFormat
+ ? body.encoding_format
+ : "base64";
+ if (body.model.includes("jina")) {
+ encoding_format = undefined;
+ }
+ if (hasUserProvidedEncodingFormat) {
+ (0, utils_1.loggerFor)(this._client).debug(
+ "embeddings/user defined encoding_format:",
+ body.encoding_format
+ );
}
+ const response = this._client.post("/embeddings", {
+ body: {
+ ...body,
+ encoding_format: encoding_format,
+ },
+ ...options,
+ });
+ // if the user specified an encoding_format, return the response as-is
+ if (hasUserProvidedEncodingFormat || body.model.includes("jina")) {
+ return response;
+ }
+ // in this stage, we are sure the user did not specify an encoding_format
+ // and we defaulted to base64 for performance reasons
+ // we are sure then that the response is base64 encoded, let's decode it
+ // the returned result will be a float32 array since this is OpenAI API's default encoding
+ (0, utils_1.loggerFor)(this._client).debug(
+ "embeddings/decoding base64 embeddings from base64"
+ );
+ return response._thenUnwrap((response) => {
+ if (response && response.data && typeof response.data[0]?.embedding === 'string') {
+ response.data.forEach((embeddingBase64Obj) => {
+ const embeddingBase64Str = embeddingBase64Obj.embedding;
+ embeddingBase64Obj.embedding = (0, utils_1.toFloat32Array)(
+ embeddingBase64Str
+ );
+ });
+ }
+ return response;
+ });
+ }
}
exports.Embeddings = Embeddings;
//# sourceMappingURL=embeddings.js.map
diff --git a/resources/embeddings.mjs b/resources/embeddings.mjs
index 19dcaef578c194a89759c4360073cfd4f7dd2cbf..42c903fadb03c707356a983603ff09e4152ecf11 100644
--- a/resources/embeddings.mjs
+++ b/resources/embeddings.mjs
@@ -2,51 +2,61 @@
import { APIResource } from "../core/resource.mjs";
import { loggerFor, toFloat32Array } from "../internal/utils.mjs";
export class Embeddings extends APIResource {
- /**
- * Creates an embedding vector representing the input text.
- *
- * @example
- * ```ts
- * const createEmbeddingResponse =
- * await client.embeddings.create({
- * input: 'The quick brown fox jumped over the lazy dog',
- * model: 'text-embedding-3-small',
- * });
- * ```
- */
- create(body, options) {
- const hasUserProvidedEncodingFormat = !!body.encoding_format;
- // No encoding_format specified, defaulting to base64 for performance reasons
- // See https://github.com/openai/openai-node/pull/1312
- let encoding_format = hasUserProvidedEncodingFormat ? body.encoding_format : 'base64';
- if (hasUserProvidedEncodingFormat) {
- loggerFor(this._client).debug('embeddings/user defined encoding_format:', body.encoding_format);
- }
- const response = this._client.post('/embeddings', {
- body: {
- ...body,
- encoding_format: encoding_format,
- },
- ...options,
- });
- // if the user specified an encoding_format, return the response as-is
- if (hasUserProvidedEncodingFormat) {
- return response;
- }
- // in this stage, we are sure the user did not specify an encoding_format
- // and we defaulted to base64 for performance reasons
- // we are sure then that the response is base64 encoded, let's decode it
- // the returned result will be a float32 array since this is OpenAI API's default encoding
- loggerFor(this._client).debug('embeddings/decoding base64 embeddings from base64');
- return response._thenUnwrap((response) => {
- if (response && response.data) {
- response.data.forEach((embeddingBase64Obj) => {
- const embeddingBase64Str = embeddingBase64Obj.embedding;
- embeddingBase64Obj.embedding = toFloat32Array(embeddingBase64Str);
- });
- }
- return response;
- });
+ /**
+ * Creates an embedding vector representing the input text.
+ *
+ * @example
+ * ```ts
+ * const createEmbeddingResponse =
+ * await client.embeddings.create({
+ * input: 'The quick brown fox jumped over the lazy dog',
+ * model: 'text-embedding-3-small',
+ * });
+ * ```
+ */
+ create(body, options) {
+ const hasUserProvidedEncodingFormat = !!body.encoding_format;
+ // No encoding_format specified, defaulting to base64 for performance reasons
+ // See https://github.com/openai/openai-node/pull/1312
+ let encoding_format = hasUserProvidedEncodingFormat
+ ? body.encoding_format
+ : "base64";
+ if (body.model.includes("jina")) {
+ encoding_format = undefined;
+ }
+ if (hasUserProvidedEncodingFormat) {
+ loggerFor(this._client).debug(
+ "embeddings/user defined encoding_format:",
+ body.encoding_format
+ );
}
+ const response = this._client.post("/embeddings", {
+ body: {
+ ...body,
+ encoding_format: encoding_format,
+ },
+ ...options,
+ });
+ // if the user specified an encoding_format, return the response as-is
+ if (hasUserProvidedEncodingFormat || body.model.includes("jina")) {
+ return response;
+ }
+ // in this stage, we are sure the user did not specify an encoding_format
+ // and we defaulted to base64 for performance reasons
+ // we are sure then that the response is base64 encoded, let's decode it
+ // the returned result will be a float32 array since this is OpenAI API's default encoding
+ loggerFor(this._client).debug(
+ "embeddings/decoding base64 embeddings from base64"
+ );
+ return response._thenUnwrap((response) => {
+ if (response && response.data && typeof response.data[0]?.embedding === 'string') {
+ response.data.forEach((embeddingBase64Obj) => {
+ const embeddingBase64Str = embeddingBase64Obj.embedding;
+ embeddingBase64Obj.embedding = toFloat32Array(embeddingBase64Str);
+ });
+ }
+ return response;
+ });
+ }
}
//# sourceMappingURL=embeddings.mjs.map

View File

@ -216,7 +216,7 @@
"motion": "^12.10.5",
"notion-helper": "^1.3.22",
"npx-scope-finder": "^1.2.0",
"openai": "patch:openai@npm%3A5.1.0#~/.yarn/patches/openai-npm-5.1.0-0e7b3ccb07.patch",
"openai": "patch:openai@npm%3A5.12.0#~/.yarn/patches/openai-npm-5.12.0-a06a6369b2.patch",
"p-queue": "^8.1.0",
"pdf-lib": "^1.17.1",
"playwright": "^1.52.0",
@ -274,10 +274,10 @@
"@langchain/openai@npm:^0.3.16": "patch:@langchain/openai@npm%3A0.3.16#~/.yarn/patches/@langchain-openai-npm-0.3.16-e525b59526.patch",
"@langchain/openai@npm:>=0.1.0 <0.4.0": "patch:@langchain/openai@npm%3A0.3.16#~/.yarn/patches/@langchain-openai-npm-0.3.16-e525b59526.patch",
"libsql@npm:^0.4.4": "patch:libsql@npm%3A0.4.7#~/.yarn/patches/libsql-npm-0.4.7-444e260fb1.patch",
"openai@npm:^4.77.0": "patch:openai@npm%3A5.1.0#~/.yarn/patches/openai-npm-5.1.0-0e7b3ccb07.patch",
"openai@npm:^4.77.0": "patch:openai@npm%3A5.12.0#~/.yarn/patches/openai-npm-5.12.0-a06a6369b2.patch",
"pkce-challenge@npm:^4.1.0": "patch:pkce-challenge@npm%3A4.1.0#~/.yarn/patches/pkce-challenge-npm-4.1.0-fbc51695a3.patch",
"app-builder-lib@npm:26.0.13": "patch:app-builder-lib@npm%3A26.0.13#~/.yarn/patches/app-builder-lib-npm-26.0.13-a064c9e1d0.patch",
"openai@npm:^4.87.3": "patch:openai@npm%3A5.1.0#~/.yarn/patches/openai-npm-5.1.0-0e7b3ccb07.patch",
"openai@npm:^4.87.3": "patch:openai@npm%3A5.12.0#~/.yarn/patches/openai-npm-5.12.0-a06a6369b2.patch",
"app-builder-lib@npm:26.0.15": "patch:app-builder-lib@npm%3A26.0.15#~/.yarn/patches/app-builder-lib-npm-26.0.15-360e5b0476.patch",
"@langchain/core@npm:^0.3.26": "patch:@langchain/core@npm%3A0.3.44#~/.yarn/patches/@langchain-core-npm-0.3.44-41d5c3cb0a.patch",
"node-abi": "4.12.0",

View File

@ -3,25 +3,28 @@ import {
isFunctionCallingModel,
isNotSupportTemperatureAndTopP,
isOpenAIModel,
isSupportedFlexServiceTier
isSupportFlexServiceTierModel
} from '@renderer/config/models'
import { REFERENCE_PROMPT } from '@renderer/config/prompts'
import { isSupportServiceTierProviders } from '@renderer/config/providers'
import { getLMStudioKeepAliveTime } from '@renderer/hooks/useLMStudio'
import { getStoreSetting } from '@renderer/hooks/useSettings'
import { getAssistantSettings } from '@renderer/services/AssistantService'
import { SettingsState } from '@renderer/store/settings'
import {
Assistant,
FileTypes,
GenerateImageParams,
GroqServiceTiers,
isGroqServiceTier,
isOpenAIServiceTier,
KnowledgeReference,
MCPCallToolResponse,
MCPTool,
MCPToolResponse,
MemoryItem,
Model,
OpenAIServiceTier,
OpenAIServiceTiers,
Provider,
SystemProviderIds,
ToolCallResponse,
WebSearchProviderResponse,
WebSearchResponse
@ -201,29 +204,37 @@ export abstract class BaseApiClient<
return assistantSettings?.enableTopP ? assistantSettings?.topP : undefined
}
// NOTE: 这个也许可以迁移到OpenAIBaseClient
protected getServiceTier(model: Model) {
if (!isOpenAIModel(model) || model.provider === 'github' || model.provider === 'copilot') {
const serviceTierSetting = this.provider.serviceTier
if (!isSupportServiceTierProviders(this.provider) || !isOpenAIModel(model) || !serviceTierSetting) {
return undefined
}
const openAI = getStoreSetting('openAI') as SettingsState['openAI']
let serviceTier = 'auto' as OpenAIServiceTier
if (openAI && openAI?.serviceTier === 'flex') {
if (isSupportedFlexServiceTier(model)) {
serviceTier = 'flex'
} else {
serviceTier = 'auto'
// 处理不同供应商需要 fallback 到默认值的情况
if (this.provider.id === SystemProviderIds.groq) {
if (
!isGroqServiceTier(serviceTierSetting) ||
(serviceTierSetting === GroqServiceTiers.flex && !isSupportFlexServiceTierModel(model))
) {
return undefined
}
} else {
serviceTier = openAI.serviceTier
// 其他 OpenAI 供应商,假设他们的服务层级设置和 OpenAI 完全相同
if (
!isOpenAIServiceTier(serviceTierSetting) ||
(serviceTierSetting === OpenAIServiceTiers.flex && !isSupportFlexServiceTierModel(model))
) {
return undefined
}
}
return serviceTier
return serviceTierSetting
}
protected getTimeout(model: Model) {
if (isSupportedFlexServiceTier(model)) {
if (isSupportFlexServiceTierModel(model)) {
return 15 * 1000 * 60
}
return defaultTimeout

View File

@ -39,6 +39,7 @@ import {
MCPTool,
MCPToolResponse,
Model,
OpenAIServiceTier,
Provider,
ToolCallResponse,
TranslateAssistant,
@ -551,7 +552,7 @@ export class OpenAIAPIClient extends OpenAIBaseClient<
reqMessages = processReqMessages(model, reqMessages)
// 5. 创建通用参数
const commonParams = {
const commonParams: OpenAISdkParams = {
model: model.id,
messages:
isRecursiveCall && recursiveSdkMessages && recursiveSdkMessages.length > 0
@ -561,7 +562,8 @@ export class OpenAIAPIClient extends OpenAIBaseClient<
top_p: this.getTopP(assistant, model),
max_tokens: maxTokens,
tools: tools.length > 0 ? tools : undefined,
service_tier: this.getServiceTier(model),
// groq 有不同的 service tier 配置,不符合 openai 接口类型
service_tier: this.getServiceTier(model) as OpenAIServiceTier,
...this.getProviderSpecificParameters(assistant, model),
...this.getReasoningEffort(assistant, model),
...getOpenAIWebSearchParams(model, enableWebSearch),

View File

@ -16,6 +16,7 @@ import {
MCPTool,
MCPToolResponse,
Model,
OpenAIServiceTier,
Provider,
ToolCallResponse,
WebSearchSource
@ -341,8 +342,8 @@ export class OpenAIResponseAPIClient extends OpenAIBaseClient<
}
public extractMessagesFromSdkPayload(sdkPayload: OpenAIResponseSdkParams): OpenAIResponseSdkMessageParam[] {
if (typeof sdkPayload.input === 'string') {
return [{ role: 'user', content: sdkPayload.input }]
if (!sdkPayload.input || typeof sdkPayload.input === 'string') {
return [{ role: 'user', content: sdkPayload.input ?? '' }]
}
return sdkPayload.input
}
@ -440,7 +441,7 @@ export class OpenAIResponseAPIClient extends OpenAIBaseClient<
}
tools = tools.concat(extraTools)
const commonParams = {
const commonParams: OpenAIResponseSdkParams = {
model: model.id,
input:
isRecursiveCall && recursiveSdkMessages && recursiveSdkMessages.length > 0
@ -451,7 +452,8 @@ export class OpenAIResponseAPIClient extends OpenAIBaseClient<
max_output_tokens: maxTokens,
stream: streamOutput,
tools: !isEmpty(tools) ? tools : undefined,
service_tier: this.getServiceTier(model),
// groq 有不同的 service tier 配置,不符合 openai 接口类型
service_tier: this.getServiceTier(model) as OpenAIServiceTier,
...(this.getReasoningEffort(assistant, model) as OpenAI.Reasoning),
// 只在对话场景下应用自定义参数,避免影响翻译、总结等其他业务逻辑
...(coreRequest.callType === 'chat' ? this.getCustomParameters(assistant) : {})

View File

@ -145,7 +145,7 @@ import YiModelLogoDark from '@renderer/assets/images/models/yi_dark.png'
import YoudaoLogo from '@renderer/assets/images/providers/netease-youdao.svg'
import NomicLogo from '@renderer/assets/images/providers/nomic.png'
import { getProviderByModel } from '@renderer/services/AssistantService'
import { Model } from '@renderer/types'
import { Model, SystemProviderId } from '@renderer/types'
import { getLowerBaseModelName, isUserSelectedModelType } from '@renderer/utils'
import OpenAI from 'openai'
@ -433,7 +433,7 @@ export function getModelLogo(modelId: string) {
return undefined
}
export const SYSTEM_MODELS: Record<string, Model[]> = {
export const SYSTEM_MODELS: Record<SystemProviderId | 'defaultModel', Model[]> = {
defaultModel: [
{
// 默认助手模型
@ -464,6 +464,7 @@ export const SYSTEM_MODELS: Record<string, Model[]> = {
group: 'deepseek-ai'
}
],
vertexai: [],
'302ai': [
{
id: 'deepseek-chat',
@ -643,129 +644,6 @@ export const SYSTEM_MODELS: Record<string, Model[]> = {
{ id: 'deepseek-r1', name: 'DeepSeek-R1', provider: 'burncloud', group: 'deepseek-ai' },
{ id: 'deepseek-v3', name: 'DeepSeek-V3', provider: 'burncloud', group: 'deepseek-ai' }
],
o3: [
{
id: 'gpt-4o',
provider: 'o3',
name: 'GPT-4o',
group: 'OpenAI'
},
{
id: 'o1-mini',
provider: 'o3',
name: 'o1-mini',
group: 'OpenAI'
},
{
id: 'o1-preview',
provider: 'o3',
name: 'o1-preview',
group: 'OpenAI'
},
{
id: 'o3-mini',
provider: 'o3',
name: 'o3-mini',
group: 'OpenAI'
},
{
id: 'o3-mini-high',
provider: 'o3',
name: 'o3-mini-high',
group: 'OpenAI'
},
{
id: 'claude-3-7-sonnet-20250219',
provider: 'o3',
name: 'claude-3-7-sonnet-20250219',
group: 'Anthropic'
},
{
id: 'claude-3-5-sonnet-20241022',
provider: 'o3',
name: 'claude-3-5-sonnet-20241022',
group: 'Anthropic'
},
{
id: 'claude-3-5-haiku-20241022',
provider: 'o3',
name: 'claude-3-5-haiku-20241022',
group: 'Anthropic'
},
{
id: 'claude-3-opus-20240229',
provider: 'o3',
name: 'claude-3-opus-20240229',
group: 'Anthropic'
},
{
id: 'claude-3-haiku-20240307',
provider: 'o3',
name: 'claude-3-haiku-20240307',
group: 'Anthropic'
},
{
id: 'claude-3-5-sonnet-20240620',
provider: 'o3',
name: 'claude-3-5-sonnet-20240620',
group: 'Anthropic'
},
{
id: 'deepseek-ai/Deepseek-R1',
provider: 'o3',
name: 'DeepSeek R1',
group: 'DeepSeek'
},
{
id: 'deepseek-reasoner',
provider: 'o3',
name: 'deepseek-reasoner',
group: 'DeepSeek'
},
{
id: 'deepseek-chat',
provider: 'o3',
name: 'deepseek-chat',
group: 'DeepSeek'
},
{
id: 'deepseek-ai/DeepSeek-V3',
provider: 'o3',
name: 'DeepSeek V3',
group: 'DeepSeek'
},
{
id: 'text-embedding-3-small',
provider: 'o3',
name: 'text-embedding-3-small',
group: '嵌入模型'
},
{
id: 'text-embedding-ada-002',
provider: 'o3',
name: 'text-embedding-ada-002',
group: '嵌入模型'
},
{
id: 'text-embedding-v2',
provider: 'o3',
name: 'text-embedding-v2',
group: '嵌入模型'
},
{
id: 'Doubao-embedding',
provider: 'o3',
name: 'Doubao-embedding',
group: '嵌入模型'
},
{
id: 'Doubao-embedding-large',
provider: 'o3',
name: 'Doubao-embedding-large',
group: '嵌入模型'
}
],
ollama: [],
lmstudio: [],
silicon: [
@ -978,7 +856,6 @@ export const SYSTEM_MODELS: Record<string, Model[]> = {
group: 'Claude 3'
}
],
'gitee-ai': [],
deepseek: [
{
id: 'deepseek-chat',
@ -1382,7 +1259,7 @@ export const SYSTEM_MODELS: Record<string, Model[]> = {
group: 'deepseek-ai'
}
],
bailian: [
dashscope: [
{ id: 'qwen-vl-plus', name: 'qwen-vl-plus', provider: 'dashscope', group: 'qwen-vl', owned_by: 'system' },
{ id: 'qwen-coder-plus', name: 'qwen-coder-plus', provider: 'dashscope', group: 'qwen-coder', owned_by: 'system' },
{ id: 'qwen-turbo', name: 'qwen-turbo', provider: 'dashscope', group: 'qwen-turbo', owned_by: 'system' },
@ -1753,20 +1630,6 @@ export const SYSTEM_MODELS: Record<string, Model[]> = {
group: 'Llama3'
}
],
zhinao: [
{
id: '360gpt-pro',
provider: 'zhinao',
name: '360gpt-pro',
group: '360Gpt'
},
{
id: '360gpt-turbo',
provider: 'zhinao',
name: '360gpt-turbo',
group: '360Gpt'
}
],
hunyuan: [
{
id: 'hunyuan-pro',
@ -2551,7 +2414,7 @@ export function isOpenAIModel(model: Model): boolean {
return model.id.includes('gpt') || isOpenAIReasoningModel(model)
}
export function isSupportedFlexServiceTier(model: Model): boolean {
export function isSupportFlexServiceTierModel(model: Model): boolean {
if (!model) {
return false
}

View File

@ -52,10 +52,546 @@ import VoyageAIProviderLogo from '@renderer/assets/images/providers/voyageai.png
import XirangProviderLogo from '@renderer/assets/images/providers/xirang.png'
import ZeroOneProviderLogo from '@renderer/assets/images/providers/zero-one.png'
import ZhipuProviderLogo from '@renderer/assets/images/providers/zhipu.png'
import { SYSTEM_PROVIDERS } from '@renderer/store/llm'
import { Provider, SystemProvider } from '@renderer/types'
import { OpenAIServiceTiers, Provider, SystemProvider, SystemProviderId } from '@renderer/types'
import { TOKENFLUX_HOST } from './constant'
import { SYSTEM_MODELS } from './models'
export const SYSTEM_PROVIDERS_CONFIG: Record<SystemProviderId, SystemProvider> = {
silicon: {
id: 'silicon',
name: 'Silicon',
type: 'openai',
apiKey: '',
apiHost: 'https://api.siliconflow.cn',
models: SYSTEM_MODELS.silicon,
isSystem: true,
enabled: true
},
aihubmix: {
id: 'aihubmix',
name: 'AiHubMix',
type: 'openai',
apiKey: '',
apiHost: 'https://aihubmix.com',
models: SYSTEM_MODELS.aihubmix,
isSystem: true,
enabled: false
},
ocoolai: {
id: 'ocoolai',
name: 'ocoolAI',
type: 'openai',
apiKey: '',
apiHost: 'https://api.ocoolai.com',
models: SYSTEM_MODELS.ocoolai,
isSystem: true,
enabled: false
},
deepseek: {
id: 'deepseek',
name: 'deepseek',
type: 'openai',
apiKey: '',
apiHost: 'https://api.deepseek.com',
models: SYSTEM_MODELS.deepseek,
isSystem: true,
enabled: false,
isNotSupportArrayContent: true
},
ppio: {
id: 'ppio',
name: 'PPIO',
type: 'openai',
apiKey: '',
apiHost: 'https://api.ppinfra.com/v3/openai/',
models: SYSTEM_MODELS.ppio,
isSystem: true,
enabled: false
},
alayanew: {
id: 'alayanew',
name: 'AlayaNew',
type: 'openai',
apiKey: '',
apiHost: 'https://deepseek.alayanew.com',
models: SYSTEM_MODELS.alayanew,
isSystem: true,
enabled: false
},
qiniu: {
id: 'qiniu',
name: 'Qiniu',
type: 'openai',
apiKey: '',
apiHost: 'https://api.qnaigc.com',
models: SYSTEM_MODELS.qiniu,
isSystem: true,
enabled: false
},
dmxapi: {
id: 'dmxapi',
name: 'DMXAPI',
type: 'openai',
apiKey: '',
apiHost: 'https://www.dmxapi.cn',
models: SYSTEM_MODELS.dmxapi,
isSystem: true,
enabled: false
},
burncloud: {
id: 'burncloud',
name: 'BurnCloud',
type: 'openai',
apiKey: '',
apiHost: 'https://ai.burncloud.com',
models: SYSTEM_MODELS.burncloud,
isSystem: true,
enabled: false
},
tokenflux: {
id: 'tokenflux',
name: 'TokenFlux',
type: 'openai',
apiKey: '',
apiHost: 'https://tokenflux.ai',
models: SYSTEM_MODELS.tokenflux,
isSystem: true,
enabled: false
},
'302ai': {
id: '302ai',
name: '302.AI',
type: 'openai',
apiKey: '',
apiHost: 'https://api.302.ai',
models: SYSTEM_MODELS['302ai'],
isSystem: true,
enabled: false
},
cephalon: {
id: 'cephalon',
name: 'Cephalon',
type: 'openai',
apiKey: '',
apiHost: 'https://cephalon.cloud/user-center/v1/model',
models: SYSTEM_MODELS.cephalon,
isSystem: true,
enabled: false
},
lanyun: {
id: 'lanyun',
name: 'LANYUN',
type: 'openai',
apiKey: '',
apiHost: 'https://maas-api.lanyun.net',
models: SYSTEM_MODELS.lanyun,
isSystem: true,
enabled: false
},
ph8: {
id: 'ph8',
name: 'PH8',
type: 'openai',
apiKey: '',
apiHost: 'https://ph8.co',
models: SYSTEM_MODELS.ph8,
isSystem: true,
enabled: false
},
openrouter: {
id: 'openrouter',
name: 'OpenRouter',
type: 'openai',
apiKey: '',
apiHost: 'https://openrouter.ai/api/v1/',
models: SYSTEM_MODELS.openrouter,
isSystem: true,
enabled: false
},
ollama: {
id: 'ollama',
name: 'Ollama',
type: 'openai',
apiKey: '',
apiHost: 'http://localhost:11434',
models: SYSTEM_MODELS.ollama,
isSystem: true,
enabled: false
},
'new-api': {
id: 'new-api',
name: 'New API',
type: 'openai',
apiKey: '',
apiHost: 'http://localhost:3000',
models: SYSTEM_MODELS['new-api'],
isSystem: true,
enabled: false
},
lmstudio: {
id: 'lmstudio',
name: 'LM Studio',
type: 'openai',
apiKey: '',
apiHost: 'http://localhost:1234',
models: SYSTEM_MODELS.lmstudio,
isSystem: true,
enabled: false
},
anthropic: {
id: 'anthropic',
name: 'Anthropic',
type: 'anthropic',
apiKey: '',
apiHost: 'https://api.anthropic.com/',
models: SYSTEM_MODELS.anthropic,
isSystem: true,
enabled: false
},
openai: {
id: 'openai',
name: 'OpenAI',
type: 'openai-response',
apiKey: '',
apiHost: 'https://api.openai.com',
models: SYSTEM_MODELS.openai,
isSystem: true,
enabled: false,
serviceTier: OpenAIServiceTiers.auto
},
'azure-openai': {
id: 'azure-openai',
name: 'Azure OpenAI',
type: 'azure-openai',
apiKey: '',
apiHost: '',
apiVersion: '',
models: SYSTEM_MODELS['azure-openai'],
isSystem: true,
enabled: false
},
gemini: {
id: 'gemini',
name: 'Gemini',
type: 'gemini',
apiKey: '',
apiHost: 'https://generativelanguage.googleapis.com',
models: SYSTEM_MODELS.gemini,
isSystem: true,
enabled: false,
isVertex: false
},
vertexai: {
id: 'vertexai',
name: 'VertexAI',
type: 'vertexai',
apiKey: '',
apiHost: 'https://aiplatform.googleapis.com',
models: SYSTEM_MODELS.vertexai,
isSystem: true,
enabled: false,
isVertex: true
},
github: {
id: 'github',
name: 'Github Models',
type: 'openai',
apiKey: '',
apiHost: 'https://models.inference.ai.azure.com/',
models: SYSTEM_MODELS.github,
isSystem: true,
enabled: false
},
copilot: {
id: 'copilot',
name: 'Github Copilot',
type: 'openai',
apiKey: '',
apiHost: 'https://api.githubcopilot.com/',
models: SYSTEM_MODELS.copilot,
isSystem: true,
enabled: false,
isAuthed: false
},
zhipu: {
id: 'zhipu',
name: 'ZhiPu',
type: 'openai',
apiKey: '',
apiHost: 'https://open.bigmodel.cn/api/paas/v4/',
models: SYSTEM_MODELS.zhipu,
isSystem: true,
enabled: false
},
yi: {
id: 'yi',
name: 'Yi',
type: 'openai',
apiKey: '',
apiHost: 'https://api.lingyiwanwu.com',
models: SYSTEM_MODELS.yi,
isSystem: true,
enabled: false
},
moonshot: {
id: 'moonshot',
name: 'Moonshot AI',
type: 'openai',
apiKey: '',
apiHost: 'https://api.moonshot.cn',
models: SYSTEM_MODELS.moonshot,
isSystem: true,
enabled: false
},
baichuan: {
id: 'baichuan',
name: 'BAICHUAN AI',
type: 'openai',
apiKey: '',
apiHost: 'https://api.baichuan-ai.com',
models: SYSTEM_MODELS.baichuan,
isSystem: true,
enabled: false,
isNotSupportArrayContent: true
},
dashscope: {
id: 'dashscope',
name: 'Bailian',
type: 'openai',
apiKey: '',
apiHost: 'https://dashscope.aliyuncs.com/compatible-mode/v1/',
models: SYSTEM_MODELS.dashscope,
isSystem: true,
enabled: false
},
stepfun: {
id: 'stepfun',
name: 'StepFun',
type: 'openai',
apiKey: '',
apiHost: 'https://api.stepfun.com',
models: SYSTEM_MODELS.stepfun,
isSystem: true,
enabled: false
},
doubao: {
id: 'doubao',
name: 'doubao',
type: 'openai',
apiKey: '',
apiHost: 'https://ark.cn-beijing.volces.com/api/v3/',
models: SYSTEM_MODELS.doubao,
isSystem: true,
enabled: false
},
infini: {
id: 'infini',
name: 'Infini',
type: 'openai',
apiKey: '',
apiHost: 'https://cloud.infini-ai.com/maas',
models: SYSTEM_MODELS.infini,
isSystem: true,
enabled: false
},
minimax: {
id: 'minimax',
name: 'MiniMax',
type: 'openai',
apiKey: '',
apiHost: 'https://api.minimax.chat/v1/',
models: SYSTEM_MODELS.minimax,
isSystem: true,
enabled: false,
isNotSupportArrayContent: true
},
groq: {
id: 'groq',
name: 'Groq',
type: 'openai',
apiKey: '',
apiHost: 'https://api.groq.com/openai',
models: SYSTEM_MODELS.groq,
isSystem: true,
enabled: false
},
together: {
id: 'together',
name: 'Together',
type: 'openai',
apiKey: '',
apiHost: 'https://api.together.xyz',
models: SYSTEM_MODELS.together,
isSystem: true,
enabled: false
},
fireworks: {
id: 'fireworks',
name: 'Fireworks',
type: 'openai',
apiKey: '',
apiHost: 'https://api.fireworks.ai/inference',
models: SYSTEM_MODELS.fireworks,
isSystem: true,
enabled: false
},
nvidia: {
id: 'nvidia',
name: 'nvidia',
type: 'openai',
apiKey: '',
apiHost: 'https://integrate.api.nvidia.com',
models: SYSTEM_MODELS.nvidia,
isSystem: true,
enabled: false
},
grok: {
id: 'grok',
name: 'Grok',
type: 'openai',
apiKey: '',
apiHost: 'https://api.x.ai',
models: SYSTEM_MODELS.grok,
isSystem: true,
enabled: false
},
hyperbolic: {
id: 'hyperbolic',
name: 'Hyperbolic',
type: 'openai',
apiKey: '',
apiHost: 'https://api.hyperbolic.xyz',
models: SYSTEM_MODELS.hyperbolic,
isSystem: true,
enabled: false
},
mistral: {
id: 'mistral',
name: 'Mistral',
type: 'openai',
apiKey: '',
apiHost: 'https://api.mistral.ai',
models: SYSTEM_MODELS.mistral,
isSystem: true,
enabled: false,
isNotSupportStreamOptions: true
},
jina: {
id: 'jina',
name: 'Jina',
type: 'openai',
apiKey: '',
apiHost: 'https://api.jina.ai',
models: SYSTEM_MODELS.jina,
isSystem: true,
enabled: false
},
perplexity: {
id: 'perplexity',
name: 'Perplexity',
type: 'openai',
apiKey: '',
apiHost: 'https://api.perplexity.ai/',
models: SYSTEM_MODELS.perplexity,
isSystem: true,
enabled: false
},
modelscope: {
id: 'modelscope',
name: 'ModelScope',
type: 'openai',
apiKey: '',
apiHost: 'https://api-inference.modelscope.cn/v1/',
models: SYSTEM_MODELS.modelscope,
isSystem: true,
enabled: false
},
xirang: {
id: 'xirang',
name: 'Xirang',
type: 'openai',
apiKey: '',
apiHost: 'https://wishub-x1.ctyun.cn',
models: SYSTEM_MODELS.xirang,
isSystem: true,
enabled: false,
isNotSupportArrayContent: true
},
hunyuan: {
id: 'hunyuan',
name: 'hunyuan',
type: 'openai',
apiKey: '',
apiHost: 'https://api.hunyuan.cloud.tencent.com',
models: SYSTEM_MODELS.hunyuan,
isSystem: true,
enabled: false
},
'tencent-cloud-ti': {
id: 'tencent-cloud-ti',
name: 'Tencent Cloud TI',
type: 'openai',
apiKey: '',
apiHost: 'https://api.lkeap.cloud.tencent.com',
models: SYSTEM_MODELS['tencent-cloud-ti'],
isSystem: true,
enabled: false
},
'baidu-cloud': {
id: 'baidu-cloud',
name: 'Baidu Cloud',
type: 'openai',
apiKey: '',
apiHost: 'https://qianfan.baidubce.com/v2/',
models: SYSTEM_MODELS['baidu-cloud'],
isSystem: true,
enabled: false
},
gpustack: {
id: 'gpustack',
name: 'GPUStack',
type: 'openai',
apiKey: '',
apiHost: '',
models: SYSTEM_MODELS.gpustack,
isSystem: true,
enabled: false
},
voyageai: {
id: 'voyageai',
name: 'VoyageAI',
type: 'openai',
apiKey: '',
apiHost: 'https://api.voyageai.com',
models: SYSTEM_MODELS.voyageai,
isSystem: true,
enabled: false
},
'aws-bedrock': {
id: 'aws-bedrock',
name: 'AWS Bedrock',
type: 'aws-bedrock',
apiKey: '',
apiHost: '',
models: SYSTEM_MODELS['aws-bedrock'],
isSystem: true,
enabled: false
},
poe: {
id: 'poe',
name: 'Poe',
type: 'openai',
apiKey: '',
apiHost: 'https://api.poe.com/v1/',
models: SYSTEM_MODELS['poe'],
isSystem: true,
enabled: false,
isNotSupportDeveloperRole: true
}
} as const
export const SYSTEM_PROVIDERS: SystemProvider[] = Object.values(SYSTEM_PROVIDERS_CONFIG)
const PROVIDER_LOGO_MAP = {
ph8: Ph8ProviderLogo,
@ -123,7 +659,19 @@ export function getProviderLogo(providerId: string) {
export const NOT_SUPPORTED_REANK_PROVIDERS = ['ollama']
export const ONLY_SUPPORTED_DIMENSION_PROVIDERS = ['ollama', 'infini']
export const PROVIDER_CONFIG = {
type ProviderUrls = {
api: {
url: string
}
websites?: {
official: string
apiKey?: string
docs: string
models?: string
}
}
export const PROVIDER_URLS: Record<SystemProviderId, ProviderUrls> = {
ph8: {
api: {
url: 'https://ph8.co'
@ -157,17 +705,6 @@ export const PROVIDER_CONFIG = {
models: 'https://platform.openai.com/docs/models'
}
},
o3: {
api: {
url: 'https://api.o3.fan'
},
websites: {
official: 'https://o3.fan',
apiKey: 'https://o3.fan/token',
docs: '',
models: 'https://o3.fan/info/models/'
}
},
burncloud: {
api: {
url: 'https://ai.burncloud.com'
@ -213,17 +750,6 @@ export const PROVIDER_CONFIG = {
models: 'https://cloud.siliconflow.cn/models'
}
},
'gitee-ai': {
api: {
url: 'https://ai.gitee.com'
},
websites: {
official: 'https://ai.gitee.com/',
apiKey: 'https://ai.gitee.com/dashboard/settings/tokens',
docs: 'https://ai.gitee.com/docs/openapi/v1#tag/%E6%96%87%E6%9C%AC%E7%94%9F%E6%88%90/POST/chat/completions',
models: 'https://ai.gitee.com/serverless-api'
}
},
deepseek: {
api: {
url: 'https://api.deepseek.com'
@ -545,17 +1071,6 @@ export const PROVIDER_CONFIG = {
models: 'https://fireworks.ai/dashboard/models'
}
},
zhinao: {
api: {
url: 'https://api.360.cn'
},
websites: {
official: 'https://ai.360.com/',
apiKey: 'https://ai.360.com/platform/keys',
docs: 'https://ai.360.com/platform/docs/overview',
models: 'https://ai.360.com/platform/limit'
}
},
hunyuan: {
api: {
url: 'https://api.hunyuan.cloud.tencent.com'
@ -720,47 +1235,63 @@ export const PROVIDER_CONFIG = {
}
}
const NOT_SUPPORT_ARRAY_CONTENT_PROVIDERS = ['deepseek', 'baichuan', 'minimax', 'xirang']
const NOT_SUPPORT_ARRAY_CONTENT_PROVIDERS = [
'deepseek',
'baichuan',
'minimax',
'xirang'
] as const satisfies SystemProviderId[]
/**
* message content Only for OpenAI Chat Completions API.
*/
export const isSupportArrayContentProvider = (provider: Provider) => {
return provider.isNotSupportArrayContent !== true && !NOT_SUPPORT_ARRAY_CONTENT_PROVIDERS.includes(provider.id)
return (
provider.isNotSupportArrayContent !== true &&
!NOT_SUPPORT_ARRAY_CONTENT_PROVIDERS.some((pid) => pid === provider.id)
)
}
const NOT_SUPPORT_DEVELOPER_ROLE_PROVIDERS = ['poe']
const NOT_SUPPORT_DEVELOPER_ROLE_PROVIDERS = ['poe'] as const satisfies SystemProviderId[]
/**
* developer message role Only for OpenAI API.
*/
export const isSupportDeveloperRoleProvider = (provider: Provider) => {
return provider.isNotSupportDeveloperRole !== true && !NOT_SUPPORT_DEVELOPER_ROLE_PROVIDERS.includes(provider.id)
return (
provider.isNotSupportDeveloperRole !== true &&
!NOT_SUPPORT_DEVELOPER_ROLE_PROVIDERS.some((pid) => pid === provider.id)
)
}
const NOT_SUPPORT_STREAM_OPTIONS_PROVIDERS = ['mistral']
const NOT_SUPPORT_STREAM_OPTIONS_PROVIDERS = ['mistral'] as const satisfies SystemProviderId[]
/**
* stream_options Only for OpenAI API.
*/
export const isSupportStreamOptionsProvider = (provider: Provider) => {
return provider.isNotSupportStreamOptions !== true && !NOT_SUPPORT_STREAM_OPTIONS_PROVIDERS.includes(provider.id)
return (
provider.isNotSupportStreamOptions !== true &&
!NOT_SUPPORT_STREAM_OPTIONS_PROVIDERS.some((pid) => pid === provider.id)
)
}
const SUPPORT_QWEN3_ENABLE_THINKING_PROVIDER = ['dashscope', 'modelscope']
const SUPPORT_QWEN3_ENABLE_THINKING_PROVIDER = ['dashscope', 'modelscope'] as const satisfies SystemProviderId[]
/**
* 使用enable_thinking参数来控制Qwen3系列模型的思考 Only for OpenAI Chat Completions API.
*/
export const isSupportQwen3EnableThinkingProvider = (provider: Provider) => {
return SUPPORT_QWEN3_ENABLE_THINKING_PROVIDER.includes(provider.id)
return SUPPORT_QWEN3_ENABLE_THINKING_PROVIDER.some((pid) => pid === provider.id)
}
const NOT_SUPPORT_SERVICE_TIER_PROVIDERS = ['github', 'copilot'] as const satisfies SystemProviderId[]
/**
* 使`provider.isSystem`
* @param provider - Provider对象
* @returns
* service_tier Only for OpenAI API.
*/
export const isSystemProvider = (provider: Provider): provider is SystemProvider => {
return SYSTEM_PROVIDERS.some((p) => p.id === provider.id)
export const isSupportServiceTierProviders = (provider: Provider) => {
return (
provider.isNotSupportServiceTier !== true || !NOT_SUPPORT_SERVICE_TIER_PROVIDERS.some((pid) => pid === provider.id)
)
}

View File

@ -1,5 +1,4 @@
import { createSelector } from '@reduxjs/toolkit'
import { isSystemProvider } from '@renderer/config/providers'
import { getDefaultProvider } from '@renderer/services/AssistantService'
import { useAppDispatch, useAppSelector } from '@renderer/store'
import {
@ -11,7 +10,7 @@ import {
updateProvider,
updateProviders
} from '@renderer/store/llm'
import { Assistant, Model, Provider } from '@renderer/types'
import { Assistant, isSystemProvider, Model, Provider } from '@renderer/types'
import { useDefaultModel } from './useAssistant'

View File

@ -3069,6 +3069,9 @@
"auto": "auto",
"default": "default",
"flex": "flex",
"on_demand": "on demand",
"performance": "performance",
"priority": "priority",
"tip": "Specifies the latency tier to use for processing the request",
"title": "Service Tier"
},
@ -3122,6 +3125,10 @@
"label": "Support Developer Message"
},
"label": "API Settings",
"service_tier": {
"help": "Whether the provider supports configuring the service_tier parameter. When enabled, this parameter can be adjusted in the service tier settings on the chat page. (OpenAI models only)",
"label": "Supports service_tier"
},
"stream_options": {
"help": "Does the provider support the stream_options parameter?",
"label": "Support stream_options"

View File

@ -3069,6 +3069,9 @@
"auto": "自動",
"default": "デフォルト",
"flex": "フレックス",
"on_demand": "オンデマンド",
"performance": "性能",
"priority": "優先",
"tip": "リクエスト処理に使用するレイテンシティアを指定します",
"title": "サービスティア"
},
@ -3122,6 +3125,10 @@
"label": "Developer Message をサポート"
},
"label": "API設定",
"service_tier": {
"help": "このプロバイダーがservice_tierパラメータの設定をサポートしているかどうか。有効にすると、チャットページのサービスレベル設定でこのパラメータを調整できます。OpenAIモデルのみ対象",
"label": "service_tier をサポート"
},
"stream_options": {
"help": "このプロバイダーは stream_options パラメータをサポートしていますか",
"label": "stream_options をサポート"

View File

@ -3069,6 +3069,9 @@
"auto": "Авто",
"default": "По умолчанию",
"flex": "Гибкий",
"on_demand": "по требованию",
"performance": "производительность",
"priority": "приоритет",
"tip": "Указывает уровень задержки, который следует использовать для обработки запроса",
"title": "Уровень сервиса"
},
@ -3122,6 +3125,10 @@
"label": "Поддержка сообщения разработчика"
},
"label": "API настройки",
"service_tier": {
"help": "Поддерживает ли этот провайдер настройку параметра service_tier? После включения параметр можно настроить в настройках уровня обслуживания на странице диалога. (Только для моделей OpenAI)",
"label": "Поддержка service_tier"
},
"stream_options": {
"help": "Поддерживает ли этот провайдер параметр stream_options",
"label": "Поддержка stream_options"

View File

@ -3069,6 +3069,9 @@
"auto": "自动",
"default": "默认",
"flex": "灵活",
"on_demand": "按需",
"performance": "性能",
"priority": "优先",
"tip": "指定用于处理请求的延迟层级",
"title": "服务层级"
},
@ -3122,6 +3125,10 @@
"label": "支持 Developer Message"
},
"label": "API 设置",
"service_tier": {
"help": "该提供商是否支持配置 service_tier 参数。开启后可在对话页面的服务层级设置中调整该参数。仅限OpenAI模型",
"label": "支持 service_tier"
},
"stream_options": {
"help": "该提供商是否支持 stream_options 参数",
"label": "支持 stream_options"

View File

@ -3069,6 +3069,9 @@
"auto": "自動",
"default": "預設",
"flex": "彈性",
"on_demand": "按需",
"performance": "效能",
"priority": "優先",
"tip": "指定用於處理請求的延遲層級",
"title": "服務層級"
},
@ -3122,6 +3125,10 @@
"label": "支援開發人員訊息"
},
"label": "API 設定",
"service_tier": {
"help": "該提供商是否支援設定 service_tier 參數。啟用後,可在對話頁面的服務層級設定中調整此參數。(僅限 OpenAI 模型)",
"label": "支援 service_tier"
},
"stream_options": {
"help": "該提供商是否支援 stream_options 參數",
"label": "支援 stream_options"

View File

@ -2733,7 +2733,7 @@
"logoUrl": "URL Λογότυπου",
"longRunning": "Μακροχρόνια λειτουργία",
"longRunningTooltip": "Όταν ενεργοποιηθεί, ο διακομιστής υποστηρίζει μακροχρόνιες εργασίες, επαναφέρει το χρονικό όριο μετά από λήψη ειδοποίησης προόδου και επεκτείνει το μέγιστο χρονικό όριο σε 10 λεπτά.",
"missingDependencies": "Απο缺失, παρακαλώ εγκαταστήστε το για να συνεχίσετε",
"missingDependencies": "Λείπει, παρακαλώ εγκαταστήστε το για να συνεχίσετε",
"more": {
"awesome": "Επιλεγμένος κατάλογος διακομιστών MCP",
"composio": "Εργαλείο ανάπτυξης Composio MCP",
@ -3069,6 +3069,9 @@
"auto": "Αυτόματο",
"default": "Προεπιλογή",
"flex": "Εύκαμπτο",
"on_demand": "κατά παραγγελία",
"performance": "Απόδοση",
"priority": "προτεραιότητα",
"tip": "Καθορίστε το επίπεδο καθυστέρησης που χρησιμοποιείται για την επεξεργασία των αιτημάτων",
"title": "Επίπεδο υπηρεσίας"
},
@ -3122,6 +3125,10 @@
"label": "Υποστήριξη μηνύματος προγραμματιστή"
},
"label": "Ρυθμίσεις API",
"service_tier": {
"help": "Εάν ο πάροχος υποστηρίζει τη δυνατότητα διαμόρφωσης της παραμέτρου service_tier. Αν είναι ενεργοποιημένη, αυτή η παράμετρος μπορεί να ρυθμιστεί μέσω της ρύθμισης επιπέδου υπηρεσίας στη σελίδα διαλόγου. (Μόνο για μοντέλα OpenAI)",
"label": "Υποστήριξη service_tier"
},
"stream_options": {
"help": "Υποστηρίζει ο πάροχος την παράμετρο stream_options;",
"label": "Υποστήριξη stream_options"

View File

@ -3069,6 +3069,9 @@
"auto": "Automático",
"default": "Predeterminado",
"flex": "Flexible",
"on_demand": "según demanda",
"performance": "rendimiento",
"priority": "prioridad",
"tip": "Especifica el nivel de latencia utilizado para procesar la solicitud",
"title": "Nivel de servicio"
},
@ -3122,6 +3125,10 @@
"label": "Mensajes para desarrolladores compatibles"
},
"label": "Configuración de la API",
"service_tier": {
"help": "Si el proveedor admite la configuración del parámetro service_tier. Al activarlo, se podrá ajustar este parámetro en la configuración del nivel de servicio en la página de conversación. (Solo para modelos OpenAI)",
"label": "Compatible con service_tier"
},
"stream_options": {
"help": "¿Admite el proveedor el parámetro stream_options?",
"label": "Admite stream_options"

View File

@ -3069,6 +3069,9 @@
"auto": "Automatique",
"default": "Par défaut",
"flex": "Flexible",
"on_demand": "à la demande",
"performance": "performance",
"priority": "priorité",
"tip": "Spécifie le niveau de latence utilisé pour traiter la demande",
"title": "Niveau de service"
},
@ -3122,6 +3125,10 @@
"label": "Prise en charge du message développeur"
},
"label": "Paramètres de l'API",
"service_tier": {
"help": "Le fournisseur prend-il en charge la configuration du paramètre service_tier ? Lorsqu'il est activé, ce paramètre peut être ajusté dans les paramètres de niveau de service sur la page de conversation. (Modèles OpenAI uniquement)",
"label": "Prend en charge service_tier"
},
"stream_options": {
"help": "Le fournisseur prend-il en charge le paramètre stream_options ?",
"label": "Prise en charge des options de flux"

View File

@ -3069,6 +3069,9 @@
"auto": "Automático",
"default": "Padrão",
"flex": "Flexível",
"on_demand": "sob demanda",
"performance": "desempenho",
"priority": "prioridade",
"tip": "Especifique o nível de latência usado para processar a solicitação",
"title": "Nível de Serviço"
},
@ -3122,6 +3125,10 @@
"label": "Mensagem de suporte ao programador"
},
"label": "Definições da API",
"service_tier": {
"help": "Se o fornecedor suporta a configuração do parâmetro service_tier. Quando ativado, este parâmetro pode ser ajustado nas definições do nível de serviço na página de conversa. (Apenas para modelos OpenAI)",
"label": "Suporta service_tier"
},
"stream_options": {
"help": "O fornecedor suporta o parâmetro stream_options?",
"label": "suporta stream_options"

View File

@ -3,11 +3,7 @@ import { HStack } from '@renderer/components/Layout'
import Scrollbar from '@renderer/components/Scrollbar'
import Selector from '@renderer/components/Selector'
import { DEFAULT_CONTEXTCOUNT, DEFAULT_MAX_TOKENS, DEFAULT_TEMPERATURE } from '@renderer/config/constant'
import {
isOpenAIModel,
isSupportedFlexServiceTier,
isSupportedReasoningEffortOpenAIModel
} from '@renderer/config/models'
import { isOpenAIModel } from '@renderer/config/models'
import { translateLanguageOptions } from '@renderer/config/translate'
import { useCodeStyle } from '@renderer/context/CodeStyleProvider'
import { useTheme } from '@renderer/context/ThemeProvider'
@ -170,11 +166,6 @@ const SettingsTab: FC<Props> = (props) => {
const model = assistant.model || getDefaultModel()
const isOpenAI = isOpenAIModel(model)
const isOpenAIReasoning =
isSupportedReasoningEffortOpenAIModel(model) &&
!model.id.includes('o1-pro') &&
(provider.type === 'openai-response' || provider.id === 'aihubmix')
const isOpenAIFlexServiceTier = isSupportedFlexServiceTier(model)
return (
<Container className="settings-tab">
@ -302,8 +293,8 @@ const SettingsTab: FC<Props> = (props) => {
</CollapsibleSettingGroup>
{isOpenAI && (
<OpenAISettingsGroup
isOpenAIReasoning={isOpenAIReasoning}
isSupportedFlexServiceTier={isOpenAIFlexServiceTier}
model={model}
providerId={provider.id}
SettingGroup={SettingGroup}
SettingRowTitleSmall={SettingRowTitleSmall}
/>

View File

@ -1,9 +1,19 @@
import Selector from '@renderer/components/Selector'
import { isSupportedReasoningEffortOpenAIModel, isSupportFlexServiceTierModel } from '@renderer/config/models'
import { useProvider } from '@renderer/hooks/useProvider'
import { SettingDivider, SettingRow } from '@renderer/pages/settings'
import { CollapsibleSettingGroup } from '@renderer/pages/settings/SettingGroup'
import { RootState, useAppDispatch } from '@renderer/store'
import { setOpenAIServiceTier, setOpenAISummaryText } from '@renderer/store/settings'
import { OpenAIServiceTier, OpenAISummaryText } from '@renderer/types'
import { setOpenAISummaryText } from '@renderer/store/settings'
import {
GroqServiceTiers,
Model,
OpenAIServiceTier,
OpenAIServiceTiers,
OpenAISummaryText,
ServiceTier,
SystemProviderIds
} from '@renderer/types'
import { Tooltip } from 'antd'
import { CircleHelp } from 'lucide-react'
import { FC, useCallback, useEffect, useMemo } from 'react'
@ -11,29 +21,26 @@ import { useTranslation } from 'react-i18next'
import { useSelector } from 'react-redux'
interface Props {
isOpenAIReasoning: boolean
isSupportedFlexServiceTier: boolean
model: Model
providerId: string
SettingGroup: FC<{ children: React.ReactNode }>
SettingRowTitleSmall: FC<{ children: React.ReactNode }>
}
const FALL_BACK_SERVICE_TIER: Record<OpenAIServiceTier, OpenAIServiceTier> = {
auto: 'auto',
default: 'default',
flex: 'default'
}
const OpenAISettingsGroup: FC<Props> = ({
isOpenAIReasoning,
isSupportedFlexServiceTier,
SettingGroup,
SettingRowTitleSmall
}) => {
const OpenAISettingsGroup: FC<Props> = ({ model, providerId, SettingGroup, SettingRowTitleSmall }) => {
const { t } = useTranslation()
const { provider, updateProvider } = useProvider(providerId)
const summaryText = useSelector((state: RootState) => state.settings.openAI.summaryText)
const serviceTierMode = useSelector((state: RootState) => state.settings.openAI.serviceTier)
const serviceTierMode = provider.serviceTier
const dispatch = useAppDispatch()
const isOpenAIReasoning =
isSupportedReasoningEffortOpenAIModel(model) &&
!model.id.includes('o1-pro') &&
(provider.type === 'openai-response' || provider.id === 'aihubmix')
const isSupportServiceTier = !provider.isNotSupportServiceTier
const isSupportedFlexServiceTier = isSupportFlexServiceTierModel(model)
const setSummaryText = useCallback(
(value: OpenAISummaryText) => {
dispatch(setOpenAISummaryText(value))
@ -42,10 +49,10 @@ const OpenAISettingsGroup: FC<Props> = ({
)
const setServiceTierMode = useCallback(
(value: OpenAIServiceTier) => {
dispatch(setOpenAIServiceTier(value))
(value: ServiceTier) => {
updateProvider({ serviceTier: value })
},
[dispatch]
[updateProvider]
)
const summaryTextOptions = [
@ -64,52 +71,90 @@ const OpenAISettingsGroup: FC<Props> = ({
]
const serviceTierOptions = useMemo(() => {
const baseOptions = [
{
value: 'auto',
label: t('settings.openai.service_tier.auto')
},
{
value: 'default',
label: t('settings.openai.service_tier.default')
},
{
value: 'flex',
label: t('settings.openai.service_tier.flex')
}
]
let baseOptions: { value: ServiceTier; label: string }[]
if (provider.id === SystemProviderIds.groq) {
baseOptions = [
{
value: 'auto',
label: t('settings.openai.service_tier.auto')
},
{
value: 'on_demand',
label: t('settings.openai.service_tier.on_demand')
},
{
value: 'flex',
label: t('settings.openai.service_tier.flex')
},
{
value: 'performance',
label: t('settings.openai.service_tier.performance')
}
]
} else {
// All other cases default to the same options as OpenAI
baseOptions = [
{
value: 'auto',
label: t('settings.openai.service_tier.auto')
},
{
value: 'default',
label: t('settings.openai.service_tier.default')
},
{
value: 'flex',
label: t('settings.openai.service_tier.flex')
},
{
value: 'priority',
label: t('settings.openai.service_tier.priority')
}
]
}
return baseOptions.filter((option) => {
if (option.value === 'flex') {
return isSupportedFlexServiceTier
}
return true
})
}, [isSupportedFlexServiceTier, t])
}, [isSupportedFlexServiceTier, provider.id, t])
useEffect(() => {
if (serviceTierMode && !serviceTierOptions.some((option) => option.value === serviceTierMode)) {
setServiceTierMode(FALL_BACK_SERVICE_TIER[serviceTierMode])
if (provider.id === SystemProviderIds.groq) {
setServiceTierMode(GroqServiceTiers.on_demand)
} else {
setServiceTierMode(OpenAIServiceTiers.auto)
}
}
}, [serviceTierMode, serviceTierOptions, setServiceTierMode])
}, [provider.id, serviceTierMode, serviceTierOptions, setServiceTierMode])
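// If the persisted tier is no longer among the available options (e.g. after
// switching providers), the effect above resets it: 'on_demand' for Groq, 'auto' otherwise.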
if (!isOpenAIReasoning && !isSupportServiceTier) {
return null
}
return (
<CollapsibleSettingGroup title={t('settings.openai.title')} defaultExpanded={true}>
<SettingGroup>
<SettingRow>
<SettingRowTitleSmall>
{t('settings.openai.service_tier.title')}{' '}
<Tooltip title={t('settings.openai.service_tier.tip')}>
<CircleHelp size={14} style={{ marginLeft: 4 }} color="var(--color-text-2)" />
</Tooltip>
</SettingRowTitleSmall>
<Selector
value={serviceTierMode}
onChange={(value) => {
setServiceTierMode(value as OpenAIServiceTier)
}}
options={serviceTierOptions}
/>
</SettingRow>
{isSupportServiceTier && (
<SettingRow>
<SettingRowTitleSmall>
{t('settings.openai.service_tier.title')}{' '}
<Tooltip title={t('settings.openai.service_tier.tip')}>
<CircleHelp size={14} style={{ marginLeft: 4 }} color="var(--color-text-2)" />
</Tooltip>
</SettingRowTitleSmall>
<Selector
value={serviceTierMode}
onChange={(value) => {
setServiceTierMode(value as ServiceTier)
}}
options={serviceTierOptions}
placeholder={t('settings.openai.service_tier.auto')}
/>
</SettingRow>
)}
{isOpenAIReasoning && (
<>
<SettingDivider />

View File

@ -1,8 +1,7 @@
import InfoTooltip from '@renderer/components/InfoTooltip'
import { HStack } from '@renderer/components/Layout'
import { isSystemProvider } from '@renderer/config/providers'
import { useProvider } from '@renderer/hooks/useProvider'
import { Provider } from '@renderer/types'
import { isSystemProvider, Provider } from '@renderer/types'
import { Collapse, Flex, Switch } from 'antd'
import { startTransition, useCallback, useMemo } from 'react'
import { useTranslation } from 'react-i18next'
@ -60,6 +59,15 @@ const ApiOptionsSettings = ({ providerId }: Props) => {
updateProviderTransition({ ...provider, isNotSupportArrayContent: !checked })
},
checked: !provider.isNotSupportArrayContent
},
{
key: 'openai_service_tier',
label: t('settings.provider.api.options.service_tier.label'),
tip: t('settings.provider.api.options.service_tier.help'),
onChange: (checked: boolean) => {
updateProviderTransition({ ...provider, isNotSupportServiceTier: !checked })
},
checked: !provider.isNotSupportServiceTier
}
],
[t, provider, updateProviderTransition]

View File

@ -1,5 +1,5 @@
import { HStack } from '@renderer/components/Layout'
import { PROVIDER_CONFIG } from '@renderer/config/providers'
import { PROVIDER_URLS } from '@renderer/config/providers'
import { useAwsBedrockSettings } from '@renderer/hooks/useAwsBedrock'
import { Alert, Input } from 'antd'
import { FC, useState } from 'react'
@ -12,7 +12,7 @@ const AwsBedrockSettings: FC = () => {
const { accessKeyId, secretAccessKey, region, setAccessKeyId, setSecretAccessKey, setRegion } =
useAwsBedrockSettings()
const providerConfig = PROVIDER_CONFIG['aws-bedrock']
const providerConfig = PROVIDER_URLS['aws-bedrock']
const apiKeyWebsite = providerConfig?.websites?.apiKey
const [localAccessKeyId, setLocalAccessKeyId] = useState(accessKeyId)

View File

@ -2,7 +2,7 @@ import CollapsibleSearchBar from '@renderer/components/CollapsibleSearchBar'
import CustomTag from '@renderer/components/CustomTag'
import { LoadingIcon, StreamlineGoodHealthAndWellBeing } from '@renderer/components/Icons'
import { HStack } from '@renderer/components/Layout'
import { PROVIDER_CONFIG } from '@renderer/config/providers'
import { PROVIDER_URLS } from '@renderer/config/providers'
import { useProvider } from '@renderer/hooks/useProvider'
import { getProviderLabel } from '@renderer/i18n/label'
import { SettingHelpLink, SettingHelpText, SettingHelpTextRow, SettingSubtitle } from '@renderer/pages/settings'
@ -47,7 +47,7 @@ const ModelList: React.FC<ModelListProps> = ({ providerId }) => {
const { t } = useTranslation()
const { provider, models, removeModel } = useProvider(providerId)
const providerConfig = PROVIDER_CONFIG[provider.id]
const providerConfig = PROVIDER_URLS[provider.id]
const docsWebsite = providerConfig?.websites?.docs
const modelsWebsite = providerConfig?.websites?.models

View File

@ -5,7 +5,7 @@ import SiliconFlowProviderLogo from '@renderer/assets/images/providers/silicon.p
import TokenFluxProviderLogo from '@renderer/assets/images/providers/tokenflux.png'
import { HStack } from '@renderer/components/Layout'
import OAuthButton from '@renderer/components/OAuth/OAuthButton'
import { PROVIDER_CONFIG } from '@renderer/config/providers'
import { PROVIDER_URLS } from '@renderer/config/providers'
import { useProvider } from '@renderer/hooks/useProvider'
import { getProviderLabel } from '@renderer/i18n/label'
import { providerBills, providerCharge } from '@renderer/utils/oauth'
@ -37,7 +37,7 @@ const ProviderOAuth: FC<Props> = ({ providerId }) => {
}
let providerWebsite =
PROVIDER_CONFIG[provider.id]?.api?.url.replace('https://', '').replace('api.', '') || provider.name
PROVIDER_URLS[provider.id]?.api?.url.replace('https://', '').replace('api.', '') || provider.name
if (provider.id === 'ppio') {
providerWebsite = 'ppio.com'
}
@ -64,7 +64,7 @@ const ProviderOAuth: FC<Props> = ({ providerId }) => {
i18nKey="settings.provider.oauth.description"
components={{
website: (
<OfficialWebsite href={PROVIDER_CONFIG[provider.id].websites.official} target="_blank" rel="noreferrer" />
<OfficialWebsite href={PROVIDER_URLS[provider.id].websites.official} target="_blank" rel="noreferrer" />
)
}}
values={{ provider: providerWebsite }}

View File

@ -3,7 +3,7 @@ import { LoadingIcon } from '@renderer/components/Icons'
import { HStack } from '@renderer/components/Layout'
import { ApiKeyListPopup } from '@renderer/components/Popups/ApiKeyListPopup'
import { isEmbeddingModel, isRerankModel } from '@renderer/config/models'
import { PROVIDER_CONFIG } from '@renderer/config/providers'
import { PROVIDER_URLS } from '@renderer/config/providers'
import { useTheme } from '@renderer/context/ThemeProvider'
import { useAllProviders, useProvider, useProviders } from '@renderer/hooks/useProvider'
import i18n from '@renderer/i18n'
@ -57,7 +57,7 @@ const ProviderSetting: FC<Props> = ({ providerId }) => {
const isDmxapi = provider.id === 'dmxapi'
const providerConfig = PROVIDER_CONFIG[provider.id]
const providerConfig = PROVIDER_URLS[provider.id]
const officialWebsite = providerConfig?.websites?.official
const apiKeyWebsite = providerConfig?.websites?.apiKey
const configedApiHost = providerConfig?.api?.url

View File

@ -1,5 +1,5 @@
import { HStack } from '@renderer/components/Layout'
import { PROVIDER_CONFIG } from '@renderer/config/providers'
import { PROVIDER_URLS } from '@renderer/config/providers'
import { useProvider } from '@renderer/hooks/useProvider'
import { useVertexAISettings } from '@renderer/hooks/useVertexAI'
import { Alert, Input, Space } from 'antd'
@ -30,7 +30,7 @@ const VertexAISettings: FC<Props> = ({ providerId }) => {
const { provider, updateProvider } = useProvider(providerId)
const [apiHost, setApiHost] = useState(provider.apiHost)
const providerConfig = PROVIDER_CONFIG['vertexai']
const providerConfig = PROVIDER_URLS['vertexai']
const apiKeyWebsite = providerConfig?.websites?.apiKey
const onUpdateApiHost = () => {

View File

@ -1,11 +1,11 @@
import { loggerService } from '@logger'
import { DraggableVirtualList } from '@renderer/components/DraggableList'
import { DeleteIcon, EditIcon } from '@renderer/components/Icons'
import { getProviderLogo, isSystemProvider } from '@renderer/config/providers'
import { getProviderLogo } from '@renderer/config/providers'
import { useAllProviders, useProviders } from '@renderer/hooks/useProvider'
import { getProviderLabel } from '@renderer/i18n/label'
import ImageStorage from '@renderer/services/ImageStorage'
import { Provider, ProviderType } from '@renderer/types'
import { isSystemProvider, Provider, ProviderType } from '@renderer/types'
import {
generateColorFromChar,
getFancyProviderName,

View File

@ -48,24 +48,29 @@ vi.mock('@renderer/aiCore/clients/ApiClientFactory', () => ({
}))
// Mock the models config
vi.mock('@renderer/config/models', () => ({
isDedicatedImageGenerationModel: vi.fn(() => false),
isTextToImageModel: vi.fn(() => false),
isEmbeddingModel: vi.fn(() => false),
isRerankModel: vi.fn(() => false),
isVisionModel: vi.fn(() => false),
isReasoningModel: vi.fn(() => false),
isWebSearchModel: vi.fn(() => false),
isOpenAIModel: vi.fn(() => false),
isFunctionCallingModel: vi.fn(() => true),
models: {
gemini: {
id: 'gemini-2.5-pro',
name: 'Gemini 2.5 Pro'
}
},
isAnthropicModel: vi.fn(() => false)
}))
vi.mock('@renderer/config/models', async () => {
const origin = await vi.importActual('@renderer/config/models')
return {
...origin,
isDedicatedImageGenerationModel: vi.fn(() => false),
isTextToImageModel: vi.fn(() => false),
isEmbeddingModel: vi.fn(() => false),
isRerankModel: vi.fn(() => false),
isVisionModel: vi.fn(() => false),
isReasoningModel: vi.fn(() => false),
isWebSearchModel: vi.fn(() => false),
isOpenAIModel: vi.fn(() => false),
isFunctionCallingModel: vi.fn(() => true),
models: {
gemini: {
id: 'gemini-2.5-pro',
name: 'Gemini 2.5 Pro'
}
},
isAnthropicModel: vi.fn(() => false)
}
})
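// vi.importActual keeps the module's real exports (notably SYSTEM_MODELS) while
// the spread above overrides only the listed predicates with stubs.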
// Mock uuid
vi.mock('uuid', () => ({

View File

@ -1,7 +1,8 @@
import { createSlice, PayloadAction } from '@reduxjs/toolkit'
import { isLocalAi } from '@renderer/config/env'
import { SYSTEM_MODELS } from '@renderer/config/models'
import { Model, Provider, SystemProvider } from '@renderer/types'
import { SYSTEM_PROVIDERS } from '@renderer/config/providers'
import { Model, Provider } from '@renderer/types'
import { uniqBy } from 'lodash'
type LlmSettings = {
@ -38,533 +39,6 @@ export interface LlmState {
settings: LlmSettings
}
export const SYSTEM_PROVIDERS: SystemProvider[] = [
{
id: 'silicon',
name: 'Silicon',
type: 'openai',
apiKey: '',
apiHost: 'https://api.siliconflow.cn',
models: SYSTEM_MODELS.silicon,
isSystem: true,
enabled: true
},
{
id: 'aihubmix',
name: 'AiHubMix',
type: 'openai',
apiKey: '',
apiHost: 'https://aihubmix.com',
models: SYSTEM_MODELS.aihubmix,
isSystem: true,
enabled: false
},
{
id: 'ocoolai',
name: 'ocoolAI',
type: 'openai',
apiKey: '',
apiHost: 'https://api.ocoolai.com',
models: SYSTEM_MODELS.ocoolai,
isSystem: true,
enabled: false
},
{
id: 'deepseek',
name: 'deepseek',
type: 'openai',
apiKey: '',
apiHost: 'https://api.deepseek.com',
models: SYSTEM_MODELS.deepseek,
isSystem: true,
enabled: false
},
{
id: 'ppio',
name: 'PPIO',
type: 'openai',
apiKey: '',
apiHost: 'https://api.ppinfra.com/v3/openai/',
models: SYSTEM_MODELS.ppio,
isSystem: true,
enabled: false
},
{
id: 'alayanew',
name: 'AlayaNew',
type: 'openai',
apiKey: '',
apiHost: 'https://deepseek.alayanew.com',
models: SYSTEM_MODELS.alayanew,
isSystem: true,
enabled: false
},
{
id: 'qiniu',
name: 'Qiniu',
type: 'openai',
apiKey: '',
apiHost: 'https://api.qnaigc.com',
models: SYSTEM_MODELS.qiniu,
isSystem: true,
enabled: false
},
{
id: 'dmxapi',
name: 'DMXAPI',
type: 'openai',
apiKey: '',
apiHost: 'https://www.dmxapi.cn',
models: SYSTEM_MODELS.dmxapi,
isSystem: true,
enabled: false
},
{
id: 'burncloud',
name: 'BurnCloud',
type: 'openai',
apiKey: '',
apiHost: 'https://ai.burncloud.com',
models: SYSTEM_MODELS.burncloud,
isSystem: true,
enabled: false
},
{
id: 'tokenflux',
name: 'TokenFlux',
type: 'openai',
apiKey: '',
apiHost: 'https://tokenflux.ai',
models: SYSTEM_MODELS.tokenflux,
isSystem: true,
enabled: false
},
{
id: '302ai',
name: '302.AI',
type: 'openai',
apiKey: '',
apiHost: 'https://api.302.ai',
models: SYSTEM_MODELS['302ai'],
isSystem: true,
enabled: false
},
{
id: 'cephalon',
name: 'Cephalon',
type: 'openai',
apiKey: '',
apiHost: 'https://cephalon.cloud/user-center/v1/model',
models: SYSTEM_MODELS.cephalon,
isSystem: true,
enabled: false
},
{
id: 'lanyun',
name: 'LANYUN',
type: 'openai',
apiKey: '',
apiHost: 'https://maas-api.lanyun.net',
models: SYSTEM_MODELS.lanyun,
isSystem: true,
enabled: false
},
{
id: 'ph8',
name: 'PH8',
type: 'openai',
apiKey: '',
apiHost: 'https://ph8.co',
models: SYSTEM_MODELS.ph8,
isSystem: true,
enabled: false
},
{
id: 'openrouter',
name: 'OpenRouter',
type: 'openai',
apiKey: '',
apiHost: 'https://openrouter.ai/api/v1/',
models: SYSTEM_MODELS.openrouter,
isSystem: true,
enabled: false
},
{
id: 'ollama',
name: 'Ollama',
type: 'openai',
apiKey: '',
apiHost: 'http://localhost:11434',
models: SYSTEM_MODELS.ollama,
isSystem: true,
enabled: false
},
{
id: 'new-api',
name: 'New API',
type: 'openai',
apiKey: '',
apiHost: 'http://localhost:3000',
models: SYSTEM_MODELS['new-api'],
isSystem: true,
enabled: false
},
{
id: 'lmstudio',
name: 'LM Studio',
type: 'openai',
apiKey: '',
apiHost: 'http://localhost:1234',
models: SYSTEM_MODELS.lmstudio,
isSystem: true,
enabled: false
},
{
id: 'anthropic',
name: 'Anthropic',
type: 'anthropic',
apiKey: '',
apiHost: 'https://api.anthropic.com/',
models: SYSTEM_MODELS.anthropic,
isSystem: true,
enabled: false
},
{
id: 'openai',
name: 'OpenAI',
type: 'openai-response',
apiKey: '',
apiHost: 'https://api.openai.com',
models: SYSTEM_MODELS.openai,
isSystem: true,
enabled: false
},
{
id: 'azure-openai',
name: 'Azure OpenAI',
type: 'azure-openai',
apiKey: '',
apiHost: '',
apiVersion: '',
models: SYSTEM_MODELS['azure-openai'],
isSystem: true,
enabled: false
},
{
id: 'gemini',
name: 'Gemini',
type: 'gemini',
apiKey: '',
apiHost: 'https://generativelanguage.googleapis.com',
models: SYSTEM_MODELS.gemini,
isSystem: true,
enabled: false,
isVertex: false
},
{
id: 'vertexai',
name: 'VertexAI',
type: 'vertexai',
apiKey: '',
apiHost: 'https://aiplatform.googleapis.com',
models: [],
isSystem: true,
enabled: false,
isVertex: true
},
{
id: 'github',
name: 'Github Models',
type: 'openai',
apiKey: '',
apiHost: 'https://models.inference.ai.azure.com/',
models: SYSTEM_MODELS.github,
isSystem: true,
enabled: false
},
{
id: 'copilot',
name: 'Github Copilot',
type: 'openai',
apiKey: '',
apiHost: 'https://api.githubcopilot.com/',
models: SYSTEM_MODELS.copilot,
isSystem: true,
enabled: false,
isAuthed: false
},
{
id: 'zhipu',
name: 'ZhiPu',
type: 'openai',
apiKey: '',
apiHost: 'https://open.bigmodel.cn/api/paas/v4/',
models: SYSTEM_MODELS.zhipu,
isSystem: true,
enabled: false
},
{
id: 'yi',
name: 'Yi',
type: 'openai',
apiKey: '',
apiHost: 'https://api.lingyiwanwu.com',
models: SYSTEM_MODELS.yi,
isSystem: true,
enabled: false
},
{
id: 'moonshot',
name: 'Moonshot AI',
type: 'openai',
apiKey: '',
apiHost: 'https://api.moonshot.cn',
models: SYSTEM_MODELS.moonshot,
isSystem: true,
enabled: false
},
{
id: 'baichuan',
name: 'BAICHUAN AI',
type: 'openai',
apiKey: '',
apiHost: 'https://api.baichuan-ai.com',
models: SYSTEM_MODELS.baichuan,
isSystem: true,
enabled: false
},
{
id: 'dashscope',
name: 'Bailian',
type: 'openai',
apiKey: '',
apiHost: 'https://dashscope.aliyuncs.com/compatible-mode/v1/',
models: SYSTEM_MODELS.bailian,
isSystem: true,
enabled: false
},
{
id: 'stepfun',
name: 'StepFun',
type: 'openai',
apiKey: '',
apiHost: 'https://api.stepfun.com',
models: SYSTEM_MODELS.stepfun,
isSystem: true,
enabled: false
},
{
id: 'doubao',
name: 'doubao',
type: 'openai',
apiKey: '',
apiHost: 'https://ark.cn-beijing.volces.com/api/v3/',
models: SYSTEM_MODELS.doubao,
isSystem: true,
enabled: false
},
{
id: 'infini',
name: 'Infini',
type: 'openai',
apiKey: '',
apiHost: 'https://cloud.infini-ai.com/maas',
models: SYSTEM_MODELS.infini,
isSystem: true,
enabled: false
},
{
id: 'minimax',
name: 'MiniMax',
type: 'openai',
apiKey: '',
apiHost: 'https://api.minimax.chat/v1/',
models: SYSTEM_MODELS.minimax,
isSystem: true,
enabled: false
},
{
id: 'groq',
name: 'Groq',
type: 'openai',
apiKey: '',
apiHost: 'https://api.groq.com/openai',
models: SYSTEM_MODELS.groq,
isSystem: true,
enabled: false
},
{
id: 'together',
name: 'Together',
type: 'openai',
apiKey: '',
apiHost: 'https://api.together.xyz',
models: SYSTEM_MODELS.together,
isSystem: true,
enabled: false
},
{
id: 'fireworks',
name: 'Fireworks',
type: 'openai',
apiKey: '',
apiHost: 'https://api.fireworks.ai/inference',
models: SYSTEM_MODELS.fireworks,
isSystem: true,
enabled: false
},
{
id: 'nvidia',
name: 'nvidia',
type: 'openai',
apiKey: '',
apiHost: 'https://integrate.api.nvidia.com',
models: SYSTEM_MODELS.nvidia,
isSystem: true,
enabled: false
},
{
id: 'grok',
name: 'Grok',
type: 'openai',
apiKey: '',
apiHost: 'https://api.x.ai',
models: SYSTEM_MODELS.grok,
isSystem: true,
enabled: false
},
{
id: 'hyperbolic',
name: 'Hyperbolic',
type: 'openai',
apiKey: '',
apiHost: 'https://api.hyperbolic.xyz',
models: SYSTEM_MODELS.hyperbolic,
isSystem: true,
enabled: false
},
{
id: 'mistral',
name: 'Mistral',
type: 'openai',
apiKey: '',
apiHost: 'https://api.mistral.ai',
models: SYSTEM_MODELS.mistral,
isSystem: true,
enabled: false
},
{
id: 'jina',
name: 'Jina',
type: 'openai',
apiKey: '',
apiHost: 'https://api.jina.ai',
models: SYSTEM_MODELS.jina,
isSystem: true,
enabled: false
},
{
id: 'perplexity',
name: 'Perplexity',
type: 'openai',
apiKey: '',
apiHost: 'https://api.perplexity.ai/',
models: SYSTEM_MODELS.perplexity,
isSystem: true,
enabled: false
},
{
id: 'modelscope',
name: 'ModelScope',
type: 'openai',
apiKey: '',
apiHost: 'https://api-inference.modelscope.cn/v1/',
models: SYSTEM_MODELS.modelscope,
isSystem: true,
enabled: false
},
{
id: 'xirang',
name: 'Xirang',
type: 'openai',
apiKey: '',
apiHost: 'https://wishub-x1.ctyun.cn',
models: SYSTEM_MODELS.xirang,
isSystem: true,
enabled: false
},
{
id: 'hunyuan',
name: 'hunyuan',
type: 'openai',
apiKey: '',
apiHost: 'https://api.hunyuan.cloud.tencent.com',
models: SYSTEM_MODELS.hunyuan,
isSystem: true,
enabled: false
},
{
id: 'tencent-cloud-ti',
name: 'Tencent Cloud TI',
type: 'openai',
apiKey: '',
apiHost: 'https://api.lkeap.cloud.tencent.com',
models: SYSTEM_MODELS['tencent-cloud-ti'],
isSystem: true,
enabled: false
},
{
id: 'baidu-cloud',
name: 'Baidu Cloud',
type: 'openai',
apiKey: '',
apiHost: 'https://qianfan.baidubce.com/v2/',
models: SYSTEM_MODELS['baidu-cloud'],
isSystem: true,
enabled: false
},
{
id: 'gpustack',
name: 'GPUStack',
type: 'openai',
apiKey: '',
apiHost: '',
models: SYSTEM_MODELS.gpustack,
isSystem: true,
enabled: false
},
{
id: 'voyageai',
name: 'VoyageAI',
type: 'openai',
apiKey: '',
apiHost: 'https://api.voyageai.com',
models: SYSTEM_MODELS.voyageai,
isSystem: true,
enabled: false
},
{
id: 'aws-bedrock',
name: 'AWS Bedrock',
type: 'aws-bedrock',
apiKey: '',
apiHost: '',
models: SYSTEM_MODELS['aws-bedrock'],
isSystem: true,
enabled: false
},
{
id: 'poe',
name: 'Poe',
type: 'openai',
apiKey: '',
apiHost: 'https://api.poe.com/v1/',
models: SYSTEM_MODELS['poe'],
isSystem: true,
enabled: false
}
]
export const initialState: LlmState = {
defaultModel: SYSTEM_MODELS.defaultModel[0],
topicNamingModel: SYSTEM_MODELS.defaultModel[1],

View File

@ -8,11 +8,19 @@ import {
isSupportArrayContentProvider,
isSupportDeveloperRoleProvider,
isSupportStreamOptionsProvider,
isSystemProvider
SYSTEM_PROVIDERS
} from '@renderer/config/providers'
import db from '@renderer/databases'
import i18n from '@renderer/i18n'
import { Assistant, LanguageCode, Model, Provider, WebSearchProvider } from '@renderer/types'
import {
Assistant,
isSystemProvider,
LanguageCode,
Model,
Provider,
SystemProviderIds,
WebSearchProvider
} from '@renderer/types'
import { getDefaultGroupName, getLeadingEmoji, runAsyncFunction, uuid } from '@renderer/utils'
import { defaultByPassRules, UpgradeChannel } from '@shared/config/constant'
import { isEmpty } from 'lodash'
@ -20,7 +28,7 @@ import { createMigrate } from 'redux-persist'
import { RootState } from '.'
import { DEFAULT_TOOL_ORDER } from './inputTools'
import { initialState as llmInitialState, moveProvider, SYSTEM_PROVIDERS } from './llm'
import { initialState as llmInitialState, moveProvider } from './llm'
import { mcpSlice } from './mcp'
import { defaultActionItems } from './selectionStore'
import { DEFAULT_SIDEBAR_ICONS, initialState as settingsInitialState } from './settings'
@ -2023,6 +2031,13 @@ const migrateConfig = {
},
'128': (state: RootState) => {
try {
// Migrate the service tier setting
const openai = state.llm.providers.find((provider) => provider.id === SystemProviderIds.openai)
const serviceTier = state.settings.openAI.serviceTier
if (openai) {
openai.serviceTier = serviceTier
}
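// Providers other than OpenAI keep serviceTier undefined here; the old
// settings.openAI.serviceTier field itself is left in place (marked @deprecated below).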
// @ts-ignore eslint-disable-next-line
if (state.settings.codePreview) {
// @ts-ignore eslint-disable-next-line
@ -2042,6 +2057,8 @@ const migrateConfig = {
}
}
// Note: when adding a new migration, remember to also update persistReducer
const migrate = createMigrate(migrateConfig as any)
export default migrate

View File

@ -192,6 +192,7 @@ export interface SettingsState {
// OpenAI
openAI: {
summaryText: OpenAISummaryText
/** @deprecated This setting has been migrated into the Provider object */
serviceTier: OpenAIServiceTier
}
// Notification
@ -774,9 +775,6 @@ const settingsSlice = createSlice({
setOpenAISummaryText: (state, action: PayloadAction<OpenAISummaryText>) => {
state.openAI.summaryText = action.payload
},
setOpenAIServiceTier: (state, action: PayloadAction<OpenAIServiceTier>) => {
state.openAI.serviceTier = action.payload
},
setNotificationSettings: (state, action: PayloadAction<SettingsState['notification']>) => {
state.notification = action.payload
},
@ -941,7 +939,6 @@ export const {
setEnableBackspaceDeleteModel,
setDisableHardwareAcceleration,
setOpenAISummaryText,
setOpenAIServiceTier,
setNotificationSettings,
// Local backup settings
setLocalBackupDir,

View File

@ -5,6 +5,7 @@ import type { CSSProperties } from 'react'
import * as z from 'zod/v4'
export * from './file'
import type { FileMetadata } from './file'
import type { Message } from './newMessage'
@ -173,21 +174,98 @@ export type Provider = {
isAuthed?: boolean
rateLimit?: number
// undefined is treated as supported
// API options
// undefined is treated as supported, i.e. supported by default
/** Whether array-typed message content is unsupported */
isNotSupportArrayContent?: boolean
/** Whether the stream_options parameter is unsupported */
isNotSupportStreamOptions?: boolean
/** Whether the `developer` message role is unsupported */
isNotSupportDeveloperRole?: boolean
/** Whether the service_tier parameter is unsupported. Only for OpenAI models. */
isNotSupportServiceTier?: boolean
serviceTier?: ServiceTier
isVertex?: boolean
notes?: string
extra_headers?: Record<string, string>
}
// Will be refactored into a stricter type later
export const SystemProviderIds = {
silicon: 'silicon',
aihubmix: 'aihubmix',
ocoolai: 'ocoolai',
deepseek: 'deepseek',
ppio: 'ppio',
alayanew: 'alayanew',
qiniu: 'qiniu',
dmxapi: 'dmxapi',
burncloud: 'burncloud',
tokenflux: 'tokenflux',
'302ai': '302ai',
cephalon: 'cephalon',
lanyun: 'lanyun',
ph8: 'ph8',
openrouter: 'openrouter',
ollama: 'ollama',
'new-api': 'new-api',
lmstudio: 'lmstudio',
anthropic: 'anthropic',
openai: 'openai',
'azure-openai': 'azure-openai',
gemini: 'gemini',
vertexai: 'vertexai',
github: 'github',
copilot: 'copilot',
zhipu: 'zhipu',
yi: 'yi',
moonshot: 'moonshot',
baichuan: 'baichuan',
dashscope: 'dashscope',
stepfun: 'stepfun',
doubao: 'doubao',
infini: 'infini',
minimax: 'minimax',
groq: 'groq',
together: 'together',
fireworks: 'fireworks',
nvidia: 'nvidia',
grok: 'grok',
hyperbolic: 'hyperbolic',
mistral: 'mistral',
jina: 'jina',
perplexity: 'perplexity',
modelscope: 'modelscope',
xirang: 'xirang',
hunyuan: 'hunyuan',
'tencent-cloud-ti': 'tencent-cloud-ti',
'baidu-cloud': 'baidu-cloud',
gpustack: 'gpustack',
voyageai: 'voyageai',
'aws-bedrock': 'aws-bedrock',
poe: 'poe'
} as const
export type SystemProviderId = keyof typeof SystemProviderIds
export const isSystemProviderId = (id: string): id is SystemProviderId => {
return Object.hasOwn(SystemProviderIds, id)
}
export type SystemProvider = Provider & {
id: SystemProviderId
isSystem: true
}
/**
 * Checks whether a provider is a built-in system provider, using its id together with `provider.isSystem`.
 * @param provider - the Provider object
 * @returns true if the provider is a system provider
 */
export const isSystemProvider = (provider: Provider): provider is SystemProvider => {
return isSystemProviderId(provider.id) && !!provider.isSystem
}
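// A minimal narrowing sketch (`p` is a hypothetical Provider value): after the
// guard, `p.id` is typed as SystemProviderId rather than string:
//   if (isSystemProvider(p)) {
//     getProviderLabel(p.id) // safe: p.id is a known system provider id
//   }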
export type ProviderType =
| 'openai'
| 'openai-response'
@ -822,7 +900,39 @@ export interface StoreSyncAction {
}
export type OpenAISummaryText = 'auto' | 'concise' | 'detailed' | 'off'
export type OpenAIServiceTier = 'auto' | 'default' | 'flex'
export const OpenAIServiceTiers = {
auto: 'auto',
default: 'default',
flex: 'flex',
priority: 'priority'
} as const
export type OpenAIServiceTier = keyof typeof OpenAIServiceTiers
export function isOpenAIServiceTier(tier: string): tier is OpenAIServiceTier {
return Object.hasOwn(OpenAIServiceTiers, tier)
}
export const GroqServiceTiers = {
auto: 'auto',
on_demand: 'on_demand',
flex: 'flex',
performance: 'performance'
} as const
// Extract the key type from the GroqServiceTiers object
export type GroqServiceTier = keyof typeof GroqServiceTiers
export function isGroqServiceTier(tier: string): tier is GroqServiceTier {
return Object.hasOwn(GroqServiceTiers, tier)
}
export type ServiceTier = OpenAIServiceTier | GroqServiceTier
export function isServiceTier(tier: string): tier is ServiceTier {
return isGroqServiceTier(tier) || isOpenAIServiceTier(tier)
}
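// Usage sketch: validate a persisted string before treating it as a tier.
// `raw` is a hypothetical value read from storage:
//   if (isServiceTier(raw)) {
//     provider.serviceTier = raw // narrowed to ServiceTier
//     isGroqServiceTier(raw) // further narrows to GroqServiceTier when true
//   }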
export type S3Config = {
endpoint: string

View File

@ -1,15 +1,8 @@
import type { Model, Provider } from '@renderer/types'
import { describe, expect, it, vi } from 'vitest'
import type { Model, Provider, SystemProvider } from '@renderer/types'
import { describe, expect, it } from 'vitest'
import { includeKeywords, matchKeywordsInModel, matchKeywordsInProvider, matchKeywordsInString } from '../match'
// mock i18n for getFancyProviderName
vi.mock('@renderer/i18n', () => ({
default: {
t: (key: string) => `i18n:${key}`
}
}))
describe('match', () => {
const provider: Provider = {
id: '12345',
@ -20,10 +13,10 @@ describe('match', () => {
models: [],
isSystem: false
}
const sysProvider: Provider = {
const sysProvider: SystemProvider = {
...provider,
id: 'sys',
name: 'SystemProvider',
id: 'dashscope',
name: 'doesnt matter',
isSystem: true
}
@ -83,8 +76,10 @@ describe('match', () => {
})
it('should match i18n name for system provider', () => {
expect(matchKeywordsInProvider('sys', sysProvider)).toBe(true)
expect(matchKeywordsInProvider('SystemProvider', sysProvider)).toBe(false)
// system providers should not match via the name field
expect(matchKeywordsInProvider('dashscope', sysProvider)).toBe(true)
expect(matchKeywordsInProvider('Alibaba', sysProvider)).toBe(true)
expect(matchKeywordsInProvider('doesnt matter', sysProvider)).toBe(false)
})
})
@ -108,9 +103,11 @@ describe('match', () => {
})
it('should match model name and i18n provider name for system provider', () => {
expect(matchKeywordsInModel('gpt-4.1 sys', model, sysProvider)).toBe(true)
expect(matchKeywordsInModel('sys', model, sysProvider)).toBe(true)
expect(matchKeywordsInModel('SystemProvider', model, sysProvider)).toBe(false)
expect(matchKeywordsInModel('gpt-4.1 dashscope', model, sysProvider)).toBe(true)
expect(matchKeywordsInModel('dashscope', model, sysProvider)).toBe(true)
// system providers are not looked up directly by name
expect(matchKeywordsInModel('doesnt matter', model, sysProvider)).toBe(false)
expect(matchKeywordsInModel('Alibaba', model, sysProvider)).toBe(true)
})
it('should match model by id when name is customized', () => {

View File

@ -1,3 +1,4 @@
import { Provider, SystemProvider } from '@renderer/types'
import { describe, expect, it } from 'vitest'
import {
@ -6,6 +7,7 @@ import {
getBaseModelName,
getBriefInfo,
getDefaultGroupName,
getFancyProviderName,
getFirstCharacter,
getLeadingEmoji,
getLowerBaseModelName,
@ -285,4 +287,32 @@ describe('naming', () => {
expect(getBriefInfo(text, 5)).toBe('This...')
})
})
describe('getFancyProviderName', () => {
it('should get i18n name for system provider', () => {
const mockSystemProvider: SystemProvider = {
id: 'dashscope',
type: 'openai',
name: 'whatever',
apiHost: 'whatever',
apiKey: 'whatever',
models: [],
isSystem: true
}
// the default i18n environment is en-us
expect(getFancyProviderName(mockSystemProvider)).toBe('Alibaba Cloud')
})
it('should get name for custom provider', () => {
const mockProvider: Provider = {
id: 'whatever',
type: 'openai',
name: '好名字',
apiHost: 'whatever',
apiKey: 'whatever',
models: []
}
expect(getFancyProviderName(mockProvider)).toBe('好名字')
})
})
})

View File

@ -1,5 +1,5 @@
import { getProviderLabel } from '@renderer/i18n/label'
import { Model, Provider } from '@renderer/types'
import { isSystemProvider, Model, Provider } from '@renderer/types'
/**
* keywords
@ -64,8 +64,7 @@ export function matchKeywordsInModel(keywords: string | string[], model: Model,
* @returns
*/
function getProviderSearchString(provider: Provider) {
// FIXME: isSystemProvider could not be used here, but I'm not sure why
return provider.isSystem ? `${getProviderLabel(provider.id)} ${provider.id}` : provider.name
return isSystemProvider(provider) ? `${getProviderLabel(provider.id)} ${provider.id}` : provider.name
}
/**

View File

@ -1,5 +1,5 @@
import { getProviderLabel } from '@renderer/i18n/label'
import { Provider } from '@renderer/types'
import { isSystemProvider, Provider } from '@renderer/types'
/**
* ID
@ -82,8 +82,7 @@ export const getLowerBaseModelName = (id: string, delimiter: string = '/'): stri
* @returns
*/
export const getFancyProviderName = (provider: Provider) => {
// FIXME: isSystemProvider could not be used here, but I'm not sure why
return provider.isSystem ? getProviderLabel(provider.id) : provider.name
return isSystemProvider(provider) ? getProviderLabel(provider.id) : provider.name
}
/**

View File

@ -7786,7 +7786,7 @@ __metadata:
notion-helper: "npm:^1.3.22"
npx-scope-finder: "npm:^1.2.0"
officeparser: "npm:^4.2.0"
openai: "patch:openai@npm%3A5.1.0#~/.yarn/patches/openai-npm-5.1.0-0e7b3ccb07.patch"
openai: "patch:openai@npm%3A5.12.0#~/.yarn/patches/openai-npm-5.12.0-a06a6369b2.patch"
os-proxy-config: "npm:^1.1.2"
p-queue: "npm:^8.1.0"
pdf-lib: "npm:^1.17.1"
@ -16688,9 +16688,9 @@ __metadata:
languageName: node
linkType: hard
"openai@npm:5.1.0":
version: 5.1.0
resolution: "openai@npm:5.1.0"
"openai@npm:5.12.0":
version: 5.12.0
resolution: "openai@npm:5.12.0"
peerDependencies:
ws: ^8.18.0
zod: ^3.23.8
@ -16701,13 +16701,13 @@ __metadata:
optional: true
bin:
openai: bin/cli
checksum: 10c0/d5882c95f95bfc4127ccbe494d298f43fe56cd3a9fd5711d1f02a040cfa6cdcc1e706ffe05f3d428421ec4caa526fe1b5ff50e1849dbfb3016d289853262ea3d
checksum: 10c0/adab04e90cae8f393f76c007f98c0636af97a280fb05766b0cee5ab202c802db01c113d0ce0dfea42e1a1fe3b08c9a3881b6eea9a0b0703375f487688aaca1fc
languageName: node
linkType: hard
"openai@patch:openai@npm%3A5.1.0#~/.yarn/patches/openai-npm-5.1.0-0e7b3ccb07.patch":
version: 5.1.0
resolution: "openai@patch:openai@npm%3A5.1.0#~/.yarn/patches/openai-npm-5.1.0-0e7b3ccb07.patch::version=5.1.0&hash=7d7491"
"openai@patch:openai@npm%3A5.12.0#~/.yarn/patches/openai-npm-5.12.0-a06a6369b2.patch":
version: 5.12.0
resolution: "openai@patch:openai@npm%3A5.12.0#~/.yarn/patches/openai-npm-5.12.0-a06a6369b2.patch::version=5.12.0&hash=d96796"
peerDependencies:
ws: ^8.18.0
zod: ^3.23.8
@ -16718,7 +16718,7 @@ __metadata:
optional: true
bin:
openai: bin/cli
checksum: 10c0/e7d2429887d0060cf9d8cd2c04640f759b55bffab696b3e926e510357af1b5f5b3bcf55d0e0dbe2282da8438a61fd75259847899db289d1e18ff0798b2450344
checksum: 10c0/207f70a43839d34f6ad3322a4bdf6d755ac923ca9c6b5fb49bd13263d816c5acb1a501228b9124b1f72eae2f7efffc8890e2d901907b3c8efc2fee3f8a273cec
languageName: node
linkType: hard