- Updated version to 1.7.0-rc.3 in package.json
- Added new features including support for Silicon provider and AIHubMix
- Consolidated bug fixes related to providers, models, UI, and settings
- Improved SDK integration with upgraded dependencies
* Initial plan
* feat: Add proper Poe API reasoning parameters support for GPT-5 and other models
Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>
* test: Add comprehensive tests for Poe API reasoning support
Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>
* fix: Add missing isGPT5SeriesModel import in reasoning.ts
Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>
* fix: Use correct extra_body format for Poe API reasoning parameters
Per Poe API documentation, custom bot parameters like reasoning_effort
and thinking_budget should be passed directly in extra_body, not as
nested structures.
Changed from:
- reasoning_effort: 'low' -> extra_body: { reasoning_effort: 'low' }
- thinking: { type: 'enabled', budget_tokens: X } -> extra_body: { thinking_budget: X }
- extra_body: { google: { thinking_config: {...} } } -> extra_body: { thinking_budget: X }
Updated tests to match the corrected implementation.
Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>
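For illustration, a minimal sketch of the corrected request shape (the `extra_body` keys come from the commit above; the helper and surrounding types are assumptions):

```ts
// Hypothetical helper showing how reasoning parameters are placed for Poe:
// reasoning_effort / thinking_budget sit directly inside extra_body,
// not nested under provider-specific objects.
type PoeReasoningParams = {
  reasoning_effort?: 'low' | 'medium' | 'high'
  thinking_budget?: number
}

function buildPoeExtraBody(params: PoeReasoningParams): Record<string, unknown> {
  const extraBody: Record<string, unknown> = {}
  if (params.reasoning_effort) extraBody.reasoning_effort = params.reasoning_effort
  if (params.thinking_budget !== undefined) extraBody.thinking_budget = params.thinking_budget
  return extraBody
}

// Before: thinking: { type: 'enabled', budget_tokens: 8192 }
// After:  extra_body: { thinking_budget: 8192 }
```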
* fix: Update reasoning parameters and improve type definitions for GPT-5 support
* fix lint
* docs
* fix(reasoning): handle edge cases for models without token limit configuration
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>
Co-authored-by: suyao <sy20010504@gmail.com>
* fix(anthropic): prevent duplicate /v1 in API endpoints
Anthropic SDK automatically appends /v1 to endpoints, so we should not add it in our formatting. This change ensures URLs are correctly formatted without duplicate path segments.
* fix(anthropic): strip /v1 suffix in getSdkClient to prevent duplicate in models endpoint
The issue was:
- AI SDK (for chat) needs baseURL with /v1 suffix
- Anthropic SDK (for listModels) automatically appends /v1 to all endpoints
Solution:
- Keep /v1 in formatProviderApiHost for AI SDK compatibility
- Strip /v1 in getSdkClient before passing to Anthropic SDK
- This ensures chat works correctly while preventing /v1/v1/models duplication
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
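As a rough sketch of the split described above, assuming a helper along these lines (names are illustrative, not the shipped implementation):

```ts
// AI SDK path: keep /v1 so chat requests resolve to <host>/v1/messages.
// Anthropic SDK path: strip the trailing /v1 first, because the SDK appends it itself.
function toAnthropicSdkBaseUrl(apiHost: string): string {
  return apiHost.replace(/\/+$/, '').replace(/\/v1$/, '')
}

// 'https://api.example.com/v1' -> 'https://api.example.com'
// The Anthropic SDK then requests 'https://api.example.com/v1/models' (no /v1/v1).
```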
* fix(anthropic): correct preview URL to match actual request behavior
The preview now correctly shows:
- Input: https://api.siliconflow.cn/v2
- Preview: https://api.siliconflow.cn/v2/messages (was incorrectly showing /v2/v1/messages)
- Actual: https://api.siliconflow.cn/v2/messages
This matches the actual behavior where getSdkClient strips /v1 suffix before
passing to Anthropic SDK, which then appends /v1/messages.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix(anthropic): strip all API version suffixes, not just /v1
The Anthropic SDK always appends /v1 to endpoints, regardless of the baseURL.
Previously we only stripped /v1 suffix, causing issues with custom versions like /v2.
Now we strip all version suffixes (/v1, /v2, /v1beta, etc.) before passing to Anthropic SDK.
Examples:
- Input: https://api.siliconflow.cn/v2/
- After strip: https://api.siliconflow.cn
- Actual request: https://api.siliconflow.cn/v1/messages ✅
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
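A possible shape for the generalized stripping; the exact pattern is an assumption, it only needs to drop a trailing version segment such as /v1, /v2, or /v1beta:

```ts
// Strip any trailing API-version segment before handing the baseURL to the Anthropic SDK.
function stripApiVersion(baseUrl: string): string {
  return baseUrl.replace(/\/+$/, '').replace(/\/v\d+(?:beta\d*)?$/i, '')
}

stripApiVersion('https://api.siliconflow.cn/v2/') // 'https://api.siliconflow.cn'
stripApiVersion('https://example.com/v1beta')     // 'https://example.com'
```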
* fix(anthropic): correct preview to show AI SDK behavior, not Anthropic SDK
The preview was showing the wrong URL because it was reflecting Anthropic SDK behavior
(which strips versions and uses /v1), but checkApi and chat use AI SDK which preserves
the user's version path.
Now preview correctly shows:
- Input: https://api.siliconflow.cn/v2/
- AI SDK (checkApi/chat): https://api.siliconflow.cn/v2/messages ✅
- Preview: https://api.siliconflow.cn/v2/messages ✅
Note: Anthropic SDK (for listModels) still strips versions to use /v1/models,
but this is not shown in preview since it's a different code path.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* refactor(checkApi): remove unnecessary legacy fallback
The legacy fallback logic in checkApi was:
1. Complex and hard to maintain
2. Never actually triggered in practice for Modern SDK supported providers
3. Could cause duplicate API requests
Since Modern AI SDK now handles all major providers correctly,
we can simplify by directly throwing errors instead of falling back.
This also removes unused imports: AiProvider and CompletionsParams.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix(anthropic): restore version stripping in getSdkClient for Anthropic SDK
The Anthropic SDK (used for listModels) always appends /v1 to endpoints,
so we need to strip version suffixes from baseURL to avoid duplication.
This only affects Anthropic SDK operations (like listModels).
AI SDK operations (chat/checkApi) use provider.apiHost directly via
providerToAiSdkConfig, which preserves the user's version path.
Examples:
- AI SDK (chat): https://api.siliconflow.cn/v1 -> /v1/messages ✅
- Anthropic SDK (models): https://api.siliconflow.cn/v1 -> strip /v1 -> /v1/models ✅
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix(anthropic): ensure AI SDK gets /v1 in baseURL, strip for Anthropic SDK
The correct behavior is:
1. formatProviderApiHost: Add /v1 to apiHost (for AI SDK compatibility)
2. AI SDK (chat/checkApi): Use apiHost with /v1 -> /v1/messages ✅
3. Anthropic SDK (listModels): Strip /v1 from baseURL -> SDK adds /v1/models ✅
4. Preview: Show AI SDK behavior (main use case) -> /v1/messages ✅
Examples:
- Input: https://api.siliconflow.cn
- Formatted: https://api.siliconflow.cn/v1 (added by formatApiHost)
- AI SDK: https://api.siliconflow.cn/v1/messages ✅
- Anthropic SDK: https://api.siliconflow.cn (stripped) + /v1/models ✅
- Preview: https://api.siliconflow.cn/v1/messages ✅
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* refactor(ai): simplify AiProviderNew initialization and improve docs
Update AiProviderNew constructor to automatically format URLs by default
Add comprehensive documentation explaining constructor behavior and usage
* chore: remove unused play.ts file
* fix(anthropic): strip api version from baseURL to avoid endpoint duplication
---------
Co-authored-by: Claude <noreply@anthropic.com>
* feat: add silicon provider support for Anthropic API compatibility
* fix: update handling of ANTHROPIC_BASE_URL for silicon provider compatibility
* fix: update anthropicApiHost for silicon provider to use the correct endpoint
* fix: remove silicon from CLAUDE_OFFICIAL_SUPPORTED_PROVIDERS
* chore: add comment to clarify silicon model fallback logic in CLAUDE_OFFICIAL_SUPPORTED_PROVIDERS
- Bumped @types/react from ^19.0.12 to ^19.2.7
- Bumped @types/react-dom from ^19.0.4 to ^19.2.3
- Updated csstype dependency from ^3.0.2 to ^3.2.2 in yarn.lock
These updates ensure compatibility with the latest React types and improve type definitions.
* fix: add claude-opus-4-5 pattern to THINKING_TOKEN_MAP
Adds missing regex pattern for claude-opus-4-5 models (e.g., claude-opus-4-5-20251101)
to the THINKING_TOKEN_MAP configuration. Without this pattern, the model was not
recognized, causing findTokenLimit() to return undefined and leading to an
AI_InvalidArgumentError when using Google Vertex AI Anthropic provider.
The fix adds the pattern 'claude-opus-4-5.*$': { min: 1024, max: 64_000 } to
match the existing claude-4 thinking token configuration.
Fixes AI_InvalidArgumentError: invalid anthropic provider options caused by
budgetTokens receiving NaN instead of a number.
Signed-off-by: Shuchen Luo (personal linux) <nemo0806@gmail.com>
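In context, the added entry looks roughly like this (the map key and limits are quoted from the commit; the surrounding structure and findTokenLimit body are illustrative):

```ts
type TokenLimit = { min: number; max: number }

// Regex-keyed map of model patterns to thinking-token budgets (excerpt).
const THINKING_TOKEN_MAP: Record<string, TokenLimit> = {
  'claude-opus-4-5.*$': { min: 1024, max: 64_000 } // added entry, e.g. claude-opus-4-5-20251101
}

function findTokenLimit(modelId: string): TokenLimit | undefined {
  for (const [pattern, limit] of Object.entries(THINKING_TOKEN_MAP)) {
    if (new RegExp(pattern, 'i').test(modelId)) return limit
  }
  return undefined // before the fix, this path produced budgetTokens: NaN downstream
}
```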
* refactor: make THINKING_TOKEN_MAP constant private
* fix(reasoning): update claude model token limit regex patterns
- Consolidate claude model regex patterns to be more consistent
- Add comprehensive test cases for various claude model variants
- Ensure case insensitivity and proper handling of edge cases
* fix: format
* feat(models): extend claude model regex patterns to support AWS and GCP formats
Update regex patterns in THINKING_TOKEN_MAP to support additional Claude model ID formats used in AWS Bedrock and GCP Vertex AI
Add comprehensive test cases for new model ID formats and reorganize test suite
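The ID shapes in question look roughly like this; the concrete IDs and pattern below follow the usual Bedrock/Vertex naming conventions and are illustrative, not the shipped regexes:

```ts
// Example Claude model IDs the broadened patterns need to cover:
//   Anthropic API: claude-sonnet-4-20250514
//   AWS Bedrock:   us.anthropic.claude-sonnet-4-20250514-v1:0
//   GCP Vertex AI: claude-sonnet-4@20250514
// A start-anchored pattern like /^claude-sonnet-4-\d+$/ misses the Bedrock and Vertex forms;
// an unanchored one that tolerates '-' or '@' after the family name matches all three.
const CLAUDE_SONNET_4 = /claude-sonnet-4[-@]\d+/i
```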
* fix: format
---------
Signed-off-by: Shuchen Luo (personal linux) <nemo0806@gmail.com>
Co-authored-by: icarus <eurfelux@gmail.com>
- Updated links in CONTRIBUTING.md and README.md to point to the correct Chinese documentation paths.
- Removed outdated files including the English and Chinese versions of the branching strategy, contributing guide, and test plan documents.
- Cleaned up references to non-existent documentation in the project structure to streamline the contributor experience.
* Initial plan
* fix(aiCore): extract AI SDK standard params from custom params for Gemini
Custom parameters like topK, frequencyPenalty, presencePenalty,
stopSequences, and seed should be passed as top-level streamText()
parameters, not in providerOptions. This fixes the issue where these
parameters were being ignored by the AI SDK's @ai-sdk/google module.
Changes:
- Add extractAiSdkStandardParams function to separate standard params
- Update buildProviderOptions to return both providerOptions and standardParams
- Update buildStreamTextParams to spread standardParams into params object
- Update tests to reflect new return structure
Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>
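A condensed sketch of the separation this commit introduced (the parameter names come from the commit; the follow-up below later streamlines this, so treat the internals as illustrative):

```ts
const AI_SDK_STANDARD_KEYS = ['topK', 'frequencyPenalty', 'presencePenalty', 'stopSequences', 'seed'] as const
type StandardKey = (typeof AI_SDK_STANDARD_KEYS)[number]

// Split custom parameters into top-level streamText() params and the rest,
// which stays in providerOptions.
function extractAiSdkStandardParams(custom: Record<string, unknown>) {
  const standardParams: Partial<Record<StandardKey, unknown>> = {}
  const providerParams: Record<string, unknown> = {}
  for (const [key, value] of Object.entries(custom)) {
    if ((AI_SDK_STANDARD_KEYS as readonly string[]).includes(key)) {
      standardParams[key as StandardKey] = value
    } else {
      providerParams[key] = value
    }
  }
  return { standardParams, providerParams }
}
```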
* refactor(aiCore): remove extractAiSdkStandardParams function and its tests, streamline parameter extraction logic
* chore: type
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>
Co-authored-by: suyao <sy20010504@gmail.com>
* fix: update provider-utils and add patch for header merging logic
* fix: enhance header merging logic to deduplicate values
* fix: handle null values in header merging logic
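Roughly, the patched merge behaves like this simplified stand-in (not the actual patch to @ai-sdk/provider-utils):

```ts
// Merge header records, skipping null/undefined values and deduplicating
// comma-separated values that appear in more than one source.
function mergeHeaders(...sources: Array<Record<string, string | null | undefined> | undefined>): Record<string, string> {
  const merged: Record<string, string> = {}
  for (const source of sources) {
    if (!source) continue
    for (const [key, value] of Object.entries(source)) {
      if (value == null) continue // drop null/undefined entries instead of serializing them
      const name = key.toLowerCase()
      const parts = merged[name] ? merged[name].split(',').map((p) => p.trim()) : []
      for (const part of value.split(',').map((p) => p.trim())) {
        if (part && !parts.includes(part)) parts.push(part)
      }
      merged[name] = parts.join(', ')
    }
  }
  return merged
}
```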
* chore: update ai-sdk dependencies and remove obsolete patches
- Updated @ai-sdk/amazon-bedrock from 3.0.56 to 3.0.61
- Updated @ai-sdk/anthropic from 2.0.45 to 2.0.49
- Updated @ai-sdk/gateway from 2.0.13 to 2.0.15
- Updated @ai-sdk/google from 2.0.40 to 2.0.43
- Updated @ai-sdk/google-vertex from 3.0.72 to 3.0.79
- Updated @ai-sdk/openai from 2.0.71 to 2.0.72
- Updated @ai-sdk/provider-utils from the patched version to 3.0.17
- Removed obsolete patches for @ai-sdk/openai and @ai-sdk/provider-utils
- Added reasoning_content field to OpenAIChat response and chunk schemas
- Enhanced OpenAIChatLanguageModel to handle reasoning content in responses
* chore
* feat(settings): show OpenAI settings for supported service tier providers
Add support for displaying OpenAI settings when provider supports service tiers.
This includes refactoring the condition check and fixing variable naming consistency.
* fix(settings): set openAI verbosity to undefined by default
* fix(store): bump version to 178 and disable verbosity for groq provider
Add migration to remove verbosity from groq provider and implement provider utility to check verbosity support
Update provider types to include verbosity support flag
* feat(provider): add verbosity option support for providers
Add verbosity parameter support in provider API options settings
* fix(aiCore): check provider support for verbosity before applying
Add provider validation and check for verbosity support to prevent errors when unsupported providers are used with verbosity settings
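A sketch of the guard (the support-flag name and verbosity values are assumptions based on the commits above):

```ts
type ProviderLike = { id: string; isVerbositySupported?: boolean }
type Verbosity = 'low' | 'medium' | 'high'

// Only attach the verbosity option when the provider is known to accept it.
function applyVerbosity(provider: ProviderLike, verbosity?: Verbosity): { verbosity?: Verbosity } {
  if (verbosity === undefined || !provider.isVerbositySupported) {
    return {} // e.g. groq after the migration: the setting is dropped rather than sent
  }
  return { verbosity }
}
```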
* feat(settings): add Groq settings group component and translations
add new GroqSettingsGroup component for managing Groq provider settings
update translations for Groq settings in both zh-cn and en-us locales
refactor OpenAISettingsGroup to separate Groq-specific logic
* feat(i18n): add groq settings and verbosity support translations
add translations for groq settings title and verbosity parameter support in multiple languages
* refactor(settings): simplify service tier mode fallback logic
Remove conditional service tier mode fallback and use provider-specific defaults directly
* fix(provider): remove redundant system provider check in verbosity support
* test(provider): add tests for verbosity support detection
* fix(OpenAISettingsGroup): add endpoint_type check for showSummarySetting condition
Add model.endpoint_type check to properly determine when to show summary setting for OpenAI models
* refactor(selector): simplify selector option types and add utility functions
remove undefined and null from selector option types
add utility functions to convert between option values and real values
update groq and openai settings groups to use new utilities
add new translation for "ignore" option
* fix(ApiOptionsSettings): correct checked state for verbosity toggle
* feat(i18n): add "ignore" translation for multiple languages
* refactor(groq): remove unused model prop and related checks
Clean up GroqSettingsGroup component by removing unused model prop and unnecessary service tier checks
refactor(models): improve text delta support check for qwen-mt models
Replace direct qwen-mt model check with regex pattern matching
Add comprehensive test cases for isNotSupportTextDeltaModel
Update all references to use new function name
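For illustration, the check now looks something like this (the exact pattern is an assumption):

```ts
// Models matched here are treated as not supporting text deltas.
const NOT_SUPPORT_TEXT_DELTA_REGEX = /qwen-mt/i

export function isNotSupportTextDeltaModel(model: { id: string }): boolean {
  return NOT_SUPPORT_TEXT_DELTA_REGEX.test(model.id)
}
```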
The previous implementation used `a = preset` inside forEach, which only
reassigns the local variable and doesn't actually update the array element.
Changed to use findIndex + direct array assignment to properly update
the preset in the state.
Fixes #11451
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
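The bug and fix, condensed (the preset type and variable names are illustrative):

```ts
type Preset = { id: string; name: string }
declare const presets: Preset[] // array held in state
declare const preset: Preset    // updated item

// Broken: reassigning the callback parameter never touches the array element.
presets.forEach((a) => {
  if (a.id === preset.id) a = preset // only rebinds the local variable `a`
})

// Fixed: locate the element and assign through the array index.
const index = presets.findIndex((a) => a.id === preset.id)
if (index !== -1) presets[index] = preset
```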
* fix: respect enableMaxTokens setting when maxTokens is not configured
When enableMaxTokens is disabled, getMaxTokens() should return undefined
to let the API use its own default value, instead of forcing 4096 tokens.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix(modelParameters): handle max tokens when feature is disabled
Check if max tokens feature is enabled before returning undefined to ensure proper API behavior
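Approximately, the resulting behavior (setting names are assumptions; 4096 is the previously forced value mentioned above):

```ts
const DEFAULT_MAX_TOKENS = 4096 // previous unconditional fallback

function getMaxTokens(settings: { enableMaxTokens?: boolean; maxTokens?: number }): number | undefined {
  if (!settings.enableMaxTokens) {
    return undefined // let the API apply its own default instead of forcing 4096
  }
  return settings.maxTokens ?? DEFAULT_MAX_TOKENS
}
```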
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: icarus <eurfelux@gmail.com>
* feat: update Google and OpenAI SDKs with new features and fixes
- Updated Google SDK to ensure model paths are correctly formatted.
- Enhanced OpenAI SDK to include support for image URLs in chat responses.
- Added reasoning content handling in OpenAI chat responses and chunks.
- Introduced Azure Anthropic provider configuration for Claude integration.
* fix: azure error
* fix: lint
* fix: test
* fix: test
* fix type
* fix comment
* fix: redundant
* chore resolution
* fix: test
* fix: comment
* fix: comment
* fix
* feat: add OpenRouter reasoning middleware to support content filtering
* refactor: improve model filtering with todo for robust conversion
* refactor(aiCore): add AiSdkConfig type and update provider config handling
- Introduce new AiSdkConfig type in aiCoreTypes for better type safety
- Update provider factory and config to use AiSdkConfig consistently
- Simplify getAiSdkProviderId return type to string
- Add config validation in ModernAiProvider
* refactor(aiCore): move ai core types to dedicated module
Consolidate AI core type definitions into a dedicated module under aiCore/types. This improves code organization by keeping related types together and removes circular dependencies between modules. The change includes:
- Moving AiSdkConfig to aiCore/types
- Updating all imports to reference the new location
- Removing duplicate type definitions
* refactor(provider): add return type to createAiSdkProvider function
* feat(messages): add filter for error-only messages and their related pairs
Add new filter function to remove assistant messages containing only error blocks along with their associated user messages, identified by askId. This improves conversation quality by cleaning up error-only responses.
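A simplified sketch of the filter (message and block shapes are assumptions based on the description):

```ts
type MessageLike = {
  id: string
  role: 'user' | 'assistant'
  askId?: string
  blocks: Array<{ type: string }>
}

// Drop assistant messages whose blocks are all errors, plus the user messages they answered.
function filterErrorOnlyPairs(messages: MessageLike[]): MessageLike[] {
  const errorOnlyAskIds = new Set(
    messages
      .filter((m) => m.role === 'assistant' && m.blocks.length > 0 && m.blocks.every((b) => b.type === 'error'))
      .map((m) => m.askId)
      .filter((id): id is string => id !== undefined)
  )
  return messages.filter((m) =>
    m.role === 'assistant' ? !(m.askId && errorOnlyAskIds.has(m.askId)) : !errorOnlyAskIds.has(m.id)
  )
}
```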
* refactor(ConversationService): improve message filtering pipeline readability
Break down complex message filtering chain into clearly labeled steps
Add comments explaining each filtering step's purpose
Maintain same functionality while improving code maintainability
* test(messageUtils): add test cases for message filter utilities
* docs(messageUtils): correct jsdoc for filterUsefulMessages
* refactor(ConversationService): extract message filtering logic into pipeline method
Move message filtering steps into a dedicated static method to improve testability and maintainability. Add comprehensive tests to verify pipeline behavior.
* refactor(ConversationService): add logging and improve message filtering readability
Add logger service to track message pipeline output
Split filterUserRoleStartMessages into separate variable for better debugging
* refactor(types): consolidate OpenAI types and improve type safety
- Move OpenAI-related types to aiCoreTypes.ts
- Rename FetchChatCompletionOptions to FetchChatCompletionRequestOptions
- Add proper type definitions for service tiers and verbosity
- Improve type guards for service tier checks
* refactor(api): rename options parameter to requestOptions for consistency
Update parameter name across multiple files to use requestOptions instead of options for better clarity and consistency in API calls
* refactor(aiCore): simplify OpenAI summary text handling and improve type safety
- Remove 'off' option from OpenAISummaryText type and use null instead
- Add migration to convert 'off' values to null
- Add utility function to convert undefined to null
- Update Selector component to handle null/undefined values
- Improve type safety in provider options and reasoning params
* fix(i18n): Auto update translations for PR #10964
* feat(utils): add notNull function to convert null to undefined
* refactor(utils): move defined and notNull functions to shared package
Consolidate utility functions into shared package to improve code organization and reuse
* Revert "fix(i18n): Auto update translations for PR #10964"
This reverts commit 68bd7eaac5.
* feat(i18n): add "off" translation and remove "performance" tier
Add "off" translation for multiple languages and remove "performance" service tier option from translations
* Apply suggestion from @EurFelux
* docs(types): clarify handling of undefined and null values
Add comments to explain that undefined is treated as default and null as explicitly off in OpenAIVerbosity and OpenAIServiceTier types. Also update type safety for OpenAIServiceTiers record.
* fix(migration): update migration version from 167 to 171 for removed type
* chore: update store version to 172
* fix(migrate): update migration version number from 171 to 172
* fix(i18n): Auto update translations for PR #10964
* refactor(types): improve type safety for verbosity handling
add NotUndefined and NotNull utility types to better handle null/undefined cases
clarify verbosity types in aiCoreTypes and update related utility functions
* refactor(types): replace null with undefined for verbosity values
Standardize on undefined instead of null for verbosity values to align with OpenAI API docs and improve type consistency
* refactor(aiCore): update OpenAI provider options type import and usage
* fix(openai): change summaryText default from null to 'auto'
Update OpenAI settings to use 'auto' as default summaryText value instead of null for consistency with API behavior. Remove 'off' option and add 'concise' option while maintaining type safety.
* refactor(OpenAISettingsGroup): extract service tier options type for better maintainability
* refactor(types): make SystemProviderIdTypeMap internal type
* docs(provider): clarify OpenAIServiceTier behavior for undefined vs null
Explain that undefined and null values for serviceTier should be treated differently since they affect whether the field appears in the response
* refactor(utils): rename utility functions for clarity
Rename `defined` to `toNullIfUndefined` and `notNull` to `toUndefinedIfNull` to better reflect their functionality
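After the rename, the shared pair reads roughly as:

```ts
// Bridge APIs that treat null ("explicitly off") and undefined ("use default") differently.
export function toNullIfUndefined<T>(value: T | undefined): T | null {
  return value === undefined ? null : value
}

export function toUndefinedIfNull<T>(value: T | null): T | undefined {
  return value === null ? undefined : value
}
```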
* refactor(aiCore): extract service tier logic and improve type safety
Extract service tier validation logic into separate functions for better reusability
Add proper type annotations for provider options
Pass service tier parameter through provider option builders
* refactor(utils): comment out unused utility functions
Keep commented utility functions for potential future use while cleaning up current codebase
* fix(migration): update migration version number from 172 to 177
* docs(aiCoreTypes): clarify parameter passing behavior in OpenAI API
Update comments to consistently use 'undefined' instead of 'null' when describing parameter passing behavior in OpenAI API requests, as they share the same meaning in this context
---------
Co-authored-by: GitHub Action <action@github.com>
* 100m
* feat: add web search header for Claude 4 series models
* fix: typo
* fix: identify model
---------
Co-authored-by: defi-failure <159208748+defi-failure@users.noreply.github.com>
* refactor: optimize DatabaseManager and fix libsql crash issues
Major improvements:
- Created DatabaseManager singleton to centralize database connection management
- Auto-initialize database in constructor (no manual initialization needed)
- Removed all manual initialize() and ensureInitialized() calls (47 occurrences)
- Simplified initialization logic (removed retry loops that could cause crashes)
- Removed unused close() and reinitialize() methods
- Reduced code from ~270 lines to 172 lines (-36%)
Key changes:
1. DatabaseManager.ts (new file):
- Singleton pattern with auto-initialization
- State management (INITIALIZING, INITIALIZED, FAILED)
- Windows compatibility fixes (empty file detection, intMode: 'number')
- Simplified waitForInitialization() logic
2. BaseService.ts:
- Removed static initialize() and ensureInitialized() methods
- Simplified database/rawClient getters to use DatabaseManager
3. Service classes (AgentService, SessionService, SessionMessageService):
- Removed all initialize() methods
- Removed all ensureInitialized() calls
- Services now work out of the box
4. Main entry points (index.ts, server.ts):
- Removed explicit database initialization calls
- Database initializes automatically on first access
Benefits:
- Fixes Windows libsql crashes by removing dangerous retry logic
- Simpler API - no need to remember to call initialize()
- Better separation of concerns
- Cleaner codebase with 36% less code
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
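A skeleton of the pattern described above (state names come from the commit; everything else, including how the libsql client is opened, is illustrative):

```ts
enum DbState {
  INITIALIZING = 'INITIALIZING',
  INITIALIZED = 'INITIALIZED',
  FAILED = 'FAILED'
}

// Singleton that owns the database connection; construction kicks off initialization.
class DatabaseManager {
  private static instance: DatabaseManager | undefined
  private state: DbState = DbState.INITIALIZING
  private database?: unknown // e.g. the ORM-wrapped libsql client
  private readonly ready: Promise<void>

  private constructor() {
    this.ready = this.initialize()
  }

  static getInstance(): DatabaseManager {
    DatabaseManager.instance ??= new DatabaseManager()
    return DatabaseManager.instance
  }

  private async initialize(): Promise<void> {
    try {
      // open the libsql client here (intMode: 'number', empty-file check on Windows),
      // then store the wrapped instance on this.database
      this.state = DbState.INITIALIZED
    } catch {
      this.state = DbState.FAILED
    }
  }

  async waitForInitialization(): Promise<void> {
    await this.ready
    if (this.state === DbState.FAILED) throw new Error('Database initialization failed')
  }

  getDatabase(): unknown {
    if (this.state !== DbState.INITIALIZED) throw new Error('Database is still initializing')
    return this.database
  }
}

export const getDatabaseManager = (): DatabaseManager => DatabaseManager.getInstance()
```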
* fix: wait for database initialization on app startup
Issue: "Database is still initializing" error on startup
Root cause: Synchronous database getter was called before async initialization completed
Solution:
- Explicitly wait for database initialization in main index.ts
- Import DatabaseManager and call getDatabase() to ensure initialization is complete
- This guarantees database is ready before any service methods are called
Changes:
- src/main/index.ts: Added explicit database initialization wait before API server check
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* refactor: use static import for getDatabaseManager
- Move import to top of file for better code organization
- Remove unnecessary dynamic import
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* refactor: streamline database access in service classes
- Replaced direct database access with asynchronous calls to getDatabase() in various service classes (AgentService, SessionService, SessionMessageService).
- Updated the main index.ts to utilize runAsyncFunction for API server initialization, ensuring proper handling of asynchronous database access.
- Improved code organization and readability by consolidating database access logic.
This change enhances the reliability of database interactions across the application and ensures that services are correctly initialized before use.
* refactor: remove redundant logging in ApiServer initialization
- Removed the logging statement for 'AgentService ready' during server initialization.
- This change streamlines the startup process by eliminating unnecessary log entries.
This update contributes to cleaner logs and improved readability during server startup.
* refactor: change getDatabase method to synchronous return type
- Updated the getDatabase method in DatabaseManager to return a synchronous LibSQLDatabase instance instead of a Promise.
- This change simplifies the database access pattern, aligning with the current initialization logic.
This refactor enhances code clarity and reduces unnecessary asynchronous handling in the database access layer.
* refactor: simplify sessionMessageRepository by removing transaction handling
- Removed transaction handling parameters from message persistence methods in sessionMessageRepository.
- Updated database access to use a direct call to getDatabase() instead of passing a transaction client.
- Streamlined the upsertMessage and persistExchange methods for improved clarity and reduced complexity.
This refactor enhances code readability and simplifies the database interaction logic.
---------
Co-authored-by: Claude <noreply@anthropic.com>
Implement single instance IPC subscription pattern to resolve MaxListenersExceededWarning. Previously, each component using useApiServer would register a separate 'api-server:ready' listener, and React strict mode double rendering would quickly exceed the 10 listener limit.
Changes:
- Add module-level subscription manager with onReadyCallbacks Set
- Ensure only one IPC listener is registered regardless of component count
- Use useRef to maintain stable callback references
- Properly cleanup subscriptions when all components unmount
This maintains existing behavior while keeping listener count constant at 1.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
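A reduced sketch of the single-listener pattern (the channel name comes from the change; the IPC surface is stubbed out here):

```ts
type IpcLike = { on: (channel: string, listener: () => void) => void }
declare const ipcRenderer: IpcLike // stand-in for the renderer-side IPC bridge

// Module-level state shared by every useApiServer() caller.
const onReadyCallbacks = new Set<() => void>()
let ipcSubscribed = false

function subscribeReady(callback: () => void): () => void {
  onReadyCallbacks.add(callback)
  if (!ipcSubscribed) {
    ipcSubscribed = true
    // Exactly one IPC listener, fanned out to however many components are mounted.
    ipcRenderer.on('api-server:ready', () => {
      for (const cb of onReadyCallbacks) cb()
    })
  }
  return () => {
    onReadyCallbacks.delete(callback)
    // once onReadyCallbacks is empty (all components unmounted), the single
    // listener can be torn down here as well
  }
}
```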
* feat: add endpoint type support for cherryin provider
* chore: bump @cherrystudio/ai-sdk-provider version to 0.1.1
* chore: bump ai-sdk-provider version to 0.1.3
* test(knowledge): fix tests for knowledge base form modal refactoring
Update all test files to match the new vertical layout structure with button-based advanced settings toggle. Remove obsolete tests for deleted features.
Changes:
- Rewrite KnowledgeBaseFormModal.test.tsx for new button-toggle structure
- Remove tests for preprocess and rerank features from GeneralSettingsPanel
- Update AdvancedSettingsPanel tests with required props
- Update all snapshots to reflect new component structure
- Format test files according to biome rules
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* test(knowledge): simplify KnowledgeBaseFormModal button tests
Simplify button interaction tests to avoid text matching issues. Focus on testing behavior rather than implementation details.
Changes:
- Simplify advanced settings toggle test
- Simplify footer buttons test to check button count instead of text content
- Remove fragile text-based button selection
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude <noreply@anthropic.com>
* feat: add Git Bash detection and requirement check for Windows agents
- Add System_CheckGitBash IPC channel for detecting Git Bash installation
- Implement detection logic checking common installation paths and PATH environment
- Display non-closable error alert in AgentModal when Git Bash is not found
- Disable agent creation/edit button until Git Bash is installed
- Add recheck functionality to verify installation without restarting app
Git Bash is required for agents to function properly on Windows systems.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
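The detection could be sketched along these lines (the install paths checked and the return shape are assumptions; the IPC channel name comes from the commit):

```ts
import { existsSync } from 'node:fs'
import { delimiter, join } from 'node:path'

// Main-process handler behind the System_CheckGitBash IPC channel (sketch).
function checkGitBash(): { installed: boolean; path?: string } {
  const commonPaths = [
    'C:\\Program Files\\Git\\bin\\bash.exe',
    'C:\\Program Files (x86)\\Git\\bin\\bash.exe',
    join(process.env.LOCALAPPDATA ?? '', 'Programs', 'Git', 'bin', 'bash.exe')
  ]
  for (const candidate of commonPaths) {
    if (existsSync(candidate)) return { installed: true, path: candidate }
  }
  // Fall back to scanning PATH for a bash.exe.
  for (const dir of (process.env.PATH ?? '').split(delimiter)) {
    const candidate = join(dir, 'bash.exe')
    if (dir && existsSync(candidate)) return { installed: true, path: candidate }
  }
  return { installed: false }
}
```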
* i18n: add Git Bash requirement translations for agent modal
- Add English translations for Git Bash detection warnings
- Add Simplified Chinese (zh-cn) translations
- Add Traditional Chinese (zh-tw) translations
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* format code
---------
Co-authored-by: Claude <noreply@anthropic.com>
* feat: add ChatGPT conversation import feature
Introduces a new import workflow for ChatGPT conversations, including UI components, service logic, and i18n support for English, Simplified Chinese, and Traditional Chinese. Adds an import menu to data settings, a popup for file selection and progress, and a service to parse and store imported conversations as topics and messages.
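For orientation, the ChatGPT export's conversations.json holds an array of conversations, each with a `mapping` of message nodes; a trimmed sketch of flattening one conversation follows (field names reflect the commonly documented export format; the importer's real types and tree walk are not shown):

```ts
type ChatGptNode = {
  id: string
  parent: string | null
  children: string[]
  message: {
    author: { role: 'user' | 'assistant' | 'system' | 'tool' }
    content: { content_type: string; parts?: unknown[] }
  } | null
}

type ChatGptConversation = { title: string; mapping: Record<string, ChatGptNode> }

// Collect user/assistant texts from one exported conversation.
// (A full importer would walk from the root node via `children` to preserve order.)
function extractMessages(conversation: ChatGptConversation): Array<{ role: string; text: string }> {
  const result: Array<{ role: string; text: string }> = []
  for (const node of Object.values(conversation.mapping)) {
    const message = node.message
    if (!message || !['user', 'assistant'].includes(message.author.role)) continue
    const text = (message.content.parts ?? []).filter((p): p is string => typeof p === 'string').join('\n')
    if (text.trim()) result.push({ role: message.author.role, text })
  }
  return result
}
```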
* fix: ci failure
* refactor: import service and add modular importers
Refactored the import service to support a modular importer architecture. Moved ChatGPT import logic to a dedicated importer class and directory. Updated UI components and i18n descriptions for clarity. Removed unused Redux selector in ImportMenuSettings. This change enables easier addition of new importers in the future.
* Apply suggestion from @Copilot
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* fix: improve ChatGPT import UX and set model for assistant
Added a loading state and spinner for file selection in the ChatGPT import popup, with new translations for the 'selecting' state in en-us, zh-cn, and zh-tw locales. Also, set the model property for imported assistant messages to display the GPT-5 logo.
---------
Co-authored-by: SuYao <sy20010504@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>