# cherry-studio/resources/model-catalogs/microsoft/phi-4-multimodal-instruct.yaml

id: microsoft/phi-4-multimodal-instruct
canonical_slug: microsoft/phi-4-multimodal-instruct
hugging_face_id: microsoft/Phi-4-multimodal-instruct
name: 'Microsoft: Phi 4 Multimodal Instruct'
type: chat
created: 1741396284
description: |
  Phi-4 Multimodal Instruct is a versatile 5.6B parameter foundation model that combines advanced reasoning and instruction-following capabilities across both text and visual inputs, providing accurate text outputs. The unified architecture enables efficient, low-latency inference, suitable for edge and mobile deployments. Phi-4 Multimodal Instruct supports text inputs in multiple languages including Arabic, Chinese, English, French, German, Japanese, Spanish, and more, with visual input optimized primarily for English. It delivers impressive performance on multimodal tasks involving mathematical, scientific, and document reasoning, giving developers and enterprises a powerful yet compact model for sophisticated interactive applications. For more information, see the [Phi-4 Multimodal blog post](https://azure.microsoft.com/en-us/blog/empowering-innovation-the-next-generation-of-the-phi-family/).
context_length: 131072
architecture:
  modality: text+image->text
  input_modalities:
    - text
    - image
  output_modalities:
    - text
  tokenizer: Other
  instruct_type: null
pricing:
  prompt: '0.00000005'
  completion: '0.0000001'
  input_cache_read: ''
  input_cache_write: ''
  request: '0'
  image: '0.00017685'
  web_search: '0'
  internal_reasoning: '0'
  unit: 1
  currency: USD
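# Rough cost arithmetic for the pricing block above (an assumption: unit: 1 is
# read here as USD per token, and image as USD per image input):
#   1,000,000 prompt tokens     ~ 1000000 * 0.00000005 = $0.05
#   1,000,000 completion tokens ~ 1000000 * 0.0000001  = $0.10
#   each image input            ~ $0.00017685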
supported_parameters:
  - max_tokens
  - temperature
  - top_p
  - stop
  - frequency_penalty
  - presence_penalty
  - repetition_penalty
  - response_format
  - top_k
  - seed
  - min_p
model_provider: microsoft