Your guide to installing the best model for your local AI
model_8b-v.23-q3_K_S

Here's a breakdown of what each part likely means:

- model: the base model's name
- 8b: the parameter count (8 billion parameters)
- v.23: the model's version number
- q3_K_S: the quantization level (3-bit, K-quant, small variant)

The table below lists some popular models, their available sizes, common quantization levels, and what they're best suited for:

Model | Parameters | Quantization | Best For |
---|---|---|---|
Llama3.1 | 8B, 70B, 405B | Q8, FP16 | Advanced tools, large-scale tasks |
Gemma2 | 2B, 9B, 27B | Q8 | Efficient text generation and language tasks |
Mistral-Nemo | 12B, 70B | Q4 | Long-context tasks, multi-lingual support |
Qwen2 | 0.5B, 1.5B, 7B, 72B | Q4, Q8 | Text processing, general AI tasks |
Deepseek-Coder | 16B, 236B | Q8, FP16 | Code generation, fill-in-the-middle tasks |
CodeGemma | 2B, 7B | Q8 | Code generation, instruction-following tasks |
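To make the naming convention concrete, here is a minimal sketch that splits a tag like the one above into its parts. The `parse_model_tag` helper is hypothetical, written for illustration only, and assumes the `name_size-version-quantization` pattern described earlier:

```python
# Hypothetical parser for local-model tags such as "model_8b-v.23-q3_K_S".
# Assumes the convention: name_size-version-quantization (an assumption,
# not a formal spec shared by all model repositories).
import re

def parse_model_tag(tag: str) -> dict:
    """Split a tag like 'model_8b-v.23-q3_K_S' into its parts."""
    m = re.match(
        r"(?P<name>[^_]+)_(?P<size>\d+b)-(?P<version>v\.?\d+)-(?P<quant>.+)",
        tag,
    )
    if not m:
        raise ValueError(f"unrecognized tag format: {tag}")
    return m.groupdict()

print(parse_model_tag("model_8b-v.23-q3_K_S"))
# {'name': 'model', 'size': '8b', 'version': 'v.23', 'quant': 'q3_K_S'}
```

Reading a tag this way tells you at a glance whether a file will fit your hardware before you download it.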
Click the AIs option in the left side menu > Click the Show all models option below the recommended models.
In the Tags column next to each model, you can see that each one offers several size variants (e.g., 8b, 35b). These numbers refer to the number of parameters in the model, which directly affects its performance, speed, and the hardware resources required.
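Parameter count and quantization together determine how much memory a model needs. As a rough, back-of-the-envelope sketch (the formula and the ~15% overhead figure are assumptions for illustration, not an official sizing rule):

```python
# Rough memory estimate for a quantized model: each parameter takes
# bits/8 bytes, plus an assumed ~15% overhead for context and activations.

def estimate_model_size_gb(params_billion: float,
                           bits_per_param: float,
                           overhead: float = 0.15) -> float:
    """Return an approximate memory footprint in decimal gigabytes."""
    bytes_total = params_billion * 1e9 * (bits_per_param / 8)
    return bytes_total * (1 + overhead) / 1e9

# An 8B model at Q4 (~4 bits per parameter):
print(f"{estimate_model_size_gb(8, 4):.1f} GB")   # ~4.6 GB
# The same model unquantized at FP16 (16 bits per parameter):
print(f"{estimate_model_size_gb(8, 16):.1f} GB")  # ~18.4 GB
```

This is why an 8b variant with aggressive quantization can run on a laptop, while the same model at FP16, or a 70b variant, needs a dedicated GPU with much more memory.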