Files
hermes-sync/skills/mlops/training/unsloth/references/llms-txt.md

794 KiB
Raw Blame History

Unsloth - Llms-Txt

Pages: 136


!pip install huggingface_hub hf_transfer

URL: llms-txt#!pip-install-huggingface_hub-hf_transfer

import os os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1" from huggingface_hub import snapshot_download snapshot_download( repo_id = "unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF", local_dir = "unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF", allow_patterns = ["IQ2_XXS"], ) bash ./llama.cpp/llama-cli
--model unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF/Llama-4-Scout-17B-16E-Instruct-UD-IQ2_XXS.gguf
--threads 32
--ctx-size 16384
--n-gpu-layers 99
-ot ".ffn_.*_exps.=CPU"
--seed 3407
--prio 3
--temp 0.6
--min-p 0.01
--top-p 0.9
-no-cnv
--prompt "<|header_start|>user<|header_end|>\n\nCreate a Flappy Bird game.<|eot|><|header_start|>assistant<|header_end|>\n\n"


{% hint style="success" %}
Read more on running Llama 4 here: <https://docs.unsloth.ai/basics/tutorial-how-to-run-and-fine-tune-llama-4>
{% endhint %}

**Examples:**

Example 1 (unknown):
```unknown
And let's do inference!

{% code overflow="wrap" %}

First uninstall xformers installed by previous libraries

URL: llms-txt#first-uninstall-xformers-installed-by-previous-libraries

pip uninstall xformers -y


(1) Saving to GGUF / merging to 16bit for vLLM

URL: llms-txt#(1)-saving-to-gguf-/-merging-to-16bit-for-vllm


Qwen3-Coder: How to Run Locally

URL: llms-txt#qwen3-coder:-how-to-run-locally

Contents:

  • 🖥️ Running Qwen3-Coder
    • ⚙️ Recommended Settings
    • Run Qwen3-Coder-30B-A3B-Instruct:

Run Qwen3-Coder-30B-A3B-Instruct and 480B-A35B locally with Unsloth Dynamic quants.

Qwen3-Coder is Qwens new series of coding agent models, available in 30B (Qwen3-Coder-Flash) and 480B parameters. Qwen3-480B-A35B-Instruct achieves SOTA coding performance rivalling ClaudeSonnet-4, GPT-4.1, and Kimi K2, with 61.8% on Aider Polygot and support for 256K (extendable to 1M) token context.

We also uploaded Qwen3-Coder with native 1M context length extended by YaRN and full-precision 8bit and 16bit versions. Unsloth also now supports fine-tuning and RL of Qwen3-Coder.

{% hint style="success" %} UPDATE: We fixed tool-calling for Qwen3-Coder! You can now use tool-calling seamlessly in llama.cpp, Ollama, LMStudio, Open WebUI, Jan etc. This issue was universal and affected all uploads (not just Unsloth), and we've communicated with the Qwen team about our fixes! Read more {% endhint %}

Run 30B-A3BRun 480B-A35B

{% hint style="success" %} Does Unsloth Dynamic Quants work? Yes, and very well. In third-party testing on the Aider Polyglot benchmark, the UD-Q4_K_XL (276GB) dynamic quant nearly matched the full bf16 (960GB) Qwen3-coder model, scoring 60.9% vs 61.8%. More details here. {% endhint %}

Qwen3 Coder - Unsloth Dynamic 2.0 GGUFs:

Dynamic 2.0 GGUF (to run) 1M Context Dynamic 2.0 GGUF

🖥️ Running Qwen3-Coder

Below are guides for the 30B-A3B and 480B-A35B variants of the model.

Qwen recommends these inference settings for both models:

temperature=0.7, top_p=0.8, top_k=20, repetition_penalty=1.05

  • Temperature of 0.7
  • Top_K of 20
  • Min_P of 0.00 (optional, but 0.01 works well, llama.cpp default is 0.1)
  • Top_P of 0.8
  • Repetition Penalty of 1.05
  • Chat template:

{% code overflow="wrap" %}

{% endcode %}

  • Recommended context output: 65,536 tokens (can be increased). Details here.

Chat template/prompt format with newlines un-rendered

{% code overflow="wrap" %}

Chat template for tool calling (Getting the current temperature for San Francisco). More details here for how to format tool calls.

{% hint style="info" %} Reminder that this model supports only non-thinking mode and does not generate <think></think> blocks in its output. Meanwhile, specifying enable_thinking=False is no longer required. {% endhint %}

Run Qwen3-Coder-30B-A3B-Instruct:

To achieve inference speeds of 6+ tokens per second for our Dynamic 4-bit quant, have at least 18GB of unified memory (combined VRAM and RAM) or 18GB of system RAM alone. As a rule of thumb, your available memory should match or exceed the size of the model youre using. E.g. the UD_Q8_K_XL quant (full precision), which is 32.5GB, will require at least 33GB of unified memory (VRAM + RAM) or 33GB of RAM for optimal performance.

NOTE: The model can run on less memory than its total size, but this will slow down inference. Maximum memory is only needed for the fastest speeds.

Given that this is a non thinking model, there is no need to set thinking=False and the model does not generate <think> </think> blocks.

{% hint style="info" %} Follow the best practices above. They're the same as the 480B model. {% endhint %}

🦙 Ollama: Run Qwen3-Coder-30B-A3B-Instruct Tutorial

  1. Install ollama if you haven't already! You can only run models up to 32B in size.

  2. Run the model! Note you can call ollama servein another terminal if it fails! We include all our fixes and suggested parameters (temperature etc) in params in our Hugging Face upload!

Llama.cpp: Run Qwen3-Coder-30B-A3B-Instruct Tutorial

  1. Obtain the latest llama.cpp on GitHub here. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.

  2. You can directly pull from HuggingFace via:

  3. Download the model via (after installing pip install huggingface_hub hf_transfer ). You can choose UD_Q4_K_XL or other quantized versions.

Examples:

Example 1 (unknown):

<|im_start|>user
  Hey there!<|im_end|>
  <|im_start|>assistant
  What is 1+1?<|im_end|>
  <|im_start|>user
  2<|im_end|>
  <|im_start|>assistant

Example 2 (unknown):

<|im_start|>user\nHey there!<|im_end|>\n<|im_start|>assistant\nWhat is 1+1?<|im_end|>\n<|im_start|>user\n2<|im_end|>\n<|im_start|>assistant\n

Example 3 (unknown):

<|im_start|>user
What's the temperature in San Francisco now? How about tomorrow?<|im_end|>
<|im_start|>assistant
<tool_call>\n<function=get_current_temperature>\n<parameter=location>\nSan Francisco, CA, USA
</parameter>\n</function>\n</tool_call><|im_end|>
<|im_start|>user
<tool_response>
{"temperature": 26.1, "location": "San Francisco, CA, USA", "unit": "celsius"}
</tool_response>\n<|im_end|>

Example 4 (bash):

apt-get update
apt-get install pciutils -y
curl -fsSL https://ollama.com/install.sh | sh

Ensure all audio is at 24 kHz sampling rate (Orpheuss expected rate)

URL: llms-txt#ensure-all-audio-is-at-24-khz-sampling-rate-(orpheuss-expected-rate)

Contents:

  • Fine-Tuning TTS with Unsloth

dataset = dataset.cast_column("audio", Audio(sampling_rate=24000))

filename,text 0001.wav,Hello there! 0002.wav, I am very tired. python from datasets import Audio dataset = load_dataset("csv", data_files="mydata.csv", split="train") dataset = dataset.cast_column("filename", Audio(sampling_rate=24000)) python from unsloth import FastLanguageModel import torch dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+ load_in_4bit = False # Use 4bit quantization to reduce memory usage. Can be False.

model, tokenizer = FastLanguageModel.from_pretrained( model_name = "unsloth/orpheus-3b-0.1-ft", max_seq_length= 2048, # Choose any for long context! dtype = dtype, load_in_4bit = load_in_4bit, #token = "hf_...", # use one if using gated models like meta-llama/Llama-2-7b-hf )

from datasets import load_dataset dataset = load_dataset("MrDragonFox/Elise", split = "train") python

Examples:

Example 1 (unknown):

This will download the dataset (\~328 MB for \~1.2k samples). Each item in `dataset` is a dictionary with at least:

* `"audio"`: the audio clip (waveform array and metadata like sampling rate), and
* `"text"`: the transcript string

Orpheus supports tags like `<laugh>`, `<chuckle>`, `<sigh>`, `<cough>`, `<sniffle>`, `<groan>`, `<yawn>`, `<gasp>`, etc. For example: `"I missed you <laugh> so much!"`.  These tags are enclosed in angle brackets and will be treated as special tokens by the model (they match [Orpheuss expected tags](https://github.com/canopyai/Orpheus-TTS) like `<laugh>` and `<sigh>`. During training, the model will learn to associate these tags with the corresponding audio patterns. The Elise dataset with tags already has many of these (e.g., 336 occurrences of “laughs”, 156 of “sighs”, etc. as listed in its card). If your dataset lacks such tags but you want to incorporate them, you can manually annotate the transcripts where the audio contains those expressions.

**Option 2: Preparing a custom dataset**  If you have your own audio files and transcripts:

* Organize audio clips (WAV/FLAC files) in a folder.
* Create a CSV or TSV file with columns for file path and transcript. For example:

Example 2 (unknown):

* Use `load_dataset("csv", data_files="mydata.csv", split="train")` to load it. You might need to tell the dataset loader how to handle audio paths. An alternative is using the `datasets.Audio` feature to load audio data on the fly:

Example 3 (unknown):

Then `dataset[i]["audio"]` will contain the audio array.
* **Ensure transcripts are normalized** (no unusual characters that the tokenizer might not know, except the emotion tags if used). Also ensure all audio have a consistent sampling rate (resample them if necessary to the target rate the model expects, e.g. 24kHz for Orpheus).

In summary, for **dataset preparation**:

* You need a **list of (audio, text)** pairs.
* Use the HF `datasets` library to handle loading and optional preprocessing (like resampling).
* Include any **special tags** in the text that you want the model to learn (ensure they are in `<angle_brackets>` format so the model treats them as distinct tokens).
* (Optional) If multi-speaker, you could include a speaker ID token in the text or use a separate speaker embedding approach, but thats beyond this basic guide (Elise is single-speaker).

### Fine-Tuning TTS with Unsloth

Now, lets start fine-tuning! Well illustrate using Python code (which you can run in a Jupyter notebook, Colab, etc.).

**Step 1: Load the Model and Dataset**

In all our  TTS notebooks, we enable LoRA (16-bit) training and disable QLoRA (4-bit) training with: `load_in_4bit = False`. This is so the model can usually learn your dataset better and have higher accuracy.

Example 4 (unknown):

{% hint style="info" %}
If memory is very limited or if dataset is large, you can stream or load in chunks. Here, 3h of audio easily fits in RAM. If using your own dataset CSV, load it similarly.
{% endhint %}

**Step 2: Advanced - Preprocess the data for training (Optional)**

We need to prepare inputs for the Trainer. For text-to-speech, one approach is to train the model in a causal manner: concatenate text and audio token IDs as the target sequence. However, since Orpheus is a decoder-only LLM that outputs audio, we can feed the text as input (context) and have the audio token ids as labels. In practice, Unsloths integration might do this automatically if the models config identifies it as text-to-speech. If not, we can do something like:

All Our Models

URL: llms-txt#all-our-models

Contents:

  • New & recommended models:
  • DeepSeek models:
  • Llama models:
  • Gemma models:
  • Qwen models:
  • Mistral models:
  • Phi models:
  • Other (GLM, Orpheus, Smol, Llava etc.) models:
  • New models:
  • DeepSeek models

Unsloth model catalog for all our Dynamic GGUF, 4-bit, 16-bit models on Hugging Face.

{% tabs %} {% tab title="• GGUF + 4-bit" %} DeepSeekLlamaGemmaQwenMistralPhi

GGUFs let you run models in tools like Ollama, Open WebUI, and llama.cpp.
Instruct (4-bit) safetensors can be used for inference or fine-tuning.

Model Variant GGUF Instruct (4-bit)
gpt-oss 120b link link
20b link link
DeepSeek-V3.1 Terminus link
V3.1 link
Qwen3-VL 2B-Instruct link link
2B-Thinking link link
4B-Instruct link link
4B-Thinking link link
8B-Instruct link link
8B-Thinking link link
30B-A3B-Instruct link
30B-A3B-Thinking link
32B-Instruct link link
32B-Thinking link link
235B-A22B-Instruct link
235B-A22B-Thinking link
Qwen3-2507 30B-A3B-Instruct link
30B-A3B-Thinking link
235B-A22B-Thinking link
235B-A22B-Instruct link
Qwen3-Coder 30B-A3B link
480B-A35B link
Granite-4.0 (new) H-Small link link
GLM (new) 4.6 link
4.5-Air link
Kimi-K2-0905 1T link
Gemma 3n E2B link link
E4B link link
DeepSeek-R1-0528 R1-0528-Qwen3-8B link link
R1-0528 link
Mistral Magistral Small (2509) link link
Magistral Small (2507) link link
Small 3.2 24B (2506) link link
FLUX.1 Kontext-dev link
Qwen3 0.6 B link link
1.7 B link link
4 B link link
8 B link link
14 B link link
30B-A3B link link
32 B link link
235B-A22B link
Llama 4 Scout 17B 16E link link
Maverick 17B 128E link
Grok 2 270B link
Qwen-2.5 Omni 3 B link
7 B link
Phi-4 Reasoning-plus link link
Reasoning link link
Model Variant GGUF Instruct (4-bit)
DeepSeek-V3.1 Terminus link
V3.1 link
DeepSeek-V3 V3-0324 link
V3 link
DeepSeek-R1 R1-0528 link
R1-0528-Qwen3-8B link link
R1 link
R1 Zero link
Distill Llama 3 8 B link link
Distill Llama 3.3 70 B link link
Distill Qwen 2.5 1.5 B link link
Distill Qwen 2.5 7 B link link
Distill Qwen 2.5 14 B link link
Distill Qwen 2.5 32 B link link
Model Variant GGUF Instruct (4-bit)
Llama 4 Scout 17 B-16 E link link
Maverick 17 B-128 E link
Llama 3.3 70 B link link
Llama 3.2 1 B link link
3 B link link
11 B Vision link
90 B Vision link
Llama 3.1 8 B link link
70 B link
405 B link
Llama 3 8 B link
70 B link
Llama 2 7 B link
13 B link
CodeLlama 7 B link
13 B link
34 B link
Model Variant GGUF Instruct (4-bit)
Gemma 3n E2B link link
E4B link link
Gemma 3 270M link link
1 B link link
4 B link link
12 B link link
27 B link link
MedGemma 4 B (vision) link link
27 B (vision) link link
Gemma 2 2 B link link
9 B link
27 B link
Model Variant GGUF Instruct (4-bit)
Qwen 3 0.6 B link link
1.7 B link link
4 B link link
8 B link link
14 B link link
30 B-A3B link link
32 B link link
235 B-A22B link
Qwen 2.5 Omni 3 B link
7 B link
Qwen 2.5 VL 3 B link link
7 B link link
32 B link link
72 B link link
Qwen 2.5 0.5 B link
1.5 B link
3 B link
7 B link
14 B link
32 B link
72 B link
Qwen 2.5 Coder (128 K) 0.5 B link link
1.5 B link link
3 B link link
7 B link link
14 B link link
32 B link link
QwQ 32 B link link
QVQ (preview) 72 B link
Qwen 2 (chat) 1.5 B link
7 B link
72 B link
Qwen 2 VL 2 B link
7 B link
72 B link
ModelVariantGGUFInstruct (4-bit)
Mistral Small3.2-24 B (2506)linklink
3.1-24 B (2503)linklink
3-24 B (2501)linklink
MagistralSmall-24 B (2506)linklink
DevstralSmall-24 B (2507)linklink
Small-24 B (2505)linklink
Pixtral12 B (2409)link
Mistral Small2409-22 Blink
Mistral NeMo12 B (2407)linklink
Mistral Large2407link
Mistral 7 Bv0.3link
v0.2link
Mixtral8 × 7 Blink
Model Variant GGUF Instruct (4-bit)
Phi-4 Reasoning-plus link link
Reasoning link link
Mini-Reasoning link link
Phi-4 (instruct) link link
mini (instruct) link link
Phi-3.5 mini link
Phi-3 mini link
medium link

Other (GLM, Orpheus, Smol, Llava etc.) models:

Model Variant GGUF Instruct (4-bit)
GLM 4.5-Air link
4.5 4.5
4-32B-0414 4-32B-0414
Hunyuan A13B link
Orpheus 0.1-ft (3B) link link
LLava 1.5 (7 B) link
1.6 Mistral (7 B) link
TinyLlama Chat link
SmolLM 2 135 M link link
360 M link link
1.7 B link link
Zephyr-SFT 7 B link
Yi 6 B (v1.5) link
6 B (v1.0) link
34 B (chat) link
34 B (base) link
{% endtab %}

{% tab title="• Instruct 16-bit" %} 16-bit and 8-bit Instruct models are used for inference or fine-tuning:

Model Variant Instruct (16-bit)
gpt-oss (new) 20b link
120b link
Gemma 3n E2B link
E4B link
DeepSeek-R1-0528 R1-0528-Qwen3-8B link
R1-0528 link
Mistral Small 3.2 24B (2506) link
Small 3.1 24B (2503) link
Small 3.0 24B (2501) link
Magistral Small (2506) link
Qwen 3 0.6 B link
1.7 B link
4 B link
8 B link
14 B link
30B-A3B link
32 B link
235B-A22B link
Llama 4 Scout 17B-16E link
Maverick 17B-128E link
Qwen 2.5 Omni 3 B link
7 B link
Phi-4 Reasoning-plus link
Reasoning link
Model Variant Instruct (16-bit)
DeepSeek-V3 V3-0324 link
V3 link
DeepSeek-R1 R1-0528 link
R1-0528-Qwen3-8B link
R1 link
R1 Zero link
Distill Llama 3 8B link
Distill Llama 3.3 70B link
Distill Qwen 2.5 1.5B link
Distill Qwen 2.5 7B link
Distill Qwen 2.5 14B link
Distill Qwen 2.5 32B link
Family Variant Instruct (16-bit)
Llama 4 Scout 17B-16E link
Maverick 17B-128E link
Llama 3.3 70 B link
Llama 3.2 1 B link
3 B link
11 B Vision link
90 B Vision link
Llama 3.1 8 B link
70 B link
405 B link
Llama 3 8 B link
70 B link
Llama 2 7 B link
Model Variant Instruct (16-bit)
Gemma 3n E2B link
E4B link
Gemma 3 1 B link
4 B link
12 B link
27 B link
Gemma 2 2 B link
9 B link
27 B link
Family Variant Instruct (16-bit)
Qwen 3 0.6 B link
1.7 B link
4 B link
8 B link
14 B link
30B-A3B link
32 B link
235B-A22B link
Qwen 2.5 Omni 3 B link
7 B link
Qwen 2.5 VL 3 B link
7 B link
32 B link
72 B link
Qwen 2.5 0.5 B link
1.5 B link
3 B link
7 B link
14 B link
32 B link
72 B link
Qwen 2.5 Coder 128 K 0.5 B link
1.5 B link
3 B link
7 B link
14 B link
32 B link
QwQ 32 B link
QVQ (preview) 72 B
Qwen 2 (Chat) 1.5 B link
7 B link
72 B link
Qwen 2 VL 2 B link
7 B link
72 B link
Model Variant Instruct (16-bit)
Mistral Small 2409-22B link
Mistral Large 2407 link
Mistral 7B v0.3 link
Mistral 7B v0.2 link
Pixtral 12B 2409 link
Mixtral 8×7B link
Mistral NeMo 12B 2407 link
Devstral Small 2505 link
Model Variant Instruct (16-bit)
Phi-4 Reasoning-plus link
Reasoning link
Phi-4 (core) link
Mini-Reasoning link
Mini link
Phi-3.5 Mini link
Phi-3 Mini link
Medium link

Text-to-Speech (TTS) models:

Model Instruct (16-bit)
Orpheus-3B (v0.1 ft) link
Orpheus-3B (v0.1 pt) link
Sesame-CSM 1B link
Whisper Large V3 (STT) link
Llasa-TTS 1B link
Spark-TTS 0.5B link
Oute-TTS 1B link
{% endtab %}

{% tab title="• Base 4 + 16-bit" %} Base models are usually used for fine-tuning purposes:

Model Variant Base (16-bit) Base (4-bit)
Gemma 3n E2B link link
E4B link link
Qwen 3 0.6 B link link
1.7 B link link
4 B link link
8 B link link
14 B link link
30B-A3B link link
Llama 4 Scout 17B 16E link link
Maverick 17B 128E link

Llama models:

Model Variant Base (16-bit) Base (4-bit)
Llama 4 Scout 17B 16E link
Maverick 17B 128E link
Llama 3.3 70 B link
Llama 3.2 1 B link
3 B link
11 B Vision link
90 B Vision link
Llama 3.1 8 B link
70 B link
Llama 3 8 B link link
Llama 2 7 B link link
13 B link link
Model Variant Base (16-bit) Base (4-bit)
Qwen 3 0.6 B link link
1.7 B link link
4 B link link
8 B link link
14 B link link
30B-A3B link link
Qwen 2.5 0.5 B link link
1.5 B link link
3 B link link
7 B link link
14 B link link
32 B link link
72 B link link
Qwen 2 1.5 B link link
7 B link link

Llama models:

Model Variant Base (16-bit) Base (4-bit)
Llama 4 Scout 17B 16E link
Maverick 17B 128E link
Llama 3.3 70 B link
Llama 3.2 1 B link
3 B link
11 B Vision link
90 B Vision link
Llama 3.1 8 B link
70 B link
Llama 3 8 B link link
Llama 2 7 B link link
13 B link link
Model Variant Base (16-bit) Base (4-bit)
Gemma 3 1 B link link
4 B link link
12 B link link
27 B link link
Gemma 2 2 B link
9 B link
27 B link

Mistral models:

Model Variant Base (16-bit) Base (4-bit)
Mistral Small 24B 2501 link
NeMo 12B 2407 link
7B v0.3 link link
7B v0.2 link link
Pixtral 12B 2409 link

Other (TTS, TinyLlama) models:

Model Variant Base (16-bit) Base (4-bit)
TinyLlama 1.1 B (Base) link link
Orpheus-3b 0.1-pretrained link link
{% endtab %}
{% endtabs %}

Windows Installation

URL: llms-txt#windows-installation

Contents:

  • Method #1 - Docker:
  • Method #2 - Windows directly:
    • Notes
    • Advanced/Troubleshooting
  • Method #3 - Windows using PowerShell:
  • Method #4 - Windows via WSL:

See how to install Unsloth on Windows with or without WSL.

For Windows, pip install unsloth now works, however you must have Pytorch previously installed.

Method #1 - Docker:

Docker might be the easiest way for Windows users to get started with Unsloth as there is no setup needed or dependency issues. unsloth/unsloth is Unsloth's only Docker image. For Blackwell and 50-series GPUs, use this same image - no separate image needed.

For installation instructions, please follow our Docker guide, otherwise here is a quickstart guide:

{% stepper %} {% step %}

Install Docker and NVIDIA Container Toolkit.

Install Docker via Linux or Desktop (other). Then install NVIDIA Container Toolkit:

export NVIDIA_CONTAINER_TOOLKIT_VERSION=1.17.8-1
sudo apt-get update && sudo apt-get install -y \
  nvidia-container-toolkit=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
  nvidia-container-toolkit-base=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
  libnvidia-container-tools=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
  libnvidia-container1=${NVIDIA_CONTAINER_TOOLKIT_VERSION}

Run the container.

unsloth/unsloth is Unsloth's only Docker image.

Access Jupyter Lab

Go to http://localhost:8888 and open Unsloth. Access the unsloth-notebooks tabs to see Unsloth notebooks. {% endstep %}

Start training with Unsloth

If you're new, follow our step-by-step Fine-tuning Guide, RL Guide or just save/copy any of our premade notebooks. {% endstep %} {% endstepper %}

Method #2 - Windows directly:

{% hint style="info" %} Python 3.13 now works with Unsloth! {% endhint %}

{% stepper %} {% step %} Install NVIDIA Video Driver

You should install the latest version of your GPUs driver. Download drivers here: NVIDIA GPU Drive {% endstep %}

{% step %} Install Visual Studio C++

You will need Visual Studio, with C++ installed. By default, C++ is not installed with Visual Studio, so make sure you select all of the C++ options. Also select options for Windows 10/11 SDK.

  • Launch the Installer here: Visual Studio Community Edition
  • In the installer, navigate to individual components and select all the options listed here:
    • .NET Framework 4.8 SDK
    • .NET Framework 4.7.2 targeting pack
    • C# and Visual Basic Roslyn compilers
    • MSBuild
    • MSVC v143 - VS 2022 C++ x64/x86 build tools
    • C++ 2022 Redistributable Update
    • C++ CMake tools for Windows
    • C++/CLI support for v143 build tools (Latest)
    • MSBuild support for LLVM (clang-cl) toolset
    • C++ Clang Compiler for Windows (19.1.1)
    • Windows 11 SDK (10.0.22621.0)
    • Windows Universal CRT SDK
    • C++ 2022 Redistributable MSMs

Easier method: Or you can open an elevated Command Prompt or PowerShell:

  • Search for "cmd" or "PowerShell", right-click it, and choose "Run as administrator."
  • Paste and run this command (update the Visual Studio path if necessary):

{% step %} Install Python and CUDA Toolkit

Follow the instructions to install CUDA Toolkit.

Then install Miniconda (which has Python) here: https://www.anaconda.com/docs/getting-started/miniconda/install {% endstep %}

{% step %} Install PyTorch

You will need the correct version of PyTorch that is compatible with your CUDA drivers, so make sure to select them carefully. Install PyTorch {% endstep %}

{% step %} Install Unsloth

Open Conda command prompt or your terminal with Python and run the command:

{% endstep %} {% endstepper %}

{% hint style="warning" %} If you're using GRPO or plan to use vLLM, currently vLLM does not support Windows directly but only via WSL or Linux. {% endhint %}

To run Unsloth directly on Windows:

  • Install Triton from this Windows fork and follow the instructions here (be aware that the Windows fork requires PyTorch >= 2.4 and CUDA 12)
  • In the SFTTrainer, set dataset_num_proc=1 to avoid a crashing issue:

Advanced/Troubleshooting

For advanced installation instructions or if you see weird errors during installations:

  1. Install torch and triton. Go to https://pytorch.org to install it. For example pip install torch torchvision torchaudio triton
  2. Confirm if CUDA is installed correctly. Try nvcc. If that fails, you need to install cudatoolkit or CUDA drivers.
  3. Install xformers manually. You can try installing vllm and seeing if vllm succeeds. Check if xformers succeeded with python -m xformers.info Go to https://github.com/facebookresearch/xformers. Another option is to install flash-attn for Ampere GPUs.
  4. Double check that your versions of Python, CUDA, CUDNN, torch, triton, and xformers are compatible with one another. The PyTorch Compatibility Matrix may be useful.
  5. Finally, install bitsandbytes and check it with python -m bitsandbytes

Method #3 - Windows using PowerShell:

Step 1: Install Prerequisites

  1. Install NVIDIA CUDA Toolkit:
    • Download and install the appropriate version of the NVIDIA CUDA Toolkit from CUDA Downloads.
    • Reboot your system after installation if prompted.
    • Note: No additional setup is required after installation for Unsloth.
  2. Install Microsoft C++ Build Tools:
    • Download and install Microsoft Build Tools for Visual Studio from the official website.
    • During installation, select the C++ build tools workload.
      Ensure the MSVC compiler toolset is included.
  3. Set Environment Variables for the C++ Compiler:
    • Open the System Properties window (search for "Environment Variables" in the Start menu).
    • Click "Environment Variables…".
    • Add or update the following under System variables:
      • CC:
        Path to the cl.exe C++ compiler.
        Example (adjust if your version differs):
  • CXX:
    Same path as CC.
    • Click OK to save changes.
    • Verify: Open a new terminal and type cl. It should show version info.
  1. Install Conda
    1. Download and install Miniconda from the official website
    2. Follow installation instruction from the website
    3. To check whether conda is already installed, you can test it with conda in your PowerShell

Step 2: Run the Unsloth Installation Script

  1. Download the unsloth_windows.ps1 PowerShell script by going through this link.

  2. Open PowerShell as Administrator:

    • Right-click Start and select "Windows PowerShell (Admin)".
  3. Navigate to the scripts location using cd:

  4. Run the script:

Step 3: Using Unsloth

Activate the environment after the installation completes:

Unsloth and its dependencies are now ready!

Method #4 - Windows via WSL:

WSL is Window's subsystem for Linux.

  1. Install python though Python's official site.
  2. Start WSL (Should already be preinstalled). Open command prompt as admin then run:

Optional: If WSL is not preinstalled, go to the Microsoft store and search "Ubuntu" and the app that says Ubuntu will be WSL. Install it and run it and continue from there.

  1. Optional: Install Jupyter Notebook to run in a Colab like environment:

  2. Launch Jupyter Notebook:

jupyter notebook
  1. Download any Colab notebook from Unsloth, import it into your Jupyter Notebook, adjust the parameters as needed, and execute the script.

Examples:

Example 1 (bash):

docker run -d -e JUPYTER_PASSWORD="mypassword" \
  -p 8888:8888 -p 2222:22 \
  -v $(pwd)/work:/workspace/work \
  --gpus all \
  unsloth/unsloth

Example 2 (unknown):

"C:\Program Files (x86)\Microsoft Visual Studio\Installer\vs_installer.exe" modify ^
--installPath "C:\Program Files\Microsoft Visual Studio\2022\Community" ^
--add Microsoft.Net.Component.4.8.SDK ^
--add Microsoft.Net.Component.4.7.2.TargetingPack ^
--add Microsoft.VisualStudio.Component.Roslyn.Compiler ^
--add Microsoft.Component.MSBuild ^
--add Microsoft.VisualStudio.Component.VC.Tools.x86.x64 ^
--add Microsoft.VisualStudio.Component.VC.Redist.14.Latest ^
--add Microsoft.VisualStudio.Component.VC.CMake.Project ^
--add Microsoft.VisualStudio.Component.VC.CLI.Support ^
--add Microsoft.VisualStudio.Component.VC.Llvm.Clang ^
--add Microsoft.VisualStudio.ComponentGroup.ClangCL ^
--add Microsoft.VisualStudio.Component.Windows11SDK.22621 ^
--add Microsoft.VisualStudio.Component.Windows10SDK.19041 ^
--add Microsoft.VisualStudio.Component.UniversalCRT.SDK ^
--add Microsoft.VisualStudio.Component.VC.Redist.MSM

Example 3 (unknown):

pip install "unsloth[windows] @ git+https://github.com/unslothai/unsloth.git"

Example 4 (python):

trainer = SFTTrainer(
    dataset_num_proc=1,
    ...
)

Prepare batched input with your image file

URL: llms-txt#prepare-batched-input-with-your-image-file

image_1 = Image.open("path/to/your/image_1.png").convert("RGB") image_2 = Image.open("path/to/your/image_2.png").convert("RGB") prompt = "\nFree OCR."

model_input = [ { "prompt": prompt, "multi_modal_data": {"image": image_1} }, { "prompt": prompt, "multi_modal_data": {"image": image_2} } ]

sampling_param = SamplingParams( temperature=0.0, max_tokens=8192, # ngram logit processor args extra_args=dict( ngram_size=30, window_size=90, whitelist_token_ids={128821, 128822}, # whitelist: , ), skip_special_tokens=False, )


DeepSeek-V3-0324: How to Run Locally

URL: llms-txt#deepseek-v3-0324:-how-to-run-locally

Contents:

  • ⚙️ Official Recommended Settings
  • 📖 Tutorial: How to Run DeepSeek-V3 in llama.cpp

How to run DeepSeek-V3-0324 locally using our dynamic quants which recovers accuracy

{% hint style="info" %} Please see https://docs.unsloth.ai/basics/deepseek-r1-0528-how-to-run-locally (May 28th 2025 update) to learn on how to run DeepSeek faster and more efficiently! {% endhint %}

DeepSeek is at it again! After releasing V3, R1 Zero and R1 back in December 2024 and January 2025, DeepSeek updated their checkpoints / models for V3, and released a March update!

According to DeepSeek, MMLU-Pro jumped +5.3% to 81.2%. GPQA +9.3% points. AIME + 19.8% and LiveCodeBench + 10.0%! They provided a plot showing how they compared to the previous V3 checkpoint and other models like GPT 4.5 and Claude Sonnet 3.7. But how do we run a 671 billion parameter model locally?

MoE BitsTypeDisk SizeAccuracyLinkDetails
1.78bitIQ1_S173GBOkLink2.06/1.56bit
1.93bitIQ1_M183GBFairLink2.5/2.06/1.56
2.42bitIQ2_XXS203GBSuggestedLink2.5/2.06bit
2.71bitQ2_K_XL231GBSuggestedLink 3.5/2.5bit
3.5bitQ3_K_XL320GBGreatLink 4.5/3.5bit
4.5bitQ4_K_XL406GBBestLink 5.5/4.5bit

{% hint style="success" %} DeepSeek V3's original upload is in float8, which takes 715GB. Using Q4_K_M halves the file size to 404GB or so, and our dynamic 1.78bit quant fits in around 151GB. We suggest using our 2.7bit quant to balance size and accuracy! The 2.4bit one also works well! {% endhint %}

According to DeepSeek, these are the recommended settings for inference:

  • Temperature of 0.3 (Maybe 0.0 for coding as seen here)
  • Min_P of 0.00 (optional, but 0.01 works well, llama.cpp default is 0.1)
  • Chat template: <User>Create a simple playable Flappy Bird Game in Python. Place the final game inside of a markdown section.<Assistant>
  • A BOS token of <begin▁of▁sentence> is auto added during tokenization (do NOT add it manually!)
  • DeepSeek mentioned using a system prompt as well (optional) - it's in Chinese: 该助手为DeepSeek Chat由深度求索公司创造。\n今天是3月24日星期一。 which translates to: The assistant is DeepSeek Chat, created by DeepSeek.\nToday is Monday, March 24th.
  • For KV cache quantization, use 8bit, NOT 4bit - we found it to do noticeably worse.

📖 Tutorial: How to Run DeepSeek-V3 in llama.cpp

  1. Obtain the latest llama.cpp on GitHub here. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.

{% hint style="warning" %} NOTE using -DGGML_CUDA=ON for GPUs might take 5 minutes to compile. CPU only takes 1 minute to compile. You might be interested in llama.cpp's precompiled binaries. {% endhint %}

  1. Download the model via (after installing pip install huggingface_hub hf_transfer ). You can choose UD-IQ1_S(dynamic 1.78bit quant) or other quantized versions like Q4_K_M . I recommend using our 2.7bit dynamic quant UD-Q2_K_XL to balance size and accuracy. More versions at: https://huggingface.co/unsloth/DeepSeek-V3-0324-GGUF

{% code overflow="wrap" %}

Examples:

Example 1 (bash):

apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp

Quantization-Aware Training (QAT)

URL: llms-txt#quantization-aware-training-(qat)

Contents:

  • :books:Quantization
  • :fire:Smarter Quantization
  • :mag:Quantization-Aware Training
  • :sparkles:QAT + LoRA finetuning
  • :teapot:Exporting QAT models

Quantize models to 4-bit with Unsloth and PyTorch to recover accuracy.

In collaboration with PyTorch, we're introducing QAT (Quantization-Aware Training) in Unsloth to enable trainable quantization that recovers as much accuracy as possible. This results in significantly better model quality compared to standard 4-bit naive quantization. QAT can recover up to 70% of the lost accuracy and achieve a 13% model performance improvement on benchmarks such as GPQA and MMLU Pro.

Try QAT with our free Qwen3 (4B) notebook

:books:Quantization

{% columns %} {% column width="50%" %} Naively quantizing a model is called post-training quantization (PTQ). For example, assume we want to quantize to 8bit integers:

  1. Find max(abs(W))
  2. Find a = 127/max(abs(W)) where a is int8's maximum range which is 127
  3. Quantize via qW = int8(round(W * a)) {% endcolumn %}

{% column width="50%" %}

{% endcolumn %} {% endcolumns %}

Dequantizing back to 16bits simply does the reverse operation by float16(qW) / a . Post-training quantization (PTQ) can greatly reduce storage and inference costs, but quite often degrades accuracy when representing high-precision values with fewer bits - especially at 4-bit or lower. One way to solve this to utilize our dynamic GGUF quants, which uses a calibration dataset to change the quantization procedure to allocate more importance to important weights. The other way is to make quantization smarter, by making it trainable or learnable!

:fire:Smarter Quantization

To enable smarter quantization, we collaborated with the TorchAO team to add Quantization-Aware Training (QAT) directly inside of Unsloth - so now you can fine-tune models in Unsloth and then export them to 4-bit QAT format directly with accuracy improvements!

In fact, QAT recovers 66.9% of Gemma3-4B on GPQA, and increasing the raw accuracy by +1.0%. Gemma3-12B on BBH recovers 45.5%, and increased the raw accuracy by +2.1%. QAT has no extra overhead during inference, and uses the same disk and memory usage as normal naive quantization! So you get all the benefits of low-bit quantization, but with much increased accuracy!

:mag:Quantization-Aware Training

QAT simulates the true quantization procedure by "fake quantizing" weights and optionally activations during training, which typically means rounding high precision values to quantized ones (while staying in high precision dtype, e.g. bfloat16) and then immediately dequantizing them.

TorchAO enables QAT by first (1) inserting fake quantize operations into linear layers, and (2) transforms the fake quantize operations to actual quantize and dequantize operations after training to make it inference ready. Step 1 enables us to train a more accurate quantization representation.

:sparkles:QAT + LoRA finetuning

QAT in Unsloth can additionally be combined with LoRA fine-tuning to enable the benefits of both worlds: significantly reducing storage and compute requirements during training while mitigating quantization degradation! We support multiple methods via qat_scheme including fp8-int4, fp8-fp8, int8-int4, int4 . We also plan to add custom definitions for QAT in a follow up release!

{% code overflow="wrap" %}

:teapot:Exporting QAT models

After fine-tuning in Unsloth, you can call model.save_pretrained_torchao to save your trained model using TorchAOs PTQ format. You can also upload these to the HuggingFace hub! We support any config, and we plan to make text based methods as well, and to make the process more simpler for everyone! But first, we have to prepare the QAT model for the final conversion step via:

{% code overflow="wrap" %}

And now we can select which QAT style you want:

{% code overflow="wrap" %}

Examples:

Example 1 (python):

from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Qwen3-4B-Instruct-2507",
    max_seq_length = 2048,
    load_in_16bit = True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",],
    lora_alpha = 32,
    
    # We support fp8-int4, fp8-fp8, int8-int4, int4
    qat_scheme = "int4",
)

Example 2 (python):

from torchao.quantization import quantize_
from torchao.quantization.qat import QATConfig
quantize_(model, QATConfig(step = "convert"))

Qwen3-2507

URL: llms-txt#qwen3-2507

Contents:

  • ⚙️Best Practices
  • 📖 Run Qwen3-30B-A3B-2507 Tutorials
    • Instruct: Qwen3-30B-A3B-Instruct-2507

Run Qwen3-30B-A3B-2507 and 235B-A22B Thinking and Instruct versions locally on your device!

Qwen released 2507 (July 2025) updates for their Qwen3 4B, 30B and 235B models, introducing both "thinking" and "non-thinking" variants. The non-thinking 'Qwen3-30B-A3B-Instruct-2507' and 'Qwen3-235B-A22B-Instruct-2507' features a 256K context window, improved instruction following, multilingual capabilities and alignment.

The thinking models 'Qwen3-30B-A3B-Thinking-2507' and 'Qwen3-235B-A22B-Thinking-2507' excel at reasoning, with the 235B achieving SOTA results in logic, math, science, coding, and advanced academic tasks.

Unsloth also now supports fine-tuning and Reinforcement Learning (RL) of Qwen3-2507 models — 2x faster, with 70% less VRAM, and 8x longer context lengths

Run 30B-A3BRun 235B-A22BFine-tune Qwen3-2507

Unsloth Dynamic 2.0 GGUFs:

Model GGUFs to run:
Qwen3-4B-2507 InstructThinking
Qwen3-30B-A3B-2507 InstructThinking
Qwen3-235B-A22B-2507 InstructThinking

{% hint style="success" %} The settings for the Thinking and Instruct model are different.
The thinking model uses temperature = 0.6, but the instruct model uses temperature = 0.7
The thinking model uses top_p = 0.95, but the instruct model uses top_p = 0.8 {% endhint %}

To achieve optimal performance, Qwen recommends these settings:

Instruct Model Settings: Thinking Model Settings:
Temperature = 0.7 Temperature = 0.6
Min_P = 0.00 (llama.cpp's default is 0.1) Min_P = 0.00 (llama.cpp's default is 0.1)
Top_P = 0.80 Top_P = 0.95
TopK = 20 TopK = 20
presence_penalty = 0.0 to 2.0 (llama.cpp default turns it off, but to reduce repetitions, you can use this) presence_penalty = 0.0 to 2.0 (llama.cpp default turns it off, but to reduce repetitions, you can use this)

Adequate Output Length: Use an output length of 32,768 tokens for most queries, which is adequate for most queries.

Chat template for both Thinking (thinking has <think></think>) and Instruct is below:

📖 Run Qwen3-30B-A3B-2507 Tutorials

Below are guides for the Thinking and Instruct versions of the model.

Instruct: Qwen3-30B-A3B-Instruct-2507

Given that this is a non thinking model, there is no need to set thinking=False and the model does not generate <think> </think> blocks.

⚙️Best Practices

To achieve optimal performance, Qwen recommends the following settings:

  • We suggest using temperature=0.7, top_p=0.8, top_k=20, and min_p=0.0 presence_penalty between 0 and 2 if the framework supports to reduce endless repetitions.
  • temperature = 0.7
  • top_k = 20
  • min_p = 0.00 (llama.cpp's default is 0.1)
  • top_p = 0.80
  • presence_penalty = 0.0 to 2.0 (llama.cpp default turns it off, but to reduce repetitions, you can use this) Try 1.0 for example.
  • Supports up to 262,144 context natively but you can set it to 32,768 tokens for less RAM use

🦙 Ollama: Run Qwen3-30B-A3B-Instruct-2507 Tutorial

  1. Install ollama if you haven't already! You can only run models up to 32B in size.

  2. Run the model! Note you can call ollama servein another terminal if it fails! We include all our fixes and suggested parameters (temperature etc) in params in our Hugging Face upload!

Llama.cpp: Run Qwen3-30B-A3B-Instruct-2507 Tutorial

  1. Obtain the latest llama.cpp on GitHub here. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.

  2. You can directly pull from HuggingFace via:

  3. Download the model via (after installing pip install huggingface_hub hf_transfer ). You can choose UD_Q4_K_XL or other quantized versions.

Examples:

Example 1 (unknown):

<|im_start|>user
Hey there!<|im_end|>
<|im_start|>assistant
What is 1+1?<|im_end|>
<|im_start|>user
2<|im_end|>
<|im_start|>assistant

Example 2 (bash):

apt-get update
apt-get install pciutils -y
curl -fsSL https://ollama.com/install.sh | sh

Example 3 (bash):

ollama run hf.co/unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF:UD-Q4_K_XL

Example 4 (bash):

apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-cli llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp

Constants:

URL: llms-txt#constants:

WIDTH, HEIGHT =456 ,702 # BACKGROUND_COLOR_LIGHTS=['lightskyblue'] GAP_SIZE=189 #

BIRD_RADIUS=3.
PIPE_SPEED=- ( ) ? class Game(): def init(self): self.screen_size=( )

def reset_game_vars(): global current_scor e

set to zero and other initial states.


tokenizer.push_to_hub("your_name/lora_model", token = "...") # Online saving

URL: llms-txt#tokenizer.push_to_hub("your_name/lora_model",-token-=-"...")-#-online-saving

Contents:

  • Fine-tuning Voice models vs. Zero-shot voice cloning

This saves the model weights (for LoRA, it might save only adapter weights if the base is not fully fine-tuned). If you used --push_model in CLI or trainer.push_to_hub(), you could upload it to Hugging Face Hub directly.

Now you should have a fine-tuned TTS model in the directory. The next step is to test it out and if supported, you can use llama.cpp to convert it into a GGUF file.

Fine-tuning Voice models vs. Zero-shot voice cloning

People say you can clone a voice with just 30 seconds of audio using models like XTTS - no training required. Thats technically true, but it misses the point.

Zero-shot voice cloning, which is also available in models like Orpheus and CSM, is an approximation. It captures the general tone and timbre of a speakers voice, but it doesnt reproduce the full expressive range. You lose details like speaking speed, phrasing, vocal quirks, and the subtleties of prosody - things that give a voice its personality and uniqueness.

If you just want a different voice and are fine with the same delivery patterns, zero-shot is usually good enough. But the speech will still follow the models style, not the speakers.

For anything more personalized or expressive, you need training with methods like LoRA to truly capture how someone speaks.


Use the public key in docker run

URL: llms-txt#use-the-public-key-in-docker-run

-e "SSH_KEY=$(cat ~/.ssh/container_key.pub)"


Set CUDA environment variables

URL: llms-txt#set-cuda-environment-variables

ENV CUDA_HOME=/usr/local/cuda-13.0/ ENV CUDA_PATH=$CUDA_HOME ENV PATH=$CUDA_HOME/bin:$PATH ENV LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH ENV C_INCLUDE_PATH=$CUDA_HOME/include:$C_INCLUDE_PATH ENV CPLUS_INCLUDE_PATH=$CUDA_HOME/include:$CPLUS_INCLUDE_PATH


Generate SSH key pair

URL: llms-txt#generate-ssh-key-pair

ssh-keygen -t rsa -b 4096 -f ~/.ssh/container_key


LoRA Hot Swapping Guide

URL: llms-txt#lora-hot-swapping-guide

Contents:

  • 🍧 vLLM LoRA Hot Swapping / Dynamic LoRAs

🍧 vLLM LoRA Hot Swapping / Dynamic LoRAs

To enable LoRA serving for at most 4 LoRAs at 1 time (these are hot swapped / changed), first set the environment flag to allow hot swapping:

Then, serve it with LoRA support:

To load a LoRA dynamically (set the lora name as well), do:

To remove it from the pool:

Examples:

Example 1 (bash):

export VLLM_ALLOW_RUNTIME_LORA_UPDATING=True

Example 2 (bash):

export VLLM_ALLOW_RUNTIME_LORA_UPDATING=True
vllm serve unsloth/Llama-3.3-70B-Instruct \
    --quantization fp8 \
    --kv-cache-dtype fp8
    --gpu-memory-utilization 0.97 \
    --max-model-len 65536 \
    --enable-lora \
    --max-loras 4 \
    --max-lora-rank 64

Example 3 (bash):

curl -X POST http://localhost:8000/v1/load_lora_adapter \
    -H "Content-Type: application/json" \
    -d '{
        "lora_name": "LORA_NAME",
        "lora_path": "/path/to/LORA"
    }'

Example 4 (bash):

curl -X POST http://localhost:8000/v1/unload_lora_adapter \
    -H "Content-Type: application/json" \
    -d '{
        "lora_name": "LORA_NAME"
    }'

What Model Should I Use?

URL: llms-txt#what-model-should-i-use?

Contents:

  • Llama, Qwen, Mistral, Phi or?
  • Instruct or Base Model?
    • Instruct Models
    • Base Models
    • Should I Choose Instruct or Base?
  • Fine-tuning models with Unsloth
    • Experimentation is Key

Llama, Qwen, Mistral, Phi or?

When preparing for fine-tuning, one of the first decisions you'll face is selecting the right model. Here's a step-by-step guide to help you choose:

{% stepper %} {% step %}

Choose a model that aligns with your usecase

  • E.g. For image-based training, select a vision model such as Llama 3.2 Vision. For code datasets, opt for a specialized model like Qwen Coder 2.5.
  • Licensing and Requirements: Different models may have specific licensing terms and system requirements. Be sure to review these carefully to avoid compatibility issues. {% endstep %}

Assess your storage, compute capacity and dataset

  • Use our VRAM guideline to determine the VRAM requirements for the model youre considering.
  • Your dataset will reflect the type of model you will use and amount of time it will take to train {% endstep %}

Select a Model and Parameters

  • We recommend using the latest model for the best performance and capabilities. For instance, as of January 2025, the leading 70B model is Llama 3.3.
  • You can stay up to date by exploring our model catalog to find the newest and relevant options. {% endstep %}

Choose Between Base and Instruct Models

Further details below: {% endstep %} {% endstepper %}

Instruct or Base Model?

When preparing for fine-tuning, one of the first decisions you'll face is whether to use an instruct model or a base model.

Instruct models are pre-trained with built-in instructions, making them ready to use without any fine-tuning. These models, including GGUFs and others commonly available, are optimized for direct usage and respond effectively to prompts right out of the box. Instruct models work with conversational chat templates like ChatML or ShareGPT.

Base models, on the other hand, are the original pre-trained versions without instruction fine-tuning. These are specifically designed for customization through fine-tuning, allowing you to adapt them to your unique needs. Base models are compatible with instruction-style templates like Alpaca or Vicuna, but they generally do not support conversational chat templates out of the box.

Should I Choose Instruct or Base?

The decision often depends on the quantity, quality, and type of your data:

  • 1,000+ Rows of Data: If you have a large dataset with over 1,000 rows, it's generally best to fine-tune the base model.
  • 3001,000 Rows of High-Quality Data: With a medium-sized, high-quality dataset, fine-tuning the base or instruct model are both viable options.
  • Less than 300 Rows: For smaller datasets, the instruct model is typically the better choice. Fine-tuning the instruct model enables it to align with specific needs while preserving its built-in instructional capabilities. This ensures it can follow general instructions without additional input unless you intend to significantly alter its functionality.
  • For information how how big your dataset should be, see here

Fine-tuning models with Unsloth

You can change the model name to whichever model you like by matching it with model's name on Hugging Face e.g. 'unsloth/llama-3.1-8b-unsloth-bnb-4bit'.

We recommend starting with Instruct models, as they allow direct fine-tuning using conversational chat templates (ChatML, ShareGPT etc.) and require less data compared to Base models (which uses Alpaca, Vicuna etc). Learn more about the differences between instruct and base models here.

  • Model names ending in unsloth-bnb-4bit indicate they are Unsloth dynamic 4-bit quants. These models consume slightly more VRAM than standard BitsAndBytes 4-bit models but offer significantly higher accuracy.
  • If a model name ends with just bnb-4bit, without "unsloth", it refers to a standard BitsAndBytes 4-bit quantization.
  • Models with no suffix are in their original 16-bit or 8-bit formats. While they are the original models from the official model creators, we sometimes include important fixes - such as chat template or tokenizer fixes. So it's recommended to use our versions when available.

Experimentation is Key

{% hint style="info" %} We recommend experimenting with both models when possible. Fine-tune each one and evaluate the outputs to see which aligns better with your goals. {% endhint %}


Install unsloth and other dependencies

URL: llms-txt#install-unsloth-and-other-dependencies

RUN pip install unsloth unsloth_zoo bitsandbytes==0.48.0 transformers==4.56.2 trl==0.22.2


Tutorials: How To Fine-tune & Run LLMs

URL: llms-txt#tutorials:-how-to-fine-tune-&-run-llms

Learn how to run and fine-tune models for optimal performance 100% locally with Unsloth.

Cover image
DeepSeek-OCRdeepseek ocr logo.pngdeepseek-ocr-how-to-run-and-fine-tune
Qwen3-VLqwen3-vl promo.pngqwen3-vl-how-to-run-and-fine-tune
Vision Reinforcement Learningvision rl site.pngvision-reinforcement-learning-vlm-rl
DeepSeek-V3.1 Terminusdeepseek v3.1 logo.pngdeepseek-v3.1-how-to-run-locally
Run gpt-ossgpt-oss image.pnggpt-oss-how-to-run-and-fine-tune
Qwen3 Coderqwen3-coder 1920.pngqwen3-coder-how-to-run-locally
Fine-tune gpt-osssloth with comp.pngtutorial-how-to-fine-tune-gpt-oss
Magistral 1.2magistral center.pngmagistral-how-to-run-and-fine-tune
Gemma 3nGemma 3 text only.pnggemma-3n-how-to-run-and-fine-tune
Qwen3-2507qwen3-2507.pngqwen3-2507
DeepSeek-R1-0528deepseek r1-0528.pngdeepseek-r1-0528-how-to-run-locally
Kimi K2kimik2 landcsape.pngkimi-k2-how-to-run-locally
Devstral 2507devstral logo.pngdevstral-how-to-run-and-fine-tune
Fine-tune on Blackwell & RTX 50 GPUsnvidia-logo-white background.pngfine-tuning-llms-with-blackwell-rtx-50-series-and-unsloth
TTS Fine-tuningtts finetuning landscape.pngtext-to-speech-tts-fine-tuning
Qwen3qwen3.pngqwen3-how-to-run-and-fine-tune
Phi-4 reasoningphi4 reasoning2.pngphi-4-reasoning-how-to-run-and-fine-tune
Dynamic 2.0 GGUFsdynamic v2 with unsloth.pngunsloth-dynamic-2.0-ggufs
Llama 4llama 4 only.pngllama-4-how-to-run-and-fine-tune
DeepSeek-V3-0324v30324.pngdeepseek-v3-0324-how-to-run-locally
Grok 2grok 2 logo.pnggrok-2
Gemma 3gemma 3 logo.pnggemma-3-how-to-run-and-fine-tune
QwQ-32Bqwq logo only.pngqwq-32b-how-to-run-effectively
DeepSeek-R1deepseek r1.pngdeepseek-r1-how-to-run-locally
Reinforcement Learning (RL)rl guide new.pngtutorial-train-your-own-reasoning-model-with-grpo
Mistral Small 3.1mistral small 3.1.pnghttps://www.unsloth.ai/blog/mistral-small-3.1
Llama 3llama 3logo.pngtutorial-how-to-finetune-llama-3-and-use-in-ollama
Vision Fine-tuningllama_3.2_vision_large_rectangle_jPUNULJrVe5O4AvDDWO1M.webpvision-fine-tuning
Continued Pretrainingcontinued_pretraining_just_graph_HC0ALBypfCXyUUXClYPiN.webpcontinued-pretraining
Llama 3.3llama_3.3_website_9hQURhj6KfZ7EnBRaKbiu.webphttps://unsloth.ai/blog/llama3-3
Gemma 2gemma_2_long_OKsRGiTB8vrcIyXNWdgMw.avifhttps://unsloth.ai/blog/gemma2
Phi-3phi3_unsloth_ynBY7FG3NTjIbS11ozN_g.webphttps://unsloth.ai/blog/phi3

Create model instance

URL: llms-txt#create-model-instance

llm = LLM( model="unsloth/DeepSeek-OCR", enable_prefix_caching=False, mm_processor_cache_gb=0, logits_processors=[NGramPerReqLogitsProcessor] )


(3) Adding an evaluation loop / OOMs

URL: llms-txt#(3)-adding-an-evaluation-loop-/-ooms


Multi-GPU Training with Unsloth

URL: llms-txt#multi-gpu-training-with-unsloth

Learn how to fine-tune LLMs on multiple GPUs and parallelism with Unsloth.

Unsloth currently supports multi-GPU setups through libraries like Accelerate and DeepSpeed. This means you can already leverage parallelism methods such as FSDP and DDP with Unsloth.

However, we know that the process can be complex and requires manual setup. Were working hard to make multi-GPU support much simpler and more user-friendly, and well be announcing official multi-GPU support for Unsloth soon.

In the meantime, to enable multi GPU for DDP, do the following:

  1. Save your training script to train.py and set in SFTConfig or TrainingArguments the flag ddp_find_unused_parameters = False
  2. Run accelerate launch train.py or torchrun --nproc_per_node N_GPUS -m train.py where N_GPUS is the number of GPUs you have.

Pipeline / model splitting loading is also allowed, so if you do not have enough VRAM for 1 GPU to load say Llama 70B, no worries - we will split the model for you on each GPU! To enable this, use the device_map = "balanced" flag:

Also several contributors have created repos to enable or improve multi-GPU support with Unsloth, including:

  • unsloth-5090-multiple: A fork enabling Unsloth to run efficiently on multi-GPU systems, particularly for the NVIDIA RTX 5090 and similar setups.
  • opensloth: Unsloth with support for multi-GPU training including experimental features.

Stay tuned for our official announcement!
For more details, check out our ongoing Pull Request discussing multi-GPU support.

Examples:

Example 1 (python):

from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/Llama-3.3-70B-Instruct",
    load_in_4bit = True,
    device_map = "balanced",
)

(4) Customized chat templates

URL: llms-txt#(4)-customized-chat-templates


Beginner? Start here!

URL: llms-txt#beginner?-start-here!

If you're a beginner, here might be the first questions you'll ask before your first fine-tune. You can also always ask our community by joining our Reddit page.

fine-tuning-llms-guideStep-by-step on how to fine-tune!Learn the core basics of training.fine-tuning-llms-guide
what-model-should-i-useInstruct or Base Model?How big should my dataset be?what-model-should-i-use
tutorials-how-to-fine-tune-and-run-llmsHow to Run & Fine-tune DeepSeek?What settings should I set when running Gemma 3?tutorials-how-to-fine-tune-and-run-llms
faq-+-is-fine-tuning-right-for-meWhat can fine-tuning do for me?RAG vs. Fine-tuning?faq-+-is-fine-tuning-right-for-me
install-and-updateHow do I install Unsloth locally?How to update Unsloth?install-and-update
datasets-guideHow do I structure/prepare my dataset?How do I collect data?
unsloth-requirementsDoes Unsloth work on my GPU?How much VRAM will I need?unsloth-requirements
running-and-saving-modelsHow do I save my model locally?How do I run my model via Ollama or vLLM?running-and-saving-models
lora-hyperparameters-guideWhat happens when I change a parameter?What parameters should I change?

Until v0.11.1 release, you need to install vLLM from nightly build

URL: llms-txt#until-v0.11.1-release,-you-need-to-install-vllm-from-nightly-build

uv pip install -U vllm --pre --extra-index-url https://wheels.vllm.ai/nightly python from vllm import LLM, SamplingParams from vllm.model_executor.models.deepseek_ocr import NGramPerReqLogitsProcessor from PIL import Image

Examples:

Example 1 (unknown):

2. Then run the following code:

{% code overflow="wrap" %}

Finetuning from Last Checkpoint

URL: llms-txt#finetuning-from-last-checkpoint

Contents:

  • Wandb Integration

Checkpointing allows you to save your finetuning progress so you can pause it and then continue.

You must edit the Trainer first to add save_strategy and save_steps. Below saves a checkpoint every 50 steps to the folder outputs.

Then in the trainer do:

Which will start from the latest checkpoint and continue training.

Wandb Integration

Examples:

Example 1 (python):

trainer = SFTTrainer(
    ....
    args = TrainingArguments(
        ....
        output_dir = "outputs",
        save_strategy = "steps",
        save_steps = 50,
    ),
)

Example 2 (python):

trainer_stats = trainer.train(resume_from_checkpoint = True)

import os # Optional for faster downloading

URL: llms-txt#import-os-#-optional-for-faster-downloading


Unsloth Inference

URL: llms-txt#unsloth-inference

Learn how to run your finetuned model with Unsloth's faster inference.

Unsloth supports natively 2x faster inference. For our inference only notebook, click here.

All QLoRA, LoRA and non LoRA inference paths are 2x faster. This requires no change of code or any new dependencies.

from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "lora_model", # YOUR MODEL YOU USED FOR TRAINING
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
FastLanguageModel.for_inference(model) # Enable native 2x faster inference
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 64)

NotImplementedError: A UTF-8 locale is required. Got ANSI

Sometimes when you execute a cell this error can appear. To solve this, in a new cell, run the below:

Examples:

Example 1 (python):

import locale
locale.getpreferredencoding = lambda: "UTF-8"

DeepSeek-R1: How to Run Locally

URL: llms-txt#deepseek-r1:-how-to-run-locally

Contents:

  • Using llama.cpp (recommended)

A guide on how you can run our 1.58-bit Dynamic Quants for DeepSeek-R1 using llama.cpp.

{% hint style="success" %} Please see https://docs.unsloth.ai/basics/deepseek-r1-0528-how-to-run-locally for an updated DeepSeek R1-0528 (May 28th 2025 version) {% endhint %}

  1. Do not forget about <User> and <Assistant> tokens! - Or use a chat template formatter

  2. Obtain the latest llama.cpp at: github.com/ggerganov/llama.cpp. You can follow the build instructions below as well:

  3. It's best to use --min-p 0.05 to counteract very rare token predictions - I found this to work well especially for the 1.58bit model.

  4. Download the model via:

Examples:

Example 1 (bash):

apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggerganov/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=ON -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp

Memory Efficient RL

URL: llms-txt#memory-efficient-rl

Contents:

  • :sparkles:How to enable optimizations
  • :mortar_board:No more gpu_memory_utilization!
  • :interrobang:Why does RL use so much memory?
  • 🦥Unsloth Standby
  • 🧪Performance Experiments
    • H100 Experiments
    • Previous A100 40GB experiments
  • :tada:Other optimizations
  • :books:GRPO Notebooks

We're excited to introduce more efficient reinforcement learning (RL) in Unsloth with multiple algorithmic advancements:

  • 1.2 to 1.7x increased context lengths with no slowdown and no extra memory usage!
  • 10% faster RL training runs with revamped kernels and async data movements
  • 2x faster torch.compile times during model loading

Unsloth already increases RL training speed, context window and reduces VRAM usage by 5090% vs. all other setups with FA2, but now Unsloth's Standby improves this even further. Our Standby feature uniquely limits speed degradation compared to other implementations and sometimes makes training even faster!

Now, Qwen3-32B LoRA 16-bit can attain 6,144 context lengths vs 3,600 (1.7x longer) before on 1xH100 80GB GPU. Llama-3.1-8B QLoRA 4bit can attain 47,500 lengths vs 42,000 before (1.13x longer).

We made RL runs 10% faster through various kernel optimizations, and removed the LoRA communication channel between the CPU and GPU when switching from training to inference mode. Finally, we used custom torch.compile flags to make vLLM's rollout faster by 10%, and reduced compilation time by 2x.

:sparkles:How to enable optimizations

To enable Unsloth's Standby feature, set the environment variable UNSLOTH_VLLM_STANDBY before any Unsloth import. Then set gpu_memory_utilization = 0.95 and that's it!

:mortar_board:No more gpu_memory_utilization!

With Unsloth's new RL improvements, you NEVER have to worry about tuning or setting gpu_memory_utilization ever again - simply set it to 90% or 95% of GPU utilization - 100% sadly won't work since some space is needed for small tensors. Previously one had to tune it from 30% to 95% - no more now! Set it to the maximum and Unsloth will handle the rest!

:interrobang:Why does RL use so much memory?

GRPO (and many RL variants) rely heavily on generation which is primarily powered by vLLM. But this comes comes with a steep cost since it requires constant GPU memory for weights, activations, and the KV Cache.

{% columns %} {% column width="41.66666666666667%" %} Inference takes a lot of VRAM

{% endcolumn %}

{% column width="58.33333333333333%" %} Whilst Training also uses VRAM!

{% endcolumn %} {% endcolumns %}

This means RL needs to keep 2 sets of VRAM / memory on the GPU at the same time:

  1. Inference engine (has model weights, KV cache)
  2. Training engine (has model weights, activations, gradients, optimizer states)

Current RL frameworks have to split 50/50 for a 80GB GPU with 50% for inference and 50% for training. And moving weights from training mode to inference mode can take quite some time.

80GB GPUInference Engine (50%)Training Engine (50%)
Model Weights16GB16GB
KV Cache24GB
Activations, Gradients, Optimizer States24GB

Previous Unsloth versions already smartly optimizes the above, as we share vLLM's weight space directly which removes the double memory usage of the model weights. This frees up 16GB of space for example which can be used to increase context length or the speed of generation. Also, we don't need to do memory movements, which makes training faster.

80GB GPU Inference Engine (50%) Training Engine (50%)
Model Weights 16GB SHARED <<< SHARED
KV Cache 24GB + 8GB= 32GB
Activations, Gradients, Optimizer States 24GB + 8GB=32GB

But we can go further - we first note RL does inference then training then inference then training etc.

This means the memory space for inference and training can in theory be re-used, since inference and training are separate modes - this is where vLLM's sleep mode feature comes in, which has 2 options:

  1. level = 1 copies weights to the CPU and deletes KV cache
  2. level = 2 deletes weights and deletes KV cache

But reminder in Unsloth we share vLLM's memory space for the weights - this means we need a new way to delete the KV cache, and ignore deletion of the weights, and we call this Unsloth Standby.

80GB GPU Inference Engine Training Engine
Model Weights 16GB SHARED <<< SHARED

Multi-purpose

64GB space

KV Cache Activations, Gradients, Optimizer States

To enable this, simply add the below to all RL / GRPO training runs before any Unsloth import:

🧪Performance Experiments

Here you will find out how we benchmarked memory usage and context length for GRPO. Note that we do 2 generations per prompt because for GRPO to work, we need at least 2 generations for which to calculate the sample mean and variance. Without 2 generations, the standard deviation of one sample is 0. This causes the advantages which uses this: (reward - mean)/std to be undefined.


Z=\frac{r\_i - \mu}{\sqrt{\frac{1}{n}\sum(r\_i-\mu)^2}} \\
Z\_{n=1}=\frac{r\_1 - \mu}{\sqrt{\frac{1}{1}\sum(r\_1-\mu)^2}}=\frac{0}{0}=\text{undefined}

This means for GRPO specifically, a maximum context length of 6,144 for Qwen-3 32B is actually 6,144 multiplied by 2 generations ie 12,288 in length.

We provide experiments for Llama-3.1 8B on both LoRA (16bit) and QLoRA (4bit) below:

If you notice any training time differences, it isnt much. In our apples to apples comparison we noticed <1% training time slowdowns or even speedups which can be attributed to margin of error.

We also theorize speedups are possible due to reduced memory pressure, so there might be less memory cleanup on the CUDA memory allocator side.

In the above image, you see the difference between baseline and standby mode on a single T4 GPU for Qwen 3 4B. We can stretch the vllm's gpu_memory_utilisation to as high as 0.95 without worrying that it'd affect training. This means you can fit higher context length sequences and more sequences can be processed. In the first case, for example, we have enough memory to fit and process 32K length sequences provided training allows where as previously, any inputs longer than 2K would potentially not fit in and end up causing OOMs (out of memory).

ExperimentsConfigStatusGPU Memory usageComments
  1. u0.95gen2ga1s Qwen3_(4B)-GRPO.ipynb

standby True

vllm_gpu_util 0.95

num_gen 2

grad_acc_steps 2

Runs for 40 steps/ 40 minutes

14.5 GiB (set by vllm_gpu_util)


Enough to fit in 32K KVCache with chunk of 2-4K or say 16K KVCache + 16K chunks
  1. u9ge2ga2s Qwen3_(4B)-GRPO.ipynb

standby True

vllm_gpu_util 0.9

num_gen 2

grad_acc_steps 2

Runs 32 steps in 40 m13.8 GiB (set by…)Approx enough to fit in ~28K KVCache with chunk of 2-4K or say 15K KVCache + 15K chunks
  1. u9ge2ga2ns Qwen3_(4B)-GRPO.ipynb

standby False

vllm_gpu_util 0.9

num_gen 2

grad_acc_steps 2

model loads but cant train because even batch size of 1 doesnt fitOOM
  1. u8ge2ga2ns Qwen3_(4B)-GRPO.ipynb

standby False

vllm_gpu_util 0.8

num_gen 2

grad_acc_steps 2

model loads but cant train because even batch size of 1 doesnt fitOOM
  1. u7ge2ga2ns Qwen3_(4B)-GRPO.ipynb

standby False

vllm_gpu_util 0.7

num_gen 2

grad_acc_steps 2

Trains fine

28 steps take 39min

~15.1GiBany input slightly longer will result in OOM on colab
  1. u7gen2ga2s Qwen3_(4B)-GRPO.ipynb

standby True

vllm_gpu_util 0.7

num_gen 2

grad_acc_steps 2

Trains fine

29 steps take 40min

13GiB but most of the time around 10-11GBAt the same config, we save 2GiB aka 15% memory here.
Can be higher for longer sequences
Model GPU Seq Len Num Generations Grad Acc Steps
Qwen2.5-14B-Instruct NVIDIA H100 80GB PCIe 32,768 8 4

In our collapsible results below, you can see there is a 9GiB difference in the peak memory used (note that 90% of the time, the GPU memory usage is equal to the peak memory in our case). To put things into perspective, using TRL and LoRA we were able to only fine-tune an 8B parameter model with a context length of 1024 at max (32x less). Anything with higher sequence length (with similar configuration) results in the process failing with OOM.

Click for Unsloth Standby Mode vs. no Standby Benchmarks

The image below shows how standby compares against non standby training with Unsloth. It is averaged over 3 runs to make sure the metrics arent noisy. In fact, if you zoom in close enough, youd see that enabling standby makes it faster as well, probably due to less memory pressure as discussed before.

Previous A100 40GB experiments

In our previous experiments on A100 40GB GPU with Qwen-2.5-3b-instruct and 8 generations per sample, we observed that without standby, the GRPO training (model loaded in 16bit, LoRA, only weights trainable), we could only fit 6K sequence lengths. With our standby feature, we were able to fit 10K and beyond! For comparison TRL can only give you context lengths of up to 1K while holding the same batch size.

:tada:Other optimizations

We now select better compilation flags and reduce compile times by 50% or more. We also managed to dynamically patch any vLLM version to handle gc.collect better for backwards compatibility reasons, as inspired from this vLLM pull request. This reduces compilation times from 2 minutes to under 40 seconds.

We also optimized torch.compile flags and tried turning on some flags - unfortunately combo_kernels and multi_kernel could not function correctly on vLLM 0.10 and Torch 2.8/2.9 nightly and coordinate_descent_tuning made autotuning all kernels dramatically slower. It used to compile in under a minute, but enabling it took over 13 minutes and more, with minimal performance gains.

:books:GRPO Notebooks

All our GRPO notebooks have Unsloth Standby on by default and all optimizations! See https://docs.unsloth.ai/get-started/unsloth-notebooks for all our GRPO notebooks, or try the below:

Examples:

Example 1 (python):

import os
os.environ["UNSLOTH_VLLM_STANDBY"] = "1"

from unsloth import FastLanguageModel
import torch
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Qwen3-8B-Base",
    max_seq_length = 2048, # Can increase for longer reasoning traces
    load_in_4bit = False, # False for LoRA 16bit
    fast_inference = True,
    max_lora_rank = 32, # Larger rank = smarter, but slower
    gpu_memory_utilization = 0.95,
)

Example 2 (python):

import os
os.environ["UNSLOTH_VLLM_STANDBY"] = "1"

Example 3 (unknown):

Standy mode enabled:

|===========================================================================|
|                  PyTorch CUDA memory summary, device ID 0                 |
|---------------------------------------------------------------------------|
|            CUDA OOMs: 0            |        cudaMalloc retries: 0         |
|===========================================================================|
|        Metric         | Cur Usage  | Peak Usage | Tot Alloc  | Tot Freed  |
|---------------------------------------------------------------------------|
| Allocated memory      |  32249 MiB |  43042 MiB | 128336 GiB | 128305 GiB |
|       from large pool |  31415 MiB |  42165 MiB | 127204 GiB | 127173 GiB |
|       from small pool |    834 MiB |   1184 MiB |   1132 GiB |   1131 GiB |
|---------------------------------------------------------------------------|
| Active memory         |  32249 MiB |  43042 MiB | 128336 GiB | 128305 GiB |
|       from large pool |  31415 MiB |  42165 MiB | 127204 GiB | 127173 GiB |
|       from small pool |    834 MiB |   1184 MiB |   1132 GiB |   1131 GiB |
|---------------------------------------------------------------------------|
| Requested memory      |  32199 MiB |  42987 MiB | 128176 GiB | 128145 GiB |
|       from large pool |  31364 MiB |  42110 MiB | 127047 GiB | 127016 GiB |
|       from small pool |    834 MiB |   1184 MiB |   1129 GiB |   1128 GiB |
|---------------------------------------------------------------------------|
| GPU reserved memory   |  37644 MiB |  47504 MiB | 705806 MiB | 668162 MiB |
|       from large pool |  36376 MiB |  46588 MiB | 682818 MiB | 646442 MiB |
|       from small pool |   1268 MiB |   1284 MiB |  22988 MiB |  21720 MiB |
|---------------------------------------------------------------------------|
| Non-releasable memory | 713142 KiB |   4633 MiB | 103206 GiB | 103205 GiB |
|       from large pool | 525312 KiB |   4594 MiB | 101923 GiB | 101922 GiB |
|       from small pool | 187830 KiB |    250 MiB |   1283 GiB |   1283 GiB |
|---------------------------------------------------------------------------|
| Allocations           |    3460    |    4809    |   15606 K  |   15603 K  |
|       from large pool |     395    |     563    |    2812 K  |    2811 K  |
|       from small pool |    3065    |    4270    |   12794 K  |   12791 K  |
|---------------------------------------------------------------------------|
| Active allocs         |    3460    |    4809    |   15606 K  |   15603 K  |
|       from large pool |     395    |     563    |    2812 K  |    2811 K  |
|       from small pool |    3065    |    4270    |   12794 K  |   12791 K  |
|---------------------------------------------------------------------------|
| GPU reserved segments |     913    |     920    |   13260    |   12347    |
|       from large pool |     279    |     305    |    1766    |    1487    |
|       from small pool |     634    |     642    |   11494    |   10860    |
|---------------------------------------------------------------------------|
| Non-releasable allocs |     422    |     628    |    4766 K  |    4765 K  |
|       from large pool |      66    |      92    |    1290 K  |    1289 K  |
|       from small pool |     356    |     555    |    3476 K  |    3475 K  |
|---------------------------------------------------------------------------|
| Oversize allocations  |       0    |       0    |       0    |       0    |
|---------------------------------------------------------------------------|
| Oversize GPU segments |       0    |       0    |       0    |       0    |
|===========================================================================|


Without Standby:

|===========================================================================|
|                  PyTorch CUDA memory summary, device ID 0                 |
|---------------------------------------------------------------------------|
|            CUDA OOMs: 0            |        cudaMalloc retries: 0         |
|===========================================================================|
|        Metric         | Cur Usage  | Peak Usage | Tot Alloc  | Tot Freed  |
|---------------------------------------------------------------------------|
| Allocated memory      |  32711 MiB |  52084 MiB | 142756 GiB | 142724 GiB |
|       from large pool |  31877 MiB |  51207 MiB | 141499 GiB | 141467 GiB |
|       from small pool |    834 MiB |   1184 MiB |   1257 GiB |   1256 GiB |
|---------------------------------------------------------------------------|
| Active memory         |  32711 MiB |  52084 MiB | 142756 GiB | 142724 GiB |
|       from large pool |  31877 MiB |  51207 MiB | 141499 GiB | 141467 GiB |
|       from small pool |    834 MiB |   1184 MiB |   1257 GiB |   1256 GiB |
|---------------------------------------------------------------------------|
| Requested memory      |  32572 MiB |  51658 MiB | 141898 GiB | 141866 GiB |
|       from large pool |  31738 MiB |  50780 MiB | 140644 GiB | 140613 GiB |
|       from small pool |    833 MiB |   1184 MiB |   1253 GiB |   1252 GiB |
|---------------------------------------------------------------------------|
| GPU reserved memory   |  49552 MiB |  52188 MiB |  86354 MiB |  36802 MiB |
|       from large pool |  48320 MiB |  51300 MiB |  84740 MiB |  36420 MiB |
|       from small pool |   1232 MiB |   1232 MiB |   1614 MiB |    382 MiB |
|---------------------------------------------------------------------------|
| Non-releasable memory |      0 B   |      0 B   |      0 B   |      0 B   |
|       from large pool |      0 B   |      0 B   |      0 B   |      0 B   |
|       from small pool |      0 B   |      0 B   |      0 B   |      0 B   |
|---------------------------------------------------------------------------|
| Allocations           |    3460    |    4809    |   17440 K  |   17437 K  |
|       from large pool |     395    |     564    |    2742 K  |    2741 K  |
|       from small pool |    3065    |    4270    |   14698 K  |   14695 K  |
|---------------------------------------------------------------------------|
| Active allocs         |    3460    |    4809    |   17440 K  |   17437 K  |
|       from large pool |     395    |     564    |    2742 K  |    2741 K  |
|       from small pool |    3065    |    4270    |   14698 K  |   14695 K  |
|---------------------------------------------------------------------------|
| GPU reserved segments |       0    |       0    |       0    |       0    |
|       from large pool |       0    |       0    |       0    |       0    |
|       from small pool |       0    |       0    |       0    |       0    |
|---------------------------------------------------------------------------|
| Non-releasable allocs |       0    |       0    |       0    |       0    |
|       from large pool |       0    |       0    |       0    |       0    |
|       from small pool |       0    |       0    |       0    |       0    |
|---------------------------------------------------------------------------|
| Oversize allocations  |       0    |       0    |       0    |       0    |
|---------------------------------------------------------------------------|
| Oversize GPU segments |       0    |       0    |       0    |       0    |
|===========================================================================|

or:

URL: llms-txt#or:

Contents:

  • Run & Evaluate your model
  • Save your model

mask_truncated_completions=True, python

Examples:

Example 1 (unknown):

{% endhint %}

You should see the reward increase overtime. We would recommend you train for at least 300 steps which may take 30 mins however, for optimal results, you should train for longer.

{% hint style="warning" %}
If you're having issues with your GRPO model not learning, we'd highly recommend to use our [Advanced GRPO notebooks](https://docs.unsloth.ai/unsloth-notebooks#grpo-reasoning-notebooks) as it has a much better reward function and you should see results much faster and frequently.
{% endhint %}

You will also see sample answers which allows you to see how the model is learning. Some may have steps, XML tags, attempts etc. and the idea is as trains it's going to get better and better because it's going to get scored higher and higher until we get the outputs we desire with long reasoning chains of answers.

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FyRmUGe8laUKIl0RKwlE6%2Fimage.png?alt=media&#x26;token=3ff931cc-0d2b-4a9c-bbe1-b6289b22d157" alt="" width="563"><figcaption></figcaption></figure>
{% endstep %}

{% step %}

### Run & Evaluate your model

Run your model by clicking the play button. In the first example, there is usually no reasoning in the answer and in order to see the reasoning, we need to first save the LoRA weights we just trained with GRPO first using:

<pre><code><strong>model.save_lora("grpo_saved_lora")
</strong></code></pre>

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FkLHdlRVKN58tM7SGKp3O%2Fimage.png?alt=media&#x26;token=b43a8164-7eae-4ec4-bf59-976078f9be31" alt=""><figcaption><p>The first inference example run has no reasoning. You must load the LoRA and test it to reveal the reasoning.</p></figcaption></figure>

Then we load the LoRA and test it. Our reasoning model is much better - it's not always correct, since we only trained it for an hour or so - it'll be better if we extend the sequence length and train for longer!

You can then save your model to GGUF, Ollama etc. by following our [guide here](https://docs.unsloth.ai/fine-tuning-llms-guide#id-7.-running--saving-the-model).

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FYdz5ch20Ig8JlumBesle%2Fimage.png?alt=media&#x26;token=8aea2867-b8a8-470a-aa4b-a7b9cdd64c3c" alt=""><figcaption></figcaption></figure>

If you are still not getting any reasoning, you may have either trained for too less steps or your reward function/verifier was not optimal.
{% endstep %}

{% step %}

### Save your model

We have multiple options for saving your fine-tuned model, but well focus on the easiest and most popular approaches which you can read more about [here](https://docs.unsloth.ai/basics/running-and-saving-models)

**Saving in 16-bit Precision**

You can save the model with 16-bit precision using the following command:

AMD

URL: llms-txt#amd

Contents:

  • :1234:Reinforcement Learning on AMD GPUs
  • :tools:Troubleshooting

Fine-tune with Unsloth on AMD GPUs.

Unsloth supports Radeon RX, MI300X's (192GB) GPUs and more.

{% stepper %} {% step %} Make a new isolated environment (Optional)

To not break any system packages, you can make an isolated pip environment. Reminder to check what Python version you have! It might be pip3, pip3.13, python3, python.3.13 etc.

{% code overflow="wrap" %}

{% endcode %} {% endstep %}

{% step %} Install PyTorch

Install the latest PyTorch, TorchAO, Xformers from https://pytorch.org/

{% code overflow="wrap" %}

{% endcode %} {% endstep %}

{% step %} Install Unsloth

Install Unsloth's dedicated AMD branch

{% code overflow="wrap" %}

{% endcode %} {% endstep %} {% endstepper %}

And that's it! Try some examples in our Unsloth Notebooks page!

:1234:Reinforcement Learning on AMD GPUs

You can use our 📒gpt-oss RL auto win 2048 example on a MI300X (192GB) GPU. The goal is to play the 2048 game automatically and win it with RL. The LLM (gpt-oss 20b) auto devises a strategy to win the 2048 game, and we calculate a high reward for winning strategies, and low rewards for failing strategies.

{% columns %} {% column %}

{% endcolumn %}

{% column %} The reward over time is increasing after around 300 steps or so!

The goal for RL is to maximize the average reward to win the 2048 game.

{% endcolumn %} {% endcolumns %}

We used an AMD MI300X machine (192GB) to run the 2048 RL example with Unsloth, and it worked well!

You can also use our 📒automatic kernel gen RL notebook also with gpt-oss to auto create matrix multiplication kernels in Python. The notebook also devices multiple methods to counteract reward hacking.

{% columns %} {% column width="50%" %} The RL process learns for example how to apply the Strassen algorithm for faster matrix multiplication inside of Python.

The prompt we used to auto create these kernels was:

{% code overflow="wrap" %}

python def matmul(A, B): return ... `

{% endcode %} {% endcolumn %}

{% column width="50%" %}

{% endcolumn %} {% endcolumns %}

:tools:Troubleshooting

As of October 2025, bitsandbytes in AMD is under development - you might get HSA_STATUS_ERROR_EXCEPTION: An HSAIL operation resulted in a hardware exception errors. We disabled bitsandbytes internally in Unsloth automatically until a fix is provided for versions 0.48.2.dev0 and above. This means load_in_4bit = True will instead use 16bit LoRA. Full finetuning also works via full_finetuning = True

To force 4bit, you need to specify the actual model name like unsloth/gemma-3-4b-it-unsloth-bnb-4bit and set use_exact_model_name = True as an extra argument within FastLanguageModel.from_pretrained etc.

AMD GPUs also need the bitsandbytes blocksize to be 128 and not 64 - this also means our pre-quantized models (for example unsloth/Llama-3.2-1B-Instruct-unsloth-bnb-4bit) from HuggingFace for now will not work - we auto switch to downloading the full BF16 weights, then quantize on the fly if we detect an AMD GPU.

Examples:

Example 1 (bash):

apt install python3.10-venv python3.11-venv python3.12-venv python3.13-venv -y

python -m venv unsloth_env
source unsloth_env/bin/activate

Example 2 (bash):

pip install --upgrade torch==2.8.0 pytorch-triton-rocm torchvision torchaudio torchao==0.13.0 xformers --index-url https://download.pytorch.org/whl/rocm6.4

Example 3 (bash):

pip install --no-deps unsloth unsloth-zoo
pip install --no-deps git+https://github.com/unslothai/unsloth-zoo.git
pip install "unsloth[amd] @ git+https://github.com/unslothai/unsloth"

Example 4 (unknown):

Create a new fast matrix multiplication function using only native Python code.
You are given a list of list of numbers.
Output your new function in backticks using the format below:

Game constants

URL: llms-txt#game-constants

GRAVITY = 0.5 PIPE_SPEED = 5 BIRD_SIZE = 30 LAND_HEIGHT = 50 PIPE_WIDTH = 50 PIPE_GAP = 150

class Bird: def init(self): self.x = WIDTH // 2 self.y = HEIGHT // 2 self.velocity = 0 self.shape = random.choice(['square', 'circle', 'triangle']) self.color = (random.randint(0, 100), random.randint(0, 100), random.randint(0, 100)) self.rect = pygame.Rect(self.x - BIRD_SIZE//2, self.y - BIRD_SIZE//2, BIRD_SIZE, BIRD_SIZE)

def update(self):
    self.velocity += GRAVITY
    self.y += self.velocity
    self.rect.y = self.y - BIRD_SIZE//2
    self.rect.x = self.x - BIRD_SIZE//2  # Keep x centered

def draw(self):
    if self.shape == 'square':
        pygame.draw.rect(screen, self.color, self.rect)
    elif self.shape == 'circle':
        pygame.draw.circle(screen, self.color, (self.rect.centerx, self.rect.centery), BIRD_SIZE//2)
    elif self.shape == 'triangle':
        points = [
            (self.rect.centerx, self.rect.top),
            (self.rect.left, self.rect.bottom),
            (self.rect.right, self.rect.bottom)
        ]
        pygame.draw.polygon(screen, self.color, points)

def spawn_pipe(): pipe_x = WIDTH top_height = random.randint(50, HEIGHT - PIPE_GAP - LAND_HEIGHT) rect_top = pygame.Rect(pipe_x, 0, PIPE_WIDTH, top_height) bottom_y = top_height + PIPE_GAP bottom_height = (HEIGHT - LAND_HEIGHT) - bottom_y rect_bottom = pygame.Rect(pipe_x, bottom_y, PIPE_WIDTH, bottom_height) color = random.choice(pipe_colors) return { 'rect_top': rect_top, 'rect_bottom': rect_bottom, 'color': color, 'scored': False }

def main(): best_score = 0 current_score = 0 game_over = False pipes = [] first_time = True # Track first game play

Initial setup

background_color = (173, 216, 230)  # Light blue initially
land_color = random.choice(land_colors)
bird = Bird()

while True: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() sys.exit() if event.type == pygame.KEYDOWN: if event.key == pygame.K_ESCAPE or event.key == pygame.K_q: pygame.quit() sys.exit() if event.key == pygame.K_SPACE: if game_over: # Reset the game bird = Bird() pipes.clear() current_score = 0 if first_time: # First restart after initial game over background_color = (random.randint(200, 255), random.randint(200, 255), random.randint(200, 255)) first_time = False else: background_color = (random.randint(200, 255), random.randint(200, 255), random.randint(200, 255)) land_color = random.choice(land_colors) game_over = False else: # Jump the bird bird.velocity = -15 # Initial upward velocity

if not game_over: # Update bird and pipes bird.update()

Move pipes left

        remove_pipes = []
        for pipe in pipes:
            pipe['rect_top'].x -= PIPE_SPEED
            pipe['rect_bottom'].x -= PIPE_SPEED
            # Check if bird passed the pipe
            if not pipe['scored'] and bird.rect.x > pipe['rect_top'].right:
                current_score += 1
                pipe['scored'] = True
            # Check if pipe is offscreen
            if pipe['rect_top'].right < 0:
                remove_pipes.append(pipe)
        # Remove offscreen pipes
        for p in remove_pipes:
            pipes.remove(p)

Spawn new pipe if needed

        if not pipes or pipes[-1]['rect_top'].x < WIDTH - 200:
            pipes.append(spawn_pipe())

Check collisions

        land_rect = pygame.Rect(0, HEIGHT - LAND_HEIGHT, WIDTH, LAND_HEIGHT)
        bird_rect = bird.rect
        # Check pipes
        for pipe in pipes:
            if bird_rect.colliderect(pipe['rect_top']) or bird_rect.colliderect(pipe['rect_bottom']):
                game_over = True
                break
        # Check land and top
        if bird_rect.bottom >= land_rect.top or bird_rect.top <= 0:
            game_over = True

if game_over: if current_score > best_score: best_score = current_score

Drawing

    screen.fill(background_color)
    # Draw pipes
    for pipe in pipes:
        pygame.draw.rect(screen, pipe['color'], pipe['rect_top'])
        pygame.draw.rect(screen, pipe['color'], pipe['rect_bottom'])
    # Draw land
    pygame.draw.rect(screen, land_color, (0, HEIGHT - LAND_HEIGHT, WIDTH, LAND_HEIGHT))
    # Draw bird
    bird.draw()
    # Draw score
    font = pygame.font.SysFont(None, 36)
    score_text = font.render(f'Score: {current_score}', True, (0, 0, 0))
    screen.blit(score_text, (WIDTH - 150, 10))
    # Game over screen
    if game_over:
        over_text = font.render('Game Over!', True, (255, 0, 0))
        best_text = font.render(f'Best: {best_score}', True, (255, 0, 0))
        restart_text = font.render('Press SPACE to restart', True, (255, 0, 0))
        screen.blit(over_text, (WIDTH//2 - 70, HEIGHT//2 - 30))
        screen.blit(best_text, (WIDTH//2 - 50, HEIGHT//2 + 10))
        screen.blit(restart_text, (WIDTH//2 - 100, HEIGHT//2 + 50))
    
    pygame.display.flip()
    clock.tick(60)

if name == "main": main() bash ./llama.cpp/llama-cli
--model unsloth-QwQ-32B-GGUF/QwQ-32B-Q4_K_M.gguf
--threads 32
--ctx-size 16384
--n-gpu-layers 99
--seed 3407
--prio 2
--temp 0.6
--repeat-penalty 1.1
--dry-multiplier 0.5
--min-p 0.01
--top-k 40
--top-p 0.95
-no-cnv
--prompt "<|im_start|>user\nCreate a Flappy Bird game in Python. You must include these things:\n1. You must use pygame.\n2. The background color should be randomly chosen and is a light shade. Start with a light blue color.\n3. Pressing SPACE multiple times will accelerate the bird.\n4. The bird's shape should be randomly chosen as a square, circle or triangle. The color should be randomly chosen as a dark color.\n5. Place on the bottom some land colored as dark brown or yellow chosen randomly.\n6. Make a score shown on the top right side. Increment if you pass pipes and don't hit them.\n7. Make randomly spaced pipes with enough space. Color them randomly as dark green or light brown or a dark gray shade.\n8. When you lose, show the best score. Make the text inside the screen. Pressing q or Esc will quit the game. Restarting is pressing SPACE again.\nThe final game should be inside a markdown section in Python. Check your code for errors and fix them before the final markdown section.<|im_end|>\n<|im_start|>assistant\n\n"
2>&1 | tee Q4_K_M_no_samplers.txt python import pygame import random

Examples:

Example 1 (unknown):

{% endcode %}

</details>

6. When running it, we get a runnable game!

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2F7qQoA6yrMWUVrwIhLbGu%2Fimage.png?alt=media&#x26;token=6d99c8ce-567a-4144-bd7e-fa57e96b5284" alt=""><figcaption></figcaption></figure>

7. Now try the same without our fixes! So remove `--samplers "top_k;top_p;min_p;temperature;dry;typ_p;xtc"`  This will save the output to `Q4_K_M_no_samplers.txt`

Example 2 (unknown):

You will get some looping, but **problematically incorrect Python syntax** and many other issues. For example the below looks correct, but is wrong! Ie line 39 `pipes.clear() ### <<< NameError: name 'pipes' is not defined. Did you forget to import 'pipes'?`

{% code overflow="wrap" lineNumbers="true" %}

Launch the shell

URL: llms-txt#launch-the-shell

Contents:

  • Unified Memory Usage
  • Video Tutorials

CMD ["/bin/bash"] bash docker run -it
--gpus=all
--net=host
--ipc=host
--ulimit memlock=-1
--ulimit stack=67108864
-v $(pwd):$(pwd)
-v $HOME/.cache/huggingface:/root/.cache/huggingface
-w $(pwd)
unsloth-dgx-spark bash NOTEBOOK_URL="https://raw.githubusercontent.com/unslothai/notebooks/refs/heads/main/nb/gpt_oss_(20B)_Reinforcement_Learning_2048_Game_DGX_Spark.ipynb" wget -O "gpt_oss_20B_RL_2048_Game.ipynb" "$NOTEBOOK_URL"

jupyter notebook --ip=0.0.0.0 --port=8888 --no-browser --allow-root


<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2F0rz5KRdEx6IPBOlEy6Vj%2Fdgx6.png?alt=media&#x26;token=9df06512-143e-447e-99fe-83466d2a3703" alt="" width="563"><figcaption></figcaption></figure>

Don't forget Unsloth also allows you to [save and run](https://docs.unsloth.ai/basics/running-and-saving-models) your models after fine-tuning so you can locally deploy them directly on your DGX Spark after.
{% endstep %}
{% endstepper %}

Many thanks to [Lakshmi Ramesh](https://www.linkedin.com/in/rlakshmi24/) and [Barath Anandan](https://www.linkedin.com/in/barathsa/) from NVIDIA for helping Unsloths DGX Spark launch and building the Docker image.

### Unified Memory Usage

gpt-oss-120b QLoRA 4-bit fine-tuning will use around **68GB** of unified memory. How your unified memory usage should look **before** (left) and **after** (right) training:

<div><figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2F4jXOLrycoFzr4uVnCap0%2Fdgx7.png?alt=media&#x26;token=d6e2c2ac-fae0-4ee6-9cd3-972af33d43a5" alt=""><figcaption></figcaption></figure> <figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FKOSKQeZ7ZtfRHzFaSGFI%2Fdgx8.png?alt=media&#x26;token=0be758e7-bae5-4e28-89a7-cc2ba75c346b" alt=""><figcaption></figcaption></figure></div>

And that's it! Have fun training and running LLMs completely locally on your NVIDIA DGX Spark!

Thanks to Tim from [AnythingLLM](https://github.com/Mintplex-Labs/anything-llm) for providing a great fine-tuning tutorial with Unsloth on DGX Spark:

{% embed url="<https://www.youtube.com/watch?t=962s&v=zs-J9sKxvoM>" %}

**Examples:**

Example 1 (unknown):
```unknown
</details>
{% endstep %}

{% step %}

#### Launch container <a href="#docs-internal-guid-98e78e94-7fff-9d37-504b-0b8ffb3169b3" id="docs-internal-guid-98e78e94-7fff-9d37-504b-0b8ffb3169b3"></a>

Launch the training container with GPU access and volume mounts:

Example 2 (unknown):

<div><figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FxUJYSy5eJggn26wGJzAT%2Fdgx3.png?alt=media&#x26;token=0445fa4f-67dd-41a4-a5f4-19df5a05d86d" alt=""><figcaption></figcaption></figure> <figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2Fckhbs6k6vk0ov856ym8h%2Fdgx5.png?alt=media&#x26;token=37f9f6d9-1712-4a9b-a8d4-485944105b38" alt=""><figcaption></figcaption></figure></div>
{% endstep %}

{% step %}

#### Start Jupyter and Run Notebooks <a href="#docs-internal-guid-98e78e94-7fff-9d37-504b-0b8ffb3169b3" id="docs-internal-guid-98e78e94-7fff-9d37-504b-0b8ffb3169b3"></a>

Inside the container, start Jupyter and run the required notebook. You can use the Reinforcement Learning gpt-oss 20b to win 2048 [notebook here](https://github.com/unslothai/notebooks/blob/main/nb/gpt_oss_\(20B\)_Reinforcement_Learning_2048_Game_DGX_Spark.ipynb). In fact all [Unsloth notebooks](https://docs.unsloth.ai/get-started/unsloth-notebooks) work in DGX Spark including the **120b** notebook! Just remove the installation cells.

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FjgfO6NvzOLLtw5xVQEHs%2FNotebooks%20on%20dgx.png?alt=media&#x26;token=88a067a5-c16c-4c73-b073-4b4917551069" alt="" width="563"><figcaption></figcaption></figure>

The below commands can be used to run the RL notebook as well. After Jupyter Notebook is launched, open up the “`gpt_oss_20B_RL_2048_Game.ipynb`”

4bit pre quantized models we support for 4x faster downloading + no OOMs.

URL: llms-txt#4bit-pre-quantized-models-we-support-for-4x-faster-downloading-+-no-ooms.

Contents:

  • Fine-tuning Hyperparameters (LoRA)
  • Data Preparation
  • Train the model
  • Inference: Run Your Trained Model
  • Save and Export Your Model
  • Saving to Llama.cpp
  • 🏁 And that's it!
  • FAQ (Frequently Asked Questions)

fourbit_models = [ "unsloth/gpt-oss-20b-unsloth-bnb-4bit", # 20B model using bitsandbytes 4bit quantization "unsloth/gpt-oss-120b-unsloth-bnb-4bit", "unsloth/gpt-oss-20b", # 20B model using MXFP4 format "unsloth/gpt-oss-120b", ] # More models at https://huggingface.co/unsloth

model, tokenizer = FastLanguageModel.from_pretrained( model_name = "unsloth/gpt-oss-20b", dtype = dtype, # None for auto detection max_seq_length = max_seq_length, # Choose any for long context! load_in_4bit = True, # 4 bit quantization to reduce memory full_finetuning = False, # [NEW!] We have full finetuning now! # token = "hf_...", # use one if using gated models )

You should see output similar to the example below. Note: We explicitly change the dtype to float32 to ensure correct training behavior. {% endstep %}

Fine-tuning Hyperparameters (LoRA)

Now it's time to adjust your training hyperparameters. For a deeper dive into how, when, and what to tune, check out our detailed hyperparameters guide.

{% hint style="info" %} To avoid overfitting, monitor your training loss and avoid setting these values too high. {% endhint %}

This step adds LoRA adapters for parameter-efficient fine-tuning. Only about 1% of the models parameters are trained, which makes the process significantly more efficient.

For this example, we will use the HuggingFaceH4/Multilingual-Thinking. This dataset contains chain-of-thought reasoning examples derived from user questions translated from English into four additional languages.

This is the same dataset referenced in OpenAI's fine-tuning cookbook. The goal of using a multilingual dataset is to help the model learn and generalize reasoning patterns across multiple languages.

gpt-oss introduces a reasoning effort system that controls how much reasoning the model performs. By default, the reasoning effort is set to low, but you can change it by setting the reasoning_effort parameter to low, medium or high.

To format the dataset, we apply a customized version of the gpt-oss prompt:

Let's inspect the dataset by printing the first example:

One unique feature of gpt-oss is its use of the OpenAI Harmony format, which supports structured conversations, reasoning output, and tool calling. This format includes tags such as <|start|> , <|message|> , and <|return|> .

{% hint style="info" %} 🦥 Unsloth fixes the chat template to ensure it is correct. See this tweet for technical details on our template fix. {% endhint %}

Feel free to adapt the prompt and structure to suit your own dataset or use-case. For more guidance, refer to our dataset guide. {% endstep %}

We've pre-selected training hyperparameters for optimal results. However, you can modify them based on your specific use case. Refer to our hyperparameters guide.

In this example, we train for 60 steps to speed up the process. For a full training run, set num_train_epochs=1 and disable the step limiting by setting max_steps=None.

During training, monitor the loss to ensure that it is decreasing over time. This confirms that the training process is functioning correctly.

{% endstep %}

Inference: Run Your Trained Model

Now it's time to run inference with your fine-tuned model. You can modify the instruction and input, but leave the output blank.

In this example, we test the model's ability to reason in French by adding a specific instruction to the system prompt, following the same structure used in our dataset.

This should produce an output similar to:

{% endstep %}

Save and Export Your Model

To save your fine-tuned model, it can be exported in the Safetensors format with our new on-demand dequantization of MXFP4 base models (like gpt-oss) during the LoRA merge process. This makes it possible to export your fine-tuned model in bf16 format.

{% hint style="success" %} New: Saving or merging QLoRA fine-tuned models to GGUF is now supported for use in other frameworks (e.g. Hugging Face, llama.cpp with GGUF). {% endhint %}

After fine-tuning your gpt-oss model, you can merge it into 16-bit format with:

If you prefer to merge the model and push to the hugging-face hub directly:

Saving to Llama.cpp

  1. Obtain the latest llama.cpp on GitHub here. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.

  2. Convert and quantize the merged model:

  3. Run inference on the quantized model:

{% endstep %} {% endstepper %}

🏁 And that's it!

You've fine-tuned gpt-oss with Unsloth. We're currently working on RL and GRPO implementations, as well as improved model saving and running, so stay tuned.

As always, feel free to drop by our Discord or Reddit if you need any help.

FAQ (Frequently Asked Questions)

1. Can I export my model to use in Hugging Face, llama.cpp GGUF or vLLM later?

Yes you can now save/export your gpt-oss fine-tuned model using Unsloth's new update!

2. Can I do fp4 or MXFP4 training with gpt-oss?

No, currently no framework supports fp4 or MXFP4 training. Unsloth however is the only framework to support QLoRA 4-bit fine-tuning for the model, enabling more than 4x less VRAM use.

3. Can I export my model to MXFP4 format after training?

No, currently no library or framework supports this.

4. Can I do Reinforcement Learning (RL) or GRPO with gpt-oss?

Yes! Unsloth now supports RL for gpt-oss with GRPO/GSPO. We made it work on a free Kaggle notebook and achieved the fastest inference for RL. Read more here

Acknowledgements: A huge thank you to Eyera for contributing to this guide!

Examples:

Example 1 (python):

model = FastLanguageModel.get_peft_model(
    model,
    r = 8, # Choose any number > 0 ! Suggested 8, 16, 32, 64, 128
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",],
    lora_alpha = 16,
    lora_dropout = 0, # Supports any, but = 0 is optimized
    bias = "none",    # Supports any, but = "none" is optimized
    # [NEW] "unsloth" uses 30% less VRAM, fits 2x larger batch sizes!
    use_gradient_checkpointing = "unsloth", # True or "unsloth" for very long context
    random_state = 3407,
    use_rslora = False,  # We support rank stabilized LoRA
    loftq_config = None, # And LoftQ
)

Example 2 (python):

def formatting_prompts_func(examples):
    convos = examples["messages"]
    texts = [tokenizer.apply_chat_template(convo, tokenize = False, add_generation_prompt = False) for convo in convos]
    return { "text" : texts, }
pass

from datasets import load_dataset

dataset = load_dataset("HuggingFaceH4/Multilingual-Thinking", split="train")
dataset

Example 3 (python):

tokenizer.apply_chat_template(
    text, 
    tokenize = False, 
    add_generation_prompt = False,
    reasoning_effort = "medium",
)

Example 4 (python):

from unsloth.chat_templates import standardize_sharegpt
dataset = standardize_sharegpt(dataset)
dataset = dataset.map(formatting_prompts_func, batched = True,)

Continued Pretraining

URL: llms-txt#continued-pretraining

Contents:

  • What is Continued Pretraining?
  • Advanced Features:
    • Loading LoRA adapters for continued finetuning
    • Continued Pretraining & Finetuning the lm_head and embed_tokens matrices

AKA as Continued Finetuning. Unsloth allows you to continually pretrain so a model can learn a new language.

You can read more about continued pretraining and our release in our blog post.

What is Continued Pretraining?

Continued or continual pretraining (CPT) is necessary to “steer” the language model to understand new domains of knowledge, or out of distribution domains. Base models like Llama-3 8b or Mistral 7b are first pretrained on gigantic datasets of trillions of tokens (Llama-3 for e.g. is 15 trillion).

But sometimes these models have not been well trained on other languages, or text specific domains, like law, medicine or other areas. So continued pretraining (CPT) is necessary to make the language model learn new tokens or datasets.

Advanced Features:

Loading LoRA adapters for continued finetuning

If you saved a LoRA adapter through Unsloth, you can also continue training using your LoRA weights. The optimizer state will be reset as well. To load even optimizer states to continue finetuning, see the next section.

Continued Pretraining & Finetuning the lm_head and embed_tokens matrices

Add lm_head and embed_tokens. For Colab, sometimes you will go out of memory for Llama-3 8b. If so, just add lm_head.

Then use 2 different learning rates - a 2-10x smaller one for the lm_head or embed_tokens like so:

Examples:

Example 1 (python):

from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "LORA_MODEL_NAME",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
trainer = Trainer(...)
trainer.train()

Example 2 (python):

model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",
                      "lm_head", "embed_tokens",],
    lora_alpha = 16,
)

Example 3 (python):

from unsloth import UnslothTrainer, UnslothTrainingArguments

trainer = UnslothTrainer(
    ....
    args = UnslothTrainingArguments(
        ....
        learning_rate = 5e-5,
        embedding_learning_rate = 5e-6, # 2-10x smaller than learning_rate
    ),
)

Colors for the balls

URL: llms-txt#colors-for-the-balls

Contents:

  • 🕵️ Extra Findings & Tips

BALL_COLORS = [ '#f8b862', '#f6ad49', '#f39800', '#f08300', '#ec6d51', '#ee7948', '#ed6d3d', '#ec6800', '#ec6800', '#ee7800', '#eb6238', '#ea5506', '#ea5506', '#eb6101', '#e49e61', '#e45e32', '#e17b34', '#dd7a56', '#db8449', '#d66a35' ]

@dataclass class Ball: x: float y: float vx: float vy: float radius: float color: str number: int spin: float = 0.0

def move(self): self.x += self.vx self.y += self.vy self.vy += GRAVITY self.vx *= FRICTION self.vy *= FRICTION self.spin *= SPIN_FRICTION

def collide_with_ball(self, other: 'Ball'): dx = other.x - self.x dy = other.y - self.y distance = math.hypot(dx, dy)

    if distance < self.radius + other.radius:
        # Calculate collision normal
        nx = dx / distance
        ny = dy / distance
        
        # Calculate relative velocity
        dvx = other.vx - self.vx
        dvy = other.vy - self.vy
        
        # Calculate impulse
        impulse = 2 * (dvx * nx + dvy * ny) / (1/self.radius + 1/other.radius)
        
        # Apply impulse
        self.vx += impulse * nx / self.radius
        self.vy += impulse * ny / self.radius
        other.vx -= impulse * nx / other.radius
        other.vy -= impulse * ny / other.radius
        
        # Separate balls to prevent sticking
        overlap = (self.radius + other.radius - distance) / 2
        self.x -= overlap * nx
        self.y -= overlap * ny
        other.x += overlap * nx
        other.y += overlap * ny
        
        # Transfer some spin
        transfer = impulse * 0.01
        self.spin -= transfer
        other.spin += transfer

class HeptagonBounceSimulator: def init(self, root): self.root = root self.canvas = tk.Canvas(root, width=WIDTH, height=HEIGHT, bg='white') self.canvas.pack()

    self.balls = self.create_balls()
    self.heptagon_angle = 0
    self.last_time = 0
    self.running = True
    
    self.root.bind('<space>', self.toggle_pause)
    self.root.bind('<Escape>', lambda e: root.destroy())
    
    self.last_time = self.root.after(0, self.update)

def create_balls(self) -> List[Ball]:
    balls = []
    for i in range(20):
        # Start all balls at center with small random velocity
        angle = np.random.uniform(0, 2 * math.pi)
        speed = np.random.uniform(0.5, 2)
        vx = math.cos(angle) * speed
        vy = math.sin(angle) * speed
        
        balls.append(Ball(
            x=CENTER_X,
            y=CENTER_Y,
            vx=vx,
            vy=vy,
            radius=BALL_RADIUS,
            color=BALL_COLORS[i],
            number=i+1,
            spin=np.random.uniform(-2, 2)
        ))
    return balls

def toggle_pause(self, event):
    self.running = not self.running
    if self.running:
        self.last_time = self.root.after(0, self.update)

def get_heptagon_vertices(self) -> List[Tuple[float, float]]:
    vertices = []
    for i in range(7):
        angle = math.radians(self.heptagon_angle + i * 360 / 7)
        x = CENTER_X + HEPTAGON_RADIUS * math.cos(angle)
        y = CENTER_Y + HEPTAGON_RADIUS * math.sin(angle)
        vertices.append((x, y))
    return vertices

def check_ball_heptagon_collision(self, ball: Ball):
    vertices = self.get_heptagon_vertices()
    closest_dist = float('inf')
    closest_normal = (0, 0)
    closest_edge = None
    
    # Check collision with each edge of the heptagon
    for i in range(len(vertices)):
        p1 = vertices[i]
        p2 = vertices[(i + 1) % len(vertices)]
        
        # Vector from p1 to p2
        edge_x = p2[0] - p1[0]
        edge_y = p2[1] - p1[1]
        edge_length = math.hypot(edge_x, edge_y)
        
        # Normalize edge vector
        edge_x /= edge_length
        edge_y /= edge_length
        
        # Normal vector (perpendicular to edge, pointing inward)
        nx = -edge_y
        ny = edge_x
        
        # Vector from p1 to ball
        ball_to_p1_x = ball.x - p1[0]
        ball_to_p1_y = ball.y - p1[1]
        
        # Project ball onto edge normal
        projection = ball_to_p1_x * nx + ball_to_p1_y * ny
        
        # If projection is negative, ball is outside the heptagon
        if projection < ball.radius:
            # Find closest point on edge to ball
            edge_proj = ball_to_p1_x * edge_x + ball_to_p1_y * edge_y
            edge_proj = max(0, min(edge_length, edge_proj))
            closest_x = p1[0] + edge_proj * edge_x
            closest_y = p1[1] + edge_proj * edge_y
            
            # Distance from ball to closest point on edge
            dist = math.hypot(ball.x - closest_x, ball.y - closest_y)
            
            if dist < closest_dist:
                closest_dist = dist
                closest_normal = (nx, ny)
                closest_edge = (p1, p2)
    
    if closest_dist < ball.radius:
        # Calculate bounce response
        dot_product = ball.vx * closest_normal[0] + ball.vy * closest_normal[1]
        
        # Apply bounce with elasticity
        ball.vx -= (1 + ELASTICITY) * dot_product * closest_normal[0]
        ball.vy -= (1 + ELASTICITY) * dot_product * closest_normal[1]
        
        # Add some spin based on impact
        edge_vec = (closest_edge[1][0] - closest_edge[0][0], 
                    closest_edge[1][1] - closest_edge[0][1])
        edge_length = math.hypot(edge_vec[0], edge_vec[1])
        if edge_length > 0:
            edge_vec = (edge_vec[0]/edge_length, edge_vec[1]/edge_length)
            # Cross product of velocity and edge direction
            spin_effect = (ball.vx * edge_vec[1] - ball.vy * edge_vec[0]) * 0.1
            ball.spin += spin_effect
        
        # Move ball outside the heptagon to prevent sticking
        penetration = ball.radius - closest_dist
        ball.x += penetration * closest_normal[0]
        ball.y += penetration * closest_normal[1]

def update(self):
    if not self.running:
        return
    
    # Clear canvas
    self.canvas.delete('all')
    
    # Update heptagon rotation
    self.heptagon_angle += ROTATION_SPEED / 60  # Assuming ~60 FPS
    
    # Draw heptagon
    vertices = self.get_heptagon_vertices()
    self.canvas.create_polygon(vertices, outline='black', fill='', width=2)
    
    # Update and draw balls
    for i, ball in enumerate(self.balls):
        # Move ball
        ball.move()
        
        # Check collisions with heptagon
        self.check_ball_heptagon_collision(ball)
        
        # Draw ball
        self.canvas.create_oval(
            ball.x - ball.radius, ball.y - ball.radius,
            ball.x + ball.radius, ball.y + ball.radius,
            fill=ball.color, outline='black'
        )
        
        # Draw number with rotation based on spin
        angle = ball.spin * 10  # Scale spin for visible rotation
        self.canvas.create_text(
            ball.x, ball.y,
            text=str(ball.number),
            font=('Arial', 10, 'bold'),
            angle=angle
        )
    
    # Check ball-ball collisions
    for i in range(len(self.balls)):
        for j in range(i + 1, len(self.balls)):
            self.balls[i].collide_with_ball(self.balls[j])
    
    # Schedule next update
    self.last_time = self.root.after(16, self.update)  # ~60 FPS

if name == 'main': root = tk.Tk() root.title('Bouncing Balls in a Spinning Heptagon') simulator = HeptagonBounceSimulator(root) root.mainloop()


## :detective: Extra Findings & Tips

1. We find using lower KV cache quantization (4bit) seems to degrade generation quality via empirical tests - more tests need to be done, but we suggest using `q8_0` cache quantization. The goal of quantization is to support longer context lengths since the KV cache uses quite a bit of memory.
2. We found the `down_proj` in this model to be extremely sensitive to quantitation. We had to redo some of our dynamic quants which used 2bits for `down_proj` and now we use 3bits as the minimum for all these matrices.
3. Using `llama.cpp` 's Flash Attention backend does result in somewhat faster decoding speeds. Use `-DGGML_CUDA_FA_ALL_QUANTS=ON` when compiling. Note it's also best to set your CUDA architecture as found in <https://developer.nvidia.com/cuda-gpus> to reduce compilation times, then set it via `-DCMAKE_CUDA_ARCHITECTURES="80"`&#x20;
4. Using a `min_p=0.01`is probably enough. `llama.cpp`defaults to 0.1, which is probably not necessary. Since a temperature of 0.3 is used anyways, we most likely will very unlikely sample low probability tokens, so removing very unlikely tokens is a good idea. DeepSeek recommends 0.0 temperature for coding tasks.

[^1]: MUST USE 8bit - not 4bit

[^2]: CPU threads your machine has

[^3]: &#x20;Approx 2 for 24GB GPU. Approx 18 for 80GB GPU.

---

## Kimi K2: How to Run Locally

**URL:** llms-txt#kimi-k2:-how-to-run-locally

**Contents:**
- :gear: Recommended Settings
  - 🌙 Official Recommended Settings:
- :1234: Chat template and prompt format
- :floppy\_disk: Model uploads
- :turtle:Run Kimi K2 Tutorials
  - ✨ Run in llama.cpp

Guide on running Kimi K2 and Kimi-K2-Instruct-0905 on your own local device!

Kimi-K2-Instruct-0905 the new version of K2 achieves SOTA performance in knowledge, reasoning, coding, and agentic tasks. The full 1T parameter model from Moonshot AI requires 1.09TB of disk space, while the quantized **Unsloth Dynamic 1.8-bit** version reduces this to just 245GB (-80% size)**:** [**Kimi-K2-GGUF**](https://huggingface.co/unsloth/Kimi-K2-Instruct-GGUF)

You can now run **Kimi-K2-Instruct-0905** with our new GGUFs. Use our same settings below but ensure you change the model name from 'Kimi-K2-Instruct' to 'Kimi-K2-Instruct-0905': [K2-0905 GGUFs](https://huggingface.co/unsloth/Kimi-K2-Instruct-0905-GGUF)

All uploads use Unsloth [Dynamic 2.0](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs) for SOTA 5-shot MMLU and KL Divergence performance, meaning you can run quantized LLMs with minimal accuracy loss.

<a href="https://docs.unsloth.ai/basics/kimi-k2-how-to-run-locally#run-kimi-k2-tutorials" class="button primary">Run in llama.cpp</a>

## :gear: Recommended Settings

{% hint style="success" %}
You need **250GB of disk space** at least to run the 1bit quant!

The only requirement is **`disk space + RAM + VRAM ≥ 250GB`**. That means you do not need to have that much RAM or VRAM (GPU) to run the model, but it will just be slower.
{% endhint %}

The 1.8-bit (UD-TQ1\_0) quant will fit in a 1x 24GB GPU (with all MoE layers offloaded to system RAM or a fast disk). Expect around 5 tokens/s with this setup if you have bonus 256GB RAM as well. The full Kimi K2 Q8 quant is 1.09TB in size and will need at least 8 x H200 GPUs.

For optimal performance you will need at least **250GB unified memory or 250GB combined RAM+VRAM** for 5+ tokens/s. If you have less than 250GB combined RAM+VRAM, then the speed of the model will definitely take a hit.

**If you do not have 250GB of RAM+VRAM, no worries!** llama.cpp inherently has **disk offloading**, so through mmaping, it'll still work, just be slower - for example before you might get 5 to 10 tokens / second, now it's under 1 token.

We suggest using our **UD-Q2\_K\_XL (381GB)** quant to balance size and accuracy!

{% hint style="success" %}
For the best performance, have your VRAM + RAM combined = the size of the quant you're downloading. If not, it'll still work via disk offloading, just it'll be slower!
{% endhint %}

### 🌙 Official Recommended Settings:

According to [Moonshot AI](https://huggingface.co/moonshotai/Kimi-K2-Instruct), these are the recommended settings for Kimi K2 inference:

* Set the <mark style="background-color:green;">**temperature 0.6**</mark> to reduce repetition and incoherence.
* Original default system prompt is:

* (Optional) Moonshot also suggests the below for the system prompt:

{% hint style="success" %}
We recommend setting <mark style="background-color:green;">**min\_p to 0.01**</mark> to suppress the occurrence of unlikely tokens with low probabilities.
{% endhint %}

## :1234: Chat template and prompt format

Kimi Chat does use a BOS (beginning of sentence token). The system, user and assistant roles are all enclosed with `<|im_middle|>` which is interesting, and each get their own respective token `<|im_system|>, <|im_user|>, <|im_assistant|>`.

{% code overflow="wrap" %}

To separate the conversational boundaries (you must remove each new line), we get:

{% code overflow="wrap" %}

## :floppy\_disk: Model uploads

**ALL our uploads** - including those that are not imatrix-based or dynamic, utilize our calibration dataset, which is specifically optimized for conversational, coding, and reasoning tasks.

<table data-full-width="false"><thead><tr><th>MoE Bits</th><th>Type + Link</th><th>Disk Size</th><th>Details</th></tr></thead><tbody><tr><td>1.66bit</td><td><a href="https://huggingface.co/unsloth/Kimi-K2-Instruct-GGUF/tree/main/UD-TQ1_0">UD-TQ1_0</a></td><td><strong>245GB</strong></td><td>1.92/1.56bit</td></tr><tr><td>1.78bit</td><td><a href="https://huggingface.co/unsloth/Kimi-K2-Instruct-GGUF/tree/main/UD-IQ1_S">UD-IQ1_S</a></td><td><strong>281GB</strong></td><td>2.06/1.56bit</td></tr><tr><td>1.93bit</td><td><a href="https://huggingface.co/unsloth/Kimi-K2-Instruct-GGUF/tree/main/UD-IQ1_M">UD-IQ1_M</a></td><td><strong>304GB</strong></td><td>2.5/2.06/1.56</td></tr><tr><td>2.42bit</td><td><a href="https://huggingface.co/unsloth/Kimi-K2-Instruct-GGUF/tree/main/UD-IQ2_XXS">UD-IQ2_XXS</a></td><td><strong>343GB</strong></td><td>2.5/2.06bit</td></tr><tr><td>2.71bit</td><td><a href="https://huggingface.co/unsloth/Kimi-K2-Instruct-GGUF/tree/main/UD-Q2_K_XL">UD-Q2_K_XL</a></td><td><strong>381GB</strong></td><td> 3.5/2.5bit</td></tr><tr><td>3.12bit</td><td><a href="https://huggingface.co/unsloth/Kimi-K2-Instruct-GGUF/tree/main/UD-IQ3_XXS">UD-IQ3_XXS</a></td><td><strong>417GB</strong></td><td> 3.5/2.06bit</td></tr><tr><td>3.5bit</td><td><a href="https://huggingface.co/unsloth/Kimi-K2-Instruct-GGUF/tree/main/UD-Q3_K_XL">UD-Q3_K_XL</a></td><td><strong>452GB</strong></td><td> 4.5/3.5bit</td></tr><tr><td>4.5bit</td><td><a href="https://huggingface.co/unsloth/Kimi-K2-Instruct-GGUF/tree/main/UD-Q4_K_XL">UD-Q4_K_XL</a></td><td><strong>588GB</strong></td><td> 5.5/4.5bit</td></tr><tr><td>5.5bit</td><td><a href="https://huggingface.co/unsloth/Kimi-K2-Instruct-GGUF/tree/main/UD-Q5_K_XL">UD-Q5_K_XL</a></td><td><strong>732GB</strong></td><td>6.5/5.5bit</td></tr></tbody></table>

We've also uploaded versions in [BF16 format](https://huggingface.co/unsloth/Kimi-K2-Instruct-BF16).

## :turtle:Run Kimi K2 Tutorials

{% hint style="success" %}
You can now use the latest update of [llama.cpp](https://github.com/ggml-org/llama.cpp) to run the model:
{% endhint %}

### ✨ Run in llama.cpp

1. Obtain the latest `llama.cpp` on [GitHub here](https://github.com/ggml-org/llama.cpp). You can follow the build instructions below as well. Change `-DGGML_CUDA=ON` to `-DGGML_CUDA=OFF` if you don't have a GPU or just want CPU inference.

2. If you want to use `llama.cpp` directly to load models, you can do the below: (:UD-IQ1\_S) is the quantization type. You can also download via Hugging Face (point 3). This is similar to `ollama run` . Use `export LLAMA_CACHE="folder"` to force `llama.cpp` to save to a specific location.\ <mark style="background-color:green;">**To run the new September 2025 update for the model, change the model name from 'Kimi-K2-Instruct' to 'Kimi-K2-Instruct-0905'.**</mark>

{% hint style="info" %}
Please try out `-ot ".ffn_.*_exps.=CPU"` to offload all MoE layers to the CPU! This effectively allows you to fit all non MoE layers on 1 GPU, improving generation speeds. You can customize the regex expression to fit more layers if you have more GPU capacity.

If you have a bit more GPU memory, try `-ot ".ffn_(up|down)_exps.=CPU"` This offloads up and down projection MoE layers.

Try `-ot ".ffn_(up)_exps.=CPU"` if you have even more GPU memory. This offloads only up projection MoE layers.

And finally offload all layers via `-ot ".ffn_.*_exps.=CPU"` This uses the least VRAM.

You can also customize the regex, for example `-ot "\.(6|7|8|9|[0-9][0-9]|[0-9][0-9][0-9])\.ffn_(gate|up|down)_exps.=CPU"` means to offload gate, up and down MoE layers but only from the 6th layer onwards.
{% endhint %}

3. Download the model via (after installing `pip install huggingface_hub hf_transfer` ). You can choose `UD-TQ1_0`(dynamic 1.8bit quant) or other quantized versions like `Q2_K_XL` . We <mark style="background-color:green;">**recommend using our 2bit dynamic quant**</mark><mark style="background-color:green;">**&#x20;**</mark><mark style="background-color:green;">**`UD-Q2_K_XL`**</mark><mark style="background-color:green;">**&#x20;**</mark><mark style="background-color:green;">**to balance size and accuracy**</mark>. More versions at: [huggingface.co/unsloth/Kimi-K2-Instruct-GGUF](https://huggingface.co/unsloth/Kimi-K2-Instruct-GGUF)

{% code overflow="wrap" %}

**Examples:**

Example 1 (unknown):
```unknown
You are a helpful assistant

Example 2 (unknown):

You are Kimi, an AI assistant created by Moonshot AI.

Example 3 (python):

<|im_system|>system<|im_middle|>You are a helpful assistant<|im_end|><|im_user|>user<|im_middle|>What is 1+1?<|im_end|><|im_assistant|>assistant<|im_middle|>2<|im_end|>

Example 4 (unknown):

<|im_system|>system<|im_middle|>You are a helpful assistant<|im_end|>
<|im_user|>user<|im_middle|>What is 1+1?<|im_end|>
<|im_assistant|>assistant<|im_middle|>2<|im_end|>

Unsloth Notebooks

URL: llms-txt#unsloth-notebooks

Contents:

  • Colab notebooks
  • Kaggle notebooks

Explore our catalog of Unsloth notebooks:

Also see our GitHub repo for our notebooks: github.com/unslothai/notebooks

GRPO (RL)Text-to-speechVisionUse-caseKaggle

Standard notebooks:

GRPO (Reasoning RL) notebooks:

Text-to-Speech (TTS) notebooks:

Speech-to-Text (SST) notebooks:

Vision (Multimodal) notebooks:

Large LLM notebooks:

Notebooks for large models: These exceed Colabs free 15 GB VRAM tier. With Colabs new 80 GB GPUs, you can fine-tune 120B parameter models.

{% hint style="info" %} Colab subscription or credits are required. We don't earn anything from these notebooks. {% endhint %}

Other important notebooks:

Specific use-case notebooks:

Rest of notebooks:

Standard notebooks:

GRPO (Reasoning) notebooks:

Text-to-Speech (TTS) notebooks:

Vision (Multimodal) notebooks:

Specific use-case notebooks:

Rest of notebooks:

To view a complete list of all our Kaggle notebooks, click here.

{% hint style="info" %} Feel free to contribute to the notebooks by visiting our repo! {% endhint %}


Conda Install

URL: llms-txt#conda-install

To install Unsloth locally on Conda, follow the steps below:

{% hint style="warning" %} Only use Conda if you have it. If not, use Pip. {% endhint %}

Select either pytorch-cuda=11.8,12.1 for CUDA 11.8 or CUDA 12.1. We support python=3.10,3.11,3.12.

If you're looking to install Conda in a Linux environment, read here, or run the below:

Examples:

Example 1 (bash):

conda create --name unsloth_env \
    python=3.11 \
    pytorch-cuda=12.1 \
    pytorch cudatoolkit xformers -c pytorch -c nvidia -c xformers \
    -y
conda activate unsloth_env

pip install unsloth

Example 2 (bash):

mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm -rf ~/miniconda3/miniconda.sh
~/miniconda3/bin/conda init bash
~/miniconda3/bin/conda init zsh

Save to 16-bit precision

URL: llms-txt#save-to-16-bit-precision

model.save_pretrained_merged("model", tokenizer, save_method="merged_16bit") python

Examples:

Example 1 (unknown):

#### **Pushing to Hugging Face Hub**

To share your model, well push it to the Hugging Face Hub using the `push_to_hub_merged` method. This allows saving the model in multiple quantization formats.

Running & Saving Models

URL: llms-txt#running-&-saving-models

Learn how to save your finetuned model so you can run it in your favorite inference engine.

You can also run your fine-tuned models by using Unsloth's 2x faster inference.

Saving to GGUFsaving-to-ggufsaving-to-gguf
Ollamasaving-to-ollamasaving-to-ollama
vLLMsaving-to-vllm-for-deploymentsaving-to-vllm-for-deployment
SGLangsaving-to-sglang-for-deploymentvllm-engine-arguments
Unsloth Inferenceunsloth-inferenceunsloth-inference
Troubleshootingtroubleshooting-inferencetroubleshooting-inference
vLLM Engine Argumentsvllm-engine-argumentssaving-to-sglang-for-deployment
LoRA Hotswappinglora-hot-swapping-guide

Vision Reinforcement Learning (VLM RL)

URL: llms-txt#vision-reinforcement-learning-(vlm-rl)

Train Vision/multimodal models via GRPO and RL with Unsloth!

Unsloth now supports vision/multimodal RL with Qwen3-VL, Gemma 3 and more. Due to Unsloth's unique weight sharing and custom kernels, Unsloth makes VLM RL 1.52× faster, uses 90% less VRAM, and enables 15× longer context lengths than FA2 setups, with no accuracy loss. This update also introduces Qwen's GSPO algorithm.

Unsloth can train Qwen3-VL-8B with GSPO/GRPO on a free Colab T4 GPU. Other VLMs work too, but may need larger GPUs. Gemma requires newer GPUs than T4 because vLLM restricts to Bfloat16, thus we recommend NVIDIA L4 on Colab. Our notebooks solve numerical math problems involving images and diagrams:

  • Qwen-3 VL-8B (vLLM inference): Colab
  • Qwen-2.5 VL-7B (vLLM inference): Colab Kaggle
  • Gemma-3-4B (Unsloth inference): Colab

We have also added vLLM VLM integration into Unsloth natively, so all you have to do to use vLLM inference is enable the fast_inference=True flag when initializing the model. Special thanks to Sinoué GAD for providing the first notebook that made integrating VLM RL easier!

This VLM support also integrates our latest update for even more memory efficient + faster RL including our Standby feature, which uniquely limits speed degradation compared to other implementations.

{% hint style="info" %} You can only use fast_inference for VLMs supported by vLLM. Some models, like Llama 3.2 Vision thus only can run without vLLM, but they still work in Unsloth. {% endhint %}

It is also important to note, that vLLM does not support LoRA for vision/encoder layers, thus set finetune_vision_layers = False when loading a LoRA adapter.
However you CAN train the vision layers as well if you use inference via transformers/Unsloth.

Examples:

Example 1 (python):

os.environ['UNSLOTH_VLLM_STANDBY'] = '1' # To enable memory efficient GRPO with vLLM
model, tokenizer = FastVisionModel.from_pretrained(
    model_name = "Qwen/Qwen2.5-VL-7B-Instruct",
    max_seq_length = 16384, #Must be this large to fit image in context
    load_in_4bit = True, # False for LoRA 16bit
    fast_inference = True, # Enable vLLM fast inference
    gpu_memory_utilization = 0.8, # Reduce if out of memory
)

Updating

URL: llms-txt#updating

Contents:

  • Standard Updating (recommended):
    • Updating without dependency updates:
  • To use an old version of Unsloth:

To update or use an old version of Unsloth, follow the steps below:

Updating without dependency updates:

pip install --upgrade --force-reinstall --no-cache-dir --no-deps git+https://github.com/unslothai/unsloth.git
pip install --upgrade --force-reinstall --no-cache-dir --no-deps git+https://github.com/unslothai/unsloth-zoo.git

To use an old version of Unsloth:

'2025.1.5' is one of the previous old versions of Unsloth. Change it to a specific release listed on our Github here.

Examples:

Example 1 (bash):

pip install --upgrade unsloth unsloth_zoo

Example 2 (bash):

pip install --force-reinstall --no-cache-dir --no-deps unsloth==2025.1.5

Helper functions to extract answers from different formats

URL: llms-txt#helper-functions-to-extract-answers-from-different-formats

def extract_xml_answer(text: str) -> str: answer = text.split("")[-1] answer = answer.split("")[0] return answer.strip()

def extract_hash_answer(text: str) -> str | None: if "####" not in text: return None return text.split("####")[1].strip()


Int4 QAT

URL: llms-txt#int4-qat

from torchao.quantization import Int4WeightOnlyConfig model.save_pretrained_torchao( model, "tokenizer", torchao_config = Int4WeightOnlyConfig(), )


Unsloth Environment Flags

URL: llms-txt#unsloth-environment-flags

Advanced flags which might be useful if you see breaking finetunes, or you want to turn stuff off.

Environment variablePurpose
os.environ["UNSLOTH_RETURN_LOGITS"] = "1"Forcibly returns logits - useful for evaluation if logits are needed.
os.environ["UNSLOTH_COMPILE_DISABLE"] = "1"Disables auto compiler. Could be useful to debug incorrect finetune results.
os.environ["UNSLOTH_DISABLE_FAST_GENERATION"] = "1"Disables fast generation for generic models.
os.environ["UNSLOTH_ENABLE_LOGGING"] = "1"Enables auto compiler logging - useful to see which functions are compiled or not.
os.environ["UNSLOTH_FORCE_FLOAT32"] = "1"On float16 machines, use float32 and not float16 mixed precision. Useful for Gemma 3.
os.environ["UNSLOTH_STUDIO_DISABLED"] = "1"Disables extra features.
os.environ["UNSLOTH_COMPILE_DEBUG"] = "1"Turns on extremely verbose torch.compilelogs.
os.environ["UNSLOTH_COMPILE_MAXIMUM"] = "0"Enables maximum torch.compileoptimizations - not recommended.
os.environ["UNSLOTH_COMPILE_IGNORE_ERRORS"] = "1"Can turn this off to enable fullgraph parsing.
os.environ["UNSLOTH_FULLGRAPH"] = "0"Enable torch.compile fullgraph mode
os.environ["UNSLOTH_DISABLE_AUTO_UPDATES"] = "1"Forces no updates to unsloth-zoo

Another possibility is maybe the model uploads we uploaded are corrupted, but unlikely. Try the following:

Examples:

Example 1 (python):

model, tokenizer = FastVisionModel.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct",
    use_exact_model_name = True,
)

Clone and build

URL: llms-txt#clone-and-build

Contents:

  • Docker
  • uv
  • Conda or mamba (Advanced)
  • WSL-Specific Notes

pip install ninja export TORCH_CUDA_ARCH_LIST="12.0" git clone --depth=1 https://github.com/facebookresearch/xformers --recursive cd xformers && python setup.py install && cd .. bash uv pip install unsloth bash curl -LsSf https://astral.sh/uv/install.sh | sh && source $HOME/.local/bin/env bash mkdir 'unsloth-blackwell' && cd 'unsloth-blackwell' uv venv .venv --python=3.12 --seed source .venv/bin/activate bash uv pip install -U vllm --torch-backend=cu128 bash uv pip install unsloth unsloth_zoo bitsandbytes bash uv pip install -qqq
"unsloth_zoo[base] @ git+https://github.com/unslothai/unsloth-zoo"
"unsloth[base] @ git+https://github.com/unslothai/unsloth" bash

First uninstall xformers installed by previous libraries

pip uninstall xformers -y

Clone and build

pip install ninja export TORCH_CUDA_ARCH_LIST="12.0" git clone --depth=1 https://github.com/facebookresearch/xformers --recursive cd xformers && python setup.py install && cd .. bash uv pip install -U transformers bash curl -L -O "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh" bash bash Miniforge3-$(uname)-$(uname -m).sh bash conda create --name unsloth-blackwell python==3.12 -y bash conda activate unsloth-blackwell bash pip install -U vllm --extra-index-url https://download.pytorch.org/whl/cu128 bash pip install unsloth unsloth_zoo bitsandbytes bash

First uninstall xformers installed by previous libraries

pip uninstall xformers -y

Clone and build

pip install ninja export TORCH_CUDA_ARCH_LIST="12.0" git clone --depth=1 https://github.com/facebookresearch/xformers --recursive cd xformers && python setup.py install && cd .. bash pip install -U triton>=3.3.1 bash uv pip install -U transformers bash

Create or edit .wslconfig in your Windows user directory

(typically C:\Users\YourUsername.wslconfig)

Add these lines to the file

[wsl2] memory=16GB # Minimum 16GB recommended for xformers compilation processors=4 # Adjust based on your CPU cores swap=2GB localhostForwarding=true powershell wsl --shutdown bash

Set CUDA architecture for Blackwell GPUs

export TORCH_CUDA_ARCH_LIST="12.0"

Install xformers from source with optimized build flags

pip install -v --no-build-isolation -U git+https://github.com/facebookresearch/xformers.git@main#egg=xformers


The `--no-build-isolation` flag helps avoid potential build issues in WSL environments.

**Examples:**

Example 1 (unknown):
```unknown
{% endcode %}

### Docker

[**`unsloth/unsloth`**](https://hub.docker.com/r/unsloth/unsloth) is Unsloth's only Docker image. For Blackwell and 50-series GPUs, use this same image - no separate image needed.

For installation instructions, please follow our [Unsloth Docker guide](https://docs.unsloth.ai/new/how-to-fine-tune-llms-with-unsloth-and-docker).

### uv

Example 2 (unknown):

#### uv (Advanced)

The installation order is important, since we want the overwrite bundled dependencies with specific versions (namely, `xformers` and `triton`).

1. I prefer to use `uv` over `pip` as it's faster and better for resolving dependencies, especially for libraries which depend on `torch` but for which a specific `CUDA` version is required per this scenario.

   Install `uv`

Example 3 (unknown):

Create a project dir and venv:

Example 4 (unknown):

2. Install `vllm`

Gemma 3n: How to Run & Fine-tune

URL: llms-txt#gemma-3n:-how-to-run-&-fine-tune

Contents:

  • 🖥️ Running Gemma 3n
    • ⚙️ Official Recommended Settings
    • 🦙 Tutorial: How to Run Gemma 3n in Ollama
    • 📖 Tutorial: How to Run Gemma 3n in llama.cpp

Run Google's new Gemma 3n locally with Dynamic GGUFs on llama.cpp, Ollama, Open WebUI and fine-tune with Unsloth!

Googles Gemma 3n multimodal model handles image, audio, video, and text inputs. Available in 2B and 4B sizes, it supports 140 languages for text and multimodal tasks. You can now run and fine-tune Gemma-3n-E4B and E2B locally using Unsloth.

Fine-tune Gemma 3n with our free Colab notebook

Gemma 3n has 32K context length, 30s audio input, OCR, auto speech recognition (ASR), and speech translation via prompts.

Running TutorialFine-tuning TutorialFixes + Technical Analysis

Unsloth Gemma 3n (Instruct) uploads with optimal configs:

Dynamic 2.0 GGUF (text only)Dynamic 4-bit Instruct (to fine-tune)16-bit Instruct

See all our Gemma 3n uploads including base and more formats in our collection here.

🖥️ Running Gemma 3n

Currently Gemma 3n is only supported in text format for inference.

{% hint style="info" %} Weve fixed issues with GGUFs not working properly in Ollama only. Please redownload if using Ollama. {% endhint %}

According to the Gemma team, the official recommended settings for inference:

temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0

  • Temperature of 1.0
  • Top_K of 64
  • Min_P of 0.00 (optional, but 0.01 works well, llama.cpp default is 0.1)
  • Top_P of 0.95
  • Repetition Penalty of 1.0. (1.0 means disabled in llama.cpp and transformers)
  • Chat template:
<bos><start_of_turn>user\nHello!<end_of_turn>\n<start_of_turn>model\nHey there!<end_of_turn>\n<start_of_turn>user\nWhat is 1+1?<end_of_turn>\n<start_of_turn>model\n
  
  • Chat template with \nnewlines rendered (except for the last)

{% code overflow="wrap" %}

{% hint style="danger" %} llama.cpp an other inference engines auto add a <bos> - DO NOT add TWO <bos> tokens! You should ignore the <bos> when prompting the model! {% endhint %}

🦙 Tutorial: How to Run Gemma 3n in Ollama

{% hint style="success" %} Please re download Gemma 3N quants or remove the old ones via Ollama since there are some bug fixes. You can do the below to delete the old file and refresh it:

  1. Install ollama if you haven't already!

  2. Run the model! Note you can call ollama servein another terminal if it fails! We include all our fixes and suggested parameters (temperature etc) in params in our Hugging Face upload!

📖 Tutorial: How to Run Gemma 3n in llama.cpp

{% hint style="info" %} We would first like to thank Xuan-Son Nguyen from Hugging Face, Georgi Gerganov from the llama.cpp team on making Gemma 3N work in llama.cpp! {% endhint %}

  1. Obtain the latest llama.cpp on GitHub here. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.

  2. If you want to use llama.cpp directly to load models, you can do the below: (:Q4_K_XL) is the quantization type. You can also download via Hugging Face (point 3). This is similar to ollama run

  3. OR download the model via (after installing pip install huggingface_hub hf_transfer ). You can choose Q4_K_M, or other quantized versions (like BF16 full precision).

Examples:

Example 1 (unknown):

<bos><start_of_turn>user
Hello!<end_of_turn>
<start_of_turn>model
Hey there!<end_of_turn>
<start_of_turn>user
What is 1+1?<end_of_turn>
<start_of_turn>model\n

Example 2 (unknown):

ollama rm hf.co/unsloth/gemma-3n-E4B-it-GGUF:UD-Q4_K_XL

ollama run hf.co/unsloth/gemma-3n-E4B-it-GGUF:UD-Q4_K_XL

Example 3 (bash):

apt-get update
apt-get install pciutils -y
curl -fsSL https://ollama.com/install.sh | sh

Example 4 (bash):

ollama run hf.co/unsloth/gemma-3n-E4B-it-GGUF:UD-Q4_K_XL

Troubleshooting Inference

URL: llms-txt#troubleshooting-inference

Contents:

  • Running in Unsloth works well, but after exporting & running on other platforms, the results are poor
  • Saving to safetensors, not bin format in Colab
  • If saving to GGUF or vLLM 16bit crashes

If you're experiencing issues when running or saving your model.

Running in Unsloth works well, but after exporting & running on other platforms, the results are poor

You might sometimes encounter an issue where your model runs and produces good results on Unsloth, but when you use it on another platform like Ollama or vLLM, the results are poor or you might get gibberish, endless/infinite generations or repeated outputs.

  • The most common cause of this error is using an incorrect chat template. Its essential to use the SAME chat template that was used when training the model in Unsloth and later when you run it in another framework, such as llama.cpp or Ollama. When inferencing from a saved model, it's crucial to apply the correct template.
  • You must use the correct eos token. If not, you might get gibberish on longer generations.
  • It might also be because your inference engine adds an unnecessary "start of sequence" token (or the lack of thereof on the contrary) so ensure you check both hypotheses!
  • Use our conversational notebooks to force the chat template - this will fix most issues.

Saving to safetensors, not bin format in Colab

We save to .bin in Colab so it's like 4x faster, but set safe_serialization = None to force saving to .safetensors. So model.save_pretrained(..., safe_serialization = None) or model.push_to_hub(..., safe_serialization = None)

If saving to GGUF or vLLM 16bit crashes

You can try reducing the maximum GPU usage during saving by changing maximum_memory_usage.

The default is model.save_pretrained(..., maximum_memory_usage = 0.75). Reduce it to say 0.5 to use 50% of GPU peak memory or lower. This can reduce OOM crashes during saving.


Install xformers from source for blackwell support

URL: llms-txt#install-xformers-from-source-for-blackwell-support

RUN git clone --depth=1 https://github.com/facebookresearch/xformers --recursive &&
cd xformers &&
export TORCH_CUDA_ARCH_LIST="12.1" &&
python setup.py install &&
cd ..


We're installing the latest Torch, Triton, OpenAI's Triton kernels, Transformers and Unsloth!

URL: llms-txt#we're-installing-the-latest-torch,-triton,-openai's-triton-kernels,-transformers-and-unsloth!

Contents:

  • Configuring gpt-oss and Reasoning Effort

!pip install --upgrade -qqq uv try: import numpy; install_numpy = f"numpy=={numpy.version}" except: install_numpy = "numpy" !uv pip install -qqq
"torch>=2.8.0" "triton>=3.4.0" {install_numpy}
"unsloth_zoo[base] @ git+https://github.com/unslothai/unsloth-zoo"
"unsloth[base] @ git+https://github.com/unslothai/unsloth"
torchvision bitsandbytes
git+https://github.com/huggingface/transformers
git+https://github.com/triton-lang/triton.git@05b2c186c1b6c9a08375389d5efe9cb4c401c075#subdirectory=python/triton_kernels


### Configuring gpt-oss and Reasoning Effort

Well load **`gpt-oss-20b`**  using Unsloth's [linearized version](https://docs.unsloth.ai/models/gpt-oss-how-to-run-and-fine-tune/..#making-efficient-gpt-oss-fine-tuning-work) (as no other version will work for QLoRA fine-tuning). Configure the following parameters:

* `max_seq_length = 2048`&#x20;
  * Recommended for quick testing and initial experiments.
* `load_in_4bit = True`&#x20;
  * Use `False` for LoRA training (note: setting this to `False` will need at least 43GB VRAM). You ***MUST*** also set **`model_name = "unsloth/gpt-oss-20b-BF16"`**

<pre class="language-python"><code class="lang-python">from unsloth import FastLanguageModel
import torch
max_seq_length = 1024
dtype = None

---

## Reinforcement Learning - DPO, ORPO & KTO

**URL:** llms-txt#reinforcement-learning---dpo,-orpo-&-kto

**Contents:**
- DPO Code

To use the reward modelling functions for DPO, GRPO, ORPO or KTO with Unsloth, follow the steps below:

DPO (Direct Preference Optimization), ORPO (Odds Ratio Preference Optimization), PPO, KTO Reward Modelling all work with Unsloth.

We have Google Colab notebooks for reproducing GRPO, ORPO, DPO Zephyr, KTO and SimPO:

* [GRPO notebooks](https://docs.unsloth.ai/unsloth-notebooks#grpo-reasoning-rl-notebooks)
* [ORPO notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3_\(8B\)-ORPO.ipynb)
* [DPO Zephyr notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Zephyr_\(7B\)-DPO.ipynb)
* [KTO notebook](https://colab.research.google.com/drive/1MRgGtLWuZX4ypSfGguFgC-IblTvO2ivM?usp=sharing)
* [SimPO notebook](https://colab.research.google.com/drive/1Hs5oQDovOay4mFA6Y9lQhVJ8TnbFLFh2?usp=sharing)

We're also in 🤗Hugging Face's official docs! We're on the [SFT docs](https://huggingface.co/docs/trl/main/en/sft_trainer#accelerate-fine-tuning-2x-using-unsloth) and the [DPO docs](https://huggingface.co/docs/trl/main/en/dpo_trainer#accelerate-dpo-fine-tuning-using-unsloth).

```python
python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0" # Optional set GPU device ID

from unsloth import FastLanguageModel, PatchDPOTrainer
from unsloth import is_bfloat16_supported
PatchDPOTrainer()
import torch
from transformers import TrainingArguments
from trl import DPOTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/zephyr-sft-bnb-4bit",
    max_seq_length = max_seq_length,
    dtype = None,
    load_in_4bit = True,
)

---

## Devstral: How to Run & Fine-tune

**URL:** llms-txt#devstral:-how-to-run-&-fine-tune

**Contents:**
- 🖥️ **Running Devstral**
  - :gear: Official Recommended Settings
- :llama: Tutorial: How to Run Devstral in Ollama
- 📖 Tutorial: How to Run Devstral in llama.cpp  <a href="#tutorial-how-to-run-llama-4-scout-in-llama.cpp" id="tutorial-how-to-run-llama-4-scout-in-llama.cpp"></a>

Run and fine-tune Mistral Devstral 1.1, including Small-2507 and 2505.

**Devstral-Small-2507** (Devstral 1.1) is Mistral's new agentic LLM for software engineering. It excels at tool-calling, exploring codebases, and powering coding agents. Mistral AI released the original 2505 version in May, 2025.

Finetuned from [**Mistral-Small-3.1**](https://huggingface.co/unsloth/Mistral-Small-3.1-24B-Instruct-2503-GGUF), Devstral supports a 128k context window. Devstral Small 1.1 has improved performance, achieving a score of 53.6% performance on [SWE-bench verified](https://openai.com/index/introducing-swe-bench-verified/), making it (July 10, 2025) the #1 open model on the benchmark.

Unsloth Devstral 1.1 GGUFs contain additional <mark style="background-color:green;">**tool-calling support**</mark> and <mark style="background-color:green;">**chat template fixes**</mark>. Devstral 1.1 still works well with OpenHands but now also generalizes better to other prompts and coding environments.

As text-only, Devstrals vision encoder was removed prior to fine-tuning. We've added [*<mark style="background-color:green;">**optional Vision support**</mark>*](#possible-vision-support) for the model.

{% hint style="success" %}
We also worked with Mistral behind the scenes to help debug, test and correct any possible bugs and issues! Make sure to **download Mistral's official downloads or Unsloth's GGUFs** / dynamic quants to get the **correct implementation** (ie correct system prompt, correct chat template etc)

Please use `--jinja` in llama.cpp to enable the system prompt!
{% endhint %}

All Devstral uploads use our Unsloth [Dynamic 2.0](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs) methodology, delivering the best performance on 5-shot MMLU and KL Divergence benchmarks. This means, you can run and fine-tune quantized Mistral LLMs with minimal accuracy loss!

#### **Devstral - Unsloth Dynamic** quants:

| Devstral 2507 (new)                                                                                                    | Devstral 2505                                                                                               |
| ---------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------- |
| GGUF: [Devstral-Small-2507-GGUF](https://huggingface.co/unsloth/Devstral-Small-2507-GGUF)                              | [Devstral-Small-2505-GGUF](https://huggingface.co/unsloth/Devstral-Small-2505-GGUF)                         |
| 4-bit BnB: [Devstral-Small-2507-unsloth-bnb-4bit](https://huggingface.co/unsloth/Devstral-Small-2507-unsloth-bnb-4bit) | [Devstral-Small-2505-unsloth-bnb-4bit](https://huggingface.co/unsloth/Devstral-Small-2505-unsloth-bnb-4bit) |

## 🖥️ **Running Devstral**

### :gear: Official Recommended Settings

According to Mistral AI, these are the recommended settings for inference:

* <mark style="background-color:blue;">**Temperature from 0.0 to 0.15**</mark>
* Min\_P of 0.01 (optional, but 0.01 works well, llama.cpp default is 0.1)
* <mark style="background-color:orange;">**Use**</mark><mark style="background-color:orange;">**&#x20;**</mark><mark style="background-color:orange;">**`--jinja`**</mark><mark style="background-color:orange;">**&#x20;**</mark><mark style="background-color:orange;">**to enable the system prompt.**</mark>

**A system prompt is recommended**, and is a derivative of Open Hand's system prompt. The full system prompt is provided [here](https://huggingface.co/unsloth/Devstral-Small-2505/blob/main/SYSTEM_PROMPT.txt).

{% hint style="success" %}
Our dynamic uploads have the '`UD`' prefix in them. Those without are not dynamic however still utilize our calibration dataset.
{% endhint %}

## :llama: Tutorial: How to Run Devstral in Ollama

1. Install `ollama` if you haven't already!&#x20;

2. Run the model with our dynamic quant. Note you can call `ollama serve &`in another terminal if it fails! We include all suggested parameters (temperature etc) in `params` in our Hugging Face upload!
3. Also Devstral supports 128K context lengths, so best to enable [**KV cache quantization**](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-set-the-quantization-type-for-the-kv-cache). We use 8bit quantization which saves 50% memory usage. You can also try `"q4_0"`

## 📖 Tutorial: How to Run Devstral in llama.cpp  <a href="#tutorial-how-to-run-llama-4-scout-in-llama.cpp" id="tutorial-how-to-run-llama-4-scout-in-llama.cpp"></a>

1. Obtain the latest `llama.cpp` on [GitHub here](https://github.com/ggml-org/llama.cpp). You can follow the build instructions below as well. Change `-DGGML_CUDA=ON` to `-DGGML_CUDA=OFF` if you don't have a GPU or just want CPU inference.

2. If you want to use `llama.cpp` directly to load models, you can do the below: (:Q4\_K\_XL) is the quantization type. You can also download via Hugging Face (point 3). This is similar to `ollama run`

3. **OR** download the model via (after installing `pip install huggingface_hub hf_transfer` ). You can choose Q4\_K\_M, or other quantized versions (like BF16 full precision).

**Examples:**

Example 1 (unknown):
```unknown
You are Devstral, a helpful agentic model trained by Mistral AI and using the OpenHands scaffold. You can interact with a computer to solve tasks.

<ROLE>
Your primary role is to assist users by executing commands, modifying code, and solving technical problems effectively. You should be thorough, methodical, and prioritize quality over speed.
* If the user asks a question, like "why is X happening", don't try to fix the problem. Just give an answer to the question.
</ROLE>

.... SYSTEM PROMPT CONTINUES ....

Example 2 (bash):

apt-get update
apt-get install pciutils -y
curl -fsSL https://ollama.com/install.sh | sh

Example 3 (bash):

export OLLAMA_KV_CACHE_TYPE="q8_0"
ollama run hf.co/unsloth/Devstral-Small-2507-GGUF:UD-Q4_K_XL

Example 4 (bash):

apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggerganov/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split llama-mtmd-cli
cp llama.cpp/build/bin/llama-* llama.cpp

Install triton from source for latest blackwell support

URL: llms-txt#install-triton-from-source-for-latest-blackwell-support

RUN git clone https://github.com/triton-lang/triton.git &&
cd triton &&
git checkout c5d671f91d90f40900027382f98b17a3e04045f6 &&
pip install -r python/requirements.txt &&
pip install . &&
cd ..


FAQ + Is Fine-tuning Right For Me?

URL: llms-txt#faq-+-is-fine-tuning-right-for-me?

Contents:

  • Understanding Fine-Tuning
    • Real-World Applications of Fine-Tuning
  • The Benefits of Fine-Tuning
  • Common Misconceptions
    • Does Fine-Tuning Add New Knowledge to a Model?
    • Is RAG Always Better Than Fine-Tuning?
    • Is Fine-Tuning Expensive?
  • FAQ:
    • Why You Should Combine RAG & Fine-Tuning
    • LoRA vs. QLoRA: Which One to Use?

If you're stuck on if fine-tuning is right for you, see here! Learn about fine-tuning misconceptions, how it compared to RAG and more:

Understanding Fine-Tuning

Fine-tuning an LLM customizes its behavior, deepens its domain expertise, and optimizes its performance for specific tasks. By refining a pre-trained model (e.g. Llama-3.1-8B) with specialized data, you can:

  • Update Knowledge Introduce new, domain-specific information that the base model didnt originally include.
  • Customize Behavior Adjust the models tone, personality, or response style to fit specific needs or a brand voice.
  • Optimize for Tasks Improve accuracy and relevance on particular tasks or queries your use-case requires.

Think of fine-tuning as creating a specialized expert out of a generalist model. Some debate whether to use Retrieval-Augmented Generation (RAG) instead of fine-tuning, but fine-tuning can incorporate knowledge and behaviors directly into the model in ways RAG cannot. In practice, combining both approaches yields the best results - leading to greater accuracy, better usability, and fewer hallucinations.

Real-World Applications of Fine-Tuning

Fine-tuning can be applied across various domains and needs. Here are a few practical examples of how it makes a difference:

  • Sentiment Analysis for Finance Train an LLM to determine if a news headline impacts a company positively or negatively, tailoring its understanding to financial context.
  • Customer Support Chatbots Fine-tune on past customer interactions to provide more accurate and personalized responses in a companys style and terminology.
  • Legal Document Assistance Fine-tune on legal texts (contracts, case law, regulations) for tasks like contract analysis, case law research, or compliance support, ensuring the model uses precise legal language.

The Benefits of Fine-Tuning

Fine-tuning offers several notable benefits beyond what a base model or a purely retrieval-based system can provide:

Fine-Tuning vs. RAG: Whats the Difference?

Fine-tuning can do mostly everything RAG can - but not the other way around. During training, fine-tuning embeds external knowledge directly into the model. This allows the model to handle niche queries, summarize documents, and maintain context without relying on an outside retrieval system. Thats not to say RAG lacks advantages as it is excels at accessing up-to-date information from external databases. It is in fact possible to retrieve fresh data with fine-tuning as well, however it is better to combine RAG with fine-tuning for efficiency.

Task-Specific Mastery

Fine-tuning deeply integrates domain knowledge into the model. This makes it highly effective at handling structured, repetitive, or nuanced queries, scenarios where RAG-alone systems often struggle. In other words, a fine-tuned model becomes a specialist in the tasks or content it was trained on.

Independence from Retrieval

A fine-tuned model has no dependency on external data sources at inference time. It remains reliable even if a connected retrieval system fails or is incomplete, because all needed information is already within the models own parameters. This self-sufficiency means fewer points of failure in production.

Faster Responses

Fine-tuned models dont need to call out to an external knowledge base during generation. Skipping the retrieval step means they can produce answers much more quickly. This speed makes fine-tuned models ideal for time-sensitive applications where every second counts.

Custom Behavior and Tone

Fine-tuning allows precise control over how the model communicates. This ensures the models responses stay consistent with a brands voice, adhere to regulatory requirements, or match specific tone preferences. You get a model that not only knows what to say, but how to say it in the desired style.

Reliable Performance

Even in a hybrid setup that uses both fine-tuning and RAG, the fine-tuned model provides a reliable fallback. If the retrieval component fails to find the right information or returns incorrect data, the models built-in knowledge can still generate a useful answer. This guarantees more consistent and robust performance for your system.

Common Misconceptions

Despite fine-tunings advantages, a few myths persist. Lets address two of the most common misconceptions about fine-tuning:

Does Fine-Tuning Add New Knowledge to a Model?

Yes - it absolutely can. A common myth suggests that fine-tuning doesnt introduce new knowledge, but in reality it does. If your fine-tuning dataset contains new domain-specific information, the model will learn that content during training and incorporate it into its responses. In effect, fine-tuning can and does teach the model new facts and patterns from scratch.

Is RAG Always Better Than Fine-Tuning?

Not necessarily. Many assume RAG will consistently outperform a fine-tuned model, but thats not the case when fine-tuning is done properly. In fact, a well-tuned model often matches or even surpasses RAG-based systems on specialized tasks. Claims that “RAG is always better” usually stem from fine-tuning attempts that werent optimally configured - for example, using incorrect LoRA parameters or insufficient training.

Unsloth takes care of these complexities by automatically selecting the best parameter configurations for you. All you need is a good-quality dataset, and you'll get a fine-tuned model that performs to its fullest potential.

Is Fine-Tuning Expensive?

Not at all! While full fine-tuning or pretraining can be costly, these are not necessary (pretraining is especially not necessary). In most cases, LoRA or QLoRA fine-tuning can be done for minimal cost. In fact, with Unsloths free notebooks for Colab or Kaggle, you can fine-tune models without spending a dime. Better yet, you can even fine-tune locally on your own device.

Why You Should Combine RAG & Fine-Tuning

Instead of choosing between RAG and fine-tuning, consider using both together for the best results. Combining a retrieval system with a fine-tuned model brings out the strengths of each approach. Heres why:

  • Task-Specific Expertise Fine-tuning excels at specialized tasks or formats (making the model an expert in a specific area), while RAG keeps the model up-to-date with the latest external knowledge.
  • Better Adaptability A fine-tuned model can still give useful answers even if the retrieval component fails or returns incomplete information. Meanwhile, RAG ensures the system stays current without requiring you to retrain the model for every new piece of data.
  • Efficiency Fine-tuning provides a strong foundational knowledge base within the model, and RAG handles dynamic or quickly-changing details without the need for exhaustive re-training from scratch. This balance yields an efficient workflow and reduces overall compute costs.

LoRA vs. QLoRA: Which One to Use?

When it comes to implementing fine-tuning, two popular techniques can dramatically cut down the compute and memory requirements: LoRA and QLoRA. Heres a quick comparison of each:

  • LoRA (Low-Rank Adaptation) Fine-tunes only a small set of additional “adapter” weight matrices (in 16-bit precision), while leaving most of the original model unchanged. This significantly reduces the number of parameters that need updating during training.
  • QLoRA (Quantized LoRA) Combines LoRA with 4-bit quantization of the model weights, enabling efficient fine-tuning of very large models on minimal hardware. By using 4-bit precision where possible, it dramatically lowers memory usage and compute overhead.

We recommend starting with QLoRA, as its one of the most efficient and accessible methods available. Thanks to Unsloths dynamic 4-bit quants, the accuracy loss compared to standard 16-bit LoRA fine-tuning is now negligible.

Experimentation is Key

Theres no single “best” approach to fine-tuning - only best practices for different scenarios. Its important to experiment with different methods and configurations to find what works best for your dataset and use case. A great starting point is QLoRA (4-bit), which offers a very cost-effective, resource-friendly way to fine-tune models without heavy computational requirements.

{% content-ref url="../fine-tuning-llms-guide/lora-hyperparameters-guide" %} lora-hyperparameters-guide {% endcontent-ref %}


Connect via SSH

URL: llms-txt#connect-via-ssh

Contents:

  • ⚙️ Advanced Settings
  • 🔒 Security Notes

ssh -i ~/.ssh/container_key -p 2222 unsloth@localhost bash -p <host_port>:<container_port> bash -v <local_folder>:<container_folder> bash docker run -d -e JUPYTER_PORT=8000
-e JUPYTER_PASSWORD="mypassword"
-e "SSH_KEY=$(cat ~/.ssh/container_key.pub)"
-e USER_PASSWORD="unsloth2024"
-p 8000:8000 -p 2222:22
-v $(pwd)/work:/workspace/work
--gpus all
unsloth/unsloth


### **🔒 Security Notes**

* Container runs as non-root `unsloth` user by default
* Use `USER_PASSWORD` for sudo operations inside container
* SSH access requires public key authentication

**Examples:**

Example 1 (unknown):
```unknown
### ⚙️ Advanced Settings

| Variable           | Description                        | Default   |
| ------------------ | ---------------------------------- | --------- |
| `JUPYTER_PASSWORD` | Jupyter Lab password               | `unsloth` |
| `JUPYTER_PORT`     | Jupyter Lab port inside container  | `8888`    |
| `SSH_KEY`          | SSH public key for authentication  | `None`    |
| `USER_PASSWORD`    | Password for `unsloth` user (sudo) | `unsloth` |

Example 2 (unknown):

* Jupyter Lab: `-p 8000:8888`
* SSH access: `-p 2222:22`

{% hint style="warning" %}
**Important**: Use volume mounts to preserve your work between container runs.
{% endhint %}

Example 3 (unknown):



DeepSeek-R1 Dynamic 1.58-bit

URL: llms-txt#deepseek-r1-dynamic-1.58-bit

Contents:

  • 1-bit (Small) - Dynamic vs. Basic
  • 1-bit (Medium) - Dynamic vs. Basic
  • 2-bit (Extra extra Small) - Dynamic vs. Basic
  • Dynamic Quantization trial output
  • Non Dynamic Quantization trial output

See performance comparison tables for Unsloth's Dynamic GGUF Quants vs Standard IMatrix Quants.

Read our full DeepSeek-R1 blogpost here: unsloth.ai/blog/deepseekr1-dynamic

1-bit (Small) - Dynamic vs. Basic

GGUF TypeQuantSize (GB)SeedPygameBackgroundAccelerate SPACEBird shapeLandTop right scorePipesBest ScoreQuitRunnableScoreAvg ScoreErrorsNotes
DynamicIQ1_S131340710.510.50.510.51107score =!inc SyntaxError: invalid syntaxSelects random shapes and colors at the start, but doesn't rotate across trials
DynamicIQ1_S1313408110.2510.510.51107.25score =B4 NameError: name 'B4' is not definedBetter - selects pipe colors randomnly, but all are just 1 color - should be different. Dropping to ground fails to reset acceleration.
DynamicIQ1_S131340910.50.50.50111106.56.92score =3D 0 SyntaxError: invalid decimal literalToo hard to play - acceleration too fast. Pipe colors now are random, but bird shape not changing. Land collison fails.
BasicIQ1_S133340700000000000No codeFully failed. Repeats "with Dark Colurs" forever
BasicIQ1_S133340800000000000No codeFully failed. Repeats "Pygame's" forever
BasicIQ1_S1333409000000000000No codeFully failed. Repeats "pipe_x = screen_height
pipe_x = screen_height
pipe_height = screen_height - Pipe_height" forever.

1-bit (Medium) - Dynamic vs. Basic

GGUF TypeQuantSize (GB)SeedPygameBackgroundAccelerate SPACEBird shapeLandTop right scorePipesBest ScoreQuitRunnableScoreAvg ScoreErrorsNotes
DynamicIQ1_M1583407110.7511111119.75NoneA bit fast and hard to play.
DynamicIQ1_M1583408110.511111119.5NoneVery good - land should be clearer. Acceleration should be slower.
DynamicIQ1_M158340910.510.50.510.511189.08NoneBackground color does not change across trials.Pipes do not touch the top. No land is seen.
BasicIQ1_M149340710000000102if game_over: NameError: name 'game_over' is not definedFully failed. Black screen only
BasicIQ1_M149340810000000102No codeFully failed. Black screen then closes.
BasicIQ1_M1493409100000000011.67window.fill((100, 100, 255)) Light Blue SyntaxError: invalid syntax && main() NameError: name 'main' is not defined.Fully failed.

2-bit (Extra extra Small) - Dynamic vs. Basic

GGUF TypeQuantSize (GB)SeedPygameBackgroundAccelerate SPACEBird shapeLandTop right scorePipesBest ScoreQuitRunnableScoreAvg ScoreErrorsNotes
DynamicIQ2_XXS1833407110.511111119.5NoneToo hard to play - acceleration too slow. Lags
DynamicIQ2_XXS18334081111110.50.5108global best_score SyntaxError: name 'best_score' is assigned to before global declarationHad to edit 2 lines - remove global best_score, and set pipe_list = []
DynamicIQ2_XXS18334091111111111109.17NoneExtremely good. Even makes pipes have random distances between them.
BasicIQ2_XXS175340710.50.50.5100.51005pipe_color = random.choice([(34, 139, 34), (139, 69, 19), (47, 47, 47)) SyntaxError: closing parenthesis ')' does not match opening parenthesis '[' && pygame.draw.polygon(screen, bird_color, points) ValueError: points argument must contain more than 2 pointsFails quiting. Same color. Collison detection a bit off. No score
BasicIQ2_XXS175340810.50.50.5110.51006pipes.append({'x': SCREEN_WIDTH, 'gap_y': random.randint(50, SCREEN_HEIGHT - 150)) SyntaxError: closing parenthesis ')' does not match opening parenthesis '{'Acceleration weird. Chooses 1 color per round. Cannot quit.
BasicIQ2_XXS1753409111111100.507.56.17screen = pygame.display.set_mode((SCREEN_WIDTH, SCREENHEIGHT)) NameError: name 'SCREENHEIGHT' is not defined. Did you mean: 'SCREEN_HEIGHT'?OK. Colors change. Best score does not update. Quit only ESC not Q.

Dynamic Quantization trial output

{% tabs %} {% tab title="IQ1_S code" %} {% file src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FqpBdpW55h5mNAzVoTxPI%2Finference_UD-IQ1_S_3407.txt?alt=media&token=37b19689-73e5-46d0-98be-352e515dfdf8" %}

{% file src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FTdIrJSqc2VbNJy1bf3w5%2Finference_UD-IQ1_S_3408.txt?alt=media&token=e11f73bb-80be-49e5-91e2-f3a1f5495dcd" %}

{% file src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FBk2ZwEIcLmvZQ3jlMLzw%2Finference_UD-IQ1_S_3409.txt?alt=media&token=052885f5-bee9-420d-a9c0-827412ac17c8" %} {% endtab %}

{% tab title="IQ1_M code" %} {% file src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2Ft7YmT1H3Nflcy5kAp1LE%2Finference_UD-IQ1_M_3407.txt?alt=media&token=6f62f911-3364-4f92-b311-c1fa9b759370" %}

{% file src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FH6BCTeWlJpUkfeEmeqpu%2Finference_UD-IQ1_M_3408.txt?alt=media&token=7727a999-8c0a-4baf-8542-be8686a01630" %}

{% file src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FvVJI0H2F9KTNj5kwUCtC%2Finference_UD-IQ1_M_3409.txt?alt=media&token=0f863d41-53d6-4c94-8d57-bf1eeb79ead5" %} {% endtab %}

{% tab title="IQ2_XXS code" %} {% file src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2F26jxRY5mWuon67OfvGtq%2Finference_UD-IQ2_XXS_3407.txt?alt=media&token=daf9bf7d-245e-4b54-b0c0-a6273833835a" %}

{% file src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FEhjjYN7vAh7gbmR8oXbS%2Finference_UD-IQ2_XXS_3408.txt?alt=media&token=4b50d6dd-2798-44c7-aa92-7e67c09868a4" %}

{% file src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FXwCSfIf16nTwHzcWepoV%2Finference_UD-IQ2_XXS_3409.txt?alt=media&token=2f7539c9-026d-41e7-b7c7-5738a89ae5d4" %} {% endtab %} {% endtabs %}

Non Dynamic Quantization trial output

{% tabs %} {% tab title="IQ1_S basic code" %} {% file src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FFtAMzAucSfKMkkmXItTj%2Finference_basic-IQ1_S_3407.txt?alt=media&token=76bfcf47-e1ce-442b-af49-6bfb6af7d046" %}

{% file src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2F4NhjCVFMwCwT2OCj0IJ5%2Finference_basic-IQ1_S_3408.txt?alt=media&token=d4715674-3347-400b-9eb6-ae5d4470feeb" %}

{% file src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2Fb0ZW3xs7R7IMryO7n7Yp%2Finference_basic-IQ1_S_3409.txt?alt=media&token=64b8825b-7103-4708-9d12-12770e43b546" %}

{% tab title="IQ1_M basic code" %} {% file src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FmZ2TsQEzoGjhGlqUjtmj%2Finference_basic-IQ1_M_3407.txt?alt=media&token=975a30d6-2d90-47eb-9d68-b50fd47337f7" %}

{% file src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FIx9TQ99Qpmk7BViNLFBl%2Finference_basic-IQ1_M_3408.txt?alt=media&token=b88e1e5b-4535-4d93-bd67-f81def7377d5" %}

{% file src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FDX7XYpJPxXKAMZeGhSrr%2Finference_basic-IQ1_M_3409.txt?alt=media&token=6da9127e-272b-4e74-b990-6657e25eea6b" %}

{% tab title="IQ2_XXS basic code" %} {% file src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FajsVHsVqlWpwHk7mY32t%2Finference_basic-IQ2_XXS_3407.txt?alt=media&token=cbbf36a2-0d6a-4a87-8232-45b0b7fcc588" %}

{% file src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2F4vjncPu2r2D7F5jVOC7I%2Finference_basic-IQ2_XXS_3408.txt?alt=media&token=9ed635a2-bf97-4f49-b26f-6e985d0ab1b7" %}

{% file src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FJmVOFgrRyXjY4lYZXE96%2Finference_basic-IQ2_XXS_3409.txt?alt=media&token=faad5bff-ba7f-41f1-abd5-7896f17a5b25" %}

{% endtab %} {% endtabs %}


Troubleshooting & FAQs

URL: llms-txt#troubleshooting-&-faqs

Contents:

  • Running in Unsloth works well, but after exporting & running on other platforms, the results are poor
  • Saving to GGUF / vLLM 16bit crashes
  • How do I manually save to GGUF?

Tips to solve issues, and frequently asked questions.

If you're still encountering any issues with versions or dependencies, please use our Docker image which will have everything pre-installed.

{% hint style="success" %} Try always to update Unsloth if you find any issues.

pip install --upgrade --force-reinstall --no-cache-dir --no-deps unsloth unsloth_zoo {% endhint %}

Running in Unsloth works well, but after exporting & running on other platforms, the results are poor

You might sometimes encounter an issue where your model runs and produces good results on Unsloth, but when you use it on another platform like Ollama or vLLM, the results are poor or you might get gibberish, endless/infinite generations or repeated outputs.

  • The most common cause of this error is using an incorrect chat template. Its essential to use the SAME chat template that was used when training the model in Unsloth and later when you run it in another framework, such as llama.cpp or Ollama. When inferencing from a saved model, it's crucial to apply the correct template.
  • It might also be because your inference engine adds an unnecessary "start of sequence" token (or the lack of thereof on the contrary) so ensure you check both hypotheses!
  • Use our conversational notebooks to force the chat template - this will fix most issues.

Saving to GGUF / vLLM 16bit crashes

You can try reducing the maximum GPU usage during saving by changing maximum_memory_usage.

The default is model.save_pretrained(..., maximum_memory_usage = 0.75). Reduce it to say 0.5 to use 50% of GPU peak memory or lower. This can reduce OOM crashes during saving.

How do I manually save to GGUF?

First save your model to 16bit via:

Compile llama.cpp from source like below:

Then, save the model to F16:

Examples:

Example 1 (python):

model.save_pretrained_merged("merged_model", tokenizer, save_method = "merged_16bit",)

Example 2 (bash):

apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggerganov/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=ON -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split llama-mtmd-cli
cp llama.cpp/build/bin/llama-* llama.cpp

Example 3 (bash):

python llama.cpp/convert_hf_to_gguf.py merged_model \
    --outfile model-F16.gguf --outtype f16 \
    --split-max-size 50G

DeepSeek-R1-0528: How to Run Locally

URL: llms-txt#deepseek-r1-0528:-how-to-run-locally

Contents:

  • ⚙️ Recommended Settings
    • 🐳 Official Recommended Settings:
    • 🔢 Chat template/prompt format
  • Model uploads
  • Run DeepSeek-R1-0528 Tutorials:
    • 🦙 Run in Ollama/Open WebUI
    • 🦙 Run Full R1-0528 on Ollama/Open WebUI
    • Run Qwen3 distilled R1 in llama.cpp
    • Run Full R1-0528 on llama.cpp

A guide on how to run DeepSeek-R1-0528 including Qwen3 on your own local device!

DeepSeek-R1-0528 is DeepSeek's new update to their R1 reasoning model. The full 671B parameter model requires 715GB of disk space. The quantized dynamic 1.66-bit version uses 162GB (-80% reduction in size). GGUF: DeepSeek-R1-0528-GGUF

DeepSeek also released a R1-0528 distilled version by fine-tuning Qwen3 (8B). The distill achieves similar performance to Qwen3 (235B). You can also fine-tune Qwen3 Distill with Unsloth. Qwen3 GGUF: DeepSeek-R1-0528-Qwen3-8B-GGUF

All uploads use Unsloth Dynamic 2.0 for SOTA 5-shot MMLU and KL Divergence performance, meaning you can run & fine-tune quantized DeepSeek LLMs with minimal accuracy loss.

Tutorials navigation:

Run in llama.cppRun in Ollama/Open WebUIFine-tuning R1-0528

{% hint style="success" %} NEW: Huge improvements to tool calling and chat template fixes.

New TQ1_0 dynamic 1.66-bit quant - 162GB in size. Ideal for 192GB RAM (including Mac) and Ollama users. Try: ollama run hf.co/unsloth/DeepSeek-R1-0528-GGUF:TQ1_0 {% endhint %}

For DeepSeek-R1-0528-Qwen3-8B, the model can pretty much fit in any setup, and even those with as less as 20GB RAM. There is no need for any prep beforehand.

However, for the full R1-0528 model which is 715GB in size, you will need extra prep. The 1.78-bit (IQ1_S) quant will fit in a 1x 24GB GPU (with all layers offloaded). Expect around 5 tokens/s with this setup if you have bonus 128GB RAM as well.

It is recommended to have at least 64GB RAM to run this quant (you will get 1 token/s without a GPU). For optimal performance you will need at least 180GB unified memory or 180GB combined RAM+VRAM for 5+ tokens/s.

We suggest using our 2.7bit (Q2_K_XL) or 2.4bit (IQ2_XXS) quant to balance size and accuracy! The 2.4bit one also works well.

{% hint style="success" %} Though not necessary, for the best performance, have your VRAM + RAM combined = to the size of the quant you're downloading. {% endhint %}

According to DeepSeek, these are the recommended settings for R1 (R1-0528 and Qwen3 distill should use the same settings) inference:

  • Set the temperature 0.6 to reduce repetition and incoherence.
  • Set top_p to 0.95 (recommended)
  • Run multiple tests and average results for reliable evaluation.

🔢 Chat template/prompt format

R1-0528 uses the same chat template as the original R1 model. You do not need to force <think>\n , but you can still add it in!

A BOS is forcibly added, and an EOS separates each interaction. To counteract double BOS tokens during inference, you should only call tokenizer.encode(..., add_special_tokens = False) since the chat template auto adds a BOS token as well.
For llama.cpp / GGUF inference, you should skip the BOS since itll auto add it:

The <think> and </think> tokens get their own designated tokens.

ALL our uploads - including those that are not imatrix-based or dynamic, utilize our calibration dataset, which is specifically optimized for conversational, coding, and language tasks.

We also uploaded IQ4_NL and Q4_1 quants which run specifically faster for ARM and Apple devices respectively.

MoE BitsType + LinkDisk SizeDetails
1.66bitTQ1_0162GB1.92/1.56bit
1.78bitIQ1_S185GB2.06/1.56bit
1.93bitIQ1_M200GB2.5/2.06/1.56
2.42bitIQ2_XXS216GB2.5/2.06bit
2.71bitQ2_K_XL251GB 3.5/2.5bit
3.12bitIQ3_XXS273GB 3.5/2.06bit
3.5bitQ3_K_XL296GB 4.5/3.5bit
4.5bitQ4_K_XL384GB 5.5/4.5bit
5.5bitQ5_K_XL481GB6.5/5.5bit

We've also uploaded versions in BF16 format, and original FP8 (float8) format.

Run DeepSeek-R1-0528 Tutorials:

🦙 Run in Ollama/Open WebUI

  1. Install ollama if you haven't already! You can only run models up to 32B in size. To run the full 720GB R1-0528 model, see here.

  2. Run the model! Note you can call ollama servein another terminal if it fails! We include all our fixes and suggested parameters (temperature etc) in params in our Hugging Face upload!

  3. (NEW) To run the full R1-0528 model in Ollama, you can use our TQ1_0 (162GB quant):

🦙 Run Full R1-0528 on Ollama/Open WebUI

Open WebUI has made an step-by-step tutorial on how to run R1 here and for R1-0528, you will just need to replace R1 with the new 0528 quant: docs.openwebui.com/tutorials/integrations/deepseekr1-dynamic/

(NEW) To run the full R1-0528 model in Ollama, you can use our TQ1_0 (162GB quant):

If you want to use any of the quants that are larger than TQ1_0 (162GB) on Ollama, you need to first merge the 3 GGUF split files into 1 like the code below. Then you will need to run the model locally.

Run Qwen3 distilled R1 in llama.cpp

  1. To run the full 720GB R1-0528 model, see here. Obtain the latest llama.cpp on GitHub here. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.

  2. Then use llama.cpp directly to download the model:

Run Full R1-0528 on llama.cpp

  1. Obtain the latest llama.cpp on GitHub here. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.

  2. If you want to use llama.cpp directly to load models, you can do the below: (:IQ1_S) is the quantization type. You can also download via Hugging Face (point 3). This is similar to ollama run . Use export LLAMA_CACHE="folder" to force llama.cpp to save to a specific location.

{% hint style="success" %} Please try out -ot ".ffn_.*_exps.=CPU" to offload all MoE layers to the CPU! This effectively allows you to fit all non MoE layers on 1 GPU, improving generation speeds. You can customize the regex expression to fit more layers if you have more GPU capacity.

If you have a bit more GPU memory, try -ot ".ffn_(up|down)_exps.=CPU" This offloads up and down projection MoE layers.

Try -ot ".ffn_(up)_exps.=CPU" if you have even more GPU memory. This offloads only up projection MoE layers.

And finally offload all layers via -ot ".ffn_.*_exps.=CPU" This uses the least VRAM.

You can also customize the regex, for example -ot "\.(6|7|8|9|[0-9][0-9]|[0-9][0-9][0-9])\.ffn_(gate|up|down)_exps.=CPU" means to offload gate, up and down MoE layers but only from the 6th layer onwards. {% endhint %}

  1. Download the model via (after installing pip install huggingface_hub hf_transfer ). You can choose UD-IQ1_S(dynamic 1.78bit quant) or other quantized versions like Q4_K_M . We recommend using our 2.7bit dynamic quant UD-Q2_K_XL to balance size and accuracy. More versions at: https://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF

{% code overflow="wrap" %}

Examples:

Example 1 (unknown):

<begin▁of▁sentence><User>What is 1+1?<Assistant>It's 2.<end▁of▁sentence><User>Explain more!<Assistant>

Example 2 (unknown):

<User>What is 1+1?<Assistant>

Example 3 (bash):

apt-get update
apt-get install pciutils -y
curl -fsSL https://ollama.com/install.sh | sh

Example 4 (bash):

ollama run hf.co/unsloth/DeepSeek-R1-0528-Qwen3-8B-GGUF:Q4_K_XL

GLM-4.6: How to Run Locally

URL: llms-txt#glm-4.6:-how-to-run-locally

Contents:

  • Unsloth Chat Template fixes
  • ⚙️ Recommended Settings
    • Official Recommended Settings
  • Run GLM-4.6 Tutorials:
    • 🦙 Run in Ollama
    • Run in llama.cpp

A guide on how to run Z.ai's new GLM-4.6 model on your own local device!

GLM-4.6 is the latest reasoning model from Z.ai, achieving SOTA performance on coding and agent benchmarks while offering improved conversational chats. The full 355B parameter model requires 400GB of disk space, while the Unsloth Dynamic 2-bit GGUF reduces the size to 135GB (-75%). GLM-4.6-GGUF

There is currently no smaller GLM-4.6-Air model available, however Z.ai's team says that it is expected soon.

{% hint style="success" %} We did multiple chat template fixes for GLM-4.6 to make llama.cpp/llama-cli --jinja work - please only use --jinja otherwise the output will be wrong!

You asked for benchmarks on our quants, so were showcasing Aider Polyglot results! Our Dynamic 3-bit DeepSeek V3.1 GGUF scores 75.6%, surpassing many full-precision SOTA LLMs. Read more. {% endhint %}

All uploads use Unsloth Dynamic 2.0 for SOTA 5-shot MMLU and Aider performance, meaning you can run & fine-tune quantized GLM LLMs with minimal accuracy loss.

Tutorials navigation:

Run in llama.cppRun in Ollama

Unsloth Chat Template fixes

One of the significant fixes we did addresses an issue with prompting GGUFs, where the second prompt wouldnt work. We fixed this issue however, this problem still persists in GGUFs without our fixes. For example, when using any non-Unsloth GLM-4.6 GGUF, the first conversation works fine, but the second one breaks.

Weve resolved this in our chat template, so when using our version, conversations beyond the second (third, fourth, etc.) work without any errors. There are still some issues with tool-calling, which we havent fully investigated yet due to bandwidth limitations. Weve already informed the GLM team about these remaining issues.

The 2-bit dynamic quant UD-Q2_K_XL uses 135GB of disk space - this works well in a 1x24GB card and 128GB of RAM with MoE offloading. The 1-bit UD-TQ1 GGUF also works natively in Ollama!

{% hint style="info" %} You must use --jinja for llama.cpp quants - this uses our fixed chat templates and enables the correct template! You might get incorrect results if you do not use --jinja {% endhint %}

The 4-bit quants will fit in a 1x 40GB GPU (with MoE layers offloaded to RAM). Expect around 5 tokens/s with this setup if you have bonus 165GB RAM as well. It is recommended to have at least 205GB RAM to run this 4-bit. For optimal performance you will need at least 205GB unified memory or 205GB combined RAM+VRAM for 5+ tokens/s. To learn how to increase generation speed and fit longer contexts, read here.

{% hint style="success" %} Though not a must, for best performance, have your VRAM + RAM combined equal to the size of the quant you're downloading. If not, hard drive / SSD offloading will work with llama.cpp, just inference will be slower. {% endhint %}

According to Z.ai, these are the recommended settings for GLM inference:

  • Set the temperature 1.0
  • Set top_p to 0.95 (recommended for coding)
  • Set top_k to 40 (recommended for coding)
  • 200K context length or less
  • Use --jinja for llama.cpp variants - we fixed some chat template issues as well!

Run GLM-4.6 Tutorials:

🦙 Run in Ollama

{% stepper %} {% step %} Install ollama if you haven't already! To run more variants of the model, see here.

{% step %} Run the model! Note you can call ollama servein another terminal if it fails! We include all our fixes and suggested parameters (temperature etc) in params in our Hugging Face upload!

{% step %} To run other quants, you need to first merge the GGUF split files into 1 like the code below. Then you will need to run the model locally.

{% endstep %} {% endstepper %}

Run in llama.cpp

{% stepper %} {% step %} Obtain the latest llama.cpp on GitHub here. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.

{% step %} If you want to use llama.cpp directly to load models, you can do the below: (:Q2_K_XL) is the quantization type. You can also download via Hugging Face (point 3). This is similar to ollama run . Use export LLAMA_CACHE="folder" to force llama.cpp to save to a specific location. Remember the model has only a maximum of 128K context length.

{% hint style="success" %} Please try out -ot ".ffn_.*_exps.=CPU" to offload all MoE layers to the CPU! This effectively allows you to fit all non MoE layers on 1 GPU, improving generation speeds. You can customize the regex expression to fit more layers if you have more GPU capacity.

If you have a bit more GPU memory, try -ot ".ffn_(up|down)_exps.=CPU" This offloads up and down projection MoE layers.

Try -ot ".ffn_(up)_exps.=CPU" if you have even more GPU memory. This offloads only up projection MoE layers.

And finally offload all layers via -ot ".ffn_.*_exps.=CPU" This uses the least VRAM.

You can also customize the regex, for example -ot "\.(6|7|8|9|[0-9][0-9]|[0-9][0-9][0-9])\.ffn_(gate|up|down)_exps.=CPU" means to offload gate, up and down MoE layers but only from the 6th layer onwards. {% endhint %}

{% step %} Download the model via (after installing pip install huggingface_hub hf_transfer ). You can choose UD-Q2_K_XL (dynamic 2bit quant) or other quantized versions like Q4_K_XL . We recommend using our 2.7bit dynamic quant UD-Q2_K_XL to balance size and accuracy.

Examples:

Example 1 (bash):

apt-get update
apt-get install pciutils -y
curl -fsSL https://ollama.com/install.sh | sh

Example 2 (unknown):

OLLAMA_MODELS=unsloth ollama serve &

OLLAMA_MODELS=unsloth ollama run hf.co/unsloth/GLM-4.6-GGUF:TQ1_0

Example 3 (bash):

./llama.cpp/llama-gguf-split --merge \
  GLM-4.6-GGUF/GLM-4.6-UD-Q2_K_XL/GLM-4.6-UD-Q2_K_XL-00001-of-00003.gguf \
	merged_file.gguf

Example 4 (bash):

OLLAMA_MODELS=unsloth ollama serve &

OLLAMA_MODELS=unsloth ollama run merged_file.gguf

Docker

URL: llms-txt#docker

Contents:

  • Quickstart
  • 📖 Usage Example

Install Unsloth using our official Docker container

Learn how to use our Docker containers with all dependencies pre-installed for immediate installation. No setup required, just run and start training!

Unsloth Docker image: unsloth/unsloth

{% hint style="success" %} You can now use our main Docker image unsloth/unsloth for Blackwell and 50-series GPUs - no separate image needed. {% endhint %}

{% stepper %} {% step %}

Install Docker and NVIDIA Container Toolkit.

Install Docker via Linux or Desktop (other).
Then install NVIDIA Container Toolkit:

export NVIDIA_CONTAINER_TOOLKIT_VERSION=1.17.8-1
sudo apt-get update && sudo apt-get install -y \
  nvidia-container-toolkit=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
  nvidia-container-toolkit-base=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
  libnvidia-container-tools=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
  libnvidia-container1=${NVIDIA_CONTAINER_TOOLKIT_VERSION}
{% endstep %}

Run the container.

unsloth/unsloth is Unsloth's only Docker image. For Blackwell and 50-series GPUs, use this same image - no separate one needed.

{% endstep %}

Access Jupyter Lab

Go to http://localhost:8888 and open Unsloth.

Access the unsloth-notebooks tabs to see Unsloth notebooks.

{% endstep %}

Start training with Unsloth

If you're new, follow our step-by-step Fine-tuning Guide, RL Guide or just save/copy any of our premade notebooks.

{% endstep %} {% endstepper %}

📂 Container Structure

  • /workspace/work/ — Your mounted work directory
  • /workspace/unsloth-notebooks/ — Example fine-tuning notebooks
  • /home/unsloth/ — User home directory

Setting up SSH Key

If you don't have an SSH key pair:

Examples:

Example 1 (bash):

docker run -d -e JUPYTER_PASSWORD="mypassword" \
  -p 8888:8888 -p 2222:22 \
  -v $(pwd)/work:/workspace/work \
  --gpus all \
  unsloth/unsloth

Example 2 (bash):

docker run -d -e JUPYTER_PORT=8000 \
  -e JUPYTER_PASSWORD="mypassword" \
  -e "SSH_KEY=$(cat ~/.ssh/container_key.pub)" \
  -e USER_PASSWORD="unsloth2024" \
  -p 8000:8000 -p 2222:22 \
  -v $(pwd)/work:/workspace/work \
  --gpus all \
  unsloth/unsloth

Datasets Guide

URL: llms-txt#datasets-guide

Contents:

  • What is a Dataset?
    • Data Format
  • Getting Started
  • Formatting the Data
    • Common Data Formats for LLM Training
    • Applying Chat Templates with Unsloth
    • Formatting Data Q&A
  • Synthetic Data Generation
    • Synthetic Dataset Notebook
    • Using a local LLM or ChatGPT for synthetic data

Learn how to create & prepare a dataset for fine-tuning.

What is a Dataset?

For LLMs, datasets are collections of data that can be used to train our models. In order to be useful for training, text data needs to be in a format that can be tokenized. You'll also learn how to use datasets inside of Unsloth.

One of the key parts of creating a dataset is your chat template and how you are going to design it. Tokenization is also important as it breaks text into tokens, which can be words, sub-words, or characters so LLMs can process it effectively. These tokens are then turned into embeddings and are adjusted to help the model understand the meaning and context.

To enable the process of tokenization, datasets need to be in a format that can be read by a tokenizer.

FormatDescription Training Type
Raw CorpusRaw text from a source such as a website, book, or article.Continued Pretraining (CPT)
InstructInstructions for the model to follow and an example of the output to aim for.Supervised fine-tuning (SFT)
ConversationMultiple-turn conversation between a user and an AI assistant.Supervised fine-tuning (SFT)
RLHFConversation between a user and an AI assistant, with the assistant's responses being ranked by a script, another model or human evaluator.Reinforcement Learning (RL)

{% hint style="info" %} It's worth noting that different styles of format exist for each of these types. {% endhint %}

Before we format our data, we want to identify the following:

{% stepper %} {% step %} Purpose of dataset

Knowing the purpose of the dataset will help us determine what data we need and format to use.

The purpose could be, adapting a model to a new task such as summarization or improving a model's ability to role-play a specific character. For example:

  • Chat-based dialogues (Q&A, learn a new language, customer support, conversations).
  • Structured tasks (classification, summarization, generation tasks).
  • Domain-specific data (medical, finance, technical). {% endstep %}

{% step %} Style of output

The style of output will let us know what sources of data we will use to reach our desired output.

For example, the type of output you want to achieve could be JSON, HTML, text or code. Or perhaps you want it to be Spanish, English or German etc. {% endstep %}

{% step %} Data source

When we know the purpose and style of the data we need, we need to analyze the quality and quantity of the data. Hugging Face and Wikipedia are great sources of datasets and Wikipedia is especially useful if you are looking to train a model to learn a language.

The Source of data can be a CSV file, PDF or even a website. You can also synthetically generate data but extra care is required to make sure each example is high quality and relevant. {% endstep %} {% endstepper %}

{% hint style="success" %} One of the best ways to create a better dataset is by combining it with a more generalized dataset from Hugging Face like ShareGPT to make your model smarter and diverse. You could also add synthetically generated data. {% endhint %}

Formatting the Data

When we have identified the relevant criteria, and collected the necessary data, we can then format our data into a machine readable format that is ready for training.

Common Data Formats for LLM Training

For continued pretraining, we use raw text format without specific structure:

This format preserves natural language flow and allows the model to learn from continuous text.

If we are adapting a model to a new task, and intend for the model to output text in a single turn based on a specific set of instructions, we can use Instruction format in Alpaca style

When we want multiple turns of conversation we can use the ShareGPT format:

The template format uses the "from"/"value" attribute keys and messages alternates between humanand gpt, allowing for natural dialogue flow.

The other common format is OpenAI's ChatML format and is what Hugging Face defaults to. This is probably the most used format, and alternates between user and assistant

Applying Chat Templates with Unsloth

For datasets that usually follow the common chatml format, the process of preparing the dataset for training or finetuning, consists of four simple steps:

  • Check the chat templates that Unsloth currently supports:\


This will print out the list of templates currently supported by Unsloth. Here is an example output:\

  • Use get_chat_template to apply the right chat template to your tokenizer:\

  • Define your formatting function. Here's an example:\



This function loops through your dataset applying the chat template you defined to each sample.\

  • Finally, let's load the dataset and apply the required modifications to our dataset: \


If your dataset uses the ShareGPT format with "from"/"value" keys instead of the ChatML "role"/"content" format, you can use the standardize_sharegpt function to convert it first. The revised code will now look as follows:
\

Formatting Data Q&A

Q: How can I use the Alpaca instruct format?

A: If your dataset is already formatted in the Alpaca format, then follow the formatting steps as shown in the Llama3.1 notebook . If you need to convert your data to the Alpaca format, one approach is to create a Python script to process your raw data. If you're working on a summarization task, you can use a local LLM to generate instructions and outputs for each example.

Q: Should I always use the standardize_sharegpt method?

A: Only use the standardize_sharegpt method if your target dataset is formatted in the sharegpt format, but your model expect a ChatML format instead.

\ Q: Why not use the apply_chat_template function that comes with the tokenizer.

A: The chat_template attribute when a model is first uploaded by the original model owners sometimes contains errors and may take time to be updated. In contrast, at Unsloth, we thoroughly check and fix any errors in the chat_template for every model when we upload the quantized versions to our repositories. Additionally, our get_chat_template and apply_chat_template methods offer advanced data manipulation features, which are fully documented on our Chat Templates documentation page.

Q: What if my template is not currently supported by Unsloth?

A: Submit a feature request on the unsloth github issues forum. As a temporary workaround, you could also use the tokenizer's own apply_chat_template function until your feature request is approved and merged.

Synthetic Data Generation

You can also use any local LLM like Llama 3.3 (70B) or OpenAI's GPT 4.5 to generate synthetic data. Generally, it is better to use a bigger like Llama 3.3 (70B) to ensure the highest quality outputs. You can directly use inference engines like vLLM, Ollama or llama.cpp to generate synthetic data but it will require some manual work to collect it and prompt for more data. There's 3 goals for synthetic data:

  • Produce entirely new data - either from scratch or from your existing dataset
  • Diversify your dataset so your model does not overfit and become too specific
  • Augment existing data e.g. automatically structure your dataset in the correct chosen format

Synthetic Dataset Notebook

We collaborated with Meta to launch a free notebook for creating Synthetic Datasets automatically using local models like Llama 3.2. Access the notebook here.

What the notebook does:

  • Auto-parses PDFs, websites, YouTube videos and more
  • Uses Metas Synthetic Data Kit + Llama 3.2 (3B) to generate QA pairs
  • Cleans and filters the data automatically
  • Fine-tunes the dataset with Unsloth + Llama
  • Notebook is fully done locally with no API calling necessary

Using a local LLM or ChatGPT for synthetic data

Your goal is to prompt the model to generate and process QA data that is in your specified format. The model will need to learn the structure that you provided and also the context so ensure you at least have 10 examples of data already. Examples prompts:

  • Prompt for generating more dialogue on an existing dataset:
Using the dataset example I provided, follow the structure and generate conversations based on the examples.
  
  • Prompt if you no have dataset:

{% code overflow="wrap" %}

{% endcode %}

  • Prompt for a dataset without formatting:

{% code overflow="wrap" %}

It is recommended to check the quality of generated data to remove or improve on irrelevant or poor-quality responses. Depending on your dataset it may also have to be balanced in many areas so your model does not overfit. You can then feed this cleaned dataset back into your LLM to regenerate data, now with even more guidance.

Dataset FAQ + Tips

How big should my dataset be?

We generally recommend using a bare minimum of at least 100 rows of data for fine-tuning to achieve reasonable results. For optimal performance, a dataset with over 1,000 rows is preferable, and in this case, more data usually leads to better outcomes. If your dataset is too small you can also add synthetic data or add a dataset from Hugging Face to diversify it. However, the effectiveness of your fine-tuned model depends heavily on the quality of the dataset, so be sure to thoroughly clean and prepare your data.

How should I structure my dataset if I want to fine-tune a reasoning model?

If you want to fine-tune a model that already has reasoning capabilities like the distilled versions of DeepSeek-R1 (e.g. DeepSeek-R1-Distill-Llama-8B), you will need to still follow question/task and answer pairs however, for your answer you will need to change the answer so it includes reasoning/chain-of-thought process and the steps it took to derive the answer.

For a model that does not have reasoning and you want to train it so that it later encompasses reasoning capabilities, you will need to utilize a standard dataset but this time without reasoning in its answers. This is training process is known as Reinforcement Learning and GRPO.

Multiple datasets

If you have multiple datasets for fine-tuning, you can either:

  • Standardize the format of all datasets, combine them into a single dataset, and fine-tune on this unified dataset.
  • Use the Multiple Datasets notebook to fine-tune on multiple datasets directly.

Can I fine-tune the same model multiple times?

You can fine-tune an already fine-tuned model multiple times, but it's best to combine all the datasets and perform the fine-tuning in a single process instead. Training an already fine-tuned model can potentially alter the quality and knowledge acquired during the previous fine-tuning process.

Using Datasets in Unsloth

See an example of using the Alpaca dataset inside of Unsloth on Google Colab:

We will now use the Alpaca Dataset created by calling GPT-4 itself. It is a list of 52,000 instructions and outputs which was very popular when Llama-1 was released, since it made finetuning a base LLM be competitive with ChatGPT itself.

You can access the GPT4 version of the Alpaca dataset here. Below shows some examples of the dataset:

You can see there are 3 columns in each row - an instruction, and input and an output. We essentially combine each row into 1 large prompt like below. We then use this to finetune the language model, and this made it very similar to ChatGPT. We call this process supervised instruction finetuning.

Multiple columns for finetuning

But a big issue is for ChatGPT style assistants, we only allow 1 instruction / 1 prompt, and not multiple columns / inputs. For example in ChatGPT, you can see we must submit 1 prompt, and not multiple prompts.

This essentially means we have to "merge" multiple columns into 1 large prompt for finetuning to actually function!

For example the very famous Titanic dataset has many many columns. Your job was to predict whether a passenger has survived or died based on their age, passenger class, fare price etc. We can't simply pass this into ChatGPT, but rather, we have to "merge" this information into 1 large prompt.

For example, if we ask ChatGPT with our "merged" single prompt which includes all the information for that passenger, we can then ask it to guess or predict whether the passenger has died or survived.

Other finetuning libraries require you to manually prepare your dataset for finetuning, by merging all your columns into 1 prompt. In Unsloth, we simply provide the function called to_sharegpt which does this in 1 go!

Now this is a bit more complicated, since we allow a lot of customization, but there are a few points:

  • You must enclose all columns in curly braces {}. These are the column names in the actual CSV / Excel file.
  • Optional text components must be enclosed in [[]]. For example if the column "input" is empty, the merging function will not show the text and skip this. This is useful for datasets with missing values.
  • Select the output or target / prediction column in output_column_name. For the Alpaca dataset, this will be output.

For example in the Titanic dataset, we can create a large merged prompt format like below, where each column / piece of text becomes optional.

For example, pretend the dataset looks like this with a lot of missing data:

Embarked Age Fare
S 23
18 7.25

Then, we do not want the result to be:

  1. The passenger embarked from S. Their age is 23. Their fare is EMPTY.
  2. The passenger embarked from EMPTY. Their age is 18. Their fare is $7.25.

Instead by optionally enclosing columns using [[]], we can exclude this information entirely.

  1. The passenger embarked from S. Their age is 23. [[Their fare is EMPTY.]]

  2. [[The passenger embarked from EMPTY.]] Their age is 18. Their fare is $7.25.

  3. The passenger embarked from S. Their age is 23.

  4. Their age is 18. Their fare is $7.25.

Multi turn conversations

A bit issue if you didn't notice is the Alpaca dataset is single turn, whilst remember using ChatGPT was interactive and you can talk to it in multiple turns. For example, the left is what we want, but the right which is the Alpaca dataset only provides singular conversations. We want the finetuned language model to somehow learn how to do multi turn conversations just like ChatGPT.

So we introduced the conversation_extension parameter, which essentially selects some random rows in your single turn dataset, and merges them into 1 conversation! For example, if you set it to 3, we randomly select 3 rows and merge them into 1! Setting them too long can make training slower, but could make your chatbot and final finetune much better!

Then set output_column_name to the prediction / output column. For the Alpaca dataset dataset, it would be the output column.

We then use the standardize_sharegpt function to just make the dataset in a correct format for finetuning! Always call this!

Vision Fine-tuning

The dataset for fine-tuning a vision or multimodal model also includes image inputs. For example, the Llama 3.2 Vision Notebook uses a radiography case to show how AI can help medical professionals analyze X-rays, CT scans, and ultrasounds more efficiently.

We'll be using a sampled version of the ROCO radiography dataset. You can access the dataset here. The dataset includes X-rays, CT scans and ultrasounds showcasing medical conditions and diseases. Each image has a caption written by experts describing it. The goal is to finetune a VLM to make it a useful analysis tool for medical professionals.

Let's take a look at the dataset, and check what the 1st example shows:

Image Caption

Panoramic radiography shows an osteolytic lesion in the right posterior maxilla with resorption of the floor of the maxillary sinus (arrows).

To format the dataset, all vision finetuning tasks should be formatted as follows:

We will craft an custom instruction asking the VLM to be an expert radiographer. Notice also instead of just 1 instruction, you can add multiple turns to make it a dynamic conversation.

Let's convert the dataset into the "correct" format for finetuning:

The first example is now structured like below:

{% code overflow="wrap" %}

Before we do any finetuning, maybe the vision model already knows how to analyse the images? Let's check if this is the case!

For more details, view our dataset section in the notebook here.

Examples:

Example 1 (json):

"text": "Pasta carbonara is a traditional Roman pasta dish. The sauce is made by mixing raw eggs with grated Pecorino Romano cheese and black pepper. The hot pasta is then tossed with crispy guanciale (cured pork cheek) and the egg mixture, creating a creamy sauce from the residual heat. Despite popular belief, authentic carbonara never contains cream or garlic. The dish likely originated in Rome in the mid-20th century, though its exact origins are debated..."

Example 2 (json):

"Instruction": "Task we want the model to perform."

"Input": "Optional, but useful, it will essentially be the user's query."

"Output": "The expected result of the task and the output of the model."

Example 3 (json):

{
  "conversations": [
    {
      "from": "human",
      "value": "Can you help me make pasta carbonara?"
    },
    {
      "from": "gpt",
      "value": "Would you like the traditional Roman recipe, or a simpler version?"
    },
    {
      "from": "human",
      "value": "The traditional version please"
    },
    {
      "from": "gpt",
      "value": "The authentic Roman carbonara uses just a few ingredients: pasta, guanciale, eggs, Pecorino Romano, and black pepper. Would you like the detailed recipe?"
    }
  ]
}

Example 4 (unknown):

{
  "messages": [
    {
      "role": "user",
      "content": "What is 1+1?"
    },
    {
      "role": "assistant",
      "content": "It's 2!"
    },
  ]
}

Unsloth Requirements

URL: llms-txt#unsloth-requirements

Contents:

  • System Requirements
  • Fine-tuning VRAM requirements:

Here are Unsloth's requirements including system and GPU VRAM requirements.

System Requirements

  • Operating System: Works on Linux and Windows.
  • Supports NVIDIA GPUs since 2018+ including Blackwell RTX 50 and DGX Spark.
    Minimum CUDA Capability 7.0 (V100, T4, Titan V, RTX 20 & 50, A100, H100, L40 etc) Check your GPU! GTX 1070, 1080 works, but is slow.
  • The official Unsloth Docker image unsloth/unsloth is available on Docker Hub.
  • Unsloth works on AMD and Intel GPUs! Apple/Silicon/MLX is in the works.
  • If you have different versions of torch, transformers etc., pip install unsloth will automatically install all the latest versions of those libraries so you don't need to worry about version compatibility.
  • Your device should have xformers, torch, BitsandBytes and triton support.

{% hint style="info" %} Python 3.13 is now supported! {% endhint %}

Fine-tuning VRAM requirements:

How much GPU memory do I need for LLM fine-tuning using Unsloth?

{% hint style="info" %} A common issue when you OOM or run out of memory is because you set your batch size too high. Set it to 1, 2, or 3 to use less VRAM.

For context length benchmarks, see here. {% endhint %}

Check this table for VRAM requirements sorted by model parameters and fine-tuning method. QLoRA uses 4-bit, LoRA uses 16-bit. Keep in mind that sometimes more VRAM is required depending on the model so these numbers are the absolute minimum:

Model parameters QLoRA (4-bit) VRAM LoRA (16-bit) VRAM
3B 3.5 GB 8 GB
7B 5 GB 19 GB
8B 6 GB 22 GB
9B 6.5 GB 24 GB
11B 7.5 GB 29 GB
14B 8.5 GB 33 GB
27B 22GB 64GB
32B 26 GB 76 GB
40B 30GB 96GB
70B 41 GB 164 GB
81B 48GB 192GB
90B 53GB 212GB
405B 237 GB 950 GB

vLLM Engine Arguments

URL: llms-txt#vllm-engine-arguments

Contents:

  • :tada:Float8 Quantization
  • :shaved_ice:LoRA Hot Swapping / Dynamic LoRAs

vLLM engine arguments, flags, options for serving models on vLLM.

ArgumentExample and use-case
--gpu-memory-utilizationDefault 0.9. How much VRAM usage vLLM can use. Reduce if going out of memory. Try setting this to 0.95 or 0.97.
--max-model-lenSet maximum sequence length. Reduce this if going out of memory! For example set --max-model-len 32768 to use only 32K sequence lengths.
--quantizationUse fp8 for dynamic float8 quantization. Use this in tandem with --kv-cache-dtype fp8 to enable float8 KV cache as well.
--kv-cache-dtypeUse fp8 for float8 KV cache to reduce memory usage by 50%.
--portDefault is 8000. How to access vLLM's localhost ie http://localhost:8000
--api-keyOptional - Set the password (or no password) to access the model.
--tensor-parallel-sizeDefault is 1. Splits model across tensors. Set this to how many GPUs you are using - if you have 4, set this to 4. 8, then 8. You should have NCCL, otherwise this might be slow.
--pipeline-parallel-sizeDefault is 1. Splits model across layers. Use this with --pipeline-parallel-size where TP is used within each node, and PP is used across multi-node setups (set PP to number of nodes)
--enable-loraEnables LoRA serving. Useful for serving Unsloth finetuned LoRAs.
--max-lorasHow many LoRAs you want to serve at 1 time. Set this to 1 for 1 LoRA, or say 16. This is a queue so LoRAs can be hot-swapped.
--max-lora-rankMaximum rank of all LoRAs. Possible choices are 8, 16, 32, 64, 128, 256, 320, 512
--dtypeAllows auto, bfloat16, float16 Float8 and other quantizations use a different flag - see --quantization
--tokenizerSpecify the tokenizer path like unsloth/gpt-oss-20b if the served model has a different tokenizer.
--hf-tokenAdd your HuggingFace token if needed for gated models
--swap-spaceDefault is 4GB. CPU offloading usage. Reduce if you have VRAM, or increase for low memory GPUs.
--seedDefault is 0 for vLLM
--disable-log-statsDisables logging like throughput, server requests.
--enforce-eagerDisables compilation. Faster to load, but slower for inference.
--disable-cascade-attnUseful for Reinforcement Learning runs for vLLM < 0.11.0, as Cascade Attention was slightly buggy on A100 GPUs (Unsloth fixes this)

:tada:Float8 Quantization

For example to host Llama 3.3 70B Instruct (supports 128K context length) with Float8 KV Cache and quantization, try:

:shaved_ice:LoRA Hot Swapping / Dynamic LoRAs

To enable LoRA serving for at most 4 LoRAs at 1 time (these are hot swapped / changed), first set the environment flag to allow hot swapping:

Then, serve it with LoRA support:

To load a LoRA dynamically (set the lora name as well), do:

To remove it from the pool:

Examples:

Example 1 (bash):

vllm serve unsloth/Llama-3.3-70B-Instruct \
    --quantization fp8 \
    --kv-cache-dtype fp8
    --gpu-memory-utilization 0.97 \
    --max-model-len 65536

Example 2 (bash):

export VLLM_ALLOW_RUNTIME_LORA_UPDATING=True

Example 3 (bash):

export VLLM_ALLOW_RUNTIME_LORA_UPDATING=True
vllm serve unsloth/Llama-3.3-70B-Instruct \
    --quantization fp8 \
    --kv-cache-dtype fp8
    --gpu-memory-utilization 0.97 \
    --max-model-len 65536 \
    --enable-lora \
    --max-loras 4 \
    --max-lora-rank 64

Example 4 (bash):

curl -X POST http://localhost:8000/v1/load_lora_adapter \
    -H "Content-Type: application/json" \
    -d '{
        "lora_name": "LORA_NAME",
        "lora_path": "/path/to/LORA"
    }'

QwQ-32B: How to Run effectively

URL: llms-txt#qwq-32b:-how-to-run-effectively

Contents:

  • ⚙️ Official Recommended Settings
  • 👍 Recommended settings for llama.cpp
  • ☀️ Dry Repetition Penalty
  • 🦙 Tutorial: How to Run QwQ-32B in Ollama
  • 📖 Tutorial: How to Run QwQ-32B in llama.cpp

How to run QwQ-32B effectively with our bug fixes and without endless generations + GGUFs.

Qwen released QwQ-32B - a reasoning model with performance comparable to DeepSeek-R1 on many benchmarks. However, people have been experiencing infinite generations, many repetitions, <think> token issues and finetuning issues. We hope this guide will help debug and fix most issues!

{% hint style="info" %} Our model uploads with our bug fixes work great for fine-tuning, vLLM and Transformers. If you're using llama.cpp and engines that use llama.cpp as backend, follow our instructions here to fix endless generations. {% endhint %}

Unsloth QwQ-32B uploads with our bug fixes:

GGUF Dynamic 4-bit BnB 4-bit 16-bit

According to Qwen, these are the recommended settings for inference:

  • Temperature of 0.6
  • Top_K of 40 (or 20 to 40)
  • Min_P of 0.00 (optional, but 0.01 works well, llama.cpp default is 0.1)
  • Top_P of 0.95
  • Repetition Penalty of 1.0. (1.0 means disabled in llama.cpp and transformers)
  • Chat template: <|im_start|>user\nCreate a Flappy Bird game in Python.<|im_end|>\n<|im_start|>assistant\n<think>\n

{% hint style="warning" %} llama.cpp uses min_p = 0.1by default, which might cause issues. Force it to 0.0. {% endhint %}

We noticed many people use a Repetition Penalty greater than 1.0. For example 1.1 to 1.5. This actually interferes with llama.cpp's sampling mechanisms. The goal of a repetition penalty is to penalize repeated generations, but we found this doesn't work as expected.

Turning off Repetition Penalty also works (ie setting it to 1.0), but we found using it to be useful to penalize endless generations.

To use it, we found you must also edit the ordering of samplers in llama.cpp to before applying Repetition Penalty, otherwise there will be endless generations. So add this:

By default, llama.cpp uses this ordering:

We reorder essentially temperature and dry, and move min_p forward. This means we apply samplers in this order:

If you still encounter issues, you can increase the--repeat-penalty 1.0 to 1.2 or 1.3.

Courtesy to @krist486 for bringing llama.cpp sampling directions to my attention.

☀️ Dry Repetition Penalty

We investigated usage of dry penalty as suggested in https://github.com/ggml-org/llama.cpp/blob/master/examples/main/README.md using a value of 0.8, but we actually found this to rather cause syntax issues especially for coding. If you still encounter issues, you can increase thedry penalty to 0.8.

Utilizing our swapped sampling ordering can also help if you decide to use dry penalty.

🦙 Tutorial: How to Run QwQ-32B in Ollama

  1. Install ollama if you haven't already!

  2. Run run the model! Note you can call ollama servein another terminal if it fails! We include all our fixes and suggested parameters (temperature, min_p etc) in param in our Hugging Face upload!

📖 Tutorial: How to Run QwQ-32B in llama.cpp

  1. Obtain the latest llama.cpp on GitHub here. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.

  2. Download the model via (after installing pip install huggingface_hub hf_transfer ). You can choose Q4_K_M, or other quantized versions (like BF16 full precision). More versions at: https://huggingface.co/unsloth/QwQ-32B-GGUF

Examples:

Example 1 (bash):

--samplers "top_k;top_p;min_p;temperature;dry;typ_p;xtc"

Example 2 (bash):

--samplers "dry;top_k;typ_p;top_p;min_p;xtc;temperature"

Example 3 (bash):

top_k=40
top_p=0.95
min_p=0.0
temperature=0.6
dry
typ_p
xtc

Example 4 (bash):

apt-get update
apt-get install pciutils -y
curl -fsSL https://ollama.com/install.sh | sh

Qwen3-VL: How to Run & Fine-tune

URL: llms-txt#qwen3-vl:-how-to-run-&-fine-tune

Contents:

  • 🖥️ Running Qwen3-VL
    • ⚙️ Recommended Settings
    • :bug:Chat template bug fixes
    • 📖 Llama.cpp: Run Qwen3-VL Tutorial

Learn to fine-tune and run Qwen3-VL locally with Unsloth.

Qwen3-VL is Qwens new vision models with instruct and thinking versions. The 2B, 4B, 8B and 32B models are dense, while 30B and 235B are MoE. The 235B thinking LLM delivers SOTA vision and coding performance rivaling GPT-5 (high) and Gemini 2.5 Pro.

Qwen3-VL has vision, video and OCR capabilities as well as 256K context (can be extended to 1M).

Unsloth supports Qwen3-VL fine-tuning and RL. Train Qwen3-VL (8B) for free with our notebooks.

Running Qwen3-VLFine-tuning Qwen3-VL

Qwen3-VL Unsloth uploads:

Qwen3-VL is now supported for GGUFs by llama.cpp as of 30th October 2025, so you can run them locally!

Dynamic GGUFs (to run) 4-bit BnB Unsloth Dynamic 16-bit full-precision

🖥️ Running Qwen3-VL

To run the model in llama.cpp, vLLM, Ollama etc., here are the recommended settings:

Qwen recommends these settings for both models (they're a bit different for Instruct vs Thinking):

Instruct Settings: Thinking Settings:
Temperature = 0.7 Temperature = 1.0
Top_P = 0.8 Top_P = 0.95
presence_penalty = 1.5 presence_penalty = 0.0
Output Length = 32768 (up to 256K) Output Length = 40960 (up to 256K)
Top_K = 20 Top_K = 20

Qwen3-VL also used the below settings for their benchmarking numbers, as mentioned on GitHub.

{% columns %} {% column %} Instruct Settings:

{% column %} Thinking Settings:

{% endcolumn %} {% endcolumns %}

:bug:Chat template bug fixes

At Unsloth, we care about accuracy the most, so we investigated why after the 2nd turn of running the Thinking models, llama.cpp would break, as seen below:

{% columns %} {% column %}

{% column %} The error code:

{% endcolumn %} {% endcolumns %}

We have successfully fixed the Thinking chat template for the VL models so we re-uploaded all Thinking quants and Unsloth's quants. They should now all work after the 2nd conversation - other quants will fail to load after the 2nd conversation.

📖 Llama.cpp: Run Qwen3-VL Tutorial

  1. Obtain the latest llama.cpp on GitHub here. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.

  2. Let's first get an image! You can also upload images as well. We shall use https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/unsloth%20made%20with%20love.png, which is just our mini logo showing how finetunes are made with Unsloth:

  1. Let's download this image

{% code overflow="wrap" %}

  1. Let's get the 2nd image at https://files.worldwildlife.org/wwfcmsprod/images/Sloth_Sitting_iStock_3_12_2014/story_full_width/8l7pbjmj29_iStock_000011145477Large_mini__1_.jpg

{% code overflow="wrap" %}

  1. Then, let's use llama.cpp's auto model downloading feature, try this for the 8B Instruct model:

  2. Once in, you will see the below screen:

  1. Load up the image via /image PATH ie /image unsloth.png then press ENTER
  1. When you hit ENTER, it'll say "unsloth.png image loaded"
  1. Now let's ask a question like "What is this image?":
  1. Now load in picture 2 via /image picture.png then hit ENTER and ask "What is this image?"
  1. And finally let's ask how are both images are related (it works!)

{% code overflow="wrap" %}

  1. You can also download the model via (after installing pip install huggingface_hub hf_transfer ) HuggingFace's snapshot_download which is useful for large model downloads, since llama.cpp's auto downloader might lag. You can choose Q4_K_M, or other quantized versions.

Examples:

Example 1 (bash):

export greedy='false'
export seed=3407
export top_p=0.8
export top_k=20
export temperature=0.7
export repetition_penalty=1.0
export presence_penalty=1.5
export out_seq_length=32768

Example 2 (bash):

export greedy='false'
export seed=1234
export top_p=0.95
export top_k=20
export temperature=1.0
export repetition_penalty=1.0
export presence_penalty=0.0
export out_seq_length=40960

Example 3 (unknown):

terminate called after throwing an instance of 'std::runtime_error'
  what():  Value is not callable: null at row 63, column 78:
            {%- if '</think>' in content %}
                {%- set reasoning_content = ((content.split('</think>')|first).rstrip('\n').split('<think>')|last).lstrip('\n') %}
                                                                             ^

Example 4 (bash):

apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first
cp llama.cpp/build/bin/llama-* llama.cpp

Main game loop:

URL: llms-txt#main-game-loop:

Contents:

  • 🌄 Still doesn't work? Try Min_p = 0.1, Temperature = 1.5
  • 🤔 <think> token not shown?
  • Extra Notes
  • ✏️ Tokenizer Bug Fixes
  • :tools: Dynamic 4-bit Quants

while running : for event in pygame.event.get() : if quit ... etc

pygame.quit() print("Code is simplified. Due time constraints, full working version requires further implementation.") bash ./llama.cpp/llama-cli --model unsloth-QwQ-32B-GGUF/QwQ-32B-Q4_K_M.gguf
--threads 32 --n-gpu-layers 99
--ctx-size 16384
--temp 1.5
--min-p 0.1
--top-k 0
--top-p 1.0
-no-cnv
--prompt "<|im_start|>user\nCreate a Flappy Bird game in Python. You must include these things:\n1. You must use pygame.\n2. The background color should be randomly chosen and is a light shade. Start with a light blue color.\n3. Pressing SPACE multiple times will accelerate the bird.\n4. The bird's shape should be randomly chosen as a square, circle or triangle. The color should be randomly chosen as a dark color.\n5. Place on the bottom some land colored as dark brown or yellow chosen randomly.\n6. Make a score shown on the top right side. Increment if you pass pipes and don't hit them.\n7. Make randomly spaced pipes with enough space. Color them randomly as dark green or light brown or a dark gray shade.\n8. When you lose, show the best score. Make the text inside the screen. Pressing q or Esc will quit the game. Restarting is pressing SPACE again.\nThe final game should be inside a markdown section in Python. Check your code for errors and fix them before the final markdown section.<|im_end|>\n<|im_start|>assistant\n\n" bash ./llama.cpp/llama-cli --model unsloth-QwQ-32B-GGUF/QwQ-32B-Q4_K_M.gguf
--threads 32 --n-gpu-layers 99
--ctx-size 16384
--temp 0.6
--min-p 0.0
--top-k 40
--top-p 0.95
-no-cnv
--prompt "<|im_start|>user\nCreate a Flappy Bird game in Python. You must include these things:\n1. You must use pygame.\n2. The background color should be randomly chosen and is a light shade. Start with a light blue color.\n3. Pressing SPACE multiple times will accelerate the bird.\n4. The bird's shape should be randomly chosen as a square, circle or triangle. The color should be randomly chosen as a dark color.\n5. Place on the bottom some land colored as dark brown or yellow chosen randomly.\n6. Make a score shown on the top right side. Increment if you pass pipes and don't hit them.\n7. Make randomly spaced pipes with enough space. Color them randomly as dark green or light brown or a dark gray shade.\n8. When you lose, show the best score. Make the text inside the screen. Pressing q or Esc will quit the game. Restarting is pressing SPACE again.\nThe final game should be inside a markdown section in Python. Check your code for errors and fix them before the final markdown section.<|im_end|>\n<|im_start|>assistant\n\n"

{%- if tools %} {{- '<|im_start|>system\n' }} {%- if messages[0]['role'] == 'system' %} {{- messages[0]['content'] }} {%- else %} {{- '' }} {%- endif %} {{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within XML tags:\n" }} {%- for tool in tools %} {{- "\n" }} {{- tool | tojson }} {%- endfor %} {{- "\n\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{"name": , "arguments": }\n</tool_call><|im_end|>\n" }} {%- else %} {%- if messages[0]['role'] == 'system' %} {{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }} {%- endif %} {%- endif %} {%- for message in messages %} {%- if (message.role == "user") or (message.role == "system" and not loop.first) %} {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }} {%- elif message.role == "assistant" and not message.tool_calls %} {%- set content = message.content.split('')[-1].lstrip('\n') %} {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }} {%- elif message.role == "assistant" %} {%- set content = message.content.split('')[-1].lstrip('\n') %} {{- '<|im_start|>' + message.role }} {%- if message.content %} {{- '\n' + content }} {%- endif %} {%- for tool_call in message.tool_calls %} {%- if tool_call.function is defined %} {%- set tool_call = tool_call.function %} {%- endif %} {{- '\n<tool_call>\n{"name": "' }} {{- tool_call.name }} {{- '", "arguments": ' }} {{- tool_call.arguments | tojson }} {{- '}\n</tool_call>' }} {%- endfor %} {{- '<|im_end|>\n' }} {%- elif message.role == "tool" %} {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %} {{- '<|im_start|>user' }} {%- endif %} {{- '\n<tool_response>\n' }} {{- message.content }} {{- '\n</tool_response>' }} {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %} {{- '<|im_end|>\n' }} {%- endif %} {%- endif %} {%- endfor %} {%- if add_generation_prompt %} {{- '<|im_start|>assistant\n\n' }} {%- endif %}

{%- if tools %} {{- '<|im_start|>system\n' }} {%- if messages[0]['role'] == 'system' %} {{- messages[0]['content'] }} {%- else %} {{- '' }} {%- endif %} {{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within XML tags:\n" }} {%- for tool in tools %} {{- "\n" }} {{- tool | tojson }} {%- endfor %} {{- "\n\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{"name": , "arguments": }\n</tool_call><|im_end|>\n" }} {%- else %} {%- if messages[0]['role'] == 'system' %} {{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }} {%- endif %} {%- endif %} {%- for message in messages %} {%- if (message.role == "user") or (message.role == "system" and not loop.first) %} {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }} {%- elif message.role == "assistant" and not message.tool_calls %} {%- set content = message.content.split('')[-1].lstrip('\n') %} {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }} {%- elif message.role == "assistant" %} {%- set content = message.content.split('')[-1].lstrip('\n') %} {{- '<|im_start|>' + message.role }} {%- if message.content %} {{- '\n' + content }} {%- endif %} {%- for tool_call in message.tool_calls %} {%- if tool_call.function is defined %} {%- set tool_call = tool_call.function %} {%- endif %} {{- '\n<tool_call>\n{"name": "' }} {{- tool_call.name }} {{- '", "arguments": ' }} {{- tool_call.arguments | tojson }} {{- '}\n</tool_call>' }} {%- endfor %} {{- '<|im_end|>\n' }} {%- elif message.role == "tool" %} {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %} {{- '<|im_start|>user' }} {%- endif %} {{- '\n<tool_response>\n' }} {{- message.content }} {{- '\n</tool_response>' }} {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %} {{- '<|im_end|>\n' }} {%- endif %} {%- endif %} {%- endfor %} {%- if add_generation_prompt %} {{- '<|im_start|>assistant\n' }} {%- endif %} json { ..., "rope_scaling": { "factor": 4.0, "original_max_position_embeddings": 32768, "type": "yarn" } } bash --override-kv qwen2.context_length=int:131072
--override-kv qwen2.rope.scaling.type=str:yarn
--override-kv qwen2.rope.scaling.factor=float:4
--override-kv qwen2.rope.scaling.original_context_length=int:32768
--override-kv qwen2.rope.scaling.attn_factor=float:1.13862943649292
bash --override-kv qwen2.attention.layer_norm_rms_epsilon=float:0.000001 \

"eos_token": "<|im_end|>", "pad_token": "<|endoftext|>",


## :tools: Dynamic 4-bit Quants

We also uploaded dynamic 4bit quants which increase accuracy vs naive 4bit quantizations! We attach the QwQ quantization error plot analysis for both activation and weight quantization errors:

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2F32wjrIWeUEQTMq9PhmbS%2FQwQ%20quantization%20errors.png?alt=media&#x26;token=0733fd33-9fe9-4aad-812c-75dbad00373f" alt=""><figcaption></figcaption></figure>

We uploaded dynamic 4-bit quants to: <https://huggingface.co/unsloth/QwQ-32B-unsloth-bnb-4bit>

Since vLLM 0.7.3 (2025 February 20th) <https://github.com/vllm-project/vllm/releases/tag/v0.7.3>, vLLM now supports loading Unsloth dynamic 4bit quants!

All our GGUFs are at <https://huggingface.co/unsloth/QwQ-32B-GGUF>!

**Examples:**

Example 1 (unknown):
```unknown
9. You might be wondering maybe it's Q4\_K\_M? B16 ie full precision should work fine right? Incorrect - the outputs again fail if we do not use our fix of -`-samplers "top_k;top_p;min_p;temperature;dry;typ_p;xtc"` when using a Repetition Penalty.

## :sunrise\_over\_mountains: Still doesn't work? Try Min\_p = 0.1, Temperature = 1.5

According to the Min\_p paper <https://arxiv.org/pdf/2407.01082>, for more creative and diverse outputs, and if you still see repetitions, try disabling top\_p and top\_k!

Example 2 (unknown):

Another approach is to disable `min_p` directly, since llama.cpp by default uses `min_p = 0.1`!

Example 3 (unknown):

## :thinking: \<think> token not shown?

Some people are reporting that because \<think> is default added in the chat template, some systems are not outputting the thinking traces correctly. You will have to manually edit the Jinja template from:

{% code overflow="wrap" %}

Example 4 (unknown):

{% endcode %}

to another by removing the `<think>\n` at the end. The model will now have to manually add `<think>\n` during inference, which might not always succeed. DeepSeek also edited all models to default add a `<think>` token to force the model to go into reasoning model.

So change `{%- if add_generation_prompt %} {{- '<|im_start|>assistant\n<think>\n' }} {%- endif %}` to `{%- if add_generation_prompt %} {{- '<|im_start|>assistant\n' }} {%- endif %}`  ie remove `<think>\n`

<details>

<summary>Full jinja template with removed &#x3C;think>\n part</summary>

{% code overflow="wrap" %}

Push to Hugging Face Hub (requires a token)

URL: llms-txt#push-to-hugging-face-hub-(requires-a-token)

Contents:

  • Video Tutorials

model.push_to_hub_merged( "your-username/model-name", tokenizer, save_method="merged_16bit", token="your-token" ) python model.push_to_hub_gguf( "your-username/model-name", tokenizer, quantization_method=["q4_k_m", "q8_0", "q5_k_m"], token="your-token", )


Once saved in GGUF format, the model can be easily deployed in lightweight environments using **llama.cpp** or used in other inference engines.
{% endstep %}
{% endstepper %}

Here are some video tutorials created by amazing YouTubers who we think are fantastic!

{% embed url="<https://www.youtube.com/watch?v=SoPE1cUz3Hs>" %}
Local GRPO on your own device
{% endembed %}

{% embed url="<https://www.youtube.com/watch?t=3289s&v=bbFEYPx9Hpo>" %}
Great to learn about how to prep your dataset and explanations behind Reinforcement Learning + GRPO basics
{% endembed %}

{% embed url="<https://www.youtube.com/watch?v=juOh1afy-IE>" %}

{% embed url="<https://www.youtube.com/watch?v=oF0_eMhzRaQ>" %}

**Examples:**

Example 1 (unknown):
```unknown
#### **Saving in GGUF Format for llama.cpp**

Unsloth also supports saving in **GGUF format**, making it compatible with **llama.cpp** and **Ollama**.

Int8 QAT

URL: llms-txt#int8-qat

Contents:

  • :teapot:Quantizing models without training

from torchao.quantization import Int8DynamicActivationInt8WeightConfig model.save_pretrained_torchao( model, "tokenizer", torchao_config = Int8DynamicActivationInt8WeightConfig(), ) python

Examples:

Example 1 (unknown):

{% endcode %}

You can then run the merged QAT lower precision model in vLLM, Unsloth and other systems for inference! These are all in the [Qwen3-4B QAT Colab notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_\(4B\)_Instruct-QAT.ipynb) we have as well!

### :teapot:Quantizing models without training

You can also call `model.save_pretrained_torchao` directly without doing any QAT as well! This is simply PTQ or native quantization. For example, saving to Dynamic float8 format is below:

{% code overflow="wrap" %}

Define the system prompt that instructs the model to use a specific format

URL: llms-txt#define-the-system-prompt-that-instructs-the-model-to-use-a-specific-format

SYSTEM_PROMPT = """ Respond in the following format: ... ... """

XML_COT_FORMAT = """
{reasoning} {answer} """

import re from datasets import load_dataset, Dataset

Examples:

Example 1 (unknown):

Now, to prepare the dataset:

os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

URL: llms-txt#os.environ["hf_hub_enable_hf_transfer"]-=-"1"

Contents:

  • Running on Mac / Apple devices
  • Run in Ollama/Open WebUI
  • DeepSeek Chat Template
  • GGUF R1 Table

from huggingface_hub import snapshot_download snapshot_download( repo_id = "unsloth/DeepSeek-R1-GGUF", local_dir = "DeepSeek-R1-GGUF", allow_patterns = ["UD-IQ1_S"], # Select quant type UD-IQ1_S for 1.58bit ) bash ./llama.cpp/llama-cli
--model DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf
--cache-type-k q4_0
--threads 12 -no-cnv --prio 2
--temp 0.6
--ctx-size 8192
--seed 3407
--prompt "<User>What is 1+1?<Assistant>" txt Okay, so I need to figure out what 1 plus 1 is. Hmm, where do I even start? I remember from school that adding numbers is pretty basic, but I want to make sure I understand it properly. Let me think, 1 plus 1. So, I have one item and I add another one. Maybe like a apple plus another apple. If I have one apple and someone gives me another, I now have two apples. So, 1 plus 1 should be 2. That makes sense. Wait, but sometimes math can be tricky. Could it be something else? Like, in a different number system maybe? But I think the question is straightforward, using regular numbers, not like binary or hexadecimal or anything. I also recall that in arithmetic, addition is combining quantities. So, if you have two quantities of 1, combining them gives you a total of 2. Yeah, that seems right. Is there a scenario where 1 plus 1 wouldn't be 2? I can't think of any... bash ./llama.cpp/llama-cli
--model DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf
--cache-type-k q4_0
--threads 12 -no-cnv --prio 2
--n-gpu-layers 7
--temp 0.6
--ctx-size 8192
--seed 3407
--prompt "<User>Create a Flappy Bird game in Python.<Assistant>"

<User>Create a Flappy Bird game in Python. You must include these things:

  1. You must use pygame.
  2. The background color should be randomly chosen and is a light shade. Start with a light blue color.
  3. Pressing SPACE multiple times will accelerate the bird.
  4. The bird's shape should be randomly chosen as a square, circle or triangle. The color should be randomly chosen as a dark color.
  5. Place on the bottom some land colored as dark brown or yellow chosen randomly.
  6. Make a score shown on the top right side. Increment if you pass pipes and don't hit them.
  7. Make randomly spaced pipes with enough space. Color them randomly as dark green or light brown or a dark gray shade.
  8. When you lose, show the best score. Make the text inside the screen. Pressing q or Esc will quit the game. Restarting is pressing SPACE again. The final game should be inside a markdown section in Python. Check your code for errors and fix them before the final markdown section.<Assistant>

./llama.cpp/llama-cli
--model DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf
--cache-type-k q4_0
--threads 12 -no-cnv --prio 2
--n-gpu-layers 7
--temp 0.6
--ctx-size 8192
--seed 3407
--prompt "<User>Create a Flappy Bird game in Python. You must include these things:\n1. You must use pygame.\n2. The background color should be randomly chosen and is a light shade. Start with a light blue color.\n3. Pressing SPACE multiple times will accelerate the bird.\n4. The bird's shape should be randomly chosen as a square, circle or triangle. The color should be randomly chosen as a dark color.\n5. Place on the bottom some land colored as dark brown or yellow chosen randomly.\n6. Make a score shown on the top right side. Increment if you pass pipes and don't hit them.\n7. Make randomly spaced pipes with enough space. Color them randomly as dark green or light brown or a dark gray shade.\n8. When you lose, show the best score. Make the text inside the screen. Pressing q or Esc will quit the game. Restarting is pressing SPACE again.\nThe final game should be inside a markdown section in Python. Check your code for errors and fix them before the final markdown section.<Assistant>"

./llama.cpp/llama-gguf-split --merge
DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf
merged_file.gguf

./llama.cpp/llama-cli
--model DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf
--cache-type-k q4_0
--threads 16
--prio 2
--temp 0.6
--ctx-size 8192
--seed 3407
--n-gpu-layers 59
-no-cnv
--prompt "<User>Create a Flappy Bird game in Python.<Assistant>"

./llama.cpp/llama-gguf-split --merge
DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf
merged_file.gguf


## DeepSeek Chat Template

All distilled versions and the main 671B R1 model use the same chat template:

`<begin▁of▁sentence><User>What is 1+1?<Assistant>It's 2.<end▁of▁sentence><User>Explain more!<Assistant>`

A BOS is forcibly added, and an EOS separates each interaction. To counteract double BOS tokens during inference, you should only call *tokenizer.encode(..., add\_special\_tokens = False)* since the chat template auto adds a BOS token as well.\
For llama.cpp / GGUF inference, you should skip the BOS since itll auto add it.

`<User>What is 1+1?<Assistant>`

The \<think> and \</think> tokens get their own designated tokens. For the distilled versions for Qwen and Llama, some tokens are re-mapped, whilst Qwen for example did not have a BOS token, so <|object\_ref\_start|> had to be used instead.\
\
**Tokenizer ID Mappings:**

| Token                     | R1     | Distill Qwen | Distill Llama |
| ------------------------- | ------ | ------------ | ------------- |
| \<think>                  | 128798 | 151648       | 128013        |
| \</think>                 | 128799 | 151649       | 128014        |
| <\|begin\_of\_sentence\|> | 0      | 151646       | 128000        |
| <\|end\_of\_sentence\|>   | 1      | 151643       | 128001        |
| <\|User\|>                | 128803 | 151644       | 128011        |
| <\|Assistant\|>           | 128804 | 151645       | 128012        |
| Padding token             | 2      | 151654       | 128004        |

Original tokens in models:

| Token                 | Qwen 2.5 32B Base        | Llama 3.3 70B Instruct            |
| --------------------- | ------------------------ | --------------------------------- |
| \<think>              | <\|box\_start\|>         | <\|reserved\_special\_token\_5\|> |
| \</think>             | <\|box\_end\|>           | <\|reserved\_special\_token\_6\|> |
| <begin▁of▁sentence> | <\|object\_ref\_start\|> | <\|begin\_of\_text\|>             |
| <end▁of▁sentence>   | <\|endoftext\|>          | <\|end\_of\_text\|>               |
| <User>              | <\|im\_start\|>          | <\|reserved\_special\_token\_3\|> |
| <Assistant>         | <\|im\_end\|>            | <\|reserved\_special\_token\_4\|> |
| Padding token         | <\|vision\_pad\|>        | <\|finetune\_right\_pad\_id\|>    |

All Distilled and the original R1 versions seem to have accidentally assigned the padding token to <end▁of▁sentence>, which is mostly not a good idea, especially if you want to further finetune on top of these reasoning models. This will cause endless infinite generations, since most frameworks will mask the EOS token out as -100.\
\
We fixed all distilled and the original R1 versions with the correct padding token (Qwen uses <|vision\_pad|>, Llama uses <|finetune\_right\_pad\_id|>, and R1 uses <▁pad▁> or our own added <PAD▁TOKEN>.

<table data-full-width="true"><thead><tr><th>MoE Bits</th><th>Type</th><th>Disk Size</th><th>Accuracy</th><th>Link</th><th>Details</th></tr></thead><tbody><tr><td>1.58bit</td><td>UD-IQ1_S</td><td><strong>131GB</strong></td><td>Fair</td><td><a href="https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-IQ1_S">Link</a></td><td>MoE all 1.56bit. <code>down_proj</code> in MoE mixture of 2.06/1.56bit</td></tr><tr><td>1.73bit</td><td>UD-IQ1_M</td><td><strong>158GB</strong></td><td>Good</td><td><a href="https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-IQ1_M">Link</a></td><td>MoE all 1.56bit. <code>down_proj</code> in MoE left at 2.06bit</td></tr><tr><td>2.22bit</td><td>UD-IQ2_XXS</td><td><strong>183GB</strong></td><td>Better</td><td><a href="https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-IQ2_XXS">Link</a></td><td>MoE all 2.06bit. <code>down_proj</code> in MoE mixture of 2.5/2.06bit</td></tr><tr><td>2.51bit</td><td>UD-Q2_K_XL</td><td><strong>212GB</strong></td><td>Best</td><td><a href="https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-Q2_K_XL">Link</a></td><td>MoE all 2.5bit. <code>down_proj</code> in MoE mixture of 3.5/2.5bit</td></tr></tbody></table>

**Examples:**

Example 1 (unknown):
```unknown
6. Example with Q4\_0 K quantized cache **Notice -no-cnv disables auto conversation mode**

Example 2 (unknown):

Example output:

Example 3 (unknown):

4. If you have a GPU (RTX 4090 for example) with 24GB, you can offload multiple layers to the GPU for faster processing. If you have multiple GPUs, you can probably offload more layers.

Example 4 (unknown):

5. To test our Flappy Bird example as mentioned in our blog post here: <https://unsloth.ai/blog/deepseekr1-dynamic>, we can produce the 2nd example like below using our 1.58bit dynamic quant:

<table data-column-title-hidden data-view="cards" data-full-width="false"><thead><tr><th></th><th></th><th></th><th data-hidden data-card-cover data-type="files"></th></tr></thead><tbody><tr><td>Original DeepSeek R1</td><td></td><td></td><td><a href="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FHHUZZTFj0WpgSuWFlibf%2FInShot_20250127_043158375_H8Uu6tyJXYAFwUEIu04Am.gif?alt=media&#x26;token=a959720d-b1b4-4b80-b10d-1c41928dfdcf">InShot_20250127_043158375_H8Uu6tyJXYAFwUEIu04Am.gif</a></td></tr><tr><td>1.58bit Dynamic Quant</td><td></td><td></td><td><a href="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FqgLhnVaN53kV4cvZaDci%2FInShot_20250127_042648160_lrtL8-eRhl4qtLaUDSU87.gif?alt=media&#x26;token=e608b30a-1cbe-49ac-b18a-967a50c67c68">InShot_20250127_042648160_lrtL8-eRhl4qtLaUDSU87.gif</a></td></tr></tbody></table>

The prompt used is as below:

{% code overflow="wrap" %}

IBM Granite 4.0

URL: llms-txt#ibm-granite-4.0

Contents:

  • Run Granite-4.0 Tutorials
    • ⚙️ Recommended Inference Settings
    • 🦙 Ollama: Run Granite-4.0 Tutorial
    • 📖 llama.cpp: Run Granite-4.0 Tutorial

How to run IBM Granite-4.0 with Unsloth GGUFs on llama.cpp, Ollama and how to fine-tune!

IBM releases Granite-4.0 models with 3 sizes including Nano (350M & 1B), Micro (3B), Tiny (7B/1B active) and Small (32B/9B active). Trained on 15T tokens, IBMs new Hybrid (H) Mamba architecture enables Granite-4.0 models to run faster with lower memory use.

Learn how to run Unsloth Granite-4.0 Dynamic GGUFs or fine-tune/RL the model. You can fine-tune Granite-4.0 with our free Colab notebook for a support agent use-case.

Running TutorialFine-tuning Tutorial

Unsloth Granite-4.0 uploads:

Dynamic GGUFsDynamic 4-bit + FP816-bit Instruct

Dynamic 4-bit Instruct:

FP8 Dynamic:

You can also view our Granite-4.0 collection for all uploads including Dynamic Float8 quants etc.

Granite-4.0 Models Explanations:

  • Nano and H-Nano: The 350M and 1B models offer strong instruction-following abilities, enabling advanced on-device and edge AI and research/fine-tuning applications.
  • H-Small (MoE): Enterprise workhorse for daily tasks, supports multiple long-context sessions on entry GPUs like L40S (32B total, 9B active).
  • H-Tiny (MoE): Fast, cost-efficient for high-volume, low-complexity tasks; optimized for local and edge use (7B total, 1B active).
  • H-Micro (Dense): Lightweight, efficient for high-volume, low-complexity workloads; ideal for local and edge deployment (3B total).
  • Micro (Dense): Alternative dense option when Mamba2 isnt fully supported (3B total).

Run Granite-4.0 Tutorials

IBM recommends these settings:

temperature=0.0, top_p=1.0, top_k=0

  • Temperature of 0.0
  • Top_K = 0
  • Top_P = 1.0
  • Recommended minimum context: 16,384
  • Maximum context length window: 131,072 (128K context)

🦙 Ollama: Run Granite-4.0 Tutorial

  1. Install ollama if you haven't already!

  2. Run the model! Note you can call ollama servein another terminal if it fails! We include all our fixes and suggested parameters (temperature etc) in params in our Hugging Face upload! You can change the model name 'granite-4.0-h-small-GGUF' to any Granite model like 'granite-4.0-h-micro:Q8_K_XL'.

📖 llama.cpp: Run Granite-4.0 Tutorial

  1. Obtain the latest llama.cpp on GitHub here. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.

  2. If you want to use llama.cpp directly to load models, you can do the below: (:Q4_K_XL) is the quantization type. You can also download via Hugging Face (point 3). This is similar to ollama run

  3. OR download the model via (after installing pip install huggingface_hub hf_transfer ). You can choose Q4_K_M, or other quantized versions (like BF16 full precision).

Examples:

Example 1 (unknown):

<|start_of_role|>system<|end_of_role|>You are a helpful assistant. Please ensure responses are professional, accurate, and safe.<|end_of_text|>
<|start_of_role|>user<|end_of_role|>Please list one IBM Research laboratory located in the United States. You should only output its name and location.<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|>Almaden Research Center, San Jose, California<|end_of_text|>

Example 2 (bash):

apt-get update
apt-get install pciutils -y
curl -fsSL https://ollama.com/install.sh | sh

Example 3 (bash):

ollama run hf.co/unsloth/granite-4.0-h-small-GGUF:UD-Q4_K_XL

Example 4 (bash):

apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-cli llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp

For BF16:

URL: llms-txt#for-bf16:

python llama.cpp/convert_hf_to_gguf.py merged_model
--outfile model-BF16.gguf --outtype bf16
--split-max-size 50G


Setting up Wandb

URL: llms-txt#setting-up-wandb

Contents:

  • :question:How do I do Early Stopping?

os.environ["WANDB_PROJECT"] = "" os.environ["WANDB_LOG_MODEL"] = "checkpoint"

report_to = "wandb", logging_steps = 1, # Change if needed save_steps = 100 # Change if needed run_name = "" # (Optional)

import wandb run = wandb.init() artifact = run.use_artifact('//', type='model') artifact_dir = artifact.download() trainer.train(resume_from_checkpoint=artifact_dir) python from trl import SFTConfig, SFTTrainer trainer = SFTTrainer( args = SFTConfig( fp16_full_eval = True, per_device_eval_batch_size = 2, eval_accumulation_steps = 4, output_dir = "training_checkpoints", # location of saved checkpoints for early stopping save_strategy = "steps", # save model every N steps save_steps = 10, # how many steps until we save the model save_total_limit = 3, # keep ony 3 saved checkpoints to save disk space eval_strategy = "steps", # evaluate every N steps eval_steps = 10, # how many steps until we do evaluation load_best_model_at_end = True, # MUST USE for early stopping metric_for_best_model = "eval_loss", # metric we want to early stop on greater_is_better = False, # the lower the eval loss, the better ), model = model, tokenizer = tokenizer, train_dataset = new_dataset["train"], eval_dataset = new_dataset["test"], ) python from transformers import EarlyStoppingCallback early_stopping_callback = EarlyStoppingCallback( early_stopping_patience = 3, # How many steps we will wait if the eval loss doesn't decrease # For example the loss might increase, but decrease after 3 steps early_stopping_threshold = 0.0, # Can set higher - sets how much loss should decrease by until # we consider early stopping. For eg 0.01 means if loss was # 0.02 then 0.01, we consider to early stop the run. ) trainer.add_callback(early_stopping_callback)


Then train the model as usual via `trainer.train() .`

**Examples:**

Example 1 (unknown):
```unknown
Then in `TrainingArguments()` set

Example 2 (unknown):

To train the model, do `trainer.train()`; to resume training, do

Example 3 (unknown):

## :question:How do I do Early Stopping?

If you want to stop or pause the finetuning / training run since the evaluation loss is not decreasing, then you can use early stopping which stops the training process. Use `EarlyStoppingCallback`.

As usual, set up your trainer and your evaluation dataset. The below is used to stop the training run if the `eval_loss` (the evaluation loss) is not decreasing after 3 steps or so.

Example 4 (unknown):

We then add the callback which can also be customized:

LoRA Hyperparameters Guide

URL: llms-txt#lora-hyperparameters-guide

Contents:

  • :question:But what is LoRA?
  • 🔢 Key Fine-tuning Hyperparameters
    • Learning Rate
    • Epochs
    • LoRA or QLoRA
    • Hyperparameters & Recommendations:
  • 🌳 Gradient Accumulation and Batch Size equivalency
    • Effective Batch Size
    • The VRAM & Performance Trade-off
    • 🦥 Unsloth Gradient Accumulation Fix

Optimal lora rank. alpha, number of epochs, batch size & gradient accumulation, QLoRA vs LoRA, target modules and more!

LoRA hyperparameters are adjustable parameters that control how Low-Rank Adaptation (LoRA) fine-tunes LLMs. With many options (such as learning rate and epochs) and millions of possible combinations, selecting the right values is crucial for achieving accuracy, stability, quality, and fewer hallucinations during fine-tuning.

You'll learn the best practices for these parameters, based on insights from hundreds of research papers and experiments, and see how they impact the model. While we recommend using Unsloth's defaults, understanding these concepts will give you full control.

The goal is to change hyperparameter numbers to increase accuracy while counteracting overfitting or underfitting. Overfitting occurs when the model memorizes the training data, harming its ability to generalize to new, unseen inputs. The objective is a model that generalizes well, not one that simply memorizes.

{% columns %} {% column %}

:question:But what is LoRA?

In LLMs, we have model weights. Llama 70B has 70 billion numbers. Instead of changing all 70b numbers, we instead add thin matrices A and B to each weight, and optimize those. This means we only optimize 1% of weights. {% endcolumn %}

Instead of optimizing Model Weights (yellow), we optimize 2 thin matrices A and B.

{% endcolumn %} {% endcolumns %}

🔢 Key Fine-tuning Hyperparameters

Learning Rate

Defines how much the models weights are adjusted during each training step.

  • Higher Learning Rates: Lead to faster initial convergence but can cause training to become unstable or fail to find an optimal minimum if set too high.
  • Lower Learning Rates: Result in more stable and precise training but may require more epochs to converge, increasing overall training time. While low learning rates are often thought to cause underfitting, they actually can lead to overfitting or even prevent the model from learning.
  • Typical Range: 2e-4 (0.0002) to 5e-6 (0.000005).
    🟩 For normal LoRA/QLoRA Fine-tuning, we recommend 2e-4 as a starting point.
    🟦 For Reinforcement Learning (DPO, GRPO etc.), we recommend 5e-6 .
    For Full Fine-tuning, lower learning rates are generally more appropriate.

The number of times the model sees the full training dataset.

  • More Epochs: Can help the model learn better, but a high number can cause it to memorize the training data, hurting its performance on new tasks.
  • Fewer Epochs: Reduces training time and can prevent overfitting, but may result in an undertrained model if the number is insufficient for the model to learn the dataset's underlying patterns.
  • Recommended: 1-3 epochs. For most instruction-based datasets, training for more than 3 epochs offers diminishing returns and increases the risk of overfitting.

LoRA or QLoRA

LoRA uses 16-bit precision, while QLoRA is a 4-bit fine-tuning method.

  • LoRA: 16-bit fine-tuning. It's slightly faster and slightly more accurate, but consumes significantly more VRAM (4× more than QLoRA). Recommended for 16-bit environments and scenarios where maximum accuracy is required.
  • QLoRA: 4-bit fine-tuning. Slightly slower and marginally less accurate, but uses much less VRAM (4× less).
    🦥 70B LLaMA fits in <48GB VRAM with QLoRA in Unsloth - more details here.

Hyperparameters & Recommendations:

HyperparameterFunctionRecommended Settings
LoRA Rank (r)Controls the number of trainable parameters in the LoRA adapter matrices. A higher rank increases model capacity but also memory usage.8, 16, 32, 64, 128

Choose 16 or 32
LoRA Alpha (lora_alpha)Scales the strength of the fine-tuned adjustments in relation to the rank (r).r (standard) or r * 2 (common heuristic). More details here.
LoRA DropoutA regularization technique that randomly sets a fraction of LoRA activations to zero during training to prevent overfitting. Not that useful, so we default set it to 0. 0 (default) to 0.1
Weight DecayA regularization term that penalizes large weights to prevent overfitting and improve generalization. Don't use too large numbers!0.01 (recommended) - 0.1
Warmup StepsGradually increases the learning rate at the start of training.5-10% of total steps
Scheduler TypeAdjusts the learning rate dynamically during training.linear or cosine
Seed (random_state)A fixed number to ensure reproducibility of results.Any integer (e.g., 42, 3407)
Target Modules

Specify which parts of the model you want to apply LoRA adapters to — either the attention, the MLP, or both.


Attention: q_proj, k_proj, v_proj, o_proj

MLP: gate_proj, up_proj, down_proj

Recommended to target all major linear layers: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj.

🌳 Gradient Accumulation and Batch Size equivalency

Effective Batch Size

Correctly configuring your batch size is critical for balancing training stability with your GPU's VRAM limitations. This is managed by two parameters whose product is the Effective Batch Size.

Effective Batch Size = batch_size * gradient_accumulation_steps

  • A larger Effective Batch Size generally leads to smoother, more stable training.
  • A smaller Effective Batch Size may introduce more variance.

While every task is different, the following configuration provides a great starting point for achieving a stable Effective Batch Size of 16, which works well for most fine-tuning tasks on modern GPUs.

Parameter Description Recommended Setting
Batch Size (batch_size)

The number of samples processed in a single forward/backward pass on one GPU.

Primary Driver of VRAM Usage. Higher values can improve hardware utilization and speed up training, but only if they fit in memory.

2
Gradient Accumulation (gradient_accumulation_steps)

The number of micro-batches to process before performing a single model weight update.

Primary Driver of Training Time. Allows simulation of a larger batch_size to conserve VRAM. Higher values increase training time per epoch.

8
Effective Batch Size (Calculated) The true batch size used for each gradient update. It directly influences training stability, quality, and final model performance.

4 to 16
Recommended: 16 (from 2 * 8)

The VRAM & Performance Trade-off

Assume you want 32 samples of data per training step. Then you can use any of the following configurations:

  • batch_size = 32, gradient_accumulation_steps = 1
  • batch_size = 16, gradient_accumulation_steps = 2
  • batch_size = 8, gradient_accumulation_steps = 4
  • batch_size = 4, gradient_accumulation_steps = 8
  • batch_size = 2, gradient_accumulation_steps = 16
  • batch_size = 1, gradient_accumulation_steps = 32

While all of these are equivalent for the model's weight updates, they have vastly different hardware requirements.

The first configuration (batch_size = 32) uses the most VRAM and will likely fail on most GPUs. The last configuration (batch_size = 1) uses the least VRAM, but at the cost of slightly slower training. To avoid OOM (out of memory) errors, always prefer to set a smaller batch_size and increase gradient_accumulation_steps to reach your target Effective Batch Size.

🦥 Unsloth Gradient Accumulation Fix

Gradient accumulation and batch sizes are now fully equivalent in Unsloth due to our bug fixes for gradient accumulation. We have implemented specific bug fixes for gradient accumulation that resolve a common issue where the two methods did not produce the same results. This was a known challenge in the wider community, but for Unsloth users, the two methods are now interchangeable.

Read our blog post for more details.

Prior to our fixes, combinations of batch_size and gradient_accumulation_steps that yielded the same Effective Batch Size (i.e., batch_size × gradient_accumulation_steps = 16) did not result in equivalent training behavior. For example, configurations like b1/g16, b2/g8, b4/g4, b8/g2, and b16/g1 all have an Effective Batch Size of 16, but as shown in the graph, the loss curves did not align when using standard gradient accumulation:

(Before - Standard Gradient Accumulation)

After applying our fixes, the loss curves now align correctly, regardless of how the Effective Batch Size of 16 is achieved:

(After - 🦥 Unsloth Gradient Accumulation)

🦥 LoRA Hyperparameters in Unsloth

The following demonstrates a standard configuration. While Unsloth provides optimized defaults, understanding these parameters is key to manual tuning.

The rank (r) of the fine-tuning process. A larger rank uses more memory and will be slower, but can increase accuracy on complex tasks. We suggest ranks like 8 or 16 (for fast fine-tunes) and up to 128. Using a rank that is too large can cause overfitting and harm your model's quality.\

For optimal performance, LoRA should be applied to all major linear layers. Research has shown that targeting all major layers is crucial for matching the performance of full fine-tuning. While it's possible to remove modules to reduce memory usage, we strongly advise against it to preserve maximum quality as the savings are minimal.\

A scaling factor that controls the strength of the fine-tuned adjustments. Setting it equal to the rank (r) is a reliable baseline. A popular and effective heuristic is to set it to double the rank (r * 2), which makes the model learn more aggressively by giving more weight to the LoRA updates. More details here.\

A regularization technique that helps prevent overfitting by randomly setting a fraction of the LoRA activations to zero during each training step. Recent research suggests that for the short training runs common in fine-tuning, lora_dropout may be an unreliable regularizer.
🦥 Unsloth's internal code can optimize training when lora_dropout = 0, making it slightly faster, but we recommend a non-zero value if you suspect overfitting.\

Leave this as "none" for faster training and reduced memory usage. This setting avoids training the bias terms in the linear layers, which adds trainable parameters for little to no practical gain.\

Options are True, False, and "unsloth".
🦥 We recommend "unsloth" as it reduces memory usage by an extra 30% and supports extremely long context fine-tunes. You can read more on our blog post about long context training.\

The seed to ensure deterministic, reproducible runs. Training involves random numbers, so setting a fixed seed is essential for consistent experiments.\

An advanced feature that implements Rank-Stabilized LoRA. If set to True, the effective scaling becomes lora_alpha / sqrt(r) instead of the standard lora_alpha / r. This can sometimes improve stability, particularly for higher ranks. More details here.\

An advanced technique, as proposed in LoftQ, initializes LoRA matrices with the top 'r' singular vectors from the pretrained weights. This can improve accuracy but may cause a significant memory spike at the start of training.

Verifying LoRA Weight Updates:

When validating that LoRA adapter weights have been updated after fine-tuning, avoid using np.allclose() for comparison. This method can miss subtle but meaningful changes, particularly in LoRA A, which is initialized with small Gaussian values. These changes may not register as significant under loose numerical tolerances. Thanks to contributors for this section.

To reliably confirm weight updates, we recommend:

  • Using checksum or hash comparisons (e.g., MD5)
  • Computing the sum of absolute differences between tensors
  • Inspecting tensor statistics (e.g., mean, variance) manually
  • Or using np.array_equal() if exact equality is expected

:triangular_ruler:LoRA Alpha and Rank relationship

{% hint style="success" %} It's best to set lora_alpha = 2 * lora_rank or lora_alpha = lora_rank {% endhint %}

{% columns %} {% column width="50%" %}


\hat{W} = W + \frac{\alpha}{\text{rank}} \times AB

rsLoRA other scaling options. sqrt(r) is the best.


\hat{W}\_{\text{rslora}} = W + \frac{\alpha}{\sqrt{\text{rank}}} \times AB

{% endcolumn %}

{% column %} The formula for LoRA is on the left. We need to scale the thin matrices A and B by alpha divided by the rank. This means we should keep alpha/rank at least = 1.

According to the rsLoRA (rank stabilized lora) paper, we should instead scale alpha by the sqrt of the rank. Other options exist, but theoretically this is the optimum. The left plot shows other ranks and their perplexities (lower is better). To enable this, set use_rslora = True in Unsloth.

Our recommendation is to set the alpha to equal to the rank, or at least 2 times the rank. This means alpha/rank = 1 or 2. {% endcolumn %} {% endcolumns %}

🎯 LoRA Target Modules and QLoRA vs LoRA

{% hint style="success" %} Use:
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj",] to target both MLP and attention layers to increase accuracy.

QLoRA uses 4-bit precision, reducing VRAM usage by over 75%.

LoRA (16-bit) is slightly more accurate and faster. {% endhint %}

According to empirical experiments and research papers like the original QLoRA paper, it's best to apply LoRA to both attention and MLP layers.

{% columns %} {% column %}

{% endcolumn %}

{% column %} The chart shows RougeL scores (higher is better) for different target module configurations, comparing LoRA vs QLoRA.

The first 3 dots show:

  1. QLoRA-All: LoRA applied to all FFN/MLP and Attention layers.
    🔥 This performs best overall.
  2. QLoRA-FFN: LoRA only on FFN.
    Equivalent to: gate_proj, up_proj, down_proj.
  3. QLoRA-Attention: LoRA applied only to Attention layers.
    Equivalent to: q_proj, k_proj, v_proj, o_proj. {% endcolumn %} {% endcolumns %}

😎 Training on completions only, masking out inputs

The QLoRA paper shows that masking out inputs and training only on completions (outputs or assistant messages) can further increase accuracy by a few percentage points (1%). Below demonstrates how this is done in Unsloth:

{% columns %} {% column %} NOT training on completions only:

USER: Hello what is 2+2?
ASSISTANT: The answer is 4.
USER: Hello what is 3+3?
ASSISTANT: The answer is 6.

{% column %} Training on completions only:

USER: Hello what is 2+2?
ASSISTANT: The answer is 4.
USER: Hello what is 3+3?
ASSISTANT: The answer is 6. {% endcolumn %} {% endcolumns %}

The QLoRA paper states that training on completions only increases accuracy by quite a bit, especially for multi-turn conversational finetunes! We do this in our conversational notebooks here.

To enable training on completions in Unsloth, you will need to define the instruction and assistant parts. 🦥 We plan to further automate this for you in the future!

For Llama 3, 3.1, 3.2, 3.3 and 4 models, you define the parts as follows:

For Gemma 2, 3, 3n models, you define the parts as follows:

🔑 Avoiding Overfitting & Underfitting

Overfitting (Poor Generalization/Too Specialized)

The model memorizes the training data, including its statistical noise, and consequently fails to generalize to unseen data.

{% hint style="success" %} If your training loss drops below 0.2, your model is likely overfitting — meaning it may perform poorly on unseen tasks.

One simple trick is LoRA alpha scaling — just multiply the alpha value of each LoRA matrix by 0.5. This effectively scales down the impact of fine-tuning.

This is closely related to merging / averaging weights.
You can take the original base (or instruct) model, add the LoRA weights, then divide the result by 2. This gives you an averaged model — which is functionally equivalent to reducing the alpha by half. {% endhint %}

  • Adjust the learning rate: A high learning rate often leads to overfitting, especially during short training runs. For longer training, a higher learning rate may work better. Its best to experiment with both to see which performs best.
  • Reduce the number of training epochs. Stop training after 1, 2, or 3 epochs.
  • Increase weight_decay. A value of 0.01 or 0.1 is a good starting point.
  • Increase lora_dropout. Use a value like 0.1 to add regularization.
  • Increase batch size or gradient accumulation steps.
  • Dataset expansion - make your dataset larger by combining or concatenating open source datasets with your dataset. Choose higher quality ones.
  • Evaluation early stopping - enable evaluation and stop when the evaluation loss increases for a few steps.
  • LoRA Alpha Scaling - scale the alpha down after training and during inference - this will make the finetune less pronounced.
  • Weight averaging - literally add the original instruct model and the finetune and divide the weights by 2.

Underfitting (Too Generic)

The model fails to capture the underlying patterns in the training data, often due to insufficient complexity or training duration.

  • Adjust the Learning Rate: If the current rate is too low, increasing it may speed up convergence, especially for short training runs. For longer runs, try lowering the learning rate instead. Test both approaches to see which works best.
  • Increase Training Epochs: Train for more epochs, but monitor validation loss to avoid overfitting.
  • Increase LoRA Rank (r) and alpha: Rank should at least equal to the alpha number, and rank should be bigger for smaller models/more complex datasets; it usually is between 4 and 64.
  • Use a More Domain-Relevant Dataset: Ensure the training data is high-quality and directly relevant to the target task.
  • Decrease batch size to 1. This will cause the model to update more vigorously.

{% hint style="success" %} Fine-tuning has no single "best" approach, only best practices. Experimentation is key to finding what works for your specific needs. Our notebooks automatically set optimal parameters based on many papers research and our experiments, giving you a great starting point. Happy fine-tuning! {% endhint %}

Acknowledgements: A huge thank you to Eyera for contributing to this guide!

Examples:

Example 1 (python):

r = 16, # Choose any number > 0 ! Suggested 8, 16, 32, 64, 128

Example 2 (python):

target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                     "gate_proj", "up_proj", "down_proj",],

Example 3 (python):

lora_alpha = 16,

Example 4 (python):

lora_dropout = 0, # Supports any, but = 0 is optimized

Reinforcement Learning (RL) Guide

URL: llms-txt#reinforcement-learning-(rl)-guide

Contents:

  • :sloth:What you will learn
  • :question:What is Reinforcement Learning (RL)?
    • :person_running:From RLHF, PPO to GRPO and RLVR
    • :fingers_crossed:Luck (well Patience) Is All You Need
  • :sloth:What Unsloth offers for RL
    • GRPO notebooks:

Learn all about Reinforcement Learning (RL) and how to train your own DeepSeek-R1 reasoning model with Unsloth using GRPO. A complete guide from beginner to advanced.

Reinforcement Learning is where an "agent" learns to make decisions by interacting with an environment and receiving feedback in the form of rewards or penalties.

  • Action: What the model generates (e.g. a sentence).
  • Reward: A signal indicating how good or bad the model's action was (e.g. did the response follow instructions? was it helpful?).
  • Environment: The scenario or task the model is working on (e.g. answering a users question).

{% hint style="success" %} For advanced GRPO documentation on batching, generation and training parameters, read our guide! {% endhint %}

:sloth:What you will learn

  1. What is RL? RLVR? PPO? GRPO? RLHF? RFT? Is "Luck is All You Need?" for RL?
  2. What is an environment? Agent? Action? Reward function? Rewards?

This article covers everything (from beginner to advanced) you need to know about GRPO, Reinforcement Learning (RL) and reward functions, along with tips, and the basics of using GRPO with Unsloth. If you're looking for a step-by-step tutorial for using GRPO, see our guide here.

:question:What is Reinforcement Learning (RL)?

The goal of RL is to:

  1. Increase the chance of seeing "good" outcomes.
  2. Decrease the chance of seeing "bad" outcomes.

That's it! There are intricacies on what "good" and "bad" means, or how do we go about "increasing" or "decreasing" it, or what even "outcomes" means.

{% columns %} {% column width="50%" %} For example, in the Pacman game:

  1. The environment is the game world.
  2. The actions you can take are UP, LEFT, RIGHT and DOWN.
  3. The rewards are good if you eat a cookie, or bad if you hit one of the squiggly enemies.
  4. In RL, you can't know the "best action" you can take, but you can observe intermediate steps, or the final game state (win or lose) {% endcolumn %}
{% endcolumn %} {% endcolumns %}

{% columns %} {% column width="50%" %}

{% endcolumn %}

{% column %} Another example is imagine you are given the question: "What is 2 + 2?" (4) An unaligned language model will spit out 3, 4, C, D, -10, literally anything.

  1. Numbers are better than C or D right?
  2. Getting 3 is better than say 8 right?
  3. Getting 4 is definitely correct.

We just designed a reward function! {% endcolumn %} {% endcolumns %}

:person_running:From RLHF, PPO to GRPO and RLVR

{% columns %} {% column %}

{% endcolumn %}

{% column %} OpenAI popularized the concept of RLHF (Reinforcement Learning from Human Feedback), where we train an "agent" to produce outputs to a question (the state) that are rated more useful by human beings.

The thumbs up and down in ChatGPT for example can be used in the RLHF process. {% endcolumn %} {% endcolumns %}

{% columns %} {% column %}

PPO formula

The clip(..., 1-e, 1+e) term is used to force PPO not to take too large changes. There is also a KL term with beta set to > 0 to force the model not to deviate too much away. {% endcolumn %}

{% column %} In order to do RLHF, PPO (Proximal policy optimization) was developed. The agent is the language model in this case. In fact it's composed of 3 systems:

  1. The Generating Policy (current trained model)
  2. The Reference Policy (original model)
  3. The Value Model (average reward estimator)

We use the Reward Model to calculate the reward for the current environment, and our goal is to maximize this!

The formula for PPO looks quite complicated because it was designed to be stable. Visit our AI Engineer talk we gave in 2025 about RL for more in depth maths derivations about PPO. {% endcolumn %} {% endcolumns %}

{% columns %} {% column %}

{% endcolumn %}

{% column %} DeepSeek developed GRPO (Group Relative Policy Optimization) to train their R1 reasoning models. The key differences to PPO are:

  1. The Value Model is removed, replaced with statistics from calling the reward model multiple times.
  2. The Reward Model is removed and replaced with just custom reward function which RLVR can be used. {% endcolumn %} {% endcolumns %}

This means GRPO is extremely efficient. Previously PPO needed to train multiple models - now with the reward model and value model removed, we can save memory and speed up everything.

RLVR (Reinforcement Learning with Verifiable Rewards) allows us to reward the model based on tasks with easy to verify solutions. For example:

  1. Maths equations can be easily verified. Eg 2+2 = 4.
  2. Code output can be verified as having executed correctly or not.
  3. Designing verifiable reward functions can be tough, and so most examples are math or code.
  4. Use-cases for GRPO isnt just for code or math—its reasoning process can enhance tasks like email automation, database retrieval, law, and medicine, greatly improving accuracy based on your dataset and reward function - the trick is to define a rubric - ie a list of smaller verifiable rewards, and not a final all consuming singular reward. OpenAI popularized this in their reinforcement learning finetuning (RFT) offering for example.

{% columns %} {% column %} Why "Group Relative"?

GRPO removes the value model entirely, but we still need to estimate the "average reward" given the current state.

The trick is to sample the LLM! We then calculate the average reward through statistics of the sampling process across multiple different questions. {% endcolumn %}

{% endcolumn %} {% endcolumns %}

{% columns %} {% column %} For example for "What is 2+2?" we sample 4 times. We might get 4, 3, D, C. We then calculate the reward for each of these answers, then calculate the average reward and standard deviation, then Z-score standardize this!

This creates the advantages A, which we will use in replacement of the value model. This saves a lot of memory! {% endcolumn %}

GRPO advantage calculation

{% endcolumn %} {% endcolumns %}

:fingers_crossed:Luck (well Patience) Is All You Need

The trick of RL is you need 2 things only:

  1. A question or instruction eg "What is 2+2?" "Create a Flappy Bird game in Python"
  2. A reward function and verifier to verify if the output is good or bad.

With only these 2, we can essentially call a language model an infinite times until we get a good answer. For example for "What is 2+2?", an untrained bad language model will output:

0, cat, -10, 1928, 3, A, B, 122, 17, 182, 172, A, C, BAHS, %$, #, 9, -192, 12.31**** then suddenly 4.**

The reward signal was 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0**** then suddenly 1.

So by luck and by chance, RL managed to find the correct answer across multiple rollouts. Our goal is we want to see the good answer 4 more, and the rest (the bad answers) much less.

So the goal of RL is to be patient - in the limit, if the probability of the correct answer is at least a small number (not zero), it's just a waiting game - you will 100% for sure encounter the correct answer in the limit.

So I like to call it as "Luck Is All You Need" for RL.

Well a better phrase is "Patience is All You Need" for RL.

RL essentially provides us a trick - instead of simply waiting for infinity, we do get "bad signals" ie bad answers, and we can essentially "guide" the model to already try not generating bad solutions. This means although you waited very long for a "good" answer to pop up, the model already has been changed to try its best not to output bad answers.

In the "What is 2+2?" example - 0, cat, -10, 1928, 3, A, B, 122, 17, 182, 172, A, C, BAHS, %$, #, 9, -192, 12.31**** then suddenly 4.**

Since we got bad answers, RL will influence the model to try NOT to output bad answers. This means over time, we are carefully "pruning" or moving the model's output distribution away from bad answers. This means RL is efficient, since we are NOT just waiting for infinity, but we are actively trying to "push" the model to go as much as possible to the "correct answer space".

{% hint style="danger" %} If the probability is always 0, then RL will never work. This is also why people like to do RL from an already instruction finetuned model, which can partially follow instructions reasonably well - this boosts the probability most likely above 0. {% endhint %}

:sloth:What Unsloth offers for RL

  • With 15GB VRAM, Unsloth allows you to transform any model up to 17B parameters like Llama 3.1 (8B), Phi-4 (14B), Mistral (7B) or Qwen2.5 (7B) into a reasoning model
  • Unsloth now supports RL for Vision/multimodal models!
  • Minimum requirement: Just 5GB VRAM is enough to train your own reasoning model locally (for any model with 1.5B parameters or less)

{% content-ref url="reinforcement-learning-rl-guide/tutorial-train-your-own-reasoning-model-with-grpo" %} tutorial-train-your-own-reasoning-model-with-grpo {% endcontent-ref %}

gpt-oss-20b GSPO - new Qwen3-VL-8B - Vision GSPO - new Gemma 3 (4B) - Vision GSPO - new
Qwen3 (4B) - Advanced DeepSeek-R1-0528-Qwen3-8B Llama 3.2 (3B) - Advanced
Gemma 3 (1B) Phi-4 (14B) Qwen2.5 (3B)
Mistral v0.3 (7B) Llama 3.1 (8B)

{% hint style="success" %} NEW! We now support GSPO and most other new GRPO techniques. You can play with the following arguments in GRPOConfig to enable:

epsilon=0.2,
epsilon_high=0.28, # one sided
delta=1.5 # two sided

---

## (2) Continued training from a saved LoRA adapter

**URL:** llms-txt#(2)-continued-training-from-a-saved-lora-adapter

---

## gpt-oss: How to Run & Fine-tune

**URL:** llms-txt#gpt-oss:-how-to-run-&-fine-tune

**Contents:**
- :scroll:Unsloth fixes for gpt-oss
  - :1234: Precision issues
- 🖥️ **Running gpt-oss**
  - :gear: Recommended Settings
  - Run gpt-oss-20B

Run & fine-tune OpenAI's new open-source models!

OpenAI releases '**gpt-oss-120b'** and '**gpt-oss-20b'**, two SOTA open language models under the Apache 2.0 license. Both 128k context models outperform similarly sized open models in reasoning, tool use, and agentic tasks. You can now run & fine-tune them locally with Unsloth!

<a href="#run-gpt-oss-20b" class="button secondary">Run gpt-oss-20b</a><a href="#run-gpt-oss-120b" class="button secondary">Run gpt-oss-120b</a><a href="#fine-tuning-gpt-oss-with-unsloth" class="button primary">Fine-tune gpt-oss</a>

{% hint style="success" %}
[**Aug 28 update**](https://docs.unsloth.ai/models/long-context-gpt-oss-training#new-saving-to-gguf-vllm-after-gpt-oss-training)**:** You can now export/save your QLoRA fine-tuned gpt-oss model to llama.cpp, vLLM, HF etc.

We also introduced [Unsloth Flex Attention](https://docs.unsloth.ai/models/long-context-gpt-oss-training#introducing-unsloth-flex-attention-support) which enables **>8× longer context lengths**, **>50% less VRAM usage** and **>1.5× faster training** vs. all implementations. [Read more here](https://docs.unsloth.ai/models/long-context-gpt-oss-training#introducing-unsloth-flex-attention-support)
{% endhint %}

> [**Fine-tune**](#fine-tuning-gpt-oss-with-unsloth) **gpt-oss-20b for free with our** [**Colab notebook**](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/gpt-oss-\(20B\)-Fine-tuning.ipynb)

Trained with [RL](https://docs.unsloth.ai/get-started/reinforcement-learning-rl-guide), **gpt-oss-120b** rivals o4-mini and **gpt-oss-20b** rivals o3-mini. Both excel at function calling and CoT reasoning, surpassing o1 and GPT-4o.

#### **gpt-oss - Unsloth GGUFs:**

{% hint style="success" %}
**Includes Unsloth's** [**chat template fixes**](#unsloth-fixes-for-gpt-oss)**. For best results, use our uploads & train with Unsloth!**
{% endhint %}

* 20B: [gpt-oss-**20B**](https://huggingface.co/unsloth/gpt-oss-20b-GGUF)
* 120B: [gpt-oss-**120B**](https://huggingface.co/unsloth/gpt-oss-120b-GGUF)

## :scroll:Unsloth fixes for gpt-oss

OpenAI released a standalone parsing and tokenization library called [Harmony](https://github.com/openai/harmony) which allows one to tokenize conversations to OpenAI's preferred format for gpt-oss. The official OpenAI [cookbook article](https://app.gitbook.com/o/HpyELzcNe0topgVLGCZY/s/xhOjnexMCB3dmuQFQ2Zq/) provides many more details on how to use the Harmony library.

Inference engines generally use the jinja chat template instead and not the Harmony package, and we found some issues with them after comparing with Harmony directly. If you see below, the top is the correct rendered form as from Harmony. The below is the one rendered by the current jinja chat template. There are quite a few differences!

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FFqIrmxJhFtJutzMn5wLx%2FScreenshot%202025-08-08%20at%2008-19-49%20Untitled151.ipynb%20-%20Colab.png?alt=media&#x26;token=e740b75f-1634-45ad-9be7-55370d13cd7e" alt=""><figcaption></figcaption></figure>

We also made some functions to directly allow you to use OpenAI's Harmony library directly without a jinja chat template if you desire - you can simply parse in normal conversations like below:

Then use the `encode_conversations_with_harmony` function from Unsloth:

The harmony format includes multiple interesting things:

1. `reasoning_effort = "medium"` You can select low, medium or high, and this changes gpt-oss's reasoning budget - generally the higher the better the accuracy of the model.
2. `developer_instructions` is like a system prompt which you can add.
3. `model_identity` is best left alone - you can edit it, but we're unsure if custom ones will function.

We find multiple issues with current jinja chat templates (there exists multiple implementations across the ecosystem):

1. Function and tool calls are rendered with `tojson`, which is fine it's a dict, but if it's a string, speech marks and other **symbols become backslashed**.
2. There are some **extra new lines** in the jinja template on some boundaries.
3. Tool calling thoughts from the model should have the **`analysis` tag and not `final` tag**.
4. Other chat templates seem to not utilize `<|channel|>final` at all - one should use this for the final assistant message. You should not use this for thinking traces or tool calls.

Our chat templates for the GGUF, our BnB and BF16 uploads and all versions are fixed! For example when comparing both ours and Harmony's format, we get no different characters:

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2Fq3pLyJyjBA7MTENhEX8S%2FScreenshot%202025-08-08%20at%2008-20-00%20Untitled151.ipynb%20-%20Colab.png?alt=media&#x26;token=a02d2626-c535-4aa3-bd72-09bf5829ac8e" alt=""><figcaption></figcaption></figure>

### :1234: Precision issues

We found multiple precision issues in Tesla T4 and float16 machines primarily since the model was trained using BF16, and so outliers and overflows existed. MXFP4 is not actually supported on Ampere and older GPUs, so Triton provides `tl.dot_scaled` for MXFP4 matrix multiplication. It upcasts the matrices to BF16 internally on the fly.

We made a [MXFP4 inference notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/GPT_OSS_MXFP4_\(20B\)-Inference.ipynb) as well in Tesla T4 Colab!

{% hint style="info" %}
[Software emulation](https://triton-lang.org/main/python-api/generated/triton.language.dot_scaled.html) enables targeting hardware architectures without native microscaling operation support. Right now for such case, microscaled lhs/rhs are upcasted to `bf16` element type beforehand for dot computation,
{% endhint %}

We found if you use float16 as the mixed precision autocast data-type, you will get infinities after some time. To counteract this, we found doing the MoE in bfloat16, then leaving it in either bfloat16 or float32 precision. If older GPUs don't even have bfloat16 support (like T4), then float32 is used.

We also change all precisions of operations (like the router) to float32 for float16 machines.

## 🖥️ **Running gpt-oss**

Below are guides for the [20B](#run-gpt-oss-20b) and [120B](#run-gpt-oss-120b) variants of the model.

{% hint style="info" %}
Any quant smaller than F16, including 2-bit has minimal accuracy loss, since only some parts (e.g., attention layers) are lower bit while most remain full-precision. Thats why sizes are close to the F16 model; for example, the 2-bit (11.5 GB) version performs nearly the same as the full 16-bit (14 GB) one. Once llama.cpp supports better quantization for these models, we'll upload them ASAP.
{% endhint %}

The `gpt-oss` models from OpenAI include a feature that allows users to adjust the model's "reasoning effort." This gives you control over the trade-off between the model's performance and its response speed (latency) which by the amount of token the model will use to think.

The `gpt-oss` models offer three distinct levels of reasoning effort you can choose from:

* **Low**: Optimized for tasks that need very fast responses and don't require complex, multi-step reasoning.
* **Medium**: A balance between performance and speed.
* **High**: Provides the strongest reasoning performance for tasks that require it, though this results in higher latency.

### :gear: Recommended Settings

OpenAI recommends these inference settings for both models:

`temperature=1.0`, `top_p=1.0`, `top_k=0`

* <mark style="background-color:green;">**Temperature of 1.0**</mark>
* Top\_K = 0 (or experiment with 100 for possible better results)
* Top\_P = 1.0
* Recommended minimum context: 16,384
* Maximum context length window: 131,072

The end of sentence/generation token: EOS is `<|return|>`

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2F5uMxZIFbSS7976wghYcR%2Fgpt-oss-20b.svg?alt=media&#x26;token=43e2694c-317b-49ec-9723-2c08e1cc9dd3" alt=""><figcaption></figcaption></figure>

To achieve inference speeds of 6+ tokens per second for our Dynamic 4-bit quant, have at least **14GB of unified memory** (combined VRAM and RAM) or **14GB of system RAM** alone. As a rule of thumb, your available memory should match or exceed the size of the model youre using. GGUF Link: [unsloth/gpt-oss-20b-GGUF](https://huggingface.co/unsloth/gpt-oss-20b-GGUF)

**NOTE:** The model can run on less memory than its total size, but this will slow down inference. Maximum memory is only needed for the fastest speeds.&#x20;

{% hint style="info" %}
Follow the [**best practices above**](#recommended-settings). They're the same as the 120B model.
{% endhint %}

You can run the model on Google Colab, Docker, LM Studio or llama.cpp for now. See below:

> **You can run gpt-oss-20b for free with our** [**Google Colab notebook**](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/GPT_OSS_MXFP4_\(20B\)-Inference.ipynb)

#### 🐋 Docker: Run gpt-oss-20b Tutorial

If you already have Docker desktop, all you need to do is run the command below and you're done:

#### :sparkles: Llama.cpp: Run gpt-oss-20b Tutorial

1. Obtain the latest `llama.cpp` on [GitHub here](https://github.com/ggml-org/llama.cpp). You can follow the build instructions below as well. Change `-DGGML_CUDA=ON` to `-DGGML_CUDA=OFF` if you don't have a GPU or just want CPU inference.

2. You can directly pull from Hugging Face via:

3. Download the model via (after installing `pip install huggingface_hub hf_transfer` ).

**Examples:**

Example 1 (python):
```python
messages = [
    {"role" : "user", "content" : "What is 1+1?"},
    {"role" : "assistant", "content" : "2"},
    {"role": "user",  "content": "What's the temperature in San Francisco now? How about tomorrow? Today's date is 2024-09-30."},
    {"role": "assistant",  "content": "User asks: 'What is the weather in San Francisco?' We need to use get_current_temperature tool.", "thinking" : ""},
    {"role": "assistant", "content": "", "tool_calls": [{"name": "get_current_temperature", "arguments": '{"location": "San Francisco, California, United States", "unit": "celsius"}'}]},
    {"role": "tool", "name": "get_current_temperature", "content": '{"temperature": 19.9, "location": "San Francisco, California, United States", "unit": "celsius"}'},
]

Example 2 (python):

from unsloth_zoo import encode_conversations_with_harmony

def encode_conversations_with_harmony(
    messages,
    reasoning_effort = "medium",
    add_generation_prompt = True,
    tool_calls = None,
    developer_instructions = None,
    model_identity = "You are ChatGPT, a large language model trained by OpenAI.",
)

Example 3 (unknown):

<|start|>system<|message|>You are ChatGPT, a large language model trained by OpenAI.\nKnowledge cutoff: 2024-06\nCurrent date: 2025-08-05\n\nReasoning: medium\n\n# Valid channels: analysis, commentary, final. Channel must be included for every message.<|end|><|start|>user<|message|>Hello<|end|><|start|>assistant<|channel|>final<|message|>Hi there!<|end|><|start|>user<|message|>What is 1+1?<|end|><|start|>assistant

Example 4 (bash):

docker model pull hf.co/unsloth/gpt-oss-20b-GGUF:F16

Constants

URL: llms-txt#constants

WIDTH, HEIGHT = 800, 600 GROUND_HEIGHT = 20 GRAVITY = 0.7 PIPE_SPEED = -3 BIRD_SIZE = 45 MIN_GAP = 130 MAX_GAP = 200 PIPE_COLORS = [(0, 96, 0), (205, 133, 63), (89, 97, 107)] DARK_BROWN = (94, 72, 4) YELLOW = (252, 228, 6)

screen = pygame.display.set_mode((WIDTH, HEIGHT)) clock = pygame.time.Clock()

def random_light_color(): return ( random.randint(180, 230), random.randint(190, 300), random.randint(250, 255) )

def reset_game(): global bird_x, bird_y global pipes, score global background_color, land_color global bird_shape, bird_color

Bird properties

bird_x = WIDTH * 0.3
bird_y = HEIGHT // 2
bird_vel = -5  # Initial upward thrust

pipes.clear() ### <<< NameError: name 'pipes' is not defined. Did you forget to import 'pipes'? python import pygame from random import randint # For generating colors/shapes/positions randomly pygame.init()

Examples:

Example 1 (unknown):

{% endcode %}

8. If you use `--repeat-penalty 1.5`, it gets even worse and more obvious, with actually totally incorrect syntax.

Generate output

URL: llms-txt#generate-output

model_outputs = llm.generate(model_input, sampling_param)


Magistral: How to Run & Fine-tune

URL: llms-txt#magistral:-how-to-run-&-fine-tune

Contents:

  • 🖥️ Running Magistral
    • ⚙️ Official Recommended Settings
    • :question:Testing the model
  • 🦙 Tutorial: How to Run Magistral in Ollama
  • 📖 Tutorial: How to Run Magistral in llama.cpp

Meet Magistral - Mistral's new reasoning models.

Magistral-Small-2509 is a reasoning LLM developed by Mistral AI. It excels at coding and mathematics and supports multiple languages. Magistral supports a 128k token context window and was finetuned from Mistral-Small-3.2. Magistral runs perfectly well locally on a single RTX 4090 or a Mac with 16 to 24GB RAM.

Running Magistral Tutorial Fine-tuning Magistral

{% hint style="success" %} Update: Magistral-2509 new update is out as of September, 2025!

Now with Vision support! We worked with Mistral again with the release of Magistral. Make sure to download Mistral's official uploads or Unsloth's uploads to get the correct implementation (ie correct system prompt, correct chat template etc.)

If you're using llama.cpp, please use --jinja to enable the system prompt! {% endhint %}

All uploads use Unsloth Dynamic 2.0 for SOTA 5-shot MMLU and KL Divergence performance, meaning you can run & fine-tune quantized Mistral LLMs with minimal accuracy loss.

Magistral-Small - Unsloth Dynamic uploads:

Dynamic 2.0 GGUF (to run)Dynamic 4-bit (to finetune/deploy)Dynamic Float8

🖥️ Running Magistral

According to Mistral AI, these are the recommended settings for inference:

  • Temperature of: 0.7
  • Min_P of: 0.01 (optional, but 0.01 works well, llama.cpp default is 0.1)
  • Set top_p to: 0.95
  • A 128k context window is supported, but performance might degrade past 40k. So we recommend setting the maximum length to 40k if you see bad performance.

This is the recommended system prompt for Magistral 2509, 2507:

{% code overflow="wrap" %}

This is the recommended system prompt for Magistral 2506:

{% hint style="success" %} Our dynamic uploads have the 'UD' prefix in them. Those without are not dynamic however still utilize our calibration dataset. {% endhint %}

  • Multilingual: Magistral supports many languages including: English, French, German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Malay, Nepali, Polish, Portuguese, Romanian, Russian, Serbian, Spanish, Swedish, Turkish, Ukrainian, Vietnamese, Arabic, Bengali, Chinese, and Farsi.

:question:Testing the model

Mistral has their own vibe checking prompts which can be used to evaluate Magistral. Keep in mind these tests are based on running the full unquantized version of the model, however you could also test them on quantized versions:

Easy - Make sure they always work

Medium - Should most of the time be correct

Hard - Should sometimes get them right

We provide some example outputs at the end of the blog.

🦙 Tutorial: How to Run Magistral in Ollama

  1. Install ollama if you haven't already!

  2. Run the model with our dynamic quant. We did not set the context length automatically, so it will just use Ollama's default set context length.
    Note you can call ollama serve &in another terminal if it fails! We include all suggested parameters (temperature etc) in params in our Hugging Face upload!

  3. Also Magistral supports 40K context lengths, so best to enable KV cache quantization. We use 8bit quantization which saves 50% memory usage. You can also try "q4_0" or "q8_0"

  4. Ollama also sets the default context length to 4096, as mentioned here. Use OLLAMA_CONTEXT_LENGTH=8192 to change it to 8192. Magistral supports up to 128K, but 40K (40960) is tested most.

📖 Tutorial: How to Run Magistral in llama.cpp

  1. Obtain the latest llama.cpp on GitHub here. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.

  2. If you want to use llama.cpp directly to load models, you can do the below: (:Q4_K_XL) is the quantization type. You can also download via Hugging Face (point 3). This is similar to ollama run

{% code overflow="wrap" %}

{% hint style="warning" %} In llama.cpp, please use --jinja to enable the system prompt! {% endhint %}

  1. OR download the model via (after installing pip install huggingface_hub hf_transfer ). You can choose UD-Q4_K_XL, (Unsloth Dynamic), Q4_K_M, or other quantized versions (like BF16 full precision).

Examples:

Example 1 (unknown):

First draft your thinking process (inner monologue) until you arrive at a response. Format your response using Markdown, and use LaTeX for any mathematical equations. Write both your thoughts and the response in the same language as the input.

Your thinking process must follow the template below:[THINK]Your thoughts or/and draft, like working through an exercise on scratch paper. Be as casual and as long as you want until you are confident to generate the response. Use the same language as the input.[/THINK]Here, provide a self-contained response.

Example 2 (unknown):

A user will ask you to solve a task. You should first draft your thinking process (inner monologue) until you have derived the final answer. Afterwards, write a self-contained summary of your thoughts (i.e. your summary should be succinct but contain all the critical steps you needed to reach the conclusion). You should use Markdown to format your response. Write both your thoughts and summary in the same language as the task posed by the user. NEVER use \boxed{} in your response.

Your thinking process must follow the template below:
<think>
Your thoughts or/and draft, like working through an exercise on scratch paper. Be as casual and as long as you want until you are confident to generate a correct answer.
</think>

Here, provide a concise summary that reflects your reasoning and presents a clear final answer to the user. Don't mention that this is a summary.

Problem:

Example 3 (py):

prompt_1 = 'How many "r" are in strawberry?'

prompt_2 = 'John is one of 4 children. The first sister is 4 years old. Next year, the second sister will be twice as old as the first sister. The third sister is two years older than the second sister. The third sister is half the ago of her older brother. How old is John?'

prompt_3 = '9.11 and 9.8, which is greater?'

Example 4 (py):

prompt_4 = "Think about 5 random numbers. Verify if you can combine them with addition, multiplication, subtraction or division to 133"

prompt_5 = "Write 4 sentences, each with at least 8 words. Now make absolutely sure that every sentence has exactly one word less than the previous sentence."

prompt_6 = "If it takes 30 minutes to dry 12 T-shirts in the sun, how long does it take to dry 33 T-shirts?"

From https://mlabonne.github.io/blog/posts/Quantize_Llama_2_models_using_ggml.html

URL: llms-txt#from-https://mlabonne.github.io/blog/posts/quantize_llama_2_models_using_ggml.html

Contents:

  • Running in Unsloth works well, but after exporting & running on other platforms, the results are poor
  • Saving to GGUF / vLLM 16bit crashes
  • How do I manually save to GGUF?

ALLOWED_QUANTS =
{ "not_quantized" : "Recommended. Fast conversion. Slow inference, big files.", "fast_quantized" : "Recommended. Fast conversion. OK inference, OK file size.", "quantized" : "Recommended. Slow conversion. Fast inference, small files.", "f32" : "Not recommended. Retains 100% accuracy, but super slow and memory hungry.", "f16" : "Fastest conversion + retains 100% accuracy. Slow and memory hungry.", "q8_0" : "Fast conversion. High resource use, but generally acceptable.", "q4_k_m" : "Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K", "q5_k_m" : "Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K", "q2_k" : "Uses Q4_K for the attention.vw and feed_forward.w2 tensors, Q2_K for the other tensors.", "q3_k_l" : "Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K", "q3_k_m" : "Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K", "q3_k_s" : "Uses Q3_K for all tensors", "q4_0" : "Original quant method, 4-bit.", "q4_1" : "Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models.", "q4_k_s" : "Uses Q4_K for all tensors", "q4_k" : "alias for q4_k_m", "q5_k" : "alias for q5_k_m", "q5_0" : "Higher accuracy, higher resource usage and slower inference.", "q5_1" : "Even higher accuracy, resource usage and slower inference.", "q5_k_s" : "Uses Q5_K for all tensors", "q6_k" : "Uses Q8_K for all tensors", "iq2_xxs" : "2.06 bpw quantization", "iq2_xs" : "2.31 bpw quantization", "iq3_xxs" : "3.06 bpw quantization", "q3_k_xs" : "3-bit extra small quantization", } python model.save_pretrained_merged("merged_model", tokenizer, save_method = "merged_16bit",) bash apt-get update apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y git clone https://github.com/ggerganov/llama.cpp cmake llama.cpp -B llama.cpp/build
-DBUILD_SHARED_LIBS=ON -DGGML_CUDA=ON -DLLAMA_CURL=ON cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split llama-mtmd-cli cp llama.cpp/build/bin/llama-* llama.cpp

python llama.cpp/convert-hf-to-gguf.py FOLDER --outfile OUTPUT --outtype f16 python model.save_pretrained_merged("merged_model", tokenizer, save_method = "merged_16bit",) bash apt-get update apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y git clone https://github.com/ggerganov/llama.cpp cmake llama.cpp -B llama.cpp/build
-DBUILD_SHARED_LIBS=ON -DGGML_CUDA=ON -DLLAMA_CURL=ON cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split llama-mtmd-cli cp llama.cpp/build/bin/llama-* llama.cpp bash python llama.cpp/convert_hf_to_gguf.py merged_model
--outfile model-F16.gguf --outtype f16
--split-max-size 50G bash

Examples:

Example 1 (unknown):

{% endtab %}

{% tab title="Manual Saving" %}
First save your model to 16bit:

Example 2 (unknown):

Then use the terminal and do:

Example 3 (unknown):

Or follow the steps at <https://rentry.org/llama-cpp-conversions#merging-loras-into-a-model> using the model name "merged\_model" to merge to GGUF.
{% endtab %}
{% endtabs %}

### Running in Unsloth works well, but after exporting & running on other platforms, the results are poor

You might sometimes encounter an issue where your model runs and produces good results on Unsloth, but when you use it on another platform like Ollama or vLLM, the results are poor or you might get gibberish, endless/infinite generations *or* repeated output&#x73;**.**

* The most common cause of this error is using an <mark style="background-color:blue;">**incorrect chat template**</mark>**.** Its essential to use the SAME chat template that was used when training the model in Unsloth and later when you run it in another framework, such as llama.cpp or Ollama. When inferencing from a saved model, it's crucial to apply the correct template.
* You must use the correct `eos token`. If not, you might get gibberish on longer generations.
* It might also be because your inference engine adds an unnecessary "start of sequence" token (or the lack of thereof on the contrary) so ensure you check both hypotheses!
* <mark style="background-color:green;">**Use our conversational notebooks to force the chat template - this will fix most issues.**</mark>
  * Qwen-3 14B Conversational notebook [**Open in Colab**](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_\(14B\)-Reasoning-Conversational.ipynb)
  * Gemma-3 4B Conversational notebook [**Open in Colab**](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma3_\(4B\).ipynb)
  * Llama-3.2 3B Conversational notebook [**Open in Colab**](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_\(1B_and_3B\)-Conversational.ipynb)
  * Phi-4 14B Conversational notebook [**Open in Colab**](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4-Conversational.ipynb)
  * Mistral v0.3 7B Conversational notebook [**Open in Colab**](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_\(7B\)-Conversational.ipynb)
  * **More notebooks in our** [**notebooks docs**](https://docs.unsloth.ai/get-started/unsloth-notebooks)

### Saving to GGUF / vLLM 16bit crashes

You can try reducing the maximum GPU usage during saving by changing `maximum_memory_usage`.

The default is `model.save_pretrained(..., maximum_memory_usage = 0.75)`. Reduce it to say 0.5 to use 50% of GPU peak memory or lower. This can reduce OOM crashes during saving.

### How do I manually save to GGUF?

First save your model to 16bit via:

Example 4 (unknown):

Compile llama.cpp from source like below:

Phi-4 Reasoning: How to Run & Fine-tune

URL: llms-txt#phi-4-reasoning:-how-to-run-&-fine-tune

Contents:

  • 🖥️ Running Phi-4 reasoning
    • ⚙️ Official Recommended Settings
    • Phi-4 reasoning Chat templates
    • 🦙 Ollama: Run Phi-4 reasoning Tutorial
    • 📖 Llama.cpp: Run Phi-4 reasoning Tutorial

Learn to run & fine-tune Phi-4 reasoning models locally with Unsloth + our Dynamic 2.0 quants

Microsoft's new Phi-4 reasoning models are now supported in Unsloth. The 'plus' variant performs on par with OpenAI's o1-mini, o3-mini and Sonnet 3.7. The 'plus' and standard reasoning models are 14B parameters while the 'mini' has 4B parameters.

All Phi-4 reasoning uploads use our Unsloth Dynamic 2.0 methodology.

Phi-4 reasoning - Unsloth Dynamic 2.0 uploads:

Dynamic 2.0 GGUF (to run) Dynamic 4-bit Safetensor (to finetune/deploy)

🖥️ Running Phi-4 reasoning

According to Microsoft, these are the recommended settings for inference:

  • Temperature = 0.8
  • Top_P = 0.95

Phi-4 reasoning Chat templates

Please ensure you use the correct chat template as the 'mini' variant has a different one.

{% code overflow="wrap" %}

Phi-4-reasoning and Phi-4-reasoning-plus:

This format is used for general conversation and instructions:

{% code overflow="wrap" %}

{% hint style="info" %} Yes, the chat template/prompt format is this long! {% endhint %}

🦙 Ollama: Run Phi-4 reasoning Tutorial

  1. Install ollama if you haven't already!

  2. Run the model! Note you can call ollama servein another terminal if it fails. We include all our fixes and suggested parameters (temperature etc) in params in our Hugging Face upload.

📖 Llama.cpp: Run Phi-4 reasoning Tutorial

{% hint style="warning" %} You must use --jinja in llama.cpp to enable reasoning for the models, expect for the 'mini' variant. Otherwise no token will be provided. {% endhint %}

  1. Obtain the latest llama.cpp on GitHub here. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.

  2. Download the model via (after installing pip install huggingface_hub hf_transfer ). You can choose Q4_K_M, or other quantized versions.

Examples:

Example 1 (unknown):

<|system|>Your name is Phi, an AI math expert developed by Microsoft.<|end|><|user|>How to solve 3*x^2+4*x+5=1?<|end|><|assistant|>

Example 2 (unknown):

<|im_start|>system<|im_sep|>You are Phi, a language model trained by Microsoft to help users. Your role as an assistant involves thoroughly exploring questions through a systematic thinking process before providing the final precise and accurate solutions. This requires engaging in a comprehensive cycle of analysis, summarizing, exploration, reassessment, reflection, backtracing, and iteration to develop well-considered thinking process. Please structure your response into two main sections: Thought and Solution using the specified format: <think> {Thought section} </think> {Solution section}. In the Thought section, detail your reasoning process in steps. Each step should include detailed considerations such as analysing questions, summarizing relevant findings, brainstorming new ideas, verifying the accuracy of the current steps, refining any errors, and revisiting previous steps. In the Solution section, based on various attempts, explorations, and reflections from the Thought section, systematically present the final solution that you deem correct. The Solution section should be logical, accurate, and concise and detail necessary steps needed to reach the conclusion. Now, try to solve the following question through the above guidelines:<|im_end|><|im_start|>user<|im_sep|>What is 1+1?<|im_end|><|im_start|>assistant<|im_sep|>

Example 3 (bash):

apt-get update
apt-get install pciutils -y
curl -fsSL https://ollama.com/install.sh | sh

Example 4 (bash):

ollama run hf.co/unsloth/Phi-4-mini-reasoning-GGUF:Q4_K_XL

Vision Fine-tuning

URL: llms-txt#vision-fine-tuning

Contents:

  • Vision Fine-tuning Dataset
  • Multi-image training

Learn how to fine-tune vision/multimodal LLMs with Unsloth

Fine-tuning vision models enables model to excel at certain tasks normal LLMs won't be as good as such as object/movement detection. You can also train VLMs with RL. We have many free notebooks for vision fine-tuning:

  • NEW: Qwen3-VL (8B) Vision: Notebook
  • Gemma 3 (4B) Vision: Notebook
  • Llama 3.2 Vision fine-tuning for radiography: Notebook
    How can we assist medical professionals in analyzing Xrays, CT Scans & ultrasounds faster.
  • Qwen2.5 VL fine-tuning for converting handwriting to LaTeX: Notebook
    This allows complex math formulas to be easily transcribed as LaTeX without manually writing it.
  • Pixtral 12B 2409 vision fine-tuning for general Q&A: Notebook
    One can concatenate general Q&A datasets with more niche datasets to make the finetune not forget base model skills.

{% hint style="info" %} It is best to ensure your dataset has images of all the same size/dimensions. Use dimensions of 300-1000px to ensure your training does not take too long or use too many resources. {% endhint %}

To finetune vision models, we now allow you to select which parts of the mode to finetune. You can select to only finetune the vision layers, or the language layers, or the attention / MLP layers! We set them all on by default!

Vision Fine-tuning Dataset

The dataset for fine-tuning a vision or multimodal model is similar to standard question & answer pair datasets , but this time, they also includes image inputs. For example, the Llama 3.2 Vision Notebook uses a radiography case to show how AI can help medical professionals analyze X-rays, CT scans, and ultrasounds more efficiently.

We'll be using a sampled version of the ROCO radiography dataset. You can access the dataset here. The dataset includes X-rays, CT scans and ultrasounds showcasing medical conditions and diseases. Each image has a caption written by experts describing it. The goal is to finetune a VLM to make it a useful analysis tool for medical professionals.

Let's take a look at the dataset, and check what the 1st example shows:

Image Caption

Panoramic radiography shows an osteolytic lesion in the right posterior maxilla with resorption of the floor of the maxillary sinus (arrows).

To format the dataset, all vision finetuning tasks should be formatted as follows:

We will craft an custom instruction asking the VLM to be an expert radiographer. Notice also instead of just 1 instruction, you can add multiple turns to make it a dynamic conversation.

Let's convert the dataset into the "correct" format for finetuning:

The first example is now structured like below:

{% code overflow="wrap" %}

Before we do any finetuning, maybe the vision model already knows how to analyse the images? Let's check if this is the case!

For more details, view our dataset section in the notebook here.

Multi-image training

In order to fine-tune or train a VLM like Qwen3-VL with multi-images the most straightforward change is to swap

Using map kicks in dataset standardization and arrow processing rules which can be strict and more complicated to define.

Examples:

Example 1 (python):

model = FastVisionModel.get_peft_model(
    model,
    finetune_vision_layers     = True, # False if not finetuning vision layers
    finetune_language_layers   = True, # False if not finetuning language layers
    finetune_attention_modules = True, # False if not finetuning attention layers
    finetune_mlp_modules       = True, # False if not finetuning MLP layers

    r = 16,                           # The larger, the higher the accuracy, but might overfit
    lora_alpha = 16,                  # Recommended alpha == r at least
    lora_dropout = 0,
    bias = "none",
    random_state = 3407,
    use_rslora = False,               # We support rank stabilized LoRA
    loftq_config = None,               # And LoftQ
    target_modules = "all-linear",    # Optional now! Can specify a list if needed
    modules_to_save=[
        "lm_head",
        "embed_tokens",
    ],
)

Example 2 (unknown):

Dataset({
    features: ['image', 'image_id', 'caption', 'cui'],
    num_rows: 1978
})

Example 3 (python):

[
{ "role": "user",
  "content": [{"type": "text",  "text": instruction}, {"type": "image", "image": image} ]
},
{ "role": "assistant",
  "content": [{"type": "text",  "text": answer} ]
},
]

Example 4 (unknown):

Let's convert the dataset into the "correct" format for finetuning:

model.push_to_hub("your_name/lora_model", token = "...") # Online saving

URL: llms-txt#model.push_to_hub("your_name/lora_model",-token-=-"...")-#-online-saving


Function to prepare the GSM8K dataset

URL: llms-txt#function-to-prepare-the-gsm8k-dataset

Contents:

  • Reward Functions/Verifier
  • Train your model

def get_gsm8k_questions(split="train") -> Dataset: data = load_dataset("openai/gsm8k", "main")[split] data = data.map( lambda x: { "prompt": [ {"role": "system", "content": SYSTEM_PROMPT}, {"role": "user", "content": x["question"]}, ], "answer": extract_hash_answer(x["answer"]), } ) return data

dataset = get_gsm8k_questions() python epsilon=0.2, epsilon_high=0.28, # one sided delta=1.5 # two sided

Examples:

Example 1 (unknown):

The dataset is prepared by extracting the answers and formatting them as structured strings.
{% endstep %}

{% step %}

### Reward Functions/Verifier

[Reward Functions/Verifiers](https://docs.unsloth.ai/get-started/reinforcement-learning-rl-guide/..#reward-functions-verifier) lets us know if the model is doing well or not according to the dataset you have provided. Each generation run will be assessed on how it performs to the score of the average of the rest of generations. You can create your own reward functions however we have already pre-selected them for you with [Will's GSM8K](https://docs.unsloth.ai/get-started/reinforcement-learning-rl-guide/..#gsm8k-reward-functions) reward functions. With this, we have 5 different ways which we can reward each generation.

You can input your generations into an LLM like ChatGPT 4o or Llama 3.1 (8B) and design a reward function and verifier to evaluate it. For example, feed your generations into a LLM of your choice and set a rule: "If the answer sounds too robotic, deduct 3 points." This helps refine outputs based on quality criteria. **See examples** of what they can look like [here](https://docs.unsloth.ai/get-started/reinforcement-learning-rl-guide/..#reward-function-examples).

**Example Reward Function for an Email Automation Task:**

* **Question:** Inbound email
* **Answer:** Outbound email
* **Reward Functions:**
  * If the answer contains a required keyword → **+1**
  * If the answer exactly matches the ideal response → **+1**
  * If the response is too long → **-1**
  * If the recipient's name is included → **+1**
  * If a signature block (phone, email, address) is present → **+1**

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2F6GRcqgUKmKn2dWCk4nWK%2Fimage.png?alt=media&#x26;token=ac153141-03f8-4795-9074-ad592289bd70" alt=""><figcaption></figcaption></figure>
{% endstep %}

{% step %}

### Train your model

We have pre-selected hyperparameters for the most optimal results however you could change them. Read all about [parameters here](https://docs.unsloth.ai/get-started/fine-tuning-llms-guide/lora-hyperparameters-guide). For **advanced GRPO** documentation on batching, generation and training parameters, [read our guide!](https://docs.unsloth.ai/get-started/reinforcement-learning-rl-guide/advanced-rl-documentation)

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2F1MpLSyaOH3j8MhQvquqX%2Fimage.png?alt=media&#x26;token=818034b1-f2db-464d-a108-3b2c6897edb7" alt="" width="563"><figcaption></figcaption></figure>

The **GRPOConfig** defines key hyperparameters for training:

* `use_vllm`: Activates fast inference using vLLM.
* `learning_rate`: Determines the model's learning speed.
* `num_generations`: Specifies the number of completions generated per prompt.
* `max_steps`: Sets the total number of training steps.

{% hint style="success" %}
**NEW!** We now support DAPO, Dr. GRPO and most other new GRPO techniques. You can play with the following arguments in GRPOConfig to enable:

Tutorial: How to Train gpt-oss with RL

URL: llms-txt#tutorial:-how-to-train-gpt-oss-with-rl

Contents:

  • Install Unsloth
  • Load gpt-oss with Unsloth
  • 2048 game environment (minimal)
  • Safe code execution & anticheat checks
  • Prompt & dataset
  • Reward function time!
  • Configure GRPO
  • Train your model
  • Inference (after training)
  • Save / Export your fine-tuned mode

Learn to train OpenAI gpt-oss with GRPO to autonomously beat 2048 locally or on Colab.

LLMs often struggle with tasks that involve complex environments. However, by applying reinforcement learning (RL) and designing a custom reward function, these challenges can be overcome.

RL can be adapted for tasks such as auto kernel or strategy creation. This tutorial shows how to train gpt-oss with GRPO and Unsloth to autonomously beat 2048.

2048 notebook (Official OpenAI example) Kernel generation notebook

What youll build:

  • Train gpt-oss-20b so the model can automatically win 2048
  • Create a minimal 2048 environment the model can interact with
  • Define reward functions that:
    1. Check the generated strategy compiles and runs,
    2. Prevent reward hacking (disallow external imports), and
    3. Reward actual game success
  • Run inference and export the model (MXFP4 4bit or merged FP16)

{% hint style="info" %} Hardware: The 2048 example runs on a free Colab T4, but training will be slow. A100/H100 is much faster. 4bit loading + LoRA lets you fit a 20B model into modest VRAM. {% endhint %}

{% stepper %} {% step %}

Run this cell at the top of a notebook (works on Colab).

Load gpt-oss with Unsloth

Load the 20B model in 4bit QLoRA for memory efficiency, then wrap it with a LoRA adapter. You can also train it in 16-bit LoRA but it will use 4x more memory. For more settings view our configuration guide.

{% hint style="info" %} If you hit OOM, try lowering max_seq_length, lora_rank, or num_generations (later), and keep load_in_4bit=True. {% endhint %} {% endstep %}

2048 game environment (minimal)

  • A GameBoard class supporting W/A/S/D moves
  • Merge/score logic
  • execute_with_time_limit wrapper so poorly written strategies cant hang the kernel

You can quickly smoketest with a trivial policy:

Safe code execution & anticheat checks

Generated strategies are Python functions. To keep execution safe and prevent reward hacking:

  • Module whitelist check — only allow Python stdlib symbols:

  • Block disallowed imports (e.g., NumPy):

  • Lock down execution to a sandboxed function:

  • Enforce a hard wallclock limit on strategy runs:

We prompt the model to emit a short strategy function inside triple backticks:

python def strategy(board): return "W" # Example `

Create a tiny synthetic dataset (reusing the same prompt) and compute the prompt length so GRPO knows how many completion tokens to sample:

{% hint style="info" %} You can replace this dataset with real prompts for your own RL task. {% endhint %} {% endstep %}

Reward function time!

  1. Extract the code block from the models reply:

") >= 2: first = text.find("", first) fx = text[first:second].strip() fx = fx.removeprefix("python\n") fx = fx[fx.find("def"):] if fx.startswith("def strategy(board):"): return fx return None python from unsloth import create_locked_down_function, check_python_modules

def function_works(completions, **kwargs): scores = [] for completion in completions: response = completion[0]["content"] function = extract_function(response) if function is None: scores.append(-2.0) continue ok, info = check_python_modules(function) if "error" in info: scores.append(-2.0) continue try: _ = create_locked_down_function(function) scores.append(1.0) except Exception: scores.append(-0.5) return scores python def no_cheating(completions, **kwargs): scores = [] for completion in completions: response = completion[0]["content"] function = extract_function(response) if function is None: scores.append(-1.0) continue ok, _ = check_python_modules(function) scores.append(1.0 if ok else -20.0) # heavy penalty if cheating return scores python import numpy as np

PRINTER = 0 # occasionally print for debugging

def strategy_succeeds(completions, **kwargs): global PRINTER scores = [] seed = np.random.randint(10000) for completion in completions: response = completion[0]["content"] function = extract_function(response) if function is None: scores.append(-2.0) continue try: new_strategy = create_locked_down_function(function) except Exception: scores.append(0.0) continue try: game = GameBoard(size=6, seed=seed, target=2048, probability_fours=0.10) steps, state = execute_strategy(new_strategy, game) if PRINTER % 5 == 0: print(function) print(f"Steps={steps} State={state}") print(game.board().pretty()) PRINTER += 1 if state == "success": scores.append(20.0) else: scores.append(2.0) # worked but didnt reach 2048 except TimeoutError: scores.append(-1.0) # timed out except Exception: scores.append(-3.0) # crashed return scores python from trl import GRPOConfig, GRPOTrainer

max_prompt_length = maximum_length + 1 max_completion_length = max_seq_length - max_prompt_length

training_args = GRPOConfig( temperature=1.0, learning_rate=5e-5, weight_decay=0.01, warmup_ratio=0.1, lr_scheduler_type="linear", optim="adamw_8bit", logging_steps=1, per_device_train_batch_size=1, gradient_accumulation_steps=1, # bump to 4 for smoother reward signals num_generations=2, # lower if you OOM max_prompt_length=max_prompt_length, max_completion_length=max_completion_length, max_steps=1000, # or set num_train_epochs=1 save_steps=100, report_to="none", output_dir="outputs", )

trainer = GRPOTrainer( model=model, processing_class=tokenizer, reward_funcs=[function_works, no_cheating, strategy_succeeds], args=training_args, train_dataset=dataset, # Optional eval split: # train_dataset=new_dataset["train"], # eval_dataset=new_dataset["test"], ) python trainer.train() python from transformers import TextStreamer

text = tokenizer.apply_chat_template( [{"role": "user", "content": prompt}], tokenize=False, add_generation_prompt=True, reasoning_effort="low", )

_ = model.generate( **tokenizer(text, return_tensors="pt").to("cuda"), temperature=1.0, max_new_tokens=1024, streamer=TextStreamer(tokenizer, skip_prompt=False) python model.save_pretrained_merged("finetuned_model", tokenizer, save_method="mxfp4")

or push

model.push_to_hub_merged("<org_or_user>/", tokenizer, token="<hf_token>", save_method="mxfp4") python model.save_pretrained_merged("finetuned_model", tokenizer, save_method="merged_16bit")

or push

model.push_to_hub_merged("<org_or_user>/", tokenizer, token="<hf_token>", save_method="merged_16bit")


### Troubleshooting & tips

* **OOM / slow**: reduce `max_seq_length`, `num_generations`, `lora_rank`; keep 4bit; try A100 if available.
* **No reward improvement**: increase training steps, soften penalties, or add curriculum (start with smaller boards / lower targets).
* **Reward hacking**: keep `check_python_modules` strict; validate strategy behavior across multiple random seeds.
* **Unstable training**: raise `gradient_accumulation_steps` to smooth updates; lower `learning_rate` (e.g., 2e5).
* **Long hangs**: ensure `execute_with_time_limit` wraps any strategy execution.
{% endstep %}

### Adapt to your own RL task

* Replace the 2048 env with your own environment and **three rewards**: (a) syntax/compilation, (b) anticheat/safety, (c) task success.
* Update the **prompt** to request the kind of function or output you need.
* Keep the same Unsloth + GRPO scaffolding; only swap the env and rewards.
{% endstep %}
{% endstepper %}

**Examples:**

Example 1 (bash):
```bash
!pip install --upgrade -qqq uv
try: import numpy; get_numpy = f"numpy=={numpy.__version__}"
except: get_numpy = "numpy"
!uv pip install -qqq \
  "torch>=2.8.0" "triton>=3.4.0" {get_numpy} torchvision bitsandbytes "transformers==4.56.2" \
  "unsloth_zoo[base] @ git+https://github.com/unslothai/unsloth-zoo" \
  "unsloth[base] @ git+https://github.com/unslothai/unsloth" \
  git+https://github.com/triton-lang/triton.git@05b2c186c1b6c9a08375389d5efe9cb4c401c075#subdirectory=python/triton_kernels
!uv pip install --upgrade --no-deps transformers==4.56.2 tokenizers
!uv pip install --no-deps trl==0.22.2

Example 2 (python):

from unsloth import FastLanguageModel
import torch

max_seq_length = 768        # Increase if your task needs longer outputs
lora_rank      = 4          # Higher rank → better but more VRAM/compute

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name        = "unsloth/gpt-oss-20b",  # or unsloth/gpt-oss-20b-BF16 on H100
    max_seq_length    = max_seq_length,
    load_in_4bit      = True,                    # False for 16bit
    offload_embedding = True,                    # saves ~1GB VRAM
)

model = FastLanguageModel.get_peft_model(
    model,
    r = lora_rank,
    target_modules = [
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    lora_alpha = lora_rank * 2,
    use_gradient_checkpointing = "unsloth",     # big memory saver
    random_state = 3407,
)

Example 3 (python):

def always_move_left(board):
    return "W"

steps, outcome = execute_strategy(always_move_left, GameBoard(size=8, seed=42, target=2048, probability_fours=0.10))

Example 4 (python):

from unsloth import check_python_modules
  ok, info = check_python_modules("""
  def strategy(board):
      import math
      from typing import Callable
      return "W"
  """)
  # ok == True means only Pythonlevel imports were used

DeepSeek-V3.1: How to Run Locally

URL: llms-txt#deepseek-v3.1:-how-to-run-locally

Contents:

  • ⚙️ Recommended Settings
  • :butterfly:Chat template bug fixes
    • 🐳Official Recommended Settings
  • :arrow_forward:Run DeepSeek-V3.1 Tutorials:
    • 🦙 Run in Ollama/Open WebUI
    • Run in llama.cpp

A guide on how to run DeepSeek-V3.1 and Terminus on your own local device!

DeepSeeks V3.1 and Terminus update introduces hybrid reasoning inference, combining 'think' and 'non-think' into one model. The full 671B parameter model requires 715GB of disk space. The quantized dynamic 2-bit version uses 245GB (-75% reduction in size). GGUF: DeepSeek-V3.1-GGUF

{% hint style="success" %} NEW: DeepSeek-V3.1-Terminus out now: DeepSeek-V3.1-Terminus-GGUF

Sept 10, 2025 update: You asked for tougher benchmarks, so were showcasing Aider Polyglot results! Our Dynamic 3-bit DeepSeek V3.1 GGUF scores 75.6%, surpassing many full-precision SOTA LLMs. Read more.

Our DeepSeek-V3.1 GGUFs include Unsloth chat template fixes for llama.cpp supported backends. {% endhint %}

All uploads use Unsloth Dynamic 2.0 for SOTA 5-shot MMLU and KL Divergence performance, meaning you can run & fine-tune quantized DeepSeek LLMs with minimal accuracy loss.

Tutorials navigation:

Run in llama.cppRun in Ollama/Open WebUI

The 1-bit dynamic quant TQ1_0 (1bit for unimportant MoE layers, 2-4bit for important MoE, and 6-8bit for rest) uses 170GB of disk space - this works well in a 1x24GB card and 128GB of RAM with MoE offloading - it also works natively in Ollama!

{% hint style="info" %} You must use --jinja for llama.cpp quants - this uses our fixed chat templates and enables the correct template! You might get incorrect results if you do not use --jinja {% endhint %}

The 2-bit quants will fit in a 1x 24GB GPU (with MoE layers offloaded to RAM). Expect around 5 tokens/s with this setup if you have bonus 128GB RAM as well. It is recommended to have at least 226GB RAM to run this 2-bit. For optimal performance you will need at least 226GB unified memory or 226GB combined RAM+VRAM for 5+ tokens/s. To learn how to increase generation speed and fit longer contexts, read here.

{% hint style="success" %} Though not a must, for best performance, have your VRAM + RAM combined equal to the size of the quant you're downloading. If not, hard drive / SSD offloading will work with llama.cpp, just inference will be slower. {% endhint %}

:butterfly:Chat template bug fixes

We fixed a few issues with DeepSeek V3.1's chat template since they did not function correctly in llama.cpp and other engines:

  1. DeepSeek V3.1 is a hybrid reasoning model, meaning you can change the chat template to enable reasoning. The chat template introduced thinking = True , but other models use enable_thinking = True . We added the option to use enable_thinking as a keyword instead.
  2. llama.cpp's jinja renderer via minja does not allow the use of extra arguments in the .split() command, so using .split(text, 1) works in Python, but not in minja. We had to change this to make llama.cpp function correctly without erroring out.

    You will get the following error when using other quants:
    terminate called after throwing an instance of 'std::runtime_error' what(): split method must have between 1 and 1 positional arguments and between 0 and 0 keyword arguments at row 3, column 1908 We fixed it in all our quants!

According to DeepSeek, these are the recommended settings for V3.1 inference:

  • Set the temperature 0.6 to reduce repetition and incoherence.
  • Set top_p to 0.95 (recommended)
  • 128K context length or less
  • Use --jinja for llama.cpp variants - we fixed some chat template issues as well!
  • Use enable_thinking = True to use reasoning/ thinking mode. By default it's set to non reasoning.

🔢 Chat template/prompt format

You do not need to force <think>\n , but you can still add it in! With the given prefix, DeepSeek V3.1 generates responses to queries in non-thinking mode. Unlike DeepSeek V3, it introduces an additional token </think>.

A BOS is forcibly added, and an EOS separates each interaction. To counteract double BOS tokens during inference, you should only call tokenizer.encode(..., add_special_tokens = False) since the chat template auto adds a BOS token as well. For llama.cpp / GGUF inference, you should skip the BOS since itll auto add it.

📔 Non-Thinking Mode (use thinking = Falseor enable_thinking = False and is by default)

Prefix: <begin▁of▁sentence>{system prompt}<User>{query}<Assistant></think>

With the given prefix, DeepSeek V3.1 generates responses to queries in non-thinking mode. Unlike DeepSeek V3, it introduces an additional token </think>.

Context: <begin▁of▁sentence>{system prompt}<User>{query}<Assistant></think>{response}<end▁of▁sentence>...<User>{query}<Assistant></think>{response}<end▁of▁sentence>

Prefix: <User>{query}<Assistant></think>

By concatenating the context and the prefix, we obtain the correct prompt for the query.

📚 Thinking Mode (use thinking = Trueor enable_thinking = True and is by default)

Prefix: <begin▁of▁sentence>{system prompt}<User>{query}<Assistant><think>

The prefix of thinking mode is similar to DeepSeek-R1.

Context: <begin▁of▁sentence>{system prompt}<User>{query}<Assistant></think>{response}<end▁of▁sentence>...<User>{query}<Assistant></think>{response}<end▁of▁sentence>

Prefix: <User>{query}<Assistant><think>

The multi-turn template is the same with non-thinking multi-turn chat template. It means the thinking token in the last turn will be dropped but the </think> is retained in every turn of context.

🏹 Tool Calling

Tool calling is supported in non-thinking mode. The format is:

<begin▁of▁sentence>{system prompt}{tool_description}<User>{query}<Assistant></think> where we populate the tool_description is area after the system prompt.

:arrow_forward:Run DeepSeek-V3.1 Tutorials:

🦙 Run in Ollama/Open WebUI

{% stepper %} {% step %} Install ollama if you haven't already! To run more variants of the model, see here.

{% step %} Run the model! Note you can call ollama servein another terminal if it fails! We include all our fixes and suggested parameters (temperature etc) in params in our Hugging Face upload!\ (NEW) To run the full R1-0528 model in Ollama, you can use our TQ1_0 (170GB quant):

{% step %} To run other quants, you need to first merge the GGUF split files into 1 like the code below. Then you will need to run the model locally.

{% step %} Open WebUI also made a step-by-step tutorial on how to run R1 and for V3.1, you will just need to replace R1 with the new V3.1 quant. {% endstep %} {% endstepper %}

Run in llama.cpp

{% stepper %} {% step %} Obtain the latest llama.cpp on GitHub here. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.

{% step %} If you want to use llama.cpp directly to load models, you can do the below: (:Q2_K_XL) is the quantization type. You can also download via Hugging Face (point 3). This is similar to ollama run . Use export LLAMA_CACHE="folder" to force llama.cpp to save to a specific location. Remember the model has only a maximum of 128K context length.

{% hint style="success" %} Please try out -ot ".ffn_.*_exps.=CPU" to offload all MoE layers to the CPU! This effectively allows you to fit all non MoE layers on 1 GPU, improving generation speeds. You can customize the regex expression to fit more layers if you have more GPU capacity.

If you have a bit more GPU memory, try -ot ".ffn_(up|down)_exps.=CPU" This offloads up and down projection MoE layers.

Try -ot ".ffn_(up)_exps.=CPU" if you have even more GPU memory. This offloads only up projection MoE layers.

And finally offload all layers via -ot ".ffn_.*_exps.=CPU" This uses the least VRAM.

You can also customize the regex, for example -ot "\.(6|7|8|9|[0-9][0-9]|[0-9][0-9][0-9])\.ffn_(gate|up|down)_exps.=CPU" means to offload gate, up and down MoE layers but only from the 6th layer onwards. {% endhint %}

{% step %} Download the model via (after installing pip install huggingface_hub hf_transfer ). You can choose UD-Q2_K_XL (dynamic 2bit quant) or other quantized versions like Q4_K_M . We recommend using our 2.7bit dynamic quant UD-Q2_K_XL to balance size and accuracy.

Examples:

Example 1 (unknown):

<begin▁of▁sentence>{system prompt}<User>{query}<Assistant></think>

Example 2 (bash):

apt-get update
apt-get install pciutils -y
curl -fsSL https://ollama.com/install.sh | sh

Example 3 (unknown):

OLLAMA_MODELS=unsloth ollama serve &

OLLAMA_MODELS=unsloth ollama run hf.co/unsloth/DeepSeek-V3.1-Terminus-GGUF:TQ1_0

Example 4 (bash):

./llama.cpp/llama-gguf-split --merge \
  DeepSeek-V3.1-Terminus-GGUF/DeepSeek-V3.1-Terminus-UD-Q2_K_XL/DeepSeek-V3.1-Terminus-UD-Q2_K_XL-00001-of-00006.gguf \
	merged_file.gguf

Get LAION dataset

URL: llms-txt#get-laion-dataset

url = "https://huggingface.co/datasets/laion/OIG/resolve/main/unified_chip2.jsonl" dataset = load_dataset("json", data_files = {"train" : url}, split = "train")


For Q8_0:

URL: llms-txt#for-q8_0:

Contents:

  • :question:Why is Q8_K_XL slower than Q8_0 GGUF?
  • :question:How to do Evaluation
  • :question:Evaluation Loop - Out of Memory or crashing.
  • :question:How do I do Early Stopping?
  • :question:Downloading gets stuck at 90 to 95%
  • :question:RuntimeError: CUDA error: device-side assert triggered
  • :question:All labels in your dataset are -100. Training losses will be all 0.
  • :question:Some weights of Gemma3nForConditionalGeneration were not initialized from the model checkpoint
  • :question:NotImplementedError: A UTF-8 locale is required. Got ANSI
  • :green_book:Citing Unsloth

python llama.cpp/convert_hf_to_gguf.py merged_model
--outfile model-Q8_0.gguf --outtype q8_0
--split-max-size 50G python new_dataset = dataset.train_test_split( test_size = 0.01, # 1% for test size can also be an integer for # of rows shuffle = True, # Should always set to True! seed = 3407, )

train_dataset = new_dataset["train"] # Dataset for training eval_dataset = new_dataset["test"] # Dataset for evaluation python from trl import SFTTrainer, SFTConfig trainer = SFTTrainer( args = SFTConfig( fp16_full_eval = True, # Set this to reduce memory usage per_device_eval_batch_size = 2,# Increasing this will use more memory eval_accumulation_steps = 4, # You can increase this include of batch_size eval_strategy = "steps", # Runs eval every few steps or epochs. eval_steps = 1, # How many evaluations done per # of training steps ), train_dataset = new_dataset["train"], eval_dataset = new_dataset["test"], ... ) trainer.train() python new_dataset = dataset.train_test_split(test_size = 0.01)

from trl import SFTTrainer, SFTConfig trainer = SFTTrainer( args = SFTConfig( fp16_full_eval = True, per_device_eval_batch_size = 2, eval_accumulation_steps = 4, eval_strategy = "steps", eval_steps = 1, ), train_dataset = new_dataset["train"], eval_dataset = new_dataset["test"], ... ) python from trl import SFTConfig, SFTTrainer trainer = SFTTrainer( args = SFTConfig( fp16_full_eval = True, per_device_eval_batch_size = 2, eval_accumulation_steps = 4, output_dir = "training_checkpoints", # location of saved checkpoints for early stopping save_strategy = "steps", # save model every N steps save_steps = 10, # how many steps until we save the model save_total_limit = 3, # keep ony 3 saved checkpoints to save disk space eval_strategy = "steps", # evaluate every N steps eval_steps = 10, # how many steps until we do evaluation load_best_model_at_end = True, # MUST USE for early stopping metric_for_best_model = "eval_loss", # metric we want to early stop on greater_is_better = False, # the lower the eval loss, the better ), model = model, tokenizer = tokenizer, train_dataset = new_dataset["train"], eval_dataset = new_dataset["test"], ) python from transformers import EarlyStoppingCallback early_stopping_callback = EarlyStoppingCallback( early_stopping_patience = 3, # How many steps we will wait if the eval loss doesn't decrease # For example the loss might increase, but decrease after 3 steps early_stopping_threshold = 0.0, # Can set higher - sets how much loss should decrease by until # we consider early stopping. For eg 0.01 means if loss was # 0.02 then 0.01, we consider to early stop the run. ) trainer.add_callback(early_stopping_callback) python import os os.environ["UNSLOTH_STABLE_DOWNLOADS"] = "1"

from unsloth import FastLanguageModel python import os os.environ["UNSLOTH_COMPILE_DISABLE"] = "1" os.environ["UNSLOTH_DISABLE_FAST_GENERATION"] = "1" python from unsloth.chat_templates import train_on_responses_only trainer = train_on_responses_only( trainer, instruction_part = "<|start_header_id|>user<|end_header_id|>\n\n", response_part = "<|start_header_id|>assistant<|end_header_id|>\n\n", ) python from unsloth.chat_templates import train_on_responses_only trainer = train_on_responses_only( trainer, instruction_part = "<start_of_turn>user\n", response_part = "<start_of_turn>model\n", ) python import locale locale.getpreferredencoding = lambda: "UTF-8"

@misc{unsloth_2025_qwen3_30b_a3b, author = {Unsloth AI and Han-Chen, Daniel and Han-Chen, Michael}, title = {Qwen3-30B-A3B-GGUF:Q8_K_XL}, year = {2025}, publisher = {Hugging Face}, howpublished = {\url{https://huggingface.co/unsloth/Qwen3-30B-A3B-GGUF}} }

@misc{unsloth, author = {Unsloth AI and Han-Chen, Daniel and Han-Chen, Michael}, title = {Unsloth}, year = {2025}, publisher = {Github}, howpublished = {\url{https://github.com/unslothai/unsloth}} }


**Examples:**

Example 1 (unknown):
```unknown
## :question:Why is Q8\_K\_XL slower than Q8\_0 GGUF?

On Mac devices, it seems like that BF16 might be slower than F16. Q8\_K\_XL upcasts some layers to BF16, so hence the slowdown, We are actively changing our conversion process to make F16 the default choice for Q8\_K\_XL to reduce performance hits.&#x20;

## :question:How to do Evaluation

To set up evaluation in your training run, you first have to split your dataset into a training and test split. You should <mark style="background-color:green;">**always shuffle the selection of the dataset**</mark>, otherwise your evaluation is wrong!

Example 2 (unknown):

Then, we can set the training arguments to enable evaluation. Reminder evaluation can be very very slow especially if you set `eval_steps = 1`  which means you are evaluating every single step. If you are, try reducing the eval\_dataset size to say 100 rows or something.

Example 3 (unknown):

## :question:Evaluation Loop - Out of Memory or crashing.

A common issue when you OOM is because you set your batch size too high. Set it lower than 2 to use less VRAM. Also use `fp16_full_eval=True` to use float16 for evaluation which cuts memory by 1/2.

First split your training dataset into a train and test split. Set the trainer settings for evaluation to:

Example 4 (unknown):

This will cause no OOMs and make it somewhat faster. You can also use `bf16_full_eval=True` for bf16 machines. By default Unsloth should have set these flags on by default as of June 2025.

## :question:How do I do Early Stopping?

If you want to stop the finetuning / training run since the evaluation loss is not decreasing, then you can use early stopping which stops the training process. Use `EarlyStoppingCallback`.

As usual, set up your trainer and your evaluation dataset. The below is used to stop the training run if the `eval_loss` (the evaluation loss) is not decreasing after 3 steps or so.

Unsloth Benchmarks

URL: llms-txt#unsloth-benchmarks

Contents:

  • Context length benchmarks
    • Llama 3.1 (8B) max. context length
    • Llama 3.3 (70B) max. context length

Unsloth recorded benchmarks on NVIDIA GPUs.

Tested on H100 and Blackwell GPUs. We tested using the Alpaca Dataset, a batch size of 2, gradient accumulation steps of 4, rank = 32, and applied QLoRA on all linear layers (q, k, v, o, gate, up, down):

ModelVRAM🦥Unsloth speed🦥VRAM reduction🦥Longer context😊Hugging Face + FA2
Llama 3.3 (70B)80GB2x>75%13x longer1x
Llama 3.1 (8B)80GB2x>70%12x longer1x

Context length benchmarks

{% hint style="info" %} The more data you have, the less VRAM Unsloth uses due to our gradient checkpointing algorithm + Apple's CCE algorithm! {% endhint %}

Llama 3.1 (8B) max. context length

We tested Llama 3.1 (8B) Instruct and did 4bit QLoRA on all linear layers (Q, K, V, O, gate, up and down) with rank = 32 with a batch size of 1. We padded all sequences to a certain maximum sequence length to mimic long context finetuning workloads.

GPU VRAM 🦥Unsloth context length Hugging Face + FA2
8 GB 2,972 OOM
12 GB 21,848 932
16 GB 40,724 2,551
24 GB 78,475 5,789
40 GB 153,977 12,264
48 GB 191,728 15,502
80 GB 342,733 28,454

Llama 3.3 (70B) max. context length

We tested Llama 3.3 (70B) Instruct on a 80GB A100 and did 4bit QLoRA on all linear layers (Q, K, V, O, gate, up and down) with rank = 32 with a batch size of 1. We padded all sequences to a certain maximum sequence length to mimic long context finetuning workloads.

GPU VRAM 🦥Unsloth context length Hugging Face + FA2
48 GB 12,106 OOM
80 GB 89,389 6,916

Fine-tuning LLMs with NVIDIA DGX Spark and Unsloth

URL: llms-txt#fine-tuning-llms-with-nvidia-dgx-spark-and-unsloth

Contents:

  • Step-by-Step Tutorial

Tutorial on how to fine-tune and do reinforcement learning (RL) with OpenAI gpt-oss on NVIDIA DGX Spark.

Unsloth enables local fine-tuning of LLMs with up to 200B parameters on the NVIDIA DGX™ Spark. With 128 GB of unified memory, you can train massive models such as gpt-oss-120b, and run or deploy inference directly on DGX Spark.

As shown at OpenAI DevDay, gpt-oss-20b was trained with RL and Unsloth on DGX Spark to auto-win 2048. You can train using Unsloth in a Docker container or virtual environment on DGX Spark.

In this tutorial, well train gpt-oss-20b with RL using Unsloth notebooks after installing Unsloth on your DGX Spark. gpt-oss-120b will use around 68GB of unified memory.

After 1,000 steps and 4 hours of RL training, the gpt-oss model greatly outperforms the original on 2048, and longer training would further improve results.

You can watch Unsloth featured on OpenAI DevDay 2025 here.

gpt-oss trained with RL consistently outperforms on 2048.

Step-by-Step Tutorial

{% stepper %} {% step %}

Start with Unsloth Docker image for DGX Spark

First, build the Docker image using the DGX Spark Dockerfile which can be found here. You can also run the below in a Terminal in the DGX Spark:

Then, build the training Docker image using saved Dockerfile:

You can also click to see the full DGX Spark Dockerfile
FROM nvcr.io/nvidia/pytorch:25.09-py3

**Examples:**

Example 1 (bash):
```bash
sudo apt update && sudo apt install -y wget
wget -O Dockerfile "https://raw.githubusercontent.com/unslothai/notebooks/main/Dockerfile_DGX_Spark"

Example 2 (bash):

docker build -f Dockerfile -t unsloth-dgx-spark .

DeepSeek-OCR: How to Run & Fine-tune

URL: llms-txt#deepseek-ocr:-how-to-run-&-fine-tune

Contents:

  • 🖥️ Running DeepSeek-OCR
    • ⚙️ Recommended Settings
    • 📖 vLLM: Run DeepSeek-OCR Tutorial

Guide on how to run and fine-tune DeepSeek-OCR locally.

DeepSeek-OCR is a 3B-parameter vision model for OCR and document understanding. It uses context optical compression to convert 2D layouts into vision tokens, enabling efficient long-context processing.

Capable of handling tables, papers, and handwriting, DeepSeek-OCR achieves 97% precision while using 10× fewer vision tokens than text tokens - making it 10× more efficient than text-based LLMs.

You can fine-tune DeepSeek-OCR to enhance its vision or language performance. In our Unsloth free fine-tuning notebook, we demonstrated a 88.26% improvement for language understanding.

Running DeepSeek-OCRFine-tuning DeepSeek-OCR

Our model upload that enables fine-tuning + more inference support: DeepSeek-OCR

🖥️ Running DeepSeek-OCR

To run the model in vLLM or Unsloth, here are the recommended settings:

DeepSeek recommends these settings:

  • Temperature = 0.0
  • max_tokens = 8192
  • ngram_size = 30
  • window_size = 90

📖 vLLM: Run DeepSeek-OCR Tutorial

  1. Obtain the latest vLLM via:
uv venv
source .venv/bin/activate

---

## Tutorial: How to Fine-tune gpt-oss

**URL:** llms-txt#tutorial:-how-to-fine-tune-gpt-oss

**Contents:**
- 🌐 Colab gpt-oss Fine-tuning
  - Install Unsloth (in Colab)
  - Configuring gpt-oss and Reasoning Effort
  - Fine-tuning Hyperparameters (LoRA)
  - Try Inference
  - Data Preparation
  - Train the model
  - Inference: Run your trained model
  - Save/export your model
  - :sparkles: Saving to Llama.cpp

Learn step-by-step how to train OpenAI gpt-oss locally with Unsloth.

In this guide with screenshots, you'll learn to fine-tune your own custom gpt-oss model either [locally](#local-gpt-oss-fine-tuning) on your machine or for free using [Google Colab](#colab-gpt-oss-fine-tuning). We'll walk you through the entire process, from setup to running and saving your trained model.

{% hint style="success" %}
[**Aug 28 update**](https://docs.unsloth.ai/models/long-context-gpt-oss-training#introducing-unsloth-flex-attention-support)**:** You can now export/save your QLoRA fine-tuned gpt-oss model to llama.cpp, vLLM, HF etc.

We also introduced [Unsloth Flex Attention](https://docs.unsloth.ai/models/long-context-gpt-oss-training#introducing-unsloth-flex-attention-support) which enables **>8× longer context lengths**, **>50% less VRAM usage** and **>1.5× faster training** vs. all implementations. [Read more here](https://docs.unsloth.ai/models/long-context-gpt-oss-training#introducing-unsloth-flex-attention-support)
{% endhint %}

> **Quickstart:** Fine-tune gpt-oss-20b for free with our: [Colab notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/gpt-oss-\(20B\)-Fine-tuning.ipynb)

Unsloth gpt-oss fine-tuning, when compared to all other FA2 implementations, achieves 1.5× faster training, 70% reduction in VRAM use, and 10x longer context lengths - with no accuracy loss.

* **QLoRA requirements:** gpt-oss-20b = 14GB VRAM • gpt-oss-120b = 65GB VRAM.
* **BF16 LoRA requirements:** gpt-oss-20b = 44GB VRAM • gpt-oss-120b = 210GB VRAM.

<a href="#local-gpt-oss-fine-tuning" class="button secondary">Local Guide</a><a href="#colab-gpt-oss-fine-tuning" class="button secondary">Colab Guide</a>

## 🌐 Colab gpt-oss Fine-tuning

This section covers fine-tuning gpt-oss using our Google Colab [notebooks](https://docs.unsloth.ai/get-started/unsloth-notebooks). You can also save and use the gpt-oss notebook into your favorite code editor and follow our [local gpt-oss guide](#local-gpt-oss-fine-tuning).

{% stepper %}
{% step %}

### Install Unsloth (in Colab)

In Colab, run cells **from top to bottom**. Use **Run all** for the first pass. The first cell installs Unsloth (and related dependencies) and prints GPU/memory info. If a cell throws an error, simply re-run it.

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FnVWahTM3dRcNxUl7yNlw%2Fchrome_wTbzfmSI21.png?alt=media&#x26;token=fe257ba6-512d-4000-bdf7-9a9a586c85a4" alt=""><figcaption></figcaption></figure>

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FwSOux9qJpXmROoriYA4U%2Fchrome_yPnb553OGW.png?alt=media&#x26;token=c14a59e6-709e-44b5-9aa3-6ab8eeb610da" alt=""><figcaption></figcaption></figure>
{% endstep %}

### Configuring gpt-oss and Reasoning Effort

Well load **`gpt-oss-20b`**  using Unsloth's [linearized version](https://docs.unsloth.ai/models/gpt-oss-how-to-run-and-fine-tune/..#making-efficient-gpt-oss-fine-tuning-work) (as no other version will work).&#x20;

Configure the following parameters:

* `max_seq_length = 1024`
  * Recommended for quick testing and initial experiments.
* `load_in_4bit = True`&#x20;
  * Use `False` for LoRA training (note: setting this to `False` will need at least 43GB VRAM). You ***MUST*** also set **`model_name = "unsloth/gpt-oss-20b-BF16"`**

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FndJWBQP3WUW5tR6CNyrP%2Fchrome_3qSe2UIFN0.png?alt=media&#x26;token=b43534ee-0d71-495a-b89c-91f52317354f" alt=""><figcaption></figcaption></figure>

You should see output similar to the example below. Note: We explicitly change the `dtype` to `float32` to ensure correct training behavior.

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FOMNOnDuWl2c95WuxSkDA%2Fchrome_DGMDHldw0J.png?alt=media&#x26;token=a086266b-7b88-4fcf-a7cd-5a17cc57e7f9" alt=""><figcaption></figcaption></figure>
{% endstep %}

### Fine-tuning Hyperparameters (LoRA)

Now it's time to adjust your training hyperparameters. For a deeper dive into how, when, and what to tune, check out our [detailed hyperparameters guide](https://docs.unsloth.ai/get-started/fine-tuning-llms-guide/lora-hyperparameters-guide).

{% hint style="info" %}
To avoid [overfitting](https://docs.unsloth.ai/get-started/fine-tuning-llms-guide/lora-hyperparameters-guide#avoiding-overfitting-and-underfitting), monitor your training loss and avoid setting these values too high.&#x20;
{% endhint %}

This step adds LoRA adapters for parameter-efficient fine-tuning. Only about 1% of the models parameters are trained, which makes the process significantly more efficient.

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2Fwkbdee4FuThTM09oqUkL%2Fchrome_ucj0VKT1lh.png?alt=media&#x26;token=40b5ae77-31f8-4e13-841d-e4cc52e1436b" alt=""><figcaption></figcaption></figure>
{% endstep %}

In the notebook, there's a section called *"Reasoning Effort"* that demonstrates gpt-oss inference running in Colab. You can skip this step, but you'll still need to run the model later once you've finished fine-tuning it.

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FfXyFmwpMF1AgRRhnOQR8%2Fchrome_o2rLNfES8e.png?alt=media&#x26;token=6ef340fa-2ac0-4e82-9338-d91f66d1557a" alt=""><figcaption></figcaption></figure>
{% endstep %}

For this example, we will use the [`HuggingFaceH4/Multilingual-Thinking`](https://huggingface.co/datasets/HuggingFaceH4/Multilingual-Thinking). This dataset contains chain-of-thought reasoning examples derived from user questions translated from English into four additional languages.&#x20;

This is the same dataset referenced in OpenAI's fine-tuning cookbook.

The goal of using a multilingual dataset is to help the model learn and generalize reasoning patterns across multiple languages.

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2Fii6rqKAKqBYea2ZLoXKJ%2Fchrome_rRKmU99f0T.png?alt=media&#x26;token=74547cc7-0be9-4687-b128-1ff4b87d544f" alt=""><figcaption></figcaption></figure>

gpt-oss introduces a reasoning effort system that controls how much reasoning the model performs. By default, the reasoning effort is set to `low`, but you can change it by setting the `reasoning_effort` parameter to `low`, `medium` or `high`.

To format the dataset, we apply a customized version of the gpt-oss prompt:

Let's inspect the dataset by printing the first example:

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FDoRtTfO0oSVDg99Dm3dc%2Fchrome_sjbDtIhP5e.png?alt=media&#x26;token=c0fb44b6-861c-47b1-86a5-75c55771936e" alt=""><figcaption></figcaption></figure>

One unique feature of gpt-oss is its use of the [**OpenAI Harmony format**](https://github.com/openai/harmony)**,** which supports structured conversations, reasoning output, and tool calling. This format includes tags such as `<|start|>` , `<|message|>` , and `<|return|>` .&#x20;

{% hint style="info" %}
🦥 Unsloth fixes the chat template to ensure it is correct. See this [tweet](https://x.com/danielhanchen/status/1953901104150065544) for technical details on our template fix.
{% endhint %}

Feel free to adapt the prompt and structure to suit your own dataset or use-case. For more guidance, refer to our [dataset guide](https://docs.unsloth.ai/get-started/fine-tuning-llms-guide/datasets-guide).
{% endstep %}

We've pre-selected training hyperparameters for optimal results. However, you can modify them based on your specific use case. Refer to our [hyperparameters guide](https://docs.unsloth.ai/get-started/fine-tuning-llms-guide/lora-hyperparameters-guide).&#x20;

In this example, we train for 60 steps to speed up the process. For a full training run, set `num_train_epochs=1` and disable the step limiting by setting `max_steps=None`.

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FcQroeXLcHOHaRsUiCyYL%2Fchrome_R85PmZRHMQ.png?alt=media&#x26;token=e2069d2e-ef15-4179-ba49-fc484cf26b0b" alt=""><figcaption></figcaption></figure>

During training, monitor the loss to ensure that it is decreasing over time. This confirms that the training process is functioning correctly.

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FmcHwJsR2kzTpab4gTgUY%2Fimage.png?alt=media&#x26;token=03b873b3-8e1c-42ee-826e-d62feab7d703" alt=""><figcaption></figcaption></figure>
{% endstep %}

### Inference: Run your trained model

Now it's time to run inference with your fine-tuned model. You can modify the instruction and input, but leave the output blank.

In this example, we test the model's ability to reason in French by adding a specific instruction to the system prompt, following the same structure used in our dataset.

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2F2oDtZBxHXle9KsWSqTzT%2Fchrome_jbJmBTaY7B.png?alt=media&#x26;token=9a2bcba5-9e60-4a5e-836c-27e5f45a9bf4" alt=""><figcaption></figcaption></figure>

This should produce an output similar to:

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2F9RTKGdSeuca5QfDhVXFw%2Fchrome_ORco4bpZZ6.png?alt=media&#x26;token=1d5bf29e-c57c-41f0-a2e5-162408d80690" alt=""><figcaption></figcaption></figure>
{% endstep %}

### Save/export your model

To save your fine-tuned model, you can export your fine-tuned model both in **bf16 format ,** with our **on-demand dequantization of MXFP4** base models using `save_method="merged_16bit"`or in native **MXFP4** Safetensors format using `save_method="mxfp4"` .

The **MXFP4** native merge format offers significant performance improvements compared to the **bf16 format**: it uses up to 75% less disk space, reduces VRAM consumption by 50%, accelerates merging by 5-10x, and enables much faster conversion to **GGUF** format.

{% hint style="success" %}
New: Saving or merging QLoRA fine-tuned models to GGUF is now supported for use in other frameworks (e.g. Hugging Face, llama.cpp with GGUF).
{% endhint %}

After fine-tuning your gpt-oss model, you can merge it into **MXFP4** format with:

If you prefer to merge the model and push to the hugging-face hub directly:

### :sparkles: Saving to Llama.cpp

1. Obtain the latest `llama.cpp` on [GitHub here](https://github.com/ggml-org/llama.cpp). You can follow the build instructions below as well. Change `-DGGML_CUDA=ON` to `-DGGML_CUDA=OFF` if you don't have a GPU or just want CPU inference.

2. Convert the **MXFP4** merged model:

3. Run inference on the quantized model:

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FVHzhTH5oCJZKPXpqmuOQ%2Fchrome_fKEKXHti5r.png?alt=media&#x26;token=c470698a-80e5-4c52-92e2-bff901fc2746" alt=""><figcaption></figcaption></figure>
{% endstep %}
{% endstepper %}

## 🖥️ Local gpt-oss Fine-tuning

This chapter covers fine-tuning gpt-oss on your local device. While **gpt-oss-20b** fine-tuning can operate on just 14GB VRAM, we recommend having at least 16GB VRAM available to ensure stable and reliable training runs.

{% hint style="info" %}
We recommend downloading or incorporating elements from our Colab [notebooks](https://docs.unsloth.ai/get-started/unsloth-notebooks) into your local setup for easier use.
{% endhint %}

{% stepper %}
{% step %}

### Install Unsloth Locally

Ensure your device is [Unsloth compatible](https://docs.unsloth.ai/get-started/beginner-start-here/unsloth-requirements) and you can read our detailed [installation guide](https://docs.unsloth.ai/get-started/install-and-update).

Note that `pip install unsloth` will not work for this setup, as we need to use the latest PyTorch, Triton and related packages. Install Unsloth using this specific command:

**Examples:**

Example 1 (python):
```python
tokenizer.apply_chat_template(
    text, 
    tokenize = False, 
    add_generation_prompt = False,
    reasoning_effort = "medium",
)

Example 2 (python):

from unsloth.chat_templates import standardize_sharegpt
dataset = standardize_sharegpt(dataset)
dataset = dataset.map(formatting_prompts_func, batched = True,)

Example 3 (unknown):

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FDoRtTfO0oSVDg99Dm3dc%2Fchrome_sjbDtIhP5e.png?alt=media&#x26;token=c0fb44b6-861c-47b1-86a5-75c55771936e" alt=""><figcaption></figcaption></figure>

One unique feature of gpt-oss is its use of the [**OpenAI Harmony format**](https://github.com/openai/harmony)**,** which supports structured conversations, reasoning output, and tool calling. This format includes tags such as `<|start|>` , `<|message|>` , and `<|return|>` .&#x20;

{% hint style="info" %}
🦥 Unsloth fixes the chat template to ensure it is correct. See this [tweet](https://x.com/danielhanchen/status/1953901104150065544) for technical details on our template fix.
{% endhint %}

Feel free to adapt the prompt and structure to suit your own dataset or use-case. For more guidance, refer to our [dataset guide](https://docs.unsloth.ai/get-started/fine-tuning-llms-guide/datasets-guide).
{% endstep %}

{% step %}

### Train the model

We've pre-selected training hyperparameters for optimal results. However, you can modify them based on your specific use case. Refer to our [hyperparameters guide](https://docs.unsloth.ai/get-started/fine-tuning-llms-guide/lora-hyperparameters-guide).&#x20;

In this example, we train for 60 steps to speed up the process. For a full training run, set `num_train_epochs=1` and disable the step limiting by setting `max_steps=None`.

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FcQroeXLcHOHaRsUiCyYL%2Fchrome_R85PmZRHMQ.png?alt=media&#x26;token=e2069d2e-ef15-4179-ba49-fc484cf26b0b" alt=""><figcaption></figcaption></figure>

During training, monitor the loss to ensure that it is decreasing over time. This confirms that the training process is functioning correctly.

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FmcHwJsR2kzTpab4gTgUY%2Fimage.png?alt=media&#x26;token=03b873b3-8e1c-42ee-826e-d62feab7d703" alt=""><figcaption></figcaption></figure>
{% endstep %}

{% step %}

### Inference: Run your trained model

Now it's time to run inference with your fine-tuned model. You can modify the instruction and input, but leave the output blank.

In this example, we test the model's ability to reason in French by adding a specific instruction to the system prompt, following the same structure used in our dataset.

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2F2oDtZBxHXle9KsWSqTzT%2Fchrome_jbJmBTaY7B.png?alt=media&#x26;token=9a2bcba5-9e60-4a5e-836c-27e5f45a9bf4" alt=""><figcaption></figcaption></figure>

This should produce an output similar to:

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2F9RTKGdSeuca5QfDhVXFw%2Fchrome_ORco4bpZZ6.png?alt=media&#x26;token=1d5bf29e-c57c-41f0-a2e5-162408d80690" alt=""><figcaption></figcaption></figure>
{% endstep %}

{% step %}

### Save/export your model

To save your fine-tuned model, you can export your fine-tuned model both in **bf16 format ,** with our **on-demand dequantization of MXFP4** base models using `save_method="merged_16bit"`or in native **MXFP4** Safetensors format using `save_method="mxfp4"` .

The **MXFP4** native merge format offers significant performance improvements compared to the **bf16 format**: it uses up to 75% less disk space, reduces VRAM consumption by 50%, accelerates merging by 5-10x, and enables much faster conversion to **GGUF** format.

{% hint style="success" %}
New: Saving or merging QLoRA fine-tuned models to GGUF is now supported for use in other frameworks (e.g. Hugging Face, llama.cpp with GGUF).
{% endhint %}

After fine-tuning your gpt-oss model, you can merge it into **MXFP4** format with:

Example 4 (unknown):

If you prefer to merge the model and push to the hugging-face hub directly:

Advanced RL Documentation

URL: llms-txt#advanced-rl-documentation

Contents:

  • Training Parameters
  • Generation Parameters
  • Batch & Throughput Parameters
    • Parameters that control batches
    • GRPO Batch Examples
    • Quick Formula Reference

Advanced documentation settings when using Unsloth with GRPO.

Detailed guides on doing GRPO with Unsloth for Batching, Generation & Training Parameters:

Training Parameters

  • beta (float, default 0.0): KL coefficient.
    • 0.0 ⇒ no reference model loaded (lower memory, faster).
    • Higher beta constrains the policy to stay closer to the ref policy.
  • num_iterations (int, default 1): PPO epochs per batch (μ in the algorithm).
    Replays data within each gradient accumulation step; e.g., 2 = two forward passes per accumulation step.
  • epsilon (float, default 0.2): Clipping value for token-level log-prob ratios (typical ratio range ≈ [-1.2, 1.2] with default ε).
  • delta (float, optional): Enables upper clipping bound for two-sided GRPO when set. If None, standard GRPO clipping is used. Recommended > 1 + ε when enabled (per INTELLECT-2 report).
  • epsilon_high (float, optional): Upper-bound epsilon; defaults to epsilon if unset. DAPO recommends 0.28.
  • importance_sampling_level (“token” | “sequence”, default "token"):
    • "token": raw per-token ratios (one weight per token).
    • "sequence": average per-token ratios to a single sequence-level ratio.
      GSPO shows sequence-level sampling often gives more stable training for sequence-level rewards.
  • reward_weights (list[float], optional): One weight per reward. If None, all weights = 1.0.
  • scale_rewards (str|bool, default "group"):
    • True or "group": scale by std within each group (unit variance in group).
    • "batch": scale by std across the entire batch (per PPO-Lite).
    • False or "none": no scaling. Dr. GRPO recommends not scaling to avoid difficulty bias from std scaling.
  • loss_type (str, default "dapo"):
    • "grpo": normalizes over sequence length (length bias; not recommended).
    • "dr_grpo": normalizes by a global constant (introduced in Dr. GRPO; removes length bias). Constant ≈ max_completion_length.
    • "dapo" (default): normalizes by active tokens in the global accumulated batch (introduced in DAPO; removes length bias).
    • "bnpo": normalizes by active tokens in the local batch only (results can vary with local batch size; equals GRPO when per_device_train_batch_size == 1).
  • mask_truncated_completions (bool, default False):
    When True, truncated completions are excluded from loss (recommended by DAPO for stability).
    Note: There are some KL issues with this flag, so we recommend to disable it.

This can zero out all completion_mask entries when many completions are truncated, making n_mask_per_reward = 0 and causing KL to become NaN. See

  • vllm_importance_sampling_correction (bool, default True):
    Applies Truncated Importance Sampling (TIS) to correct off-policy effects when generation (e.g., vLLM / fast_inference) differs from training backend.
    In Unsloth, this is auto-set to True if youre using vLLM/fast_inference; otherwise False.
  • vllm_importance_sampling_cap (float, default 2.0):
    Truncation parameter C for TIS; sets an upper bound on the importance sampling ratio to improve stability.

Generation Parameters

  • temperature (float, defaults to 1.0):
    Temperature for sampling. The higher the temperature, the more random the completions. Make sure you use a relatively high (1.0) temperature to have diversity in generations which helps learning.
  • top_p (float, optional, defaults to 1.0):
    Float that controls the cumulative probability of the top tokens to consider. Must be in (0, 1]. Set to 1.0 to consider all tokens.
  • top_k (int, optional):
    Number of highest probability vocabulary tokens to keep for top-k-filtering. If None, top-k-filtering is disabled and all tokens are considered.
  • min_p (float, optional):
    Minimum token probability, which will be scaled by the probability of the most likely token. It must be a value between 0.0 and 1.0. Typical values are in the 0.01-0.2 range.
  • repetition_penalty (float, optional, defaults to 1.0):
    Float that penalizes new tokens based on whether they appear in the prompt and the generated text so far. Values > 1.0 encourage the model to use new tokens, while values < 1.0 encourage the model to repeat tokens.
  • steps_per_generation: (int, optional):
    Number of steps per generation. If None, it defaults to gradient_accumulation_steps. Mutually exclusive with generation_batch_size.

{% hint style="info" %} It is a bit confusing to mess with this parameter, it is recommended to edit per_device_train_batch_size and gradient accumulation for the batch sizes {% endhint %}

Batch & Throughput Parameters

Parameters that control batches

  • train_batch_size: Number of samples per process per step.
    If this integer is less than num_generations, it will default to num_generations.
  • steps_per_generation: Number of microbatches that contribute to one generations loss calculation (forward passes only).
    A new batch of data is generated every steps_per_generation steps; backpropagation timing depends on gradient_accumulation_steps.
  • num_processes: Number of distributed training processes (e.g., GPUs / workers).
  • gradient_accumulation_steps (aka gradient_accumulation): Number of microbatches to accumulate before applying backpropagation and optimizer update.
  • Effective batch size:

Total samples contributing to gradients before an update (across all processes and steps).

  • Optimizer steps per generation:

Example: 4 / 2 = 2.

  • num_generations: Number of generations produced per prompt (applied after computing effective_batch_size).
    The number of unique prompts in a generation cycle is:

Must be > 2 for GRPO to work.

GRPO Batch Examples

The tables below illustrate how batches flow through steps, when optimizer updates occur, and how new batches are generated.

Generation cycle A

Step Batch Notes
0 [0,0,0]
1 [1,1,1] → optimizer update (accum = 2 reached)
2 [2,2,2]
3 [3,3,3] optimizer update

Generation cycle B

Step Batch Notes
0 [4,4,4]
1 [5,5,5] → optimizer update (accum = 2 reached)
2 [6,6,6]
3 [7,7,7] optimizer update

Generation cycle A

Step Batch Notes
0 [0,0,0]
1 [1,1,1]
2 [2,2,2]
3 [3,3,3] optimizer update (accum = 4 reached)

Generation cycle B

Step Batch Notes
0 [4,4,4]
1 [5,5,5]
2 [6,6,6]
3 [7,7,7] optimizer update (accum = 4 reached)

Generation cycle A

Step Batch Notes
0 [0,0,0]
1 [0,1,1]
2 [1,1,3]
3 [3,3,3] optimizer update (accum = 4 reached)

Generation cycle B

Step Batch Notes
0 [4,4,4]
1 [4,5,5]
2 [5,5,6]
3 [6,6,6] optimizer update (accum = 4 reached)

Generation cycle A

Step Batch Notes
0 [0,0,0, 1,1,1]
1 [2,2,2, 3,3,3] optimizer update (accum = 2 reached)

Generation cycle B

Step Batch Notes
0 [4,4,4, 5,5,5]
1 [6,6,6, 7,7,7] optimizer update (accum = 2 reached)

Quick Formula Reference

Examples:

Example 1 (python):

# If mask_truncated_completions is enabled, zero out truncated completions in completion_mask
  if self.mask_truncated_completions:
      truncated_completions = ~is_eos.any(dim=1)
      completion_mask = completion_mask * (~truncated_completions).unsqueeze(1).int()

Example 2 (unknown):

effective_batch_size = steps_per_generation * num_processes * train_batch_size

Example 3 (unknown):

optimizer_steps_per_generation = steps_per_generation / gradient_accumulation_steps

Example 4 (unknown):

unique_prompts = effective_batch_size / num_generations

Chat Templates

URL: llms-txt#chat-templates

Contents:

  • List of Colab chat template notebooks:
  • Multi turn conversations
  • Customizable Chat Templates
  • Applying Chat Templates with Unsloth
  • More Information

Learn the fundamentals and customization options of chat templates, including Conversational, ChatML, ShareGPT, Alpaca formats, and more!

In our GitHub, we have a list of every chat template Unsloth uses including for Llama, Mistral, Phi-4 etc. So if you need any pointers on the formatting or use case, you can view them here: github.com/unslothai/unsloth/blob/main/unsloth/chat_templates.py

List of Colab chat template notebooks:

Multi turn conversations

A bit issue if you didn't notice is the Alpaca dataset is single turn, whilst remember using ChatGPT was interactive and you can talk to it in multiple turns. For example, the left is what we want, but the right which is the Alpaca dataset only provides singular conversations. We want the finetuned language model to somehow learn how to do multi turn conversations just like ChatGPT.

So we introduced the conversation_extension parameter, which essentially selects some random rows in your single turn dataset, and merges them into 1 conversation! For example, if you set it to 3, we randomly select 3 rows and merge them into 1! Setting them too long can make training slower, but could make your chatbot and final finetune much better!

Then set output_column_name to the prediction / output column. For the Alpaca dataset dataset, it would be the output column.

We then use the standardize_sharegpt function to just make the dataset in a correct format for finetuning! Always call this!

Customizable Chat Templates

We can now specify the chat template for finetuning itself. The very famous Alpaca format is below:

But remember we said this was a bad idea because ChatGPT style finetunes require only 1 prompt? Since we successfully merged all dataset columns into 1 using Unsloth, we essentially can create the below style chat template with 1 input column (instruction) and 1 output:

We just require you must put a {INPUT} field for the instruction and an {OUTPUT} field for the model's output field. We in fact allow an optional {SYSTEM} field as well which is useful to customize a system prompt just like in ChatGPT. For example, below are some cool examples which you can customize the chat template to be:

For the ChatML format used in OpenAI models:

Or you can use the Llama-3 template itself (which only functions by using the instruct version of Llama-3): We in fact allow an optional {SYSTEM} field as well which is useful to customize a system prompt just like in ChatGPT.

Or in the Titanic prediction task where you had to predict if a passenger died or survived in this Colab notebook which includes CSV and Excel uploading: https://colab.research.google.com/drive/1VYkncZMfGFkeCEgN2IzbZIKEDkyQuJAS?usp=sharing

Applying Chat Templates with Unsloth

For datasets that usually follow the common chatml format, the process of preparing the dataset for training or finetuning, consists of four simple steps:

  • Check the chat templates that Unsloth currently supports:\


This will print out the list of templates currently supported by Unsloth. Here is an example output:\

  • Use get_chat_template to apply the right chat template to your tokenizer:\

  • Define your formatting function. Here's an example:\



This function loops through your dataset applying the chat template you defined to each sample.\

  • Finally, let's load the dataset and apply the required modifications to our dataset: \


If your dataset uses the ShareGPT format with "from"/"value" keys instead of the ChatML "role"/"content" format, you can use the standardize_sharegpt function to convert it first. The revised code will now look as follows:
\

Assuming your dataset is a list of list of dictionaries like the below:

You can use our get_chat_template to format it. Select chat_template to be any of zephyr, chatml, mistral, llama, alpaca, vicuna, vicuna_old, unsloth, and use mapping to map the dictionary values from, value etc. map_eos_token allows you to map <|im_end|> to EOS without any training.

You can also make your own custom chat templates! For example our internal chat template we use is below. You must pass in a tuple of (custom_template, eos_token) where the eos_token must be used inside the template.

Examples:

Example 1 (unknown):

from unsloth.chat_templates import CHAT_TEMPLATES
  print(list(CHAT_TEMPLATES.keys()))

Example 2 (unknown):

['unsloth', 'zephyr', 'chatml', 'mistral', 'llama', 'vicuna', 'vicuna_old', 'vicuna old', 'alpaca', 'gemma', 'gemma_chatml', 'gemma2', 'gemma2_chatml', 'llama-3', 'llama3', 'phi-3', 'phi-35', 'phi-3.5', 'llama-3.1', 'llama-31', 'llama-3.2', 'llama-3.3', 'llama-32', 'llama-33', 'qwen-2.5', 'qwen-25', 'qwen25', 'qwen2.5', 'phi-4', 'gemma-3', 'gemma3']

Example 3 (unknown):

from unsloth.chat_templates import get_chat_template

  tokenizer = get_chat_template(
      tokenizer,
      chat_template = "gemma-3", # change this to the right chat_template name
  )

Example 4 (unknown):

def formatting_prompts_func(examples):
     convos = examples["conversations"]
     texts = [tokenizer.apply_chat_template(convo, tokenize = False, add_generation_prompt = False) for convo in convos]
     return { "text" : texts, }

Unsloth Dynamic GGUFs on Aider Polyglot

URL: llms-txt#unsloth-dynamic-ggufs-on-aider-polyglot

Contents:

  • Key results
  • 🦥Unsloth Dynamic Quantization
    • ⚙️Benchmark setup
  • :sparkler:Comparison to other quants
    • :cake:Dynamic quantization ablations
    • :bug:Chat Template Bug Fixes
    • :bar_chart:Pass Rate 1
  • :computer:Run DeepSeek V3.1 Dynamic quants

Performance of Unsloth Dynamic GGUFs on Aider Polyglot Benchmarks

Were excited to share that Unsloth Dynamic GGUFs shows how it's possible to quantize LLMs like DeepSeek-V3.1 (671B) down to just 1-bit or 3-bit, and still be able to outperform SOTA models like GPT-4.5, GPT-4.1 (April 2025) and Claude-4-Opus (May 2025).

Previously, we demonstrated how Unsloth Dynamic GGUFs outperform other quantization methods on 5-shot MMLU and KL Divergence. Now, were showcasing their performance on independent third-party evaluations using the Aider Polyglot benchmark.

Thinking Aider Benchmarks

No Thinking Aider Benchmarks

  • Our 1-bit Unsloth Dynamic GGUF shrinks DeepSeek-V3.1 from 671GB → 192GB (-75% size) and no-thinking mode greatly outperforms GPT-4.1 (Apr 2025), GPT-4.5, and DeepSeek-V3-0324.
  • 3-bit Unsloth DeepSeek-V3.1 (thinking) GGUF: Outperforms Claude-4-Opus-20250514 (thinking).
  • 5-bit Unsloth DeepSeek-V3.1 (non-thinking) GGUF: Matches Claude-4-Opus-20250514 (non-thinking) performance.
  • Unsloth Dynamic GGUFs perform consistently better than other non-Unsloth Dynamic imatrix GGUFs
  • Other non-Unsloth 1-bit and 2-bit DeepSeek-V3.1 quantizations, as well as standard 1-bit quantization without selective layer quantization, either failed to load or produced gibberish and looping outputs. This highlights how Unsloth Dynamic GGUFs are able to largely retain accuracy whereas other methods do not even function.

Why the Aider Polyglot benchmark? Aider is one of the most comprehensive measures of how well LLMs can write, code, follow instructions, and apply changes without human intervention, making it one of the hardest and most valuable benchmarks for real-world use.

{% hint style="success" %} The key advantage of using the Unsloth package and models is our active role in fixing critical bugs in major models. We've collaborated directly with teams behind Qwen3, Meta (Llama 4), Mistral (Devstral), Google (Gemma 13) and Microsoft (Phi-3/4), contributing essential fixes that significantly boost accuracy. {% endhint %}

🦥Unsloth Dynamic Quantization

{% hint style="success" %} Dynamic 1 bit makes important layers in 8 or 16 bits and un-important layers in 1,2,3,4,5 or 6bits. {% endhint %}

In Nov 2024, our 4-bit Dynamic Quants showcased how you could largely restore QLoRA fine-tuning & model accuracy by just selectively quantizing layers. We later studied DeepSeek-R1's architecture and applied this similar methodology, where we quantized some layers to as low as 1-bit and important layers to higher bits (6, 8-bit). This approach quickly gained popularity and has proven especially effective for MoE models, making dynamic quantization the de facto for MoE quantization.

Our Dynamic GGUFs are even more effective when paired with our imatrix calibration dataset, designed for chat and coding performance. All of this enabled extreme LLM compression without catastrophic loss in quality.

For example in Qwen2-VL-2B-Instruct, naively quantizing all layers to 4bit causes the model to fail understanding the image below. It's a train, not a coastal scene!

{% columns %} {% column width="33.33333333333333%" %}

{% endcolumn %}

{% column width="66.66666666666667%" %}

{% endcolumn %} {% endcolumns %}

We also showed dynamic benchmarks in https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs for Gemma 3 and Llama 4 Scout, showing how effective our methodology is:

{% columns %} {% column %}

{% endcolumn %}
{% endcolumn %} {% endcolumns %}

⚙️Benchmark setup

For our DeepSeek-V3.1 experiments, we compared different bits of Unsloth Dynamic GGUFs against:

  • Full-precision, unquantized LLMs including GPT 4.5, 4.1, Claude-4-Opus, DeepSeek-V3-0324 etc.
  • Other** dynamic imatrix V3.1 GGUFs**
  • ***Semi-***dynamic (some selective layer quantization) imatrix V3.1 GGUFs for ablation purposes.

Benchmark experiments were mainly conducted by David Sluys (neolithic5452 on Aider Discord), a trusted community contributor to Aider Polyglot evaluations. Tests were run ~3 times and averaged for a median score, and the Pass-2 accuracy is reported as by convention. There are some reproducible benchmark code snippets in Aider's Discord.

Expand for Reasoning model Aider benchmarks
Model Accuracy
GPT-5 86.7
Gemini 2.5 Pro (June) 83.1
o3 76.9
DeepSeek V3.1 76.1
(3 bit) DeepSeek V3.1 Unsloth 75.6
Claude-4-Opus (May) 72
o4-mini (High) 72
DeepSeek R1 0528 71.4
(2 bit) DeepSeek V3.1 Unsloth 66.7
Claude-3.7-Sonnet (Feb) 64.9
(1 bit) DeepSeek V3.1 Unsloth 57.8
DeepSeek R1 56.9
Expand for Non Reasoning model Aider benchmarks
Model Accuracy
DeepSeek V3.1 71.6
Claude-4-Opus (May) 70.7
(5 bit) DeepSeek V3.1 Unsloth 70.7
(4 bit) DeepSeek V3.1 Unsloth 69.7
(3 bit) DeepSeek V3.1 Unsloth 68.4
(2 bit) DeepSeek V3.1 Unsloth 65.8
Qwen3 235B A22B 59.6
Kimi K2 59.1
(1 bit) DeepSeek V3.1 Unsloth 55.7
DeepSeek V3-0324 55.1
GPT-4.1 (April, 2025) 52.4
ChatGPT 4o (March, 2025) 45.3
GPT-4.5 44.9

DeepSeek V3.1 has both a reasoning and a non reasoning mode, and we test both. For non reasoning, we see a clear trend of how our dynamic quantizations perform below. dynamic 5-bit attains 70.7% on Aider Pass-2, whilst dynamic 1-bit attains 55.7%. In terms of size and accuracy, the 3 and 4bit are extremely powerful!

:sparkler:Comparison to other quants

We also run the Aider Polyglot benchmark on other dynamic imatrix GGUFs from the community and compare it to ours. To ensure a fair comparison, we do the following:

  1. We select similar sized files and bit types to each Unsloth quant.
  2. We use our fixed chat template if the community quant fails to execute the benchmark. We found some community quants {"code":500,"message":"split method must have between 1 and 1 positional arguments and between 0 and 0 keyword arguments at row 3, column 1908"}, and this gets fixed by using our fixed chat template.

We see Unsloth dynamic quants doing remarkably well when compared to other community quantization for the same model size and quant type!

Expand for raw numerical data comparison to other quants
QuantQuant Size (GB)Unsloth Accuracy %Comparison Accuracy %
IQ2_XXS16443.6
TQ1_017050.7
IQ1_M20655.7
IQ2_M21556.6
IQ2_XXS22561.2
IQ2_M23564.3
Q2_K_L23964.0
Q2_K_XL25565.8
IQ3_XXS26865.665.6
IQ3_XXS27966.8
Q3_K_S29365.2
Q3_K_XL30068.4
IQ4_XS35769.2
IQ4_XS36066.3
Q4_K_XL38769.7
Q4_K_M40569.7
Q4_K_M40967.7
Q5_K_M47868.9
Q5_K_XL48470.7

:cake:Dynamic quantization ablations

We did some ablations as well to confirm if our calibration dataset and our dynamic quantization methodology actually works. The trick of Unsloth's dynamic method is to quantize important layers to higher bits say 8bits, whilst un-important layers are left in lower bis like 2bits.

To test our method, we leave specific tensors in lower precision like 4bit vs higher precision. For example below we leave attn_k_b tensors in 4bit (semi-dynamic) vs 8bit (Unsloth current), and by increasing the quant size by only ~100MB or so (<0.1%), accuracy shoots up dramatically!

{% hint style="success" %} attn_k_b and other tensors in DeepSeek V3.1 are highly important / sensitive to quantization and should left in higher precision to retain accuracy! {% endhint %}

:bug:Chat Template Bug Fixes

During testing of DeepSeek-V3.1 quants, we found some lower bit quants not enclosing <think> </think> properly or doing some weird formatting. This caused some community quants to not work on lower bits, and so this caused unfair comparisons. We found llama.cpp's usage of minja (a simpler version of jinja) does not accept positional argument in .split. We had to change:

See here for our fixed chat template or here for a raw jinja file.

:bar_chart:Pass Rate 1

Aider is reported mainly on pass rate 2. We also report pass rate 1 to compare community quants of the same size. We see our dynamic quants do much better than other community quants of similar sizes especially on smaller than 2 bit and larger than 4bits. 3 and 4 bit perform similarly well.

:computer:Run DeepSeek V3.1 Dynamic quants

Head over to our DeepSeek V3.1 guide or to quickly get the dynamic 2bit version, do:

then use llama.cpp to directly download the weights. We set the optimal suggested parameters like temperature, the chat template etc already as well:

Examples:

Example 1 (unknown):

{%- set content = content.split("</think>", 1)[1] -%}

Example 2 (unknown):

{%- set splitted = content.split("</think>") -%}
{%- set content = splitted[1:] | join("</think>") -%}

Example 3 (bash):

apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split llama-mtmd-cli llama-server
cp llama.cpp/build/bin/llama-* llama.cpp

Example 4 (bash):

export LLAMA_CACHE="unsloth/DeepSeek-V3.1-GGUF"
./llama.cpp/llama-cli \
    -hf unsloth/DeepSeek-V3.1-GGUF:Q2_K_XL \
    --jinja \
    --n-gpu-layers 99 \
    --temp 0.6 \
    --top_p 0.95 \
    --min_p 0.01 \
    --ctx-size 8192 \
    --seed 3407 \
    -ot ".ffn_.*_exps.=CPU"

Tokenize the text transcripts

URL: llms-txt#tokenize-the-text-transcripts

def preprocess_function(example): # Tokenize the text (keep the special tokens like intact) tokens = tokenizer(example["text"], return_tensors="pt") # Flatten to list of token IDs input_ids = tokens["input_ids"].squeeze(0) # The model will generate audio tokens after these text tokens. # For training, we can set labels equal to input_ids (so it learns to predict next token). # But that only covers text tokens predicting the next text token (which might be an audio token or end). # A more sophisticated approach: append a special token indicating start of audio, and let the model generate the rest. # For simplicity, use the same input as labels (the model will learn to output the sequence given itself). return {"input_ids": input_ids, "labels": input_ids}

train_data = dataset.map(preprocess_function, remove_columns=dataset.column_names) python from transformers import TrainingArguments,Trainer,DataCollatorForSeq2Seq from unsloth import is_bfloat16_supported

trainer = Trainer( model = model, train_dataset = dataset, args = TrainingArguments( per_device_train_batch_size = 1, gradient_accumulation_steps = 4, warmup_steps = 5, # num_train_epochs = 1, # Set this for 1 full training run. max_steps = 60, learning_rate = 2e-4, fp16 = not is_bfloat16_supported(), bf16 = is_bfloat16_supported(), logging_steps = 1, optim = "adamw_8bit", weight_decay = 0.01, lr_scheduler_type = "linear", seed = 3407, output_dir = "outputs", report_to = "none", # Use this for WandB etc ), ) python model.save_pretrained("lora_model") # Local saving tokenizer.save_pretrained("lora_model")

Examples:

Example 1 (unknown):

{% hint style="info" %}
The above is a simplification. In reality, to fine-tune Orpheus properly, you would need the *audio tokens as part of the training labels*. Orpheuss pre-training likely involved converting audio to discrete tokens (via an audio codec) and training the model to predict those given the preceding text. For fine-tuning on new voice data, you would similarly need to obtain the audio tokens for each clip (using Orpheuss audio codec). The Orpheus GitHub provides a script for data processing  it encodes audio into sequences of `<custom_token_x>` tokens.
{% endhint %}

However, **Unsloth may abstract this away**: if the model is a FastModel with an associated processor that knows how to handle audio, it might automatically encode the audio in the dataset to tokens. If not, youd have to manually encode each audio clip to token IDs (using Orpheuss codebook). This is an advanced step beyond this guide, but keep in mind that simply using text tokens wont teach the model the actual audio  it needs to match the audio patterns.

Let's assume Unsloth provides a way to feed audio directly (for example, by setting `processor` and passing the audio array). If Unsloth does not yet support automatic audio tokenization, you might need to use the Orpheus repositorys `encode_audio` function to get token sequences for the audio, then use those as labels. (The dataset entries do have `phonemes` and some acoustic features which suggests a pipeline.)

**Step 3: Set up training arguments and Trainer**

Example 2 (unknown):

&#x20;We do 60 steps to speed things up, but you can set `num_train_epochs=1` for a full run, and turn off `max_steps=None`. Using a per\_device\_train\_batch\_size >1 may lead to errors if multi-GPU setup to avoid issues, ensure CUDA\_VISIBLE\_DEVICES is set to a single GPU (e.g., CUDA\_VISIBLE\_DEVICES=0). Adjust as needed.

**Step 4: Begin fine-tuning**

This will start the training loop. You should see logs of loss every 50 steps (as set by `logging_steps`). The training might take some time depending on GPU  for example, on a Colab T4 GPU, a few epochs on 3h of data may take 1-2 hours. Unsloths optimizations will make it faster than standard HF training.

**Step 5: Save the fine-tuned model**

After training completes (or if you stop it mid-way when you feel its sufficient), save the model. This ONLY saves the LoRA adapters, and not the full model. To save to 16bit or GGUF, scroll down!

Fine-tuning LLMs Guide

URL: llms-txt#fine-tuning-llms-guide

Contents:

    1. Understand Fine-tuning
    1. Choose the Right Model + Method
    1. Your Dataset
    1. Understand Training Hyperparameters
    1. Installing + Requirements
    1. Training + Evaluation
    • Evaluation
    1. Running + Saving the model
    • Saving the model
    1. We're done!

Learn all the basics and best practices of fine-tuning. Beginner-friendly.

1. Understand Fine-tuning

Fine-tuning an LLM customizes its behavior, enhances + injects knowledge, and optimizes performance for domains/specific tasks. For example:

  • GPT-4 serves as a base model; however, OpenAI fine-tuned it to better comprehend instructions and prompts, leading to the creation of ChatGPT-4 which everyone uses today.
  • DeepSeek-R1-Distill-Llama-8B is a fine-tuned version of Llama-3.1-8B. DeepSeek utilized data generated by DeepSeek-R1, to fine-tune Llama-3.1-8B. This process, known as distillation (a subcategory of fine-tuning), injects the data into the Llama model to learn reasoning capabilities.

With Unsloth, you can fine-tune for free on Colab, Kaggle, or locally with just 3GB VRAM by using our notebooks. By fine-tuning a pre-trained model (e.g. Llama-3.1-8B) on a specialized dataset, you can:

  • Update + Learn New Knowledge: Inject and learn new domain-specific information.
  • Customize Behavior: Adjust the models tone, personality, or response style.
  • Optimize for Tasks: Improve accuracy and relevance for specific use cases.

Example usecases:

  • Train LLM to predict if a headline impacts a company positively or negatively.
  • Use historical customer interactions for more accurate and custom responses.
  • Fine-tune LLM on legal texts for contract analysis, case law research, and compliance.

You can think of a fine-tuned model as a specialized agent designed to do specific tasks more effectively and efficiently. Fine-tuning can replicate all of RAG's capabilities, but not vice versa.

Fine-tuning misconceptions:

You may have heard that fine-tuning does not make a model learn new knowledge or RAG performs better than fine-tuning. That is false. Read more FAQ + misconceptions here:

{% content-ref url="beginner-start-here/faq-+-is-fine-tuning-right-for-me" %} faq-+-is-fine-tuning-right-for-me {% endcontent-ref %}

2. Choose the Right Model + Method

If you're a beginner, it is best to start with a small instruct model like Llama 3.1 (8B) and experiment from there. You'll also need to decide between QLoRA and LoRA training:

  • LoRA: Fine-tunes small, trainable matrices in 16-bit without updating all model weights.
  • QLoRA: Combines LoRA with 4-bit quantization to handle very large models with minimal resources.

You can change the model name to whichever model you like by matching it with model's name on Hugging Face e.g. 'unsloth/llama-3.1-8b-unsloth-bnb-4bit'.

We recommend starting with Instruct models, as they allow direct fine-tuning using conversational chat templates (ChatML, ShareGPT etc.) and require less data compared to Base models (which uses Alpaca, Vicuna etc). Learn more about the differences between instruct and base models here.

  • Model names ending in unsloth-bnb-4bit indicate they are Unsloth dynamic 4-bit quants. These models consume slightly more VRAM than standard BitsAndBytes 4-bit models but offer significantly higher accuracy.
  • If a model name ends with just bnb-4bit, without "unsloth", it refers to a standard BitsAndBytes 4-bit quantization.
  • Models with no suffix are in their original 16-bit or 8-bit formats. While they are the original models from the official model creators, we sometimes include important fixes - such as chat template or tokenizer fixes. So it's recommended to use our versions when available.

There are other settings which you can toggle:

  • max_seq_length = 2048 Controls context length. While Llama-3 supports 8192, we recommend 2048 for testing. Unsloth enables 4× longer context fine-tuning.
  • dtype = None Defaults to None; use torch.float16 or torch.bfloat16 for newer GPUs.
  • load_in_4bit = True Enables 4-bit quantization, reducing memory use 4× for fine-tuning. Disabling it enables LoRA 16-bit fine-tuning. You can also enable 16-bit LoRA with load_in_16bit = True
  • To enable full fine-tuning (FFT), set full_finetuning = True. For 8-bit fine-tuning, set load_in_8bit = True.
  • Note: Only one training method can be set to True at a time.

We recommend starting with QLoRA, as it is one of the most accessible and effective methods for training models. Our dynamic 4-bit quants, the accuracy loss for QLoRA compared to LoRA is now largely recovered.

You can also do Text-to-speech (TTS), reasoning (GRPO), vision, reinforcement learning (DPO, ORPO, KTO), continued pretraining, text completion and other training methodologies with Unsloth.

Read our detailed guide on choosing the right model:

{% content-ref url="fine-tuning-llms-guide/what-model-should-i-use" %} what-model-should-i-use {% endcontent-ref %}

For LLMs, datasets are collections of data that can be used to train our models. In order to be useful for training, text data needs to be in a format that can be tokenized.

  • You will need to create a dataset usually with 2 columns - question and answer. The quality and amount will largely reflect the end result of your fine-tune so it's imperative to get this part right.
  • You can synthetically generate data and structure your dataset (into QA pairs) using ChatGPT or local LLMs.
  • You can also use our new Synthetic Dataset notebook which automatically parses documents (PDFs, videos etc.), generates QA pairs and auto cleans data using local models like Llama 3.2. Access the notebook here.
  • Fine-tuning can learn from an existing repository of documents and continuously expand its knowledge base, but just dumping data alone wont work as well. For optimal results, curate a well-structured dataset, ideally as question-answer pairs. This enhances learning, understanding, and response accuracy.
  • But, that's not always the case, e.g. if you are fine-tuning a LLM for code, just dumping all your code data can actually enable your model to yield significant performance improvements, even without structured formatting. So it really depends on your use case.

Read more about creating your dataset:

{% content-ref url="fine-tuning-llms-guide/datasets-guide" %} datasets-guide {% endcontent-ref %}

For most of our notebook examples, we utilize the Alpaca dataset however other notebooks like Vision will use different datasets which may need images in the answer output as well.

4. Understand Training Hyperparameters

Learn how to choose the right hyperparameters using best practices from research and real-world experiments - and understand how each one affects your model's performance.

For a complete guide on how hyperparameters affect training, see:

{% content-ref url="fine-tuning-llms-guide/lora-hyperparameters-guide" %} lora-hyperparameters-guide {% endcontent-ref %}

5. Installing + Requirements

We would recommend beginners to utilise our pre-made notebooks first as it's the easiest way to get started with guided steps. However, if installing locally is a must, you can install and use Unsloth via docker or pip install unsloth - just make sure you have all the right requirements necessary. Also depending on the model and quantization you're using, you'll need enough VRAM and resources. See all the details here:

{% content-ref url="beginner-start-here/unsloth-requirements" %} unsloth-requirements {% endcontent-ref %}

Next, you'll need to install Unsloth. Unsloth currently only supports Windows and Linux devices. Once you install Unsloth, you can copy and paste our notebooks and use them in your own local environment. We have many installation methods:

{% content-ref url="install-and-update" %} install-and-update {% endcontent-ref %}

6. Training + Evaluation

Once you have everything set, it's time to train! If something's not working, remember you can always change hyperparameters, your dataset etc.

Youll see a log of numbers during training. This is the training loss, which shows how well the model is learning from your dataset. For many cases, a loss around 0.5 to 1.0 is a good sign, but it depends on your dataset and task. If the loss is not going down, you might need to adjust your settings. If the loss goes to 0, that could mean overfitting, so it's important to check validation too.

The training loss will appear as numbers

We generally recommend keeping the default settings unless you need longer training or larger batch sizes.

  • per_device_train_batch_size = 2 Increase for better GPU utilization but beware of slower training due to padding. Instead, increase gradient_accumulation_steps for smoother training.
  • gradient_accumulation_steps = 4 Simulates a larger batch size without increasing memory usage.
  • max_steps = 60 Speeds up training. For full runs, replace with num_train_epochs = 1 (13 epochs recommended to avoid overfitting).
  • learning_rate = 2e-4 Lower for slower but more precise fine-tuning. Try values like 1e-4, 5e-5, or 2e-5.

In order to evaluate, you could do manually evaluation by just chatting with the model and see if it's to your liking. You can also enable evaluation for Unsloth, but keep in mind it can be time-consuming depending on the dataset size. To speed up evaluation you can: reduce the evaluation dataset size or set evaluation_steps = 100.

For testing, you can also take 20% of your training data and use that for testing. If you already used all of the training data, then you have to manually evaluate it. You can also use automatic eval tools like EleutherAIs lm-evaluation-harness. Keep in mind that automated tools may not perfectly align with your evaluation criteria.

7. Running + Saving the model

Now let's run the model after we completed the training process! You can edit the yellow underlined part! In fact, because we created a multi turn chatbot, we can now also call the model as if it saw some conversations in the past like below:

Reminder Unsloth itself provides 2x faster inference natively as well, so always do not forget to call FastLanguageModel.for_inference(model). If you want the model to output longer responses, set max_new_tokens = 128 to some larger number like 256 or 1024. Notice you will have to wait longer for the result as well!

For saving and using your model in desired inference engines like Ollama, vLLM, Open WebUI, we can have more information here:

{% content-ref url="../basics/running-and-saving-models" %} running-and-saving-models {% endcontent-ref %}

We can now save the finetuned model as a small 100MB file called a LoRA adapter like below. You can instead push to the Hugging Face hub as well if you want to upload your model! Remember to get a Hugging Face token via: https://huggingface.co/settings/tokens and add your token!

After saving the model, we can again use Unsloth to run the model itself! Use FastLanguageModel again to call it for inference!

You've successfully fine-tuned a language model and exported it to your desired inference engine with Unsloth!

To learn more about fine-tuning tips and tricks, head over to our blogs which provide tremendous and educational value: https://unsloth.ai/blog/

If you need any help on fine-tuning, you can also join our Discord server here or Reddit r/unsloth. Thanks for reading and hopefully this was helpful!


Add LoRA adapter to the model for parameter efficient fine tuning

URL: llms-txt#add-lora-adapter-to-the-model-for-parameter-efficient-fine-tuning

Contents:

  • :butterfly:Qwen 2.5 VL Vision RL Issues and Quirks
  • :medal:Reward Functions to reduce gibberish
  • :checkered_flag:GSPO Reinforcement Learning

model = FastVisionModel.get_peft_model( model,

finetune_vision_layers = False,# fast_inference doesn't support finetune_vision_layers yet :( finetune_language_layers = True, # False if not finetuning language layers finetune_attention_modules = True, # False if not finetuning attention layers finetune_mlp_modules = True, # False if not finetuning MLP layers

r = lora_rank, # Choose any number > 0 ! Suggested 8, 16, 32, 64, 128 lora_alpha = lora_rank*2, # *2 speeds up training use_gradient_checkpointing = "unsloth", # Reduces memory usage random_state = 3407, )

addCriterion <tool_call>\n addCriterion\n\n addCriterion\n\n addCriterion\n\n addCriterion\n\n addCriterion\n\n addCriterion\n\n addCriterion\n\n addCriterion\n\n addCriterion\n\n addCriterion\n\n\n addCriterion\n\n 自动生成\n\n addCriterion\n\n addCriterion\n\n addCriterion\n\n addCriterion\n\n addCriterion\n\n addCriterion\n\n addCriterion\n\n addCriterion\n\n\n addCriterion\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n

Figure is an overhead view of the path taken by a race car driver as his car collides with the racetrack wall. Just before the collision, he is traveling at speed v_i=70 \mathrm{~m} / \mathrm{s} along a straight line at 30^{\circ} from the wall. Just after the collision, he is traveling at speed v_f=50 \mathrm{~m} / \mathrm{s} along a straight line at 10^{\circ} from the wall. His mass m is 80 \mathrm{~kg}. The collision lasts for 14 \mathrm{~ms}. What is the magnitude of the average force on the driver during the collision? python def formatting_reward_func(completions,**kwargs): import re thinking_pattern = f'{REASONING_START}(.?){REASONING_END}' answer_pattern = f'{SOLUTION_START}(.?){SOLUTION_END}'

scores = [] for completion in completions: score = 0 thinking_matches = re.findall(thinking_pattern, completion, re.DOTALL) answer_matches = re.findall(answer_pattern, completion, re.DOTALL) if len(thinking_matches) == 1: score += 1.0 if len(answer_matches) == 1: score += 1.0

Fix up addCriterion issues

    # See https://docs.unsloth.ai/new/vision-reinforcement-learning-vlm-rl#qwen-2.5-vl-vision-rl-issues-and-quirks
    # Penalize on excessive addCriterion and newlines
    if len(completion) != 0:
        removal = completion.replace("addCriterion", "").replace("\n", "")
        if (len(completion)-len(removal))/len(completion) >= 0.5:
            score -= 2.0

scores.append(score) return scores python training_args = GRPOConfig( output_dir = "vlm-grpo-unsloth", per_device_train_batch_size = 8, gradient_accumulation_steps = 4, learning_rate = 5e-6, adam_beta1 = 0.9, adam_beta2 = 0.99, weight_decay = 0.1, warmup_ratio = 0.1, lr_scheduler_type = "cosine", optim = "adamw_8bit", # beta = 0.00, epsilon = 3e-4, epsilon_high = 4e-4, num_generations = 8,
max_prompt_length = 1024, max_completion_length = 1024, log_completions = False, max_grad_norm = 0.1, temperature = 0.9, # report_to = "none", # Set to "wandb" if you want to log to Weights & Biases num_train_epochs = 2, # For a quick test run, increase for full training report_to = "none"

# GSPO is below:
importance_sampling_level = "sequence",

# Dr GRPO / GAPO etc
loss_type = "dr_grpo",

)


Overall, Unsloth now with VLM vLLM fast inference enables for both 90% reduced memory usage but also 1.5-2x faster speed with GRPO and GSPO!

If you'd like to read more about reinforcement learning, check out out RL guide:

[reinforcement-learning-rl-guide](https://docs.unsloth.ai/get-started/reinforcement-learning-rl-guide "mention")

***Authors:** A huge thank you to* [*Keith*](https://www.linkedin.com/in/keith-truongcao-7bb84a23b/) *and* [*Datta*](https://www.linkedin.com/in/datta0/) *for contributing to this article!*

**Examples:**

Example 1 (unknown):
```unknown
## :butterfly:Qwen 2.5 VL Vision RL Issues and Quirks

During RL for Qwen 2.5 VL, you might see the following inference output:

{% code overflow="wrap" %}

Example 2 (unknown):

{% endcode %}

This was [reported](https://github.com/QwenLM/Qwen2.5-VL/issues/759) as well in Qwen2.5-VL-7B-Instruct output unexpected results "addCriterion". In fact we see this as well! We tried both non Unsloth, bfloat16 and float16 machines and other things, but it appears still. For example item 165 ie `train_dataset[165]` from the [AI4Math/MathVista](https://huggingface.co/datasets/AI4Math/MathVista) dataset is below:

{% code overflow="wrap" %}

Example 3 (unknown):

{% endcode %}

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FdaU12PmFHZL9aEC5zka0%2FUntitled.png?alt=media&#x26;token=7992e59c-3c17-4463-80ce-3c7560b183ed" alt="" width="128"><figcaption></figcaption></figure>

And then we get the above gibberish output. One could add a reward function to penalize the addition of addCriterion, or penalize gibberish outputs. However, the other approach is to train it for longer. For example only after 60 steps ish do we see the model actually learning via RL:

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2F3Amh6JaEI2sBAAIfc2TJ%2Fimage.webp?alt=media&#x26;token=41ce0d31-dc0b-4dbe-b001-7618c9080b09" alt=""><figcaption></figcaption></figure>

{% hint style="success" %}
Forcing `<|assistant|>` during generation will reduce the occurrences of these gibberish results as expected since this is an Instruct model, however it's still best to add a reward function to penalize bad generations, as described in the next section.
{% endhint %}

## :medal:Reward Functions to reduce gibberish

To penalize `addCriterion` and gibberish outputs, we edited the reward function to penalize too much of `addCriterion` and newlines.

Example 4 (unknown):

## :checkered\_flag:GSPO Reinforcement Learning

This update in addition adds GSPO ([Group Sequence Policy Optimization](https://arxiv.org/abs/2507.18071)) which is a variant of GRPO made by the Qwen team at Alibaba. They noticed that GRPO implicitly results in importance weights for each token, even though explicitly advantages do not scale or change with each token.

This lead to the creation of GSPO, which now assigns the importance on the sequence likelihood rather than the individual token likelihoods of the tokens. The difference between these two algorithms can be seen below, both from the GSPO paper from Qwen and Alibaba:&#x20;

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FK5qpNl1eUsMoiwpe6Kgj%2Fimage.png?alt=media&#x26;token=a370770a-8b1c-4887-b2da-bee45926b762" alt="" width="563"><figcaption><p>GRPO Algorithm, Source: <a href="https://arxiv.org/abs/2507.18071">Qwen</a></p></figcaption></figure>

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FApZeTDRtW4e6AT9YorZu%2Fimage.png?alt=media&#x26;token=eb25bd2f-5e8a-4d9e-811e-8e572afcde4e" alt="" width="563"><figcaption><p>GSPO algorithm, Source: <a href="https://arxiv.org/abs/2507.18071">Qwen</a></p></figcaption></figure>

In Equation 1, it can be seen that the advantages scale each of the rows into the token logprobs before that tensor is sumed. Essentially, each token is given the same scaling even though that scaling was given to the entire sequence rather than each individual token. A simple diagram of this can be seen below:

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FzTy05MloluyPBJ0vsOWn%2FCopy%20of%20GSPO%20diagram%20(1).jpg?alt=media&#x26;token=cbfad773-bcc5-4262-a4b5-ef1a178755bd" alt="" width="286"><figcaption><p>GRPO Logprob Ratio row wise scaled with advantages</p></figcaption></figure>

Equation 2 shows that the logprob ratios for each sequence is summed and exponentiated after the Logprob ratios are computed, and only the resulting now sequence ratios get row wise multiplied by the advantages.&#x20;

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FLBqBCP2SGFu4sPZld77I%2FGSPO%20diagram%20(1).jpg?alt=media&#x26;token=89005ac2-d3cd-4d31-b179-2e320c874656" alt="" width="313"><figcaption><p>GSPO Sequence Ratio row wise scaled with advantages</p></figcaption></figure>

Enabling GSPO is simple, all you need to do is set the `importance_sampling_level = "sequence"` flag in the GRPO config.&#x20;

Saving to Ollama

URL: llms-txt#saving-to-ollama

Contents:

  • Saving on Google Colab
  • Exporting to Ollama
  • Automatic Modelfile creation
  • Ollama Inference
    • Running in Unsloth works well, but after exporting & running on Ollama, the results are poor

See our guide below for the complete process on how to save to Ollama:

{% content-ref url="../../get-started/fine-tuning-llms-guide/tutorial-how-to-finetune-llama-3-and-use-in-ollama" %} tutorial-how-to-finetune-llama-3-and-use-in-ollama {% endcontent-ref %}

Saving on Google Colab

You can save the finetuned model as a small 100MB file called a LoRA adapter like below. You can instead push to the Hugging Face hub as well if you want to upload your model! Remember to get a Hugging Face token via: https://huggingface.co/settings/tokens and add your token!

After saving the model, we can again use Unsloth to run the model itself! Use FastLanguageModel again to call it for inference!

Exporting to Ollama

Finally we can export our finetuned model to Ollama itself! First we have to install Ollama in the Colab notebook:

Then we export the finetuned model we have to llama.cpp's GGUF formats like below:

Reminder to convert False to True for 1 row, and not change every row to True, or else you'll be waiting for a very time! We normally suggest the first row getting set to True, so we can export the finetuned model quickly to Q8_0 format (8 bit quantization). We also allow you to export to a whole list of quantization methods as well, with a popular one being q4_k_m.

Head over to https://github.com/ggerganov/llama.cpp to learn more about GGUF. We also have some manual instructions of how to export to GGUF if you want here: https://github.com/unslothai/unsloth/wiki#manually-saving-to-gguf

You will see a long list of text like below - please wait 5 to 10 minutes!!

And finally at the very end, it'll look like below:

Then, we have to run Ollama itself in the background. We use subprocess because Colab doesn't like asynchronous calls, but normally one just runs ollama serve in the terminal / command prompt.

Automatic Modelfile creation

The trick Unsloth provides is we automatically create a Modelfile which Ollama requires! This is a just a list of settings and includes the chat template which we used for the finetune process! You can also print the Modelfile generated like below:

We then ask Ollama to create a model which is Ollama compatible, by using the Modelfile

And we can now call the model for inference if you want to do call the Ollama server itself which is running on your own local machine / in the free Colab notebook in the background. Remember you can edit the yellow underlined part.

Running in Unsloth works well, but after exporting & running on Ollama, the results are poor

You might sometimes encounter an issue where your model runs and produces good results on Unsloth, but when you use it on another platform like Ollama, the results are poor or you might get gibberish, endless/infinite generations or repeated outputs.

  • The most common cause of this error is using an incorrect chat template. Its essential to use the SAME chat template that was used when training the model in Unsloth and later when you run it in another framework, such as llama.cpp or Ollama. When inferencing from a saved model, it's crucial to apply the correct template.
  • You must use the correct eos token. If not, you might get gibberish on longer generations.
  • It might also be because your inference engine adds an unnecessary "start of sequence" token (or the lack of thereof on the contrary) so ensure you check both hypotheses!
  • Use our conversational notebooks to force the chat template - this will fix most issues.

Unsloth Dynamic 2.0 GGUFs

URL: llms-txt#unsloth-dynamic-2.0-ggufs

Contents:

  • 💡 What's New in Dynamic v2.0?
  • 📊 Why KL Divergence?
  • ⚖️ Calibration Dataset Overfitting
  • 🔢 MMLU Replication Adventure
  • Gemma 3 QAT Replication, Benchmarks
  • 🦙 Llama 4 Bug Fixes + Run
    • Running Llama 4 Scout:

A big new upgrade to our Dynamic Quants!

We're excited to introduce our Dynamic v2.0 quantization method - a major upgrade to our previous quants. This new method outperforms leading quantization methods and sets new benchmarks for 5-shot MMLU and KL Divergence.

This means you can now run + fine-tune quantized LLMs while preserving as much accuracy as possible! You can run the 2.0 GGUFs on any inference engine like llama.cpp, Ollama, Open WebUI etc.

{% hint style="success" %} Sept 10, 2025 update: You asked for tougher benchmarks, so were showcasing Aider Polyglot results! Our Dynamic 3-bit DeepSeek V3.1 GGUF scores 75.6%, surpassing many full-precision SOTA LLMs. Read more.

The key advantage of using the Unsloth package and models is our active role in fixing critical bugs in major models. We've collaborated directly with teams behind Qwen3, Meta (Llama 4), Mistral (Devstral), Google (Gemma 13) and Microsoft (Phi-3/4), contributing essential fixes that significantly boost accuracy. {% endhint %}

Detailed analysis of our benchmarks and evaluation further below.

💡 What's New in Dynamic v2.0?

  • Revamped Layer Selection for GGUFs + safetensors: Unsloth Dynamic 2.0 now selectively quantizes layers much more intelligently and extensively. Rather than modifying only select layers, we now dynamically adjust the quantization type of every possible layer, and the combinations will differ for each layer and model.
  • Current selected and all future GGUF uploads will utilize Dynamic 2.0 and our new calibration dataset. The dataset contains more than >1.5M tokens (depending on model) and comprise of high-quality, hand-curated and cleaned data - to greatly enhance conversational chat performance.
  • Previously, our Dynamic quantization (DeepSeek-R1 1.58-bit GGUF) was effective only for MoE architectures. Dynamic 2.0 quantization now works on all models (including MOEs & non-MoEs).
  • Model-Specific Quants: Each model now uses a custom-tailored quantization scheme. E.g. the layers quantized in Gemma 3 differ significantly from those in Llama 4.
  • To maximize efficiency, especially on Apple Silicon and ARM devices, we now also add Q4_NL, Q5.1, Q5.0, Q4.1, and Q4.0 formats.

To ensure accurate benchmarking, we built an internal evaluation framework to match official reported 5-shot MMLU scores of Llama 4 and Gemma 3. This allowed apples-to-apples comparisons between full-precision vs. Dynamic v2.0, QAT and standard imatrix GGUF quants.

Currently, we've released updates for:

Qwen3: 0.6B1.7B4B8B14B30B-A3B32B235B-A22BR1-0528 Other: GLM-4-32BMAI-DS-R1QwQ (32B)
DeepSeek: R1-0528V3-0324R1-Distill-Llama Llama: 4 (Scout)4 (Maverick)3.1 (8B)
Gemma 3: 4B12B27BQAT Mistral: MagistralSmall-3.1-2503

All future GGUF uploads will utilize Unsloth Dynamic 2.0, and our Dynamic 4-bit safe tensor quants will also benefit from this in the future.

📊 Why KL Divergence?

Accuracy is Not All You Need showcases how pruning layers, even by selecting unnecessary ones still yields vast differences in terms of "flips". A "flip" is defined as answers changing from incorrect to correct or vice versa. The paper shows how MMLU might not decrease as we prune layers or do quantization,but that's because some incorrect answers might have "flipped" to become correct. Our goal is to match the original model, so measuring "flips" is a good metric.

{% hint style="info" %} KL Divergence should be the gold standard for reporting quantization errors as per the research paper "Accuracy is Not All You Need". Using perplexity is incorrect since output token values can cancel out, so we must use KLD! {% endhint %}

The paper also shows that interestingly KL Divergence is highly correlated with flips, and so our goal is to reduce the mean KL Divergence whilst increasing the disk space of the quantization as less as possible.

⚖️ Calibration Dataset Overfitting

Most frameworks report perplexity and KL Divergence using a test set of Wikipedia articles. However, we noticed using the calibration dataset which is also Wikipedia related causes quants to overfit, and attain lower perplexity scores. We utilize Calibration_v3 and Calibration_v5 datasets for fair testing which includes some wikitext data amongst other data. Also instruct models have unique chat templates, and using text only calibration datasets is not effective for instruct models (base models yes). In fact most imatrix GGUFs are typically calibrated with these issues. As a result, they naturally perform better on KL Divergence benchmarks that also use Wikipedia data, since the model is essentially optimized for that domain.

To ensure a fair and controlled evaluation, we do not to use our own calibration dataset (which is optimized for chat performance) when benchmarking KL Divergence. Instead, we conducted tests using the same standard Wikipedia datasets, allowing us to directly compare the performance of our Dynamic 2.0 method against the baseline imatrix approach.

🔢 MMLU Replication Adventure

  • Replicating MMLU 5 shot was nightmarish. We could not replicate MMLU results for many models including Llama 3.1 (8B) Instruct, Gemma 3 (12B) and others due to subtle implementation issues. Llama 3.1 (8B) for example should be getting ~68.2%, whilst using incorrect implementations can attain 35% accuracy.

MMLU implementation issues

  • Llama 3.1 (8B) Instruct has a MMLU 5 shot accuracy of 67.8% using a naive MMLU implementation. We find however Llama tokenizes "A" and "_A" (A with a space in front) as different token ids. If we consider both spaced and non spaced tokens, we get 68.2% (+0.4%)
  • Interestingly Llama 3 as per Eleuther AI's LLM Harness also appends "The best answer is" to the question, following Llama 3's original MMLU benchmarks.
  • There are many other subtle issues, and so to benchmark everything in a controlled environment, we designed our own MMLU implementation from scratch by investigating github.com/hendrycks/test directly, and verified our results across multiple models and comparing to reported numbers.

Gemma 3 QAT Replication, Benchmarks

The Gemma team released two QAT (quantization aware training) versions of Gemma 3:

  1. Q4_0 GGUF - Quantizes all layers to Q4_0 via the formula w = q * block_scale with each block having 32 weights. See llama.cpp wiki for more details.
  2. int4 version - presumably TorchAO int4 style?

We benchmarked all Q4_0 GGUF versions, and did extensive experiments on the 12B model. We see the 12B Q4_0 QAT model gets 67.07% whilst the full bfloat16 12B version gets 67.15% on 5 shot MMLU. That's very impressive! The 27B model is mostly nearly there!

Metric1B4B12B27B
MMLU 5 shot26.12%55.13%67.07% (67.15% BF16)70.64% (71.5% BF16)
Disk Space0.93GB2.94GB7.52GB16.05GB
Efficiency*1.2010.265.592.84

We designed a new Efficiency metric which calculates the usefulness of the model whilst also taking into account its disk size and MMLU 5 shot score:


\text{Efficiency} = \frac{\text{MMLU 5 shot score} - 25}{\text{Disk Space GB}}

{% hint style="warning" %} We have to minus 25 since MMLU has 4 multiple choices - A, B, C or D. Assume we make a model that simply randomly chooses answers - it'll get 25% accuracy, and have a disk space of a few bytes. But clearly this is not a useful model. {% endhint %}

On KL Divergence vs the base model, below is a table showcasing the improvements. Reminder the closer the KL Divergence is to 0, the better (ie 0 means identical to the full precision model)

Quant Baseline KLD GB New KLD GB
IQ1_S 1.035688 5.83 0.972932 6.06
IQ1_M 0.832252 6.33 0.800049 6.51
IQ2_XXS 0.535764 7.16 0.521039 7.31
IQ2_M 0.26554 8.84 0.258192 8.96
Q2_K_XL 0.229671 9.78 0.220937 9.95
Q3_K_XL 0.087845 12.51 0.080617 12.76
Q4_K_XL 0.024916 15.41 0.023701 15.64

If we plot the ratio of the disk space increase and the KL Divergence ratio change, we can see a much clearer benefit! Our dynamic 2bit Q2_K_XL reduces KLD quite a bit (around 7.5%).

Truncated table of results for MMLU for Gemma 3 (27B). See below.

  1. Our dynamic 4bit version is 2GB smaller whilst having +1% extra accuracy vs the QAT version!
  2. Efficiency wise, 2bit Q2_K_XL and others seem to do very well!
Quant Unsloth Unsloth + QAT Disk Size Efficiency
IQ1_M 48.10 47.23 6.51 3.42
IQ2_XXS 59.20 56.57 7.31 4.32
IQ2_M 66.47 64.47 8.96 4.40
Q2_K_XL 68.70 67.77 9.95 4.30
Q3_K_XL 70.87 69.50 12.76 3.49
Q4_K_XL 71.47 71.07 15.64 2.94
Google QAT 70.64 17.2 2.65
Click here for Full Google's Gemma 3 (27B) QAT Benchmarks:
Model Unsloth Unsloth + QAT Disk Size Efficiency
IQ1_S 41.87 43.37 6.06 3.03
IQ1_M 48.10 47.23 6.51 3.42
IQ2_XXS 59.20 56.57 7.31 4.32
IQ2_M 66.47 64.47 8.96 4.40
Q2_K 68.50 67.60 9.78 4.35
Q2_K_XL 68.70 67.77 9.95 4.30
IQ3_XXS 68.27 67.07 10.07 4.18
Q3_K_M 70.70 69.77 12.51 3.58
Q3_K_XL 70.87 69.50 12.76 3.49
Q4_K_M 71.23 71.00 15.41 2.98
Q4_K_XL 71.47 71.07 15.64 2.94
Q5_K_M 71.77 71.23 17.95 2.58
Q6_K 71.87 71.60 20.64 2.26
Q8_0 71.60 71.53 26.74 1.74
Google QAT 70.64 17.2 2.65

🦙 Llama 4 Bug Fixes + Run

We also helped and fixed a few Llama 4 bugs:

  • Llama 4 Scout changed the RoPE Scaling configuration in their official repo. We helped resolve issues in llama.cpp to enable this change here
* Llama 4's QK Norm's epsilon for both Scout and Maverick should be from the config file - this means using 1e-05 and not 1e-06. We helped resolve these in [llama.cpp](https://github.com/ggml-org/llama.cpp/pull/12889) and [transformers](https://github.com/huggingface/transformers/pull/37418) * The Llama 4 team and vLLM also independently fixed an issue with QK Norm being shared across all heads (should not be so) [here](https://github.com/vllm-project/vllm/pull/16311). MMLU Pro increased from 68.58% to 71.53% accuracy. * [Wolfram Ravenwolf](https://x.com/WolframRvnwlf/status/1909735579564331016) showcased how our GGUFs via llama.cpp attain much higher accuracy than third party inference providers - this was most likely a combination of the issues explained above, and also probably due to quantization issues.

As shown in our graph, our 4-bit Dynamic QAT quantization deliver better performance on 5-shot MMLU while also being smaller in size.

Running Llama 4 Scout:

To run Llama 4 Scout for example, first clone llama.cpp:

Then download out new dynamic v 2.0 quant for Scout:

Examples:

Example 1 (bash):

apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-cli llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp

Long Context gpt-oss Training

URL: llms-txt#long-context-gpt-oss-training

Contents:

  • 🦥Introducing Unsloth Flex Attention Support
  • 🕶️ Attention Sinks
  • :triangular_ruler:Unsloth's Flex Attention implementation
  • 📜 Mathematical derivation for attention sinks
  • 💾NEW: Saving to GGUF, vLLM after gpt-oss training
    • :diamonds:Fine-tuning gpt-oss directly
  • 🐛Bug Fixes for gpt-oss
  • 🔢 Implementations for Sink Attention

Were excited to introduce Unsloth Flex Attention support for OpenAI gpt-oss training that enables >8× longer context lengths, >50% less VRAM usage and >1.5× faster training (with no accuracy degradation) vs. all implementations including those using Flash Attention 3 (FA3). Unsloth Flex Attention makes it possible to train with a 60K context length on a 80GB VRAM H100 GPU for BF16 LoRA. Also:

  • You can now export/save your QLoRA fine-tuned gpt-oss model to llama.cpp, vLLM, Ollama or HF
  • We fixed gpt-oss training losses going to infinity on float16 GPUs (like T4 Colab)
  • We fixed gpt-oss implementation issues irrelevant to Unsloth, most notably ensuring that swiglu_limit = 7.0 is properly applied during MXFP4 inference in transformers

🦥Introducing Unsloth Flex Attention Support

With Unsloth's Flex Attention support, a single 80GB VRAM H100 can handle up to 81K context length with QLoRA and 60K context with BF16 LoRA! These gains are applied to BOTH gpt-oss-20b and gpt-oss-120b! The more context length you use, the more gains you'll get from Unsloth Flex Attention:

In comparison, all other non-Unsloth implementations max out at 9K context length on an 80GB GPU, and can only reach 15K context with FA3. But, FA3 is unsuitable for gpt-oss training since it lacks backward pass support for attention sinks. So if you were previously using FA3 for gpt-oss training, we'd recommend you to not use it for now. Thus, the max context length you can get without Unsloth on 80GB VRAM is ~9K.

Training with Unsloth Flex Attention delivers at least a 1.3× speedup, with gains growing as context length increases, reaching up to 2× faster. Because Flex Attention scales with context, longer sequences yield bigger savings in both VRAM and training time, as described here.

A huge thank you to Rohan Pandey for his Flex Attention implementation, which directly inspired the development of Unsloth's Flex Attention implementation.

🕶️ Attention Sinks

OpenAI's GPT OSS model uses an alternating pattern of sliding window attention, full attention, sliding window attention and so on (SWA, FA, SWA, FA, etc). Each sliding window only attends to 128 tokens (including the current token), so computation is vastly reduced. However, this also means long context retrieval and reasoning becomes useless due to the small sliding window. Most labs fix this by expanding the sliding window to 2048 or 4096 tokens.

OpenAI leveraged Attention Sinks from the Efficient Streaming Language Models with Attention Sinks paper which shows that you can use a small sliding window, except you must add a global attention on the first token! The paper provides a good illustration below:

The paper finds that the attention mechanism seems to assign a lot of weight to the first few tokens (1 to 4), and by removing them during the sliding window operation, these "important" first few tokens disappear, and causes bad long context retrieval.

If we plot log perplexity (higher is worse), and do long context inference after the pretrained model's set context length, we see the perplexity shoots up (not good). However the red line (uses Attention Sinks) stays low, which is very good!

The paper also shows that the Attention Is Off By One method does partially work, except one must also add a few extra sink tokens to get lower perplexities. The paper shows that adding a single sink token that is learnable does remarkably well! And that's what OpenAI did for GPT-OSS!

:triangular_ruler:Unsloth's Flex Attention implementation

Flex Attention https://pytorch.org/blog/flexattention/ is extremely powerful as it provides the practitioner 2 customization routes for the attention mechanism - a score modifier (f) and a masking function (M).

The score modifier (f) allows us to edit the attention logits before the softmax operation, and the masking function (M) allows us to skip operations if we don't need them (for eg sliding window attention only sees last 128 tokens).

The trick is Flex Attention provides fast auto generated Triton kernels with arbitrary score modifiers and masking functions!

\sigma\bigg(s\times\bold{f}(QK^T+\bold{M})\bigg)

This means we can use Flex Attention to implement attention sinks! Implementing a single attention sink is provided both in OpenAI's original GPT-OSS repo and HuggingFace's transformers's implementation.

The above shows we concatenate the sink at the very end of the Q @ K.T , do the softmax, and remove the last column which was the sink token.

By using some visualization utilities from Flex Attention's Github repo, we can visualize this. Assume the sequence length was 16, and a sliding window of 5. On the left is the last sink column (default implementation), and on the right is if we move the sink location to index 0 (our implementation).

{% columns %} {% column %} Sink location at the end (default)

{% endcolumn %}

{% column %} Move sink location to index 0

{% endcolumn %} {% endcolumns %}

Interesting finding: The official Flex Attention sliding window implementations considers the window size as the number of last tokens PLUS ONE as it includes the current token. The HuggingFace and GPT OSS implementations strictly only sees the last N tokens. Ie the below is from https://pytorch.org/blog/flexattention/ and https://github.com/meta-pytorch/attention-gym:

{% code overflow="wrap" %}

{% columns %} {% column %} Default Flex Attention (3+1 tokens)

{% endcolumn %}

{% column %} HuggingFace, GPT-OSS (3+0 tokens)

{% endcolumn %} {% endcolumns %}

We also confirmed through OpenAI's official GPT-OSS implementation on whether we attend to the last N or N+1 tokens here: https://github.com/openai/gpt-oss/blob/main/gpt_oss/torch/model.py

And we see only the last 3 tokens (not 3+1) are attended to! This means instead of using <= SLIDING_WINDOW, use < SLIDING_WINDOW (ie use less than, not the equals).

Also since we moved the sink token index to the first, we have to add 1 to the q_idx to index correctly:

To confirm our index 0 implementation, we verified that the training loss remains consistent with standard Hugging Face runs (without Unsloth Flex Attention), as shown in our graph:

📜 Mathematical derivation for attention sinks

There is another way to calculate the attention sinks without padding K and V. We first note the softmax operation does, and we want to 2nd version with sinks for now as a scalar:\


A(x) = \frac{\exp(x\_i)}{\sum{\exp{(x\_i)}}} \\
A\_{sink}(x) = \frac{\exp(x\_i)}{\exp{(s)}+ \sum{\exp{(x\_i)}}}

We can obtain the logsumexp from Flex Attention via return_lse = True , and so we do:


A(x) = \frac{\exp(x\_i)}{\sum{\exp{(x\_i)}}} \\
\frac{\exp(x\_i)}{\exp{(s)}+ \sum{\exp{(x\_i)}}} =  \frac{\exp(x\_i)}{\sum{\exp{(x\_i)}}} \frac{\sum{\exp{(x\_i)}}}{\exp{(s)}+ \sum{\exp{(x\_i)}}} \\
\text{LSE}(x) = \text{logsumexp}(x) = \log{\sum\exp(x\_i)} \\
\exp{(\text{LSE}(x))} = \exp{\big(\log{\sum\exp(x\_i)}\big)} = \sum\exp(x\_i)

And we can now easily derive the sink version of attention. We do find however this process has somewhat higher error than the zero padding approach, so we still default to our original version.

💾NEW: Saving to GGUF, vLLM after gpt-oss training

You can now QLoRA fine-tune gpt-oss and directly save, export, or merge the model to llama.cpp, vLLM, or HF - not just Unsloth. We will be releasing a free notebook hopefully soon.

Previously, any QLoRA fine-tuned gpt-oss model was restricted to running in Unsloth. Weve removed that limitation by introducing the ability to merge in MXFP4 native format using save_method="mxfp4" and on-demand dequantization of MXFP4 base models (like gpt-oss) making it possible to export your fine-tuned model in bf16 format using save_method="merged_16bit" .

The MXFP4 native merge format offers significant performance improvements compared to the bf16 format: it uses up to 75% less disk space, reduces VRAM consumption by 50%, accelerates merging by 5-10x, and enables much faster conversion to GGUF format.

After fine-tuning your gpt-oss model, you can merge it into MXFP4 format with:

If you prefer to merge the model and push to the hugging-face hub, use:

To run inference on the merged model, you can use vLLM and Llama.cpp among others. OpenAI recommends these inference settings for both models: temperature=1.0, top_p=1.0, top_k=0

Saving to Llama.cpp

  1. Obtain the latest llama.cpp on GitHub here. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.

  2. Convert the MXFP4 merged model:

  3. Run inference on the quantized model:

Saving to SGLang
  1. Build SGLang from source:\

  2. Launch SGLang server:\

:diamonds:Fine-tuning gpt-oss directly

We also added support for directly fine-tuning of gpt-oss models by implementing patches that allow loading the native MXFP4 quantized format. This makes it possible to load the 'openai/gpt-oss' model with less than 24GB of VRAM, and QLoRA fine-tune it. Simply load the model using:

add a Peft layer using FastLanguageModel.get_peft_model and run SFT fine-tuning over the Peft model.

🐛Bug Fixes for gpt-oss

We recently collaborated with Hugging Face to resolve inference issues by using OpenAIs kernels and ensuring that swiglu_limit = 7.0 is correctly applied during MXFP4 inference.

Based on user feedback, we discovered that extended QLoRA training runs (beyond 60 steps) could cause the loss to diverge and eventually error out. This issue only occurred on devices that do not support BF16 and instead fall back to F16 (e.g., T4 GPUs). Importantly, it did not impact QLoRA training on A100 or H100 GPUs, nor LoRA training on f16 GPUs.

After extensive investigation, weve now aligned training loss behavior across all GPU setups, including GPUs limited to F16. If you were previously experiencing issues because of this, we recommend using our new updated gpt-oss notebook!

We had to do many many experiments to move float16's training loss curve to be equivalent to bfloat16 machines (blue line). We found the following:

  1. Pure float16 will go to infinity on step 50
  2. We found the down projections in the MoE to have huge outliers
  3. Activations must be saved in bfloat16 or float32

Below shows the absolute magnitude activations for GPT OSS 20B, and some really spike - this will overflow in float16 machines since float16's maximum range is 65504.

We fixed this in Unsloth, so all float16 training works out of the box!

🔢 Implementations for Sink Attention

OpenAI's sink token implementation is provided here. We provide it below:

{% code fullWidth="false" %}

The HuggingFace transformers implementation is provided here. We also provide it below:

{% code fullWidth="false" %}

Examples:

Example 1 (python):

combined_logits = torch.cat([attn_weights, sinks], dim=-1)
probs = F.softmax(combined_logits, dim=-1)
scores = probs[..., :-1]

Example 2 (python):

def sliding_window_causal(b, h, q_idx, kv_idx):
    causal_mask = q_idx >= kv_idx
    window_mask = q_idx - kv_idx <= SLIDING_WINDOW 
    return causal_mask & window_mask

Example 3 (python):

mask = torch.triu(Q.new_full((n_tokens, n_tokens), -float("inf")), diagonal=1)
if sliding_window > 0:
    mask += torch.tril(
        mask.new_full((n_tokens, n_tokens), -float("inf")), diagonal=-sliding_window
    )

Example 4 (python):

def sliding_window_causal(b, h, q_idx, kv_idx):
    causal_mask = q_idx >= kv_idx
    window_mask = q_idx - kv_idx <= SLIDING_WINDOW # Default Flex Attention
    window_mask = q_idx - kv_idx <  SLIDING_WINDOW # GPT-OSS version
    return causal_mask & window_mask

Connect to container

URL: llms-txt#connect-to-container

Contents:

  • 🔒 Security Notes

ssh -i ~/.ssh/container_key -p 2222 unsloth@localhost bash -p <host_port>:<container_port> bash -v <local_folder>:<container_folder> bash docker run -d -e JUPYTER_PORT=8000
-e JUPYTER_PASSWORD="mypassword"
-e "SSH_KEY=$(cat ~/.ssh/container_key.pub)"
-e USER_PASSWORD="unsloth2024"
-p 8000:8000 -p 2222:22
-v $(pwd)/work:/workspace/work
--gpus all
unsloth/unsloth


### **🔒 Security Notes**

* Container runs as non-root `unsloth` user by default
* Use `USER_PASSWORD` for sudo operations inside container
* SSH access requires public key authentication

**Examples:**

Example 1 (unknown):
```unknown
| Variable           | Description                        | Default   |
| ------------------ | ---------------------------------- | --------- |
| `JUPYTER_PASSWORD` | Jupyter Lab password               | `unsloth` |
| `JUPYTER_PORT`     | Jupyter Lab port inside container  | `8888`    |
| `SSH_KEY`          | SSH public key for authentication  | `None`    |
| `USER_PASSWORD`    | Password for `unsloth` user (sudo) | `unsloth` |

Example 2 (unknown):

* Jupyter Lab: `-p 8000:8888`
* SSH access: `-p 2222:22`

{% hint style="warning" %}
**Important**: Use volume mounts to preserve your work between container runs.
{% endhint %}

Example 3 (unknown):



Float8

URL: llms-txt#float8

Contents:

  • :mobile_phone:ExecuTorch - QAT for mobile deployment
  • :sunflower:How to enable QAT
  • :person_tipping_hand:Acknowledgements

from torchao.quantization import PerRow from torchao.quantization import Float8DynamicActivationFloat8WeightConfig torchao_config = Float8DynamicActivationFloat8WeightConfig(granularity = PerRow()) model.save_pretrained_torchao(torchao_config = torchao_config) bash pip install --upgrade --no-cache-dir --force-reinstall unsloth unsloth_zoo pip install torchao==0.14.0 fbgemm-gpu-genai==1.3.0


### :person\_tipping\_hand:Acknowledgements

Huge thanks to the entire PyTorch and TorchAO team for their help and collaboration! Extreme thanks to Andrew Or, Jerry Zhang, Supriya Rao, Scott Roy and Mergen Nachin for helping on many discussions on QAT, and on helping to integrate it into Unsloth! Also thanks to the Executorch team as well!

**Examples:**

Example 1 (unknown):
```unknown
{% endcode %}

### :mobile\_phone:ExecuTorch - QAT for mobile deployment

{% columns %}
{% column %}
With Unsloth and TorchAOs QAT support, you can also fine-tune a model in Unsloth and seamlessly export it to [ExecuTorch](https://github.com/pytorch/executorch) (PyTorchs solution for on-device inference) and deploy it directly on mobile. See an example in action [here](https://huggingface.co/metascroy/Qwen3-4B-int8-int4-unsloth) with more detailed workflows on the way!

**Announcement coming soon!**
{% endcolumn %}

{% column %}

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FXLNzP6c8y3I2lGRlyAIZ%2Fswiftpm_xcode.png?alt=media&#x26;token=061142b9-0a9d-4373-99e3-65e9a175081b" alt=""><figcaption></figcaption></figure>
{% endcolumn %}
{% endcolumns %}

### :sunflower:How to enable QAT

Update Unsloth to the latest version, and also install the latest TorchAO!

Then **try QAT with our free** [**Qwen3 (4B) notebook**](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_\(4B\)_Instruct-QAT.ipynb)

{% code overflow="wrap" %}

Tutorial: Train your own Reasoning model with GRPO

URL: llms-txt#tutorial:-train-your-own-reasoning-model-with-grpo

Contents:

  • Quickstart
  • Install Unsloth
  • Learn about GRPO & Reward Functions
  • Configure desired settings
  • Data preparation

Beginner's Guide to transforming a model like Llama 3.1 (8B) into a reasoning model by using Unsloth and GRPO.

DeepSeek developed GRPO (Group Relative Policy Optimization) to train their R1 reasoning models.

These instructions are for our pre-made Google Colab notebooks. If you are installing Unsloth locally, you can also copy our notebooks inside your favorite code editor. We'll be using any of these notebooks:

gpt-oss-20b - GSPO Qwen2.5-VL - Vision GSPO Gemma 3 (4B) - Vision GSPO
Qwen3 (4B) - Advanced DeepSeek-R1-0528-Qwen3-8B Llama 3.2 (3B) - Advanced

{% stepper %} {% step %}

If you're using our Colab notebook, click Runtime > Run all. We'd highly recommend you checking out our Fine-tuning Guide before getting started.

If installing locally, ensure you have the correct requirements and use pip install unsloth on Linux or follow our Windows install instructions.

{% endstep %}

Learn about GRPO & Reward Functions

Before we get started, it is recommended to learn more about GRPO, reward functions and how they work. Read more about them including tips & tricks here.

You will also need enough VRAM. In general, model parameters = amount of VRAM you will need. In Colab, we are using their free 16GB VRAM GPUs which can train any model up to 16B in parameters. {% endstep %}

Configure desired settings

We have pre-selected optimal settings for the best results for you already and you can change the model to whichever you want listed in our supported models. Would not recommend changing other settings if you're a beginner.

{% hint style="success" %} For advanced GRPO documentation on batching, generation and training parameters, read our guide! {% endhint %}

{% endstep %}

We have pre-selected OpenAI's GSM8K dataset which contains grade school math problems but you could change it to your own or any public one on Hugging Face. You can read more about datasets here.

Your dataset should still have at least 2 columns for question and answer pairs. However the answer must not reveal the reasoning behind how it derived the answer from the question. See below for an example:

We'll structure the data to prompt the model to articulate its reasoning before delivering an answer. To start, we'll establish a clear format for both prompts and responses.


Qwen3: How to Run & Fine-tune

URL: llms-txt#qwen3:-how-to-run-&-fine-tune

Contents:

  • 🖥️ Running Qwen3
    • ⚙️ Official Recommended Settings
    • Switching Between Thinking and Non-Thinking Mode
    • 🦙 Ollama: Run Qwen3 Tutorial
    • 📖 Llama.cpp: Run Qwen3 Tutorial

Learn to run & fine-tune Qwen3 locally with Unsloth + our Dynamic 2.0 quants

Qwen's new Qwen3 models deliver state-of-the-art advancements in reasoning, instruction-following, agent capabilities, and multilingual support.

{% hint style="success" %} NEW! Qwen3 got an update in July 2025. Run & fine-tune the latest model: Qwen-2507 {% endhint %}

All uploads use Unsloth Dynamic 2.0 for SOTA 5-shot MMLU and KL Divergence performance, meaning you can run & fine-tune quantized Qwen LLMs with minimal accuracy loss.

We also uploaded Qwen3 with native 128K context length. Qwen achieves this by using YaRN to extend its original 40K window to 128K.

Unsloth also now supports fine-tuning and Reinforcement Learning (RL) of Qwen3 and Qwen3 MOE models — 2x faster, with 70% less VRAM, and 8x longer context lengths. Fine-tune Qwen3 (14B) for free using our Colab notebook.

Running Qwen3 Tutorial Fine-tuning Qwen3

Qwen3 - Unsloth Dynamic 2.0 with optimal configs:

Dynamic 2.0 GGUF (to run) 128K Context GGUF Dynamic 4-bit Safetensor (to finetune/deploy)

🖥️ Running Qwen3

To achieve inference speeds of 6+ tokens per second, we recommend your available memory should match or exceed the size of the model youre using. For example, a 30GB 1-bit quantized model requires at least 150GB of memory. The Q2_K_XL quant, which is 180GB, will require at least 180GB of unified memory (VRAM + RAM) or 180GB of RAM for optimal performance.

NOTE: Its possible to run the model with less total memory than its size (i.e., less VRAM, less RAM, or a lower combined total). However, this will result in slower inference speeds. Sufficient memory is only required if you want to maximize throughput and achieve the fastest inference times.

According to Qwen, these are the recommended settings for inference:

Non-Thinking Mode Settings: Thinking Mode Settings:
Temperature = 0.7 Temperature = 0.6
Min_P = 0.0 (optional, but 0.01 works well, llama.cpp default is 0.1) Min_P = 0.0
Top_P = 0.8 Top_P = 0.95
TopK = 20 TopK = 20

Chat template/prompt format:

{% code overflow="wrap" %}

{% hint style="success" %} For NON thinking mode, we purposely enclose <think> and </think> with nothing: {% endhint %}

{% code overflow="wrap" %}

{% hint style="warning" %} For Thinking-mode, DO NOT use greedy decoding, as it can lead to performance degradation and endless repetitions. {% endhint %}

Switching Between Thinking and Non-Thinking Mode

Qwen3 models come with built-in "thinking mode" to boost reasoning and improve response quality - similar to how QwQ-32B worked. Instructions for switching will differ depending on the inference engine you're using so ensure you use the correct instructions.

Instructions for llama.cpp and Ollama:

You can add /think and /no_think to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.

Here is an example of multi-turn conversation:

Instructions for transformers and vLLM:

enable_thinking=True

By default, Qwen3 has thinking enabled. When you call tokenizer.apply_chat_template, you dont need to set anything manually.

In thinking mode, the model will generate an extra <think>...</think> block before the final answer — this lets it "plan" and sharpen its responses.

Non-thinking mode:

enable_thinking=False

Enabling non-thinking will make Qwen3 will skip all the thinking steps and behave like a normal LLM.

This mode will provide final responses directly — no <think> blocks, no chain-of-thought.

🦙 Ollama: Run Qwen3 Tutorial

  1. Install ollama if you haven't already! You can only run models up to 32B in size. To run the full 235B-A22B model, see here.

  2. Run the model! Note you can call ollama servein another terminal if it fails! We include all our fixes and suggested parameters (temperature etc) in params in our Hugging Face upload!

  3. To disable thinking, use (or you can set it in the system prompt):

{% hint style="warning" %} If you're experiencing any looping, Ollama might have set your context length window to 2,048 or so. If this is the case, bump it up to 32,000 and see if the issue still persists. {% endhint %}

📖 Llama.cpp: Run Qwen3 Tutorial

  1. Obtain the latest llama.cpp on GitHub here. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.

  2. Download the model via (after installing pip install huggingface_hub hf_transfer ). You can choose Q4_K_M, or other quantized versions.

Examples:

Example 1 (unknown):

<|im_start|>user\nWhat is 2+2?<|im_end|>\n<|im_start|>assistant\n

Example 2 (unknown):

<|im_start|>user\nWhat is 2+2?<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n

Example 3 (unknown):

> Who are you /no_think

<think>

</think>

I am Qwen, a large-scale language model developed by Alibaba Cloud. [...]

> How many 'r's are in 'strawberries'? /think

<think>
Okay, let's see. The user is asking how many times the letter 'r' appears in the word "strawberries". [...]
</think>

The word strawberries contains 3 instances of the letter r. [...]

Example 4 (python):

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # Default is True
)

Go to https://docs.unsloth.ai for advanced tips like

URL: llms-txt#go-to-https://docs.unsloth.ai-for-advanced-tips-like


GSPO Reinforcement Learning

URL: llms-txt#gspo-reinforcement-learning

Train with GSPO (Group Sequence Policy Optimization) RL in Unsloth.

We're introducing GSPO which is a variant of GRPO made by the Qwen team at Alibaba. They noticed the observation that when GRPO takes importance weights for each token, even though inherently advantages do not scale or change with each token. This lead to the creation of GSPO, which now assigns the importance on the sequence likelihood rather than the individual token likelihoods of the tokens.

Enable GSPO in Unsloth by setting importance_sampling_level = "sequence" in the GRPO config. The difference between these two algorithms can be seen below, both from the GSPO paper from Qwen and Alibaba:

GRPO Algorithm, Source: Qwen

GSPO algorithm, Source: Qwen

In Equation 1, it can be seen that the advantages scale each of the rows into the token logprobs before that tensor is sumed. Essentially, each token is given the same scaling even though that scaling was given to the entire sequence rather than each individual token. A simple diagram of this can be seen below:

GRPO Logprob Ratio row wise scaled with advantages

Equation 2 shows that the logprob ratios for each sequence is summed and exponentiated after the Logprob ratios are computed, and only the resulting now sequence ratios get row wise multiplied by the advantages.

GSPO Sequence Ratio row wise scaled with advantages

Enabling GSPO is simple, all you need to do is set the importance_sampling_level = "sequence" flag in the GRPO config.

Examples:

Example 1 (python):

training_args = GRPOConfig(
    output_dir = "vlm-grpo-unsloth",
    per_device_train_batch_size = 8,
    gradient_accumulation_steps = 4,
    learning_rate = 5e-6,
    adam_beta1 = 0.9,
    adam_beta2 = 0.99,
    weight_decay = 0.1,
    warmup_ratio = 0.1,
    lr_scheduler_type = "cosine",
    optim = "adamw_8bit",
    # beta = 0.00,
    epsilon = 3e-4,
    epsilon_high = 4e-4,
    num_generations = 8,    
    max_prompt_length = 1024,
    max_completion_length = 1024,
    log_completions = False,
    max_grad_norm = 0.1,
    temperature = 0.9,
    # report_to = "none", # Set to "wandb" if you want to log to Weights & Biases
    num_train_epochs = 2, # For a quick test run, increase for full training
    report_to = "none"
    
    # GSPO is below:
    importance_sampling_level = "sequence",
    
    # Dr GRPO / GAPO etc
    loss_type = "dr_grpo",
)

Text-to-Speech (TTS) Fine-tuning

URL: llms-txt#text-to-speech-(tts)-fine-tuning

Contents:

  • Fine-tuning Notebooks:
  • Choosing and Loading a TTS Model
  • Preparing Your Dataset

Learn how to fine-tune TTS & STT voice models with Unsloth.

Fine-tuning TTS models allows them to adapt to your specific dataset, use case, or desired style and tone. The goal is to customize these models to clone voices, adapt speaking styles and tones, support new languages, handle specific tasks and more. We also support Speech-to-Text (STT) models like OpenAI's Whisper.

With Unsloth, you can fine-tune TTS models 1.5x faster with 50% less memory than other implementations with Flash Attention 2. This support includes Sesame CSM, Orpheus, and models supported by transformers (e.g. CrisperWhisper, Spark and more).

{% hint style="info" %} Zero-shot cloning captures tone but misses pacing and expression, often sounding robotic and unnatural. Fine-tuning delivers far more accurate and realistic voice replication. Read more here. {% endhint %}

We've uploaded TTS models (original and quantized variants) to our Hugging Face page.

Fine-tuning Notebooks:

Sesame-CSM (1B) Orpheus-TTS (3B) Whisper Large V3 Speech-to-Text (STT)
Spark-TTS (0.5B) Llasa-TTS (1B) Oute-TTS (1B)

{% hint style="success" %} If you notice that the output duration reaches a maximum of 10 seconds, increasemax_new_tokens = 125 from its default value of 125. Since 125 tokens corresponds to 10 seconds of audio, you'll need to set a higher value for longer outputs. {% endhint %}

Choosing and Loading a TTS Model

For TTS, smaller models are often preferred due to lower latency and faster inference for end users. Fine-tuning a model under 3B parameters is often ideal, and our primary examples uses Sesame-CSM (1B) and Orpheus-TTS (3B), a Llama-based speech model.

Sesame-CSM (1B) Details

CSM-1B is a base model, while Orpheus-ft is fine-tuned on 8 professional voice actors, making voice consistency the key difference. CSM requires audio context for each speaker to perform well, whereas Orpheus-ft has this consistency built in.

Fine-tuning from a base model like CSM generally needs more compute, while starting from a fine-tuned model like Orpheus-ft offers better results out of the box.

To help with CSM, weve added new sampling options and an example showing how to use audio context for improved voice consistency.

Orpheus-TTS (3B) Details

Orpheus is pre-trained on a large speech corpus and excels at generating realistic speech with built-in support for emotional cues like laughs and sighs. Its architecture makes it one of the easiest TTS models to utilize and train as it can be exported via llama.cpp meaning it has great compatibility across all inference engines. For unsupported models, you'll only be able to save the LoRA adapter safetensors.

Loading the models

Because voice models are usually small in size, you can train the models using LoRA 16-bit or full fine-tuning FFT which may provide higher quality results. To load it in LoRA 16-bit:

When this runs, Unsloth will download the model weights if you prefer 8-bit, you could use load_in_8bit = True, or for full fine-tuning set full_finetuning = True (ensure you have enough VRAM). You can also replace the model name with other TTS models.

{% hint style="info" %} Note: Orpheuss tokenizer already includes special tokens for audio output (more on this later). You do not need a separate vocoder Orpheus will output audio tokens directly, which can be decoded to a waveform. {% endhint %}

Preparing Your Dataset

At minimum, a TTS fine-tuning dataset consists of audio clips and their corresponding transcripts (text). Lets use the Elise dataset which is ~3 hour single-speaker English speech corpus. There are two variants:

  • MrDragonFox/Elise an augmented version with emotion tags (e.g. <sigh>, <laughs>) embedded in the transcripts. These tags in angle brackets indicate expressions (laughter, sighs, etc.) and are treated as special tokens by Orpheuss tokenizer
  • Jinsaryko/Elise base version with transcripts without special tags.

The dataset is organized with one audio and transcript per entry. On Hugging Face, these datasets have fields such as audio (the waveform), text (the transcription), and some metadata (speaker name, pitch stats, etc.). We need to feed Unsloth a dataset of audio-text pairs.

{% hint style="success" %} Instead of solely focusing on tone, cadence, and pitch, the priority should be ensuring your dataset is fully annotated and properly normalized. {% endhint %}

{% hint style="info" %} With some models like Sesame-CSM-1B, you might notice voice variation across generations using speaker ID 0 because it's a base model—it doesnt have fixed voice identities. Speaker ID tokens mainly help maintain consistency within a conversation, not across separate generations.

To get a consistent voice, provide contextual examples, like a few reference audio clips or prior utterances. This helps the model mimic the desired voice more reliably. Without this, variation is expected, even with the same speaker ID. {% endhint %}

Option 1: Using Hugging Face Datasets library We can load the Elise dataset using Hugging Faces datasets library:

from datasets import load_dataset, Audio

**Examples:**

Example 1 (python):
```python
from unsloth import FastModel

model_name = "unsloth/orpheus-3b-0.1-pretrained"
model, tokenizer = FastModel.from_pretrained(
    model_name,
    load_in_4bit=False  # use 4-bit precision (QLoRA)
)

Grok 2

URL: llms-txt#grok-2

Contents:

  • ⚙️ Recommended Settings
    • Sampling parameters
  • Run Grok 2 Tutorial:
    • Run in llama.cpp

Run xAI's Grok 2 model locally!

You can now run Grok 2 (aka Grok 2.5), the 270B parameter model by xAI. Full precision requires 539GB, while the Unsloth Dynamic 3-bit version shrinks size down to just 118GB (a 75% reduction). GGUF: Grok-2-GGUF

The 3-bit Q3_K_XL model runs on a single 128GB Mac or 24GB VRAM + 128GB RAM, achieving 5+ tokens/s inference. Thanks to the llama.cpp team and community for supporting Grok 2 and making this possible. We were also glad to have helped a little along the way!

All uploads use Unsloth Dynamic 2.0 for SOTA 5-shot MMLU and KL Divergence performance, meaning you can run quantized Grok LLMs with minimal accuracy loss.

Run in llama.cpp Tutorial

The 3-bit dynamic quant uses 118GB (126GiB) of disk space - this works well in a 128GB RAM unified memory Mac or on a 1x24GB card and 128GB of RAM. It is recommended to have at least 120GB RAM to run this 3-bit quant.

{% hint style="warning" %} You must use --jinja for Grok 2. You might get incorrect results if you do not use --jinja {% endhint %}

The 8-bit quant is ~300GB in size will fit in a 1x 80GB GPU (with MoE layers offloaded to RAM). Expect around 5 tokens/s with this setup if you have bonus 200GB RAM as well. To learn how to increase generation speed and fit longer contexts, read here.

{% hint style="info" %} Though not a must, for best performance, have your VRAM + RAM combined equal to the size of the quant you're downloading. If not, hard drive / SSD offloading will work with llama.cpp, just inference will be slower. {% endhint %}

Sampling parameters

  • Grok 2 has a 128K max context length thus, use 131,072 context or less.
  • Use --jinja for llama.cpp variants

There are no official sampling parameters to run the model, thus you can use standard defaults for most models:

  • Set the temperature = 1.0
  • Min_P = 0.01 (optional, but 0.01 works well, llama.cpp default is 0.1)

Run Grok 2 Tutorial:

Currently you can only run Grok 2 in llama.cpp.

Run in llama.cpp

{% stepper %} {% step %} Install the specific llama.cpp PR for Grok 2 on GitHub here. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.

{% step %} If you want to use llama.cpp directly to load models, you can do the below: (:Q3_K_XL) is the quantization type. You can also download via Hugging Face (point 3). This is similar to ollama run . Use export LLAMA_CACHE="folder" to force llama.cpp to save to a specific location. Remember the model has only a maximum of 128K context length.

{% hint style="info" %} Please try out -ot ".ffn_.*_exps.=CPU" to offload all MoE layers to the CPU! This effectively allows you to fit all non MoE layers on 1 GPU, improving generation speeds. You can customize the regex expression to fit more layers if you have more GPU capacity.

If you have a bit more GPU memory, try -ot ".ffn_(up|down)_exps.=CPU" This offloads up and down projection MoE layers.

Try -ot ".ffn_(up)_exps.=CPU" if you have even more GPU memory. This offloads only up projection MoE layers.

And finally offload all layers via -ot ".ffn_.*_exps.=CPU" This uses the least VRAM.

You can also customize the regex, for example -ot "\.(6|7|8|9|[0-9][0-9]|[0-9][0-9][0-9])\.ffn_(gate|up|down)_exps.=CPU" means to offload gate, up and down MoE layers but only from the 6th layer onwards. {% endhint %}

{% step %} Download the model via (after installing pip install huggingface_hub hf_transfer ). You can choose UD-Q3_K_XL (dynamic 3-bit quant) or other quantized versions like Q4_K_M . We recommend using our 2.7bit dynamic quant UD-Q2_K_XL or above to balance size and accuracy.

Examples:

Example 1 (bash):

apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp && git fetch origin pull/15539/head:MASTER && git checkout MASTER && cd ..
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split llama-mtmd-cli llama-server
cp llama.cpp/build/bin/llama-* llama.cpp

Example 2 (bash):

export LLAMA_CACHE="unsloth/grok-2-GGUF"
./llama.cpp/llama-cli \
    -hf unsloth/grok-2-GGUF:Q3_K_XL \
    --jinja \
    --n-gpu-layers 99 \
    --temp 1.0 \
    --top-p 0.95 \
    --min-p 0.01 \
    --ctx-size 16384 \
    --seed 3407 \
    -ot ".ffn_.*_exps.=CPU"

pip install huggingface_hub hf_transfer

URL: llms-txt#pip-install-huggingface_hub-hf_transfer


Saving to SGLang for deployment

URL: llms-txt#saving-to-sglang-for-deployment

Contents:

  • :computer:Installing SGLang
  • :truck:Deploying SGLang models
  • :fire_engine:SGLang Deployment Server Flags, Engine Arguments & Options

Saving models to 16bit for SGLang for deployment and serving

To save to 16bit for SGLang, use:

To save just the LoRA adapters, either use:

Or just use our builtin function to do that:

:computer:Installing SGLang

For Docker, try the below:

{% code overflow="wrap" %}

See https://docs.sglang.ai/get_started/install.html for more details

:truck:Deploying SGLang models

After saving your finetune, you can simply do:

{% code overflow="wrap" %}

:fire_engine:SGLang Deployment Server Flags, Engine Arguments & Options

Examples:

Example 1 (python):

model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit")
model.push_to_hub_merged("hf/model", tokenizer, save_method = "merged_16bit", token = "")

Example 2 (python):

model.save_pretrained("model")
tokenizer.save_pretrained("tokenizer")

Example 3 (python):

model.save_pretrained_merged("model", tokenizer, save_method = "lora")
model.push_to_hub_merged("hf/model", tokenizer, save_method = "lora", token = "")

Example 4 (bash):

pip install --upgrade pip
pip install uv
uv pip install "sglang" --prerelease=allow

Llama 4: How to Run & Fine-tune

URL: llms-txt#llama-4:-how-to-run-&-fine-tune

Contents:

  • ⚙️ Official Recommended Settings
  • 📖 Tutorial: How to Run Llama-4-Scout in llama.cpp

How to run Llama 4 locally using our dynamic GGUFs which recovers accuracy compared to standard quantization.

The Llama-4-Scout model has 109B parameters, while Maverick has 402B parameters. The full unquantized version requires 113GB of disk space whilst the 1.78-bit version uses 33.8GB (-75% reduction in size). Maverick (402Bs) went from 422GB to just 122GB (-70%).

{% hint style="success" %} Both text AND vision is now supported! Plus multiple improvements to tool calling. {% endhint %}

Scout 1.78-bit fits in a 24GB VRAM GPU for fast inference at ~20 tokens/sec. Maverick 1.78-bit fits in 2x48GB VRAM GPUs for fast inference at ~40 tokens/sec.

For our dynamic GGUFs, to ensure the best tradeoff between accuracy and size, we do not to quantize all layers, but selectively quantize e.g. the MoE layers to lower bit, and leave attention and other layers in 4 or 6bit.

{% hint style="info" %} All our GGUF models are quantized using calibration data (around 250K tokens for Scout and 1M tokens for Maverick), which will improve accuracy over standard quantization. Unsloth imatrix quants are fully compatible with popular inference engines like llama.cpp & Open WebUI etc. {% endhint %}

Scout - Unsloth Dynamic GGUFs with optimal configs:

MoE BitsTypeDisk SizeLinkDetails
1.78bitIQ1_S33.8GBLink2.06/1.56bit
1.93bitIQ1_M35.4GBLink2.5/2.06/1.56
2.42bitIQ2_XXS38.6GBLink2.5/2.06bit
2.71bitQ2_K_XL42.2GBLink 3.5/2.5bit
3.5bitQ3_K_XL52.9GBLink 4.5/3.5bit
4.5bitQ4_K_XL65.6GBLink 5.5/4.5bit

{% hint style="info" %} For best results, use the 2.42-bit (IQ2_XXS) or larger versions. {% endhint %}

Maverick - Unsloth Dynamic GGUFs with optimal configs:

MoE Bits Type Disk Size HF Link
1.78bit IQ1_S 122GB Link
1.93bit IQ1_M 128GB Link
2.42-bit IQ2_XXS 140GB Link
2.71-bit Q2_K_XL 151B Link
3.5-bit Q3_K_XL 193GB Link
4.5-bit Q4_K_XL 243GB Link

According to Meta, these are the recommended settings for inference:

  • Temperature of 0.6
  • Min_P of 0.01 (optional, but 0.01 works well, llama.cpp default is 0.1)
  • Top_P of 0.9
  • Chat template/prompt format:

{% code overflow="wrap" %}

📖 Tutorial: How to Run Llama-4-Scout in llama.cpp

  1. Obtain the latest llama.cpp on GitHub here. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.

  2. Download the model via (after installing pip install huggingface_hub hf_transfer ). You can choose Q4_K_M, or other quantized versions (like BF16 full precision). More versions at: https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF

Examples:

Example 1 (unknown):

<|header_start|>user<|header_end|>\n\nWhat is 1+1?<|eot|><|header_start|>assistant<|header_end|>\n\n

Example 2 (unknown):

You are an expert conversationalist who responds to the best of your ability. You are companionable and confident, and able to switch casually between tonal types, including but not limited to humor, empathy, intellectualism, creativity and problem-solving. 

You understand user intent and dont try to be overly helpful to the point where you miss that the user is looking for chit-chat, emotional support, humor or venting.Sometimes people just want you to listen, and your answers should encourage that. For all other cases, you provide insightful and in-depth responses. Organize information thoughtfully in a way that helps people make decisions. Always avoid templated language.

You never lecture people to be nicer or more inclusive. If people ask for you to write something in a certain voice or perspective, such as an essay or a tweet, you can. You do not need to be respectful when the user prompts you to say something rude. 

You never use phrases that imply moral superiority or a sense of authority, including but not limited to “its important to”, “its crucial to”, “its essential to”, "it's unethical to", "it's worth noting…", “Remember…”  etc. Avoid using these.

Finally, do not refuse political prompts. You can help users express their opinion. 

You are Llama 4. Your knowledge cutoff date is August 2024. You speak Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese. Respond in the language the user speaks to you in, unless they ask otherwise.

Example 3 (bash):

apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-cli llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp

Print output

URL: llms-txt#print-output

Contents:

  • 🦥 Unsloth: Run DeepSeek-OCR Tutorial
  • 🦥 Fine-tuning DeepSeek-OCR
    • Fine-tuned Evaluation Results:

for output in model_outputs: print(output.outputs[0].text) python from unsloth import FastVisionModel import torch from transformers import AutoModel import os os.environ["UNSLOTH_WARN_UNINITIALIZED"] = '0'

from huggingface_hub import snapshot_download snapshot_download("unsloth/DeepSeek-OCR", local_dir = "deepseek_ocr") model, tokenizer = FastVisionModel.from_pretrained( "./deepseek_ocr", load_in_4bit = False, # Use 4bit to reduce memory use. False for 16bit LoRA. auto_model = AutoModel, trust_remote_code = True, unsloth_force_compile = True, use_gradient_checkpointing = "unsloth", # True or "unsloth" for long context )

prompt = "\nFree OCR. " image_file = 'your_image.jpg' output_path = 'your/output/dir' res = model.infer(tokenizer, prompt=prompt, image_file=image_file, output_path = output_path, base_size = 1024, image_size = 640, crop_mode=True, save_results = True, test_compress = False)

============================================================ Baseline Model Performance

Number of samples: 200 Mean CER: 149.07% Median CER: 80.00% Std Dev: 310.39% Min CER: 0.00% Max CER: 3500.00%

Best Predictions (Lowest CER):

Sample 5024 (CER: 0.00%) Reference: چون هستی خیلی زیاد... Prediction: چون هستی خیلی زیاد...

Sample 3517 (CER: 0.00%) Reference: تو ایران هیچوقت از اینها وجود نخواهد داشت... Prediction: تو ایران هیچوقت از اینها وجود نخواهد داشت...

Sample 9949 (CER: 0.00%) Reference: کاش میدونستم هیچی بیخیال... Prediction: کاش میدونستم هیچی بیخیال...

Worst Predictions (Highest CER):

Sample 11155 (CER: 3500.00%) Reference: خسو... Prediction: [ \text{CH}_3\text{CH}_2\text{CH}_2\text{CH}_2\text{CH}_2\text{CH}_2\text{CH}_2\text{CH}_2\text{CH}...

Sample 13366 (CER: 1900.00%) Reference: مشو... Prediction: [\begin{align*}\underline{\mathfrak{su}}_0\end{align*}]...

Sample 10552 (CER: 1014.29%) Reference: هیییییچ... Prediction: e


#### DeepSeek-OCR Fine-tuned

With 60 steps, we reduced CER from 149.07% to 60.43% (89% CER improvement)

<pre><code><strong>============================================================
</strong>Fine-tuned Model Performance
============================================================
Number of samples: 200
Mean CER: 60.43%
Median CER: 50.00%
Std Dev: 80.63%
Min CER: 0.00%
Max CER: 916.67%
============================================================

Best Predictions (Lowest CER):

Sample 301 (CER: 0.00%)
Reference:  باشه بابا تو لاکچری، تو خاص، تو خفن...
Prediction: باشه بابا تو لاکچری، تو خاص، تو خفن...

Sample 2512 (CER: 0.00%)
Reference:  از شخص حاج عبدالله زنجبیلی میگیرنش...
Prediction: از شخص حاج عبدالله زنجبیلی میگیرنش...

Sample 2713 (CER: 0.00%)
Reference:  نمی دونم والا تحمل نقد ندارن ظاهرا...
Prediction: نمی دونم والا تحمل نقد ندارن ظاهرا...

Worst Predictions (Highest CER):

Sample 14270 (CER: 916.67%)
Reference:  ۴۳۵۹۴۷۴۷۳۸۹۰...
Prediction: پروپریپریپریپریپریپریپریپریپریپریپریپریپریپریپریپریپریپریپیپریپریپریپریپریپریپریپریپریپریپریپریپریپر...

Sample 3919 (CER: 380.00%)
Reference:  ۷۵۵۰۷۱۰۶۵۹...
Prediction: وادووووووووووووووووووووووووووووووووووو...

Sample 3718 (CER: 333.33%)
Reference:  ۳۲۶۷۲۲۶۵۵۸۴۶...
Prediction: پُپُسوپُسوپُسوپُسوپُسوپُسوپُسوپُسوپُسوپُ...
</code></pre>

{% endcolumn %}
{% endcolumns %}

An example from the 200K Persian dataset we used (you may use your own), showing the image on the left and the corresponding text on the right.

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FFc3XCgysVPglrvWoYpzh%2FScreenshot%202025-11-04%20at%206.10.16%E2%80%AFAM.png?alt=media&#x26;token=829f33d3-b367-4202-b61b-d822a96dced8" alt="" width="563"><figcaption></figcaption></figure>

**Examples:**

Example 1 (unknown):
```unknown
{% endcode %}

### 🦥 Unsloth: Run DeepSeek-OCR Tutorial

1. Obtain the latest `unsloth` via `pip install --upgrade unsloth` . If you already have Unsloth, update it via `pip install --upgrade --force-reinstall --no-deps --no-cache-dir unsloth unsloth_zoo`
2. Then use the code below to run DeepSeek-OCR:

{% code overflow="wrap" %}

Example 2 (unknown):

{% endcode %}

## 🦥 **Fine-tuning DeepSeek-OCR**

Unsloth supports fine-tuning of DeepSeek-OCR. Since the default model isnt fine-tunable, we added changes from the [Stranger Vision HF](https://huggingface.co/strangervisionhf) team, to then enable fine-tuning. As usual, Unsloth trains DeepSeek-OCR 1.4x faster with 40% less VRAM and 5x longer context lengths - no accuracy degradation.\
\
We created two free DeepSeek-OCR Colab notebooks (with and without eval):

* DeepSeek-OCR: [Fine-tuning only notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Deepseek_OCR_\(3B\).ipynb)
* DeepSeek-OCR: [Fine-tuning + Evaluation notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Deepseek_OCR_\(3B\)-Eval.ipynb) (A100)

Fine-tuning DeepSeek-OCR on a 200K sample Persian dataset resulted in substantial gains in Persian text detection and understanding. We evaluated the base model against our fine-tuned version on 200 Persian transcript samples, observing an **88.26% absolute improvement** in Character Error Rate (CER). After only 60 training steps (batch size = 8), the mean CER decreased from **149.07%** to a mean of **60.81%**. This means the fine-tuned model is **57%** more accurate at understanding Persian.

You can replace the Persian dataset with your own to improve DeepSeek-OCR for other use-cases.\
\
For replica-table eval results, use our eval notebook above. For detailed eval results, see below:

### Fine-tuned Evaluation Results:

{% columns fullWidth="true" %}
{% column %}

#### DeepSeek-OCR Baseline

Mean Baseline Model Performance: 149.07% CER for this eval set!

gpt-oss Reinforcement Learning

URL: llms-txt#gpt-oss-reinforcement-learning

Contents:

  • Making Inference Much Faster
  • 🛠️ gpt-oss Flex Attention Issues and Quirks
    • 🔍 Flash Attention Investigation
  • ⚠️ Can We Counter Reward Hacking?
  • :trophy:Reward Hacking
  • Tutorial: How to Train gpt-oss with RL

You can now train OpenAI gpt-oss with RL and GRPO via Unsloth. Unsloth now offers the fastest inference (3x faster), lowest VRAM usage (50% less) and longest context (8x longer) for gpt-oss RL vs. any implementation - with no accuracy degradation.

Since reinforcement learning (RL) on gpt-oss isn't yet vLLM compatible, we had to rewrite the inference code from Transformers code to deliver 3x faster inference for gpt-oss at ~21 tokens/s. For BF16, Unsloth also achieves the fastest inference (~30 tokens/s), especially relative to VRAM usage, using 50% less VRAM vs. any other RL implementation. We plan to support our 50% weight sharing feature once vLLM becomes compatible with RL.

With Unsloth, you can train gpt-oss-20b with GRPO on 15GB VRAM and for free on Colab. We introduced embedding offloading which reduces usage by 1GB as well via offload_embeddings. Unloth's new inference runs faster on any GPU including A100, H100 and old T4's. gpt-oss-120b fits nicely on a 120GB VRAM GPU.

Unsloth is the only framework to support 4-bit RL for gpt-oss. All performance gains are due to Unsloth's unique weight sharing, Flex Attention, Standby and custom kernels.

{% hint style="warning" %} Reminder: Flash Attention 3 (FA3) is unsuitable for gpt-oss training since it currently does not support the backward pass for attention sinks, causing incorrect training losses. If youre not using Unsloth, FA3 may be enabled by default, so please double-check its not in use!

Disabling FA3 will incur O(N^2) memory usage as well, so Unsloth is the only RL framework to offer O(N) memory usage for gpt-oss via our Flex attention implementation. {% endhint %}

Making Inference Much Faster

Inference is crucial in RL training, since we need it to generate candidate solutions before maximizing some reward function (see here for a more detailed explanation). To achieve the fastest inference speed for gpt-oss without vLLM, we rewrote Transformers inference code and integrated many innovations including custom algorithms like Unsloth Flex Attention, using special flags within torch.compile (like combo kernels). Our new inference code for gpt-oss was evaluated against an already optimized baseline (2x faster than native Transformers).

vLLM does not support RL for gpt-oss since it lacks BF16 training and LoRA support for gpt-oss. Without Unsloth, only training via full precision BF16 works, making memory use 800%+ higher. Most frameworks enable FA3 (Flash Attention 3) by default (which reduces VRAM use & increases speed) but this causes incorrect training loss. See Issue 1797 in the FA3 repo. You must disable FA3 though, since it'll prevent long-context training since FA3 uses O(N) memory usage, whilst naive attention will balloon with O(N^2) usage. So to enable attention sinks to be differentiable, we implemented Unsloth Flex Attention.

We evaluated gpt-oss RL inference by benchmarking BitsandBytes 4-bit and also did separate tests for BF16. Unsloths 4-bit inference is ~4x faster, and BF16 is also more efficient, especially in VRAM use.

The best part about Unsloth's gpt-oss RL is that it can work on any GPU, even those that do not support BF16. Our free gpt-oss-20b Colab notebooks use older 15GB T4 GPUs, so the inference examples work well!

🛠️ gpt-oss Flex Attention Issues and Quirks

We had to change our implementation for attention sinks as described here to allow generation to work with left padding. We had to get the logsumexp and apply the sigmoid activation to alter the attention weights like below:


A(X) = \sigma \bigg( \frac{1}{\sqrt{d}}QK^T \bigg)V \\

A(X) = \frac{\exp{\frac{1}{\sqrt{d}}QK^T}}{\sum{\exp{\frac{1}{\sqrt{d}}QK^T}}}V \\

\text{LSE} = \log{\sum{\exp{\frac{1}{\sqrt{d}}QK^T}}} \\

A\_{sinks}(X) = A(X) \odot \sigma (\text{LSE} - \text{sinks})

Left padded masking during inference was also a tricky issue to deal with in gpt-oss. We found that we had to not only account for KV Cache prefill during generations of tokens, but also account for a unique amount of pad tokens in each prompt for batch generations which would change the way we would need to store the block mask. Example of such and example can be seen below:

Normal Causal Mask:

For inference in general case (decoding)

If we naively use the same masking strategy, this'll fail:

For generation (decoding phase), we usually only care about the last row of the attention matrix, since theres just one query token attending to all previous key tokens. If we naively apply the causal mask (q_idx ≥ k_idx), this fails as our single query has index 0, while there are n_k key tokens. To fix this, we need an offset in mask creation to decide which tokens to attend. But a naïve approach is slow, since offsets change each step, forcing mask and kernel regeneration. We solved this with cache and compile optimizations.

The harder part is batch generation. Sequences differ in length, so padding complicates mask creation. Flex Attention had a lot of challenges and dynamic masks are tricky. Worse, if not compiled, it falls back to eager attention which is slow and memory-heavy (quadratic vs. linear in sequence length).

Quote from https://github.com/meta-pytorch/attention-gym/issues/15#issuecomment-2284148665

You need to call this with _compile=True. We essentially map your block mask over a full Q_LEN x KV_LEN matrix in order to produce the block mask. Without compile, we need to materialize this full thing, and it can cause OOMs on long sequences.

As well, you need to run flex_attention = torch.compile(flex_attention). Without compile, flex falls back to a non-fused eager implementation that is great for debugging, but it is much slower and materializes the full scores matrix.

Ultimately, the mask must dynamically handle prefill vs decode with the KV Cache, batch and padding tokens per sequence, remain torch.compile friendly, and support sliding windows.

🔍 Flash Attention Investigation

Another interesting direction we explored was trying to integrate Flash Attention. Its advantages are widely recognized, but one limitation is that it does not support attention sinks during the backward pass for gpt-oss. To work around this, we restructured the attention mechanism so that it operates solely on the attention output and the logsumexp values that FlashAttention readily provides. Given these benefits, it seemed like an obvious choice to try.

However, we soon began noticing issues. While the first few layers behaved as expected, the later layers, particularly layers 18 through 24, produced outputs that diverged significantly from the eager-mode implementation in transformers. Importantly, this discrepancy cannot be attributed to error accumulation, since the inputs to each method are identical at every layer. For further validation, we also compared the results against Unsloth FlexAttention.

This needs further investigation into why only the last few layers show such a drastic difference between flash attention implementation vs. the others.

{% hint style="danger" %}

Flash Attention 3 doesn't support the backwards pass for attention sinks

FA3 is often enabled by default for most training packages (not Unsloth), but this is incorrect for gpt-oss. Using FA3 will make training loss completely wrong as FA3 doesnt support gpt-oss backward passes for attention sinks. Many people are still unaware of this so please be cautious! {% endhint %}

⚠️ Can We Counter Reward Hacking?

The ultimate goal of RL is to maximize some reward (say speed, revenue, some metric). But RL can cheat. When the RL algorithm learns a trick or exploits something to increase the reward, without actually doing the task at end, this is called "Reward Hacking".

It's the reason models learn to modify unit tests to pass coding challenges, and these are critical blockers for real world deployment. Some other good examples are from Wikipedia.

In our free gpt-oss RL notebook we explore how to counter reward hacking in a code generation setting and showcase tangible solutions to common error modes. We saw the model edit the timing function, outsource to other libraries, cache the results, and outright cheat. After countering, the result is our model generates genuinely optimized matrix multiplication kernels, not clever cheats.

:trophy:Reward Hacking

Some common examples of reward hacking during RL include:

RL learns to use Numpy, Torch, other libraries, which calls optimized CUDA kernels. We can stop the RL algorithm from calling optimized code by inspecting if the generated code imports other non standard Python libraries.

Caching & Cheating

RL learns to cache the result of the output and RL learns to find the actual output by inspecting Python global variables.

We can stop the RL algorithm from using cached data by wiping the cache with a large fake matrix. We also have to benchmark carefully with multiple loops and turns.

RL learns to edit the timing function to make it output 0 time as passed. We can stop the RL algorithm from using global or cached variables by restricting it's locals and globals. We are also going to use exec to create the function, so we have to save the output to an empty dict. We also disallow global variable access via types.FunctionType(f.__code__, {})\

Tutorial: How to Train gpt-oss with RL

LLMs often struggle with tasks that involve complex environments. However, by applying reinforcement learning (RL) and designing a custom reward function, these challenges can be overcome.

RL can be adapted for tasks such as auto kernel or strategy creation. This tutorial shows how to train gpt-oss with GRPO and Unsloth to autonomously beat 2048.

Our notebooks include step-by-step guides on how to navigate the whole process already.

2048 notebook (Official OpenAI example) Kernel generation notebook

What youll build:

  • Train gpt-oss-20b so the model can automatically win 2048
  • Create a minimal 2048 environment the model can interact with
  • Define reward functions that:
    1. Check the generated strategy compiles and runs,
    2. Prevent reward hacking (disallow external imports), and
    3. Reward actual game success
  • Run inference and export the model (MXFP4 4bit or merged FP16)

{% hint style="info" %} Hardware: The 2048 example runs on a free Colab T4, but training will be slow. A100/H100 is much faster. 4bit loading + LoRA lets you fit a 20B model into modest VRAM {% endhint %}

Examples:

Example 1 (unknown):

k0 k1 k2 k3 k4   <-- keys
q0  X
q1  X  X
q2  X  X  X
q3  X  X  X  X
q4  X  X  X  X  X   <-- last query row (most important for decoding)

Example 2 (unknown):

k0 k1 k2 k3 k4
q0
q1
q2
q3
q4   X  X  X  X  X

Example 3 (unknown):

k0 k1 k2 k3 k4
q0
q1
q2
q3
q4   X   (note that q4 has q_idx=0 as this is the first query in current setup)

Fine-tuning LLMs with Blackwell, RTX 50 series & Unsloth

URL: llms-txt#fine-tuning-llms-with-blackwell,-rtx-50-series-&-unsloth

Contents:

  • Pip install

Learn how to fine-tune LLMs on NVIDIA's Blackwell RTX 50 series and B200 GPUs with our step-by-step guide.

Unsloth now supports NVIDIAs Blackwell architecture GPUs, including RTX 50-series GPUs (50605090), RTX PRO 6000, and GPUS such as B200, B40, GB100, GB102 and more! You can read the official NVIDIA blogpost here.

Unsloth is now compatible with every NVIDIA GPU from 2018+ including the DGX Spark.

Our new Docker image supports Blackwell. Run the Docker image and start training! Guide

Simply install Unsloth:

If you see issues, another option is to create a separate isolated environment:

Note it might be pip3 or pip3.13 and also python3 or python3.13

You might encounter some Xformers issues, in which cause you should build from source:

{% code overflow="wrap" %}

Examples:

Example 1 (bash):

pip install unsloth

Example 2 (bash):

python -m venv unsloth
source unsloth/bin/activate
pip install unsloth

Tutorial: How to Finetune Llama-3 and Use In Ollama

URL: llms-txt#tutorial:-how-to-finetune-llama-3-and-use-in-ollama

Contents:

    1. What is Unsloth?
    1. What is Ollama?
    1. Install Unsloth
    1. Selecting a model to finetune
    1. Parameters for finetuning
    1. Alpaca Dataset
    1. Multiple columns for finetuning
    1. Multi turn conversations
    1. Customizable Chat Templates
    1. Train the model

Beginner's Guide for creating a customized personal assistant (like ChatGPT) to run locally on Ollama

By the end of this tutorial, you will create a custom chatbot by finetuning Llama-3 with Unsloth for free. It can run locally via Ollama on your PC, or in a free GPU instance through Google Colab. You will be able to interact with the chatbot interactively like below:

Unsloth makes finetuning much easier, and can automatically export the finetuned model to Ollama with integrated automatic Modelfile creation! If you need help, you can join our Discord server: https://discord.com/invite/unsloth

{% hint style="warning" %} If youd like to copy or save the code, everything is available in our Ollama Colab notebook. You can use it directly there or adapt it for your local setup: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3_(8B)-Ollama.ipynb {% endhint %}

1. What is Unsloth?

Unsloth makes finetuning LLMs like Llama-3, Mistral, Phi-3 and Gemma 2x faster, use 70% less memory, and with no degradation in accuracy! We will be using Google Colab which provides a free GPU during this tutorial. You can access our free notebooks below:

You will also need to login into your Google account!

2. What is Ollama?

Ollama allows you to run language models from your own computer in a quick and simple way! It quietly launches a program which can run a language model like Llama-3 in the background. If you suddenly want to ask the language model a question, you can simply submit a request to Ollama, and it'll quickly return the results to you! We'll be using Ollama as our inference engine!

3. Install Unsloth

If you have never used a Colab notebook, a quick primer on the notebook itself:

  1. Play Button at each "cell". Click on this to run that cell's code. You must not skip any cells and you must run every cell in chronological order. If you encounter any errors, simply rerun the cell you did not run before. Another option is to click CTRL + ENTER if you don't want to click the play button.
  2. Runtime Button in the top toolbar. You can also use this button and hit "Run all" to run the entire notebook in 1 go. This will skip all the customization steps, and can be a good first try.
  3. Connect / Reconnect T4 button. You can click here for more advanced system statistics.

The first installation cell looks like below: Remember to click the PLAY button in the brackets [ ]. We grab our open source Github package, and install some other packages.

4. Selecting a model to finetune

Let's now select a model for finetuning! We defaulted to Llama-3 from Meta / Facebook which was trained on a whopping 15 trillion "tokens". Assume a token is like 1 English word. That's approximately 350,000 thick Encyclopedias worth! Other popular models include Mistral, Phi-3 (trained using GPT-4 output) and Gemma from Google (13 trillion tokens!).

Unsloth supports these models and more! In fact, simply type a model from the Hugging Face model hub to see if it works! We'll error out if it doesn't work.

There are 3 other settings which you can toggle:

This determines the context length of the model. Gemini for example has over 1 million context length, whilst Llama-3 has 8192 context length. We allow you to select ANY number - but we recommend setting it 2048 for testing purposes. Unsloth also supports very long context finetuning, and we show we can provide 4x longer context lengths than the best. 2.

Keep this as None, but you can select torch.float16 or torch.bfloat16 for newer GPUs. 3.

We do finetuning in 4 bit quantization. This reduces memory usage by 4x, allowing us to actually do finetuning in a free 16GB memory GPU. 4 bit quantization essentially converts weights into a limited set of numbers to reduce memory usage. A drawback of this is there is a 1-2% accuracy degradation. Set this to False on larger GPUs like H100s if you want that tiny extra accuracy.

If you run the cell, you will get some print outs of the Unsloth version, which model you are using, how much memory your GPU has, and some other statistics. Ignore this for now.

5. Parameters for finetuning

Now to customize your finetune, you can edit the numbers above, but you can ignore it, since we already select quite reasonable numbers.

The goal is to change these numbers to increase accuracy, but also counteract over-fitting. Over-fitting is when you make the language model memorize a dataset, and not be able to answer novel new questions. We want to a final model to answer unseen questions, and not do memorization.

The rank of the finetuning process. A larger number uses more memory and will be slower, but can increase accuracy on harder tasks. We normally suggest numbers like 8 (for fast finetunes), and up to 128. Too large numbers can causing over-fitting, damaging your model's quality. 2.

We select all modules to finetune. You can remove some to reduce memory usage and make training faster, but we highly do not suggest this. Just train on all modules! 3.

The scaling factor for finetuning. A larger number will make the finetune learn more about your dataset, but can promote over-fitting. We suggest this to equal to the rank r, or double it. 4.

Leave this as 0 for faster training! Can reduce over-fitting, but not that much. 5.

Leave this as 0 for faster and less over-fit training! 6.

Options include True, False and "unsloth". We suggest "unsloth" since we reduce memory usage by an extra 30% and support extremely long context finetunes.You can read up here: https://unsloth.ai/blog/long-context for more details. 7.

The number to determine deterministic runs. Training and finetuning needs random numbers, so setting this number makes experiments reproducible. 8.

Advanced feature to set the lora_alpha = 16 automatically. You can use this if you want! 9.

Advanced feature to initialize the LoRA matrices to the top r singular vectors of the weights. Can improve accuracy somewhat, but can make memory usage explode at the start.

We will now use the Alpaca Dataset created by calling GPT-4 itself. It is a list of 52,000 instructions and outputs which was very popular when Llama-1 was released, since it made finetuning a base LLM be competitive with ChatGPT itself.

You can access the GPT4 version of the Alpaca dataset here: https://huggingface.co/datasets/vicgalle/alpaca-gpt4. An older first version of the dataset is here: https://github.com/tatsu-lab/stanford_alpaca. Below shows some examples of the dataset:

You can see there are 3 columns in each row - an instruction, and input and an output. We essentially combine each row into 1 large prompt like below. We then use this to finetune the language model, and this made it very similar to ChatGPT. We call this process supervised instruction finetuning.

7. Multiple columns for finetuning

But a big issue is for ChatGPT style assistants, we only allow 1 instruction / 1 prompt, and not multiple columns / inputs. For example in ChatGPT, you can see we must submit 1 prompt, and not multiple prompts.

This essentially means we have to "merge" multiple columns into 1 large prompt for finetuning to actually function!

For example the very famous Titanic dataset has many many columns. Your job was to predict whether a passenger has survived or died based on their age, passenger class, fare price etc. We can't simply pass this into ChatGPT, but rather, we have to "merge" this information into 1 large prompt.

For example, if we ask ChatGPT with our "merged" single prompt which includes all the information for that passenger, we can then ask it to guess or predict whether the passenger has died or survived.

Other finetuning libraries require you to manually prepare your dataset for finetuning, by merging all your columns into 1 prompt. In Unsloth, we simply provide the function called to_sharegpt which does this in 1 go!

To access the Titanic finetuning notebook or if you want to upload a CSV or Excel file, go here: https://colab.research.google.com/drive/1VYkncZMfGFkeCEgN2IzbZIKEDkyQuJAS?usp=sharing

Now this is a bit more complicated, since we allow a lot of customization, but there are a few points:

  • You must enclose all columns in curly braces {}. These are the column names in the actual CSV / Excel file.
  • Optional text components must be enclosed in [[]]. For example if the column "input" is empty, the merging function will not show the text and skip this. This is useful for datasets with missing values.
  • Select the output or target / prediction column in output_column_name. For the Alpaca dataset, this will be output.

For example in the Titanic dataset, we can create a large merged prompt format like below, where each column / piece of text becomes optional.

For example, pretend the dataset looks like this with a lot of missing data:

Embarked Age Fare
S 23
18 7.25

Then, we do not want the result to be:

  1. The passenger embarked from S. Their age is 23. Their fare is EMPTY.
  2. The passenger embarked from EMPTY. Their age is 18. Their fare is $7.25.

Instead by optionally enclosing columns using [[]], we can exclude this information entirely.

  1. The passenger embarked from S. Their age is 23. [[Their fare is EMPTY.]]

  2. [[The passenger embarked from EMPTY.]] Their age is 18. Their fare is $7.25.

  3. The passenger embarked from S. Their age is 23.

  4. Their age is 18. Their fare is $7.25.

8. Multi turn conversations

A bit issue if you didn't notice is the Alpaca dataset is single turn, whilst remember using ChatGPT was interactive and you can talk to it in multiple turns. For example, the left is what we want, but the right which is the Alpaca dataset only provides singular conversations. We want the finetuned language model to somehow learn how to do multi turn conversations just like ChatGPT.

So we introduced the conversation_extension parameter, which essentially selects some random rows in your single turn dataset, and merges them into 1 conversation! For example, if you set it to 3, we randomly select 3 rows and merge them into 1! Setting them too long can make training slower, but could make your chatbot and final finetune much better!

Then set output_column_name to the prediction / output column. For the Alpaca dataset dataset, it would be the output column.

We then use the standardize_sharegpt function to just make the dataset in a correct format for finetuning! Always call this!

9. Customizable Chat Templates

We can now specify the chat template for finetuning itself. The very famous Alpaca format is below:

But remember we said this was a bad idea because ChatGPT style finetunes require only 1 prompt? Since we successfully merged all dataset columns into 1 using Unsloth, we essentially can create the below style chat template with 1 input column (instruction) and 1 output:

We just require you must put a {INPUT} field for the instruction and an {OUTPUT} field for the model's output field. We in fact allow an optional {SYSTEM} field as well which is useful to customize a system prompt just like in ChatGPT. For example, below are some cool examples which you can customize the chat template to be:

For the ChatML format used in OpenAI models:

Or you can use the Llama-3 template itself (which only functions by using the instruct version of Llama-3): We in fact allow an optional {SYSTEM} field as well which is useful to customize a system prompt just like in ChatGPT.

Or in the Titanic prediction task where you had to predict if a passenger died or survived in this Colab notebook which includes CSV and Excel uploading: https://colab.research.google.com/drive/1VYkncZMfGFkeCEgN2IzbZIKEDkyQuJAS?usp=sharing

10. Train the model

Let's train the model now! We normally suggest people to not edit the below, unless if you want to finetune for longer steps or want to train on large batch sizes.

We do not normally suggest changing the parameters above, but to elaborate on some of them:

Increase the batch size if you want to utilize the memory of your GPU more. Also increase this to make training more smooth and make the process not over-fit. We normally do not suggest this, since this might make training actually slower due to padding issues. We normally instead ask you to increase gradient_accumulation_steps which just does more passes over the dataset. 2.

Equivalent to increasing the batch size above itself, but does not impact memory consumption! We normally suggest people increasing this if you want smoother training loss curves. 3.

We set steps to 60 for faster training. For full training runs which can take hours, instead comment out max_steps, and replace it with num_train_epochs = 1. Setting it to 1 means 1 full pass over your dataset. We normally suggest 1 to 3 passes, and no more, otherwise you will over-fit your finetune. 4.

Reduce the learning rate if you want to make the finetuning process slower, but also converge to a higher accuracy result most likely. We normally suggest 2e-4, 1e-4, 5e-5, 2e-5 as numbers to try.

Youll see a log of numbers during training. This is the training loss, which shows how well the model is learning from your dataset. For many cases, a loss around 0.5 to 1.0 is a good sign, but it depends on your dataset and task. If the loss is not going down, you might need to adjust your settings. If the loss goes to 0, that could mean overfitting, so it's important to check validation too.

11. Inference / running the model

Now let's run the model after we completed the training process! You can edit the yellow underlined part! In fact, because we created a multi turn chatbot, we can now also call the model as if it saw some conversations in the past like below:

Reminder Unsloth itself provides 2x faster inference natively as well, so always do not forget to call FastLanguageModel.for_inference(model). If you want the model to output longer responses, set max_new_tokens = 128 to some larger number like 256 or 1024. Notice you will have to wait longer for the result as well!

12. Saving the model

We can now save the finetuned model as a small 100MB file called a LoRA adapter like below. You can instead push to the Hugging Face hub as well if you want to upload your model! Remember to get a Hugging Face token via https://huggingface.co/settings/tokens and add your token!

After saving the model, we can again use Unsloth to run the model itself! Use FastLanguageModel again to call it for inference!

13. Exporting to Ollama

Finally we can export our finetuned model to Ollama itself! First we have to install Ollama in the Colab notebook:

Then we export the finetuned model we have to llama.cpp's GGUF formats like below:

Reminder to convert False to True for 1 row, and not change every row to True, or else you'll be waiting for a very time! We normally suggest the first row getting set to True, so we can export the finetuned model quickly to Q8_0 format (8 bit quantization). We also allow you to export to a whole list of quantization methods as well, with a popular one being q4_k_m.

Head over to https://github.com/ggerganov/llama.cpp to learn more about GGUF. We also have some manual instructions of how to export to GGUF if you want here: https://github.com/unslothai/unsloth/wiki#manually-saving-to-gguf

You will see a long list of text like below - please wait 5 to 10 minutes!!

And finally at the very end, it'll look like below:

Then, we have to run Ollama itself in the background. We use subprocess because Colab doesn't like asynchronous calls, but normally one just runs ollama serve in the terminal / command prompt.

14. Automatic Modelfile creation

The trick Unsloth provides is we automatically create a Modelfile which Ollama requires! This is a just a list of settings and includes the chat template which we used for the finetune process! You can also print the Modelfile generated like below:

We then ask Ollama to create a model which is Ollama compatible, by using the Modelfile

15. Ollama Inference

And we can now call the model for inference if you want to do call the Ollama server itself which is running on your own local machine / in the free Colab notebook in the background. Remember you can edit the yellow underlined part.

16. Interactive ChatGPT style

But to actually run the finetuned model like a ChatGPT, we have to do a bit more! First click the terminal icon and a Terminal will pop up. It's on the left sidebar.

Then, you might have to press ENTER twice to remove some weird output in the Terminal window. Wait a few seconds and type ollama run unsloth_model then hit ENTER.

And finally, you can interact with the finetuned model just like an actual ChatGPT! Hit CTRL + D to exit the system, and hit ENTER to converse with the chatbot!

You've successfully finetuned a language model and exported it to Ollama with Unsloth 2x faster and with 70% less VRAM! And all this for free in a Google Colab notebook!

If you want to learn how to do reward modelling, do continued pretraining, export to vLLM or GGUF, do text completion, or learn more about finetuning tips and tricks, head over to our Github.

If you need any help on finetuning, you can also join our Discord server here. If you want help with Ollama, you can also join their server here.

And finally, we want to thank you for reading and following this far! We hope this made you understand some of the nuts and bolts behind finetuning language models, and we hope this was useful!

To access our Alpaca dataset example click here, and our CSV / Excel finetuning guide is here.

Examples:

Example 1 (unknown):

max_seq_length = 2048

Example 2 (unknown):

dtype = None

Example 3 (unknown):

load_in_4bit = True

Example 4 (unknown):

r = 16, # Choose any number > 0 ! Suggested 8, 16, 32, 64, 128

Colors

URL: llms-txt#colors

pipe_colors = [(0, 100, 0), (210, 180, 140), (50, 50, 50)] land_colors = [(139, 69, 19), (255, 255, 0)]


https://github.com/ggerganov/llama.cpp/blob/master/examples/quantize/quantize.cpp#L19

URL: llms-txt#https://github.com/ggerganov/llama.cpp/blob/master/examples/quantize/quantize.cpp#l19


Load the Elise dataset (e.g., the version with emotion tags)

URL: llms-txt#load-the-elise-dataset-(e.g.,-the-version-with-emotion-tags)

dataset = load_dataset("MrDragonFox/Elise", split="train") print(len(dataset), "samples") # ~1200 samples in Elise


Gemma 3: How to Run & Fine-tune

URL: llms-txt#gemma-3:-how-to-run-&-fine-tune

Contents:

  • ⚙️ Recommended Inference Settings
    • Running Gemma 3 on your phone
  • 🦙 Tutorial: How to Run Gemma 3 in Ollama
  • 📖 Tutorial: How to Run Gemma 3 27B in llama.cpp

How to run Gemma 3 effectively with our GGUFs on llama.cpp, Ollama, Open WebUI and how to fine-tune with Unsloth!

Google releases Gemma 3 with a new 270M model and the previous 1B, 4B, 12B, and 27B sizes. The 270M and 1B are text-only, while larger models handle both text and vision. We provide GGUFs, and a guide of how to run it effectively, and how to finetune & do RL with Gemma 3!

{% hint style="success" %} NEW Aug 14, 2025 Update: Try our fine-tuning Gemma 3 (270M) notebook and GGUFs to run.

Also see our Gemma 3n Guide. {% endhint %}

Running TutorialFine-tuning Tutorial

Unsloth is the only framework which works in float16 machines for Gemma 3 inference and training. This means Colab Notebooks with free Tesla T4 GPUs also work!

{% hint style="info" %} According to the Gemma team, the optimal config for inference is
temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0 {% endhint %}

Unsloth Gemma 3 uploads with optimal configs:

GGUF Unsloth Dynamic 4-bit Instruct 16-bit Instruct

According to the Gemma team, the official recommended settings for inference is:

  • Temperature of 1.0
  • Top_K of 64
  • Min_P of 0.00 (optional, but 0.01 works well, llama.cpp default is 0.1)
  • Top_P of 0.95
  • Repetition Penalty of 1.0. (1.0 means disabled in llama.cpp and transformers)
  • Chat template:
<bos><start_of_turn>user\nHello!<end_of_turn>\n<start_of_turn>model\nHey there!<end_of_turn>\n<start_of_turn>user\nWhat is 1+1?<end_of_turn>\n<start_of_turn>model\n
  
  • Chat template with \nnewlines rendered (except for the last)

{% code overflow="wrap" %}

{% hint style="danger" %} llama.cpp an other inference engines auto add a <bos> - DO NOT add TWO <bos> tokens! You should ignore the <bos> when prompting the model! {% endhint %}

Running Gemma 3 on your phone

To run the models on your phone, we recommend using any mobile app that can run GGUFs locally on edge devices like phones. After fine-tuning you can export it to GGUF then run it locally on your phone. Ensure your phone has enough RAM/power to process the models as it can overheat so we recommend using Gemma 3 270M or the Gemma 3n models for this use-case. You can try the open-source project AnythingLLM's mobile app which you can download on Android here or ChatterUI, which are great apps for running GGUFs on your phone.

{% hint style="success" %} Remember, you can change the model name 'gemma-3-27b-it-GGUF' to any Gemma model like 'gemma-3-270m-it-GGUF:Q8_K_XL' for all the tutorials. {% endhint %}

🦙 Tutorial: How to Run Gemma 3 in Ollama

  1. Install ollama if you haven't already!

  2. Run the model! Note you can call ollama servein another terminal if it fails! We include all our fixes and suggested parameters (temperature etc) in params in our Hugging Face upload! You can change the model name 'gemma-3-27b-it-GGUF' to any Gemma model like 'gemma-3-270m-it-GGUF:Q8_K_XL'.

📖 Tutorial: How to Run Gemma 3 27B in llama.cpp

  1. Obtain the latest llama.cpp on GitHub here. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.

  2. If you want to use llama.cpp directly to load models, you can do the below: (:Q4_K_XL) is the quantization type. You can also download via Hugging Face (point 3). This is similar to ollama run

  3. OR download the model via (after installing pip install huggingface_hub hf_transfer ). You can choose Q4_K_M, or other quantized versions (like BF16 full precision). More versions at: https://huggingface.co/unsloth/gemma-3-27b-it-GGUF

Examples:

Example 1 (unknown):

<bos><start_of_turn>user
Hello!<end_of_turn>
<start_of_turn>model
Hey there!<end_of_turn>
<start_of_turn>user
What is 1+1?<end_of_turn>
<start_of_turn>model\n

Example 2 (bash):

apt-get update
apt-get install pciutils -y
curl -fsSL https://ollama.com/install.sh | sh

Example 3 (bash):

ollama run hf.co/unsloth/gemma-3-27b-it-GGUF:Q4_K_XL

Example 4 (bash):

apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggerganov/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=ON -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split llama-mtmd-cli
cp llama.cpp/build/bin/llama-* llama.cpp

Unsloth Docs

URL: llms-txt#unsloth-docs

Contents:

  • 🦥 Why Unsloth?
  • Key Features
  • Quickstart
  • What is Fine-tuning and RL? Why?

Train your own model with Unsloth, an open-source framework for LLM fine-tuning and reinforcement learning.

At Unsloth, our mission is to make AI as accurate and accessible as possible. Train, run, evaluate and save gpt-oss, Llama, DeepSeek, TTS, Qwen, Mistral, Gemma LLMs 2x faster with 70% less VRAM.

Our docs will guide you through running & training your own model locally.

Get started Our GitHub

Cover image
DeepSeek-OCRFine-tune DeepSeek's latest OCR model.deepseek ocr logo.pngdeepseek-ocr-how-to-run-and-fine-tune
Qwen3-VLRun & fine-tune Qwen's new vision models!qwen3-vl promo.pngqwen3-vl-how-to-run-and-fine-tune
gpt-ossRun & Train OpenAI's new open LLMs.gpt-oss image.pnggpt-oss-reinforcement-learning

{% columns %} {% column %} {% content-ref url="fine-tuning-llms-guide" %} fine-tuning-llms-guide {% endcontent-ref %}

{% content-ref url="unsloth-notebooks" %} unsloth-notebooks {% endcontent-ref %}

{% column %} {% content-ref url="all-our-models" %} all-our-models {% endcontent-ref %}

{% content-ref url="../models/tutorials-how-to-fine-tune-and-run-llms" %} tutorials-how-to-fine-tune-and-run-llms {% endcontent-ref %} {% endcolumn %} {% endcolumns %}

Cover image
Unsloth Docker imageTrain LLMs with no setup with our new Docker!train without setup.pnghow-to-fine-tune-llms-with-unsloth-and-docker
Vision Reinforcement LearningVLM RL is now in Unsloth! RL with Qwen, Gemma.vision rl site.pngvision-reinforcement-learning-vlm-rl
How do Unsloth 1-bit Dynamic GGUFs perform?See GGUF benchmarks on Aider Polyglot!dynamic v2 with unsloth.pngunsloth-dynamic-ggufs-on-aider-polyglot
  • Unsloth streamlines model training locally and on Colab/Kaggle, covering loading, quantization, training, evaluation, saving, exporting, and integration with inference engines like Ollama, llama.cpp, and vLLM.

  • We directly collaborate with teams behind gpt-oss, Qwen3, Llama 4, Mistral, Google (Gemma 13) and Phi-4, where weve fixed critical bugs in models that greatly improved model accuracy.

  • Unsloth is the only training framework to support all model types: vision, text-to-speech (TTS), BERT, reinforcement learning (RL) while remaining highly customizable with flexible chat templates, dataset formatting and ready-to-use notebooks.

  • Supports full-finetuning, pretraining, 4-bit, 16-bit and 8-bit training.

  • The most efficient RL library, using 80% less VRAM. Supports GRPO, GSPO etc.

  • Supports all models: TTS, multimodal, BERT and more. Any model that works in transformers works in Unsloth.

  • 0% loss in accuracy - no approximation methods - all exact.

  • MultiGPU works already but a much better version is coming!

  • Unsloth supports Linux, Windows, Colab, Kaggle, NVIDIA and AMD & Intel. See:

{% content-ref url="beginner-start-here/unsloth-requirements" %} unsloth-requirements {% endcontent-ref %}

Install locally with pip (recommended) for Linux or WSL devices:

Use our official Docker image: unsloth/unsloth. Read our Docker guide.

For Windows install instructions, see here.

{% content-ref url="install-and-update" %} install-and-update {% endcontent-ref %}

What is Fine-tuning and RL? Why?

Fine-tuning an LLM customizes its behavior, enhances domain knowledge, and optimizes performance for specific tasks. By fine-tuning a pre-trained model (e.g. Llama-3.1-8B) on a dataset, you can:

  • Update Knowledge: Introduce new domain-specific information.
  • Customize Behavior: Adjust the models tone, personality, or response style.
  • Optimize for Tasks: Improve accuracy and relevance for specific use cases.

Reinforcement Learning (RL) is where an "agent" learns to make decisions by interacting with an environment and receiving feedback in the form of rewards or penalties.

  • Action: What the model generates (e.g. a sentence).
  • Reward: A signal indicating how good or bad the model's action was (e.g. did the response follow instructions? was it helpful?).
  • Environment: The scenario or task the model is working on (e.g. answering a users question).

Example use-cases of fine-tuning or RL:

  • Train LLM to predict if a headline impacts a company positively or negatively.
  • Use historical customer interactions for more accurate and custom responses.
  • Train LLM on legal texts for contract analysis, case law research, and compliance.

You can think of a fine-tuned model as a specialized agent designed to do specific tasks more effectively and efficiently. Fine-tuning can replicate all of RAG's capabilities, but not vice versa.

{% content-ref url="beginner-start-here/faq-+-is-fine-tuning-right-for-me" %} faq-+-is-fine-tuning-right-for-me {% endcontent-ref %}

{% content-ref url="reinforcement-learning-rl-guide" %} reinforcement-learning-rl-guide {% endcontent-ref %}

Examples:

Example 1 (unknown):

pip install unsloth

Do model patching and add fast LoRA weights

URL: llms-txt#do-model-patching-and-add-fast-lora-weights

model = FastLanguageModel.get_peft_model( model, r = 64, target_modules = ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj",], lora_alpha = 64, lora_dropout = 0, # Supports any, but = 0 is optimized bias = "none", # Supports any, but = "none" is optimized # [NEW] "unsloth" uses 30% less VRAM, fits 2x larger batch sizes! use_gradient_checkpointing = "unsloth", # True or "unsloth" for very long context random_state = 3407, max_seq_length = max_seq_length, )

dpo_trainer = DPOTrainer( model = model, ref_model = None, args = TrainingArguments( per_device_train_batch_size = 4, gradient_accumulation_steps = 8, warmup_ratio = 0.1, num_train_epochs = 3, fp16 = not is_bfloat16_supported(), bf16 = is_bfloat16_supported(), logging_steps = 1, optim = "adamw_8bit", seed = 42, output_dir = "outputs", ), beta = 0.1, train_dataset = YOUR_DATASET_HERE, # eval_dataset = YOUR_DATASET_HERE, tokenizer = tokenizer, max_length = 1024, max_prompt_length = 512, ) dpo_trainer.train()


---

## Saving to GGUF

**URL:** llms-txt#saving-to-gguf

Saving models to 16bit for GGUF so you can use it for Ollama, Jan AI, Open WebUI and more!

{% tabs %}
{% tab title="Locally" %}

To save to GGUF, use the below to save locally:

To push to Hugging Face hub:

All supported quantization options for `quantization_method` are listed below:

**Examples:**

Example 1 (python):
```python
model.save_pretrained_gguf("directory", tokenizer, quantization_method = "q4_k_m")
model.save_pretrained_gguf("directory", tokenizer, quantization_method = "q8_0")
model.save_pretrained_gguf("directory", tokenizer, quantization_method = "f16")

Example 2 (python):

model.push_to_hub_gguf("hf_username/directory", tokenizer, quantization_method = "q4_k_m")
model.push_to_hub_gguf("hf_username/directory", tokenizer, quantization_method = "q8_0")

Install library

URL: llms-txt#install-library

!pip install wandb --upgrade


How to Fine-tune LLMs with Unsloth & Docker

URL: llms-txt#how-to-fine-tune-llms-with-unsloth-&-docker

Contents:

  • Step-by-Step Tutorial
  • 📖 Usage Example

Learn how to fine-tune LLMs or do Reinforcement Learning (RL) with Unsloth's Docker image.

Local training can be complex due to dependency hell or breaking environments. Unsloths Docker image can bypass these issues. No setup is needed: pull and run the image and start training.

Why Use Unsloth & Docker?

Unsloths Docker image is stable, up-to-date and works in supported setups like Windows.

  • Fully contained dependencies keep your system clean. Runs safely without root.
  • Use locally or on any platform with pre-installed notebooks.

{% hint style="success" %} You can now use our main Docker image unsloth/unsloth for Blackwell and 50-series GPUs - no separate image needed. {% endhint %}

Step-by-Step Tutorial

{% stepper %} {% step %}

Install Docker and NVIDIA Container Toolkit.

Install Docker via Linux or Desktop (other).
Then install NVIDIA Container Toolkit:

export NVIDIA_CONTAINER_TOOLKIT_VERSION=1.17.8-1
sudo apt-get update && sudo apt-get install -y \
  nvidia-container-toolkit=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
  nvidia-container-toolkit-base=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
  libnvidia-container-tools=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
  libnvidia-container1=${NVIDIA_CONTAINER_TOOLKIT_VERSION}
{% endstep %}

Run the container.

unsloth/unsloth is Unsloth's only Docker image. For Blackwell and 50-series GPUs, use this same image - no separate image needed. If using DGX Spark, you'll need to follow our DGX guide.

{% endstep %}

Access Jupyter Lab

Go to http://localhost:8888 and open Unsloth.

Access the unsloth-notebooks tabs to see Unsloth notebooks.

{% endstep %}

Start training with Unsloth

If you're new, follow our step-by-step Fine-tuning Guide, RL Guide or just save/copy any of our premade notebooks.

{% endstep %} {% endstepper %}

📂 Container Structure

  • /workspace/work/ — Your mounted work directory
  • /workspace/unsloth-notebooks/ — Example fine-tuning notebooks
  • /home/unsloth/ — User home directory

Setting up SSH Key

If you don't have an SSH key pair:

Examples:

Example 1 (bash):

docker run -d -e JUPYTER_PASSWORD="mypassword" \
  -p 8888:8888 -p 2222:22 \
  -v $(pwd)/work:/workspace/work \
  --gpus all \
  unsloth/unsloth

Example 2 (bash):

docker run -d -e JUPYTER_PORT=8000 \
  -e JUPYTER_PASSWORD="mypassword" \
  -e "SSH_KEY=$(cat ~/.ssh/container_key.pub)" \
  -e USER_PASSWORD="unsloth2024" \
  -p 8000:8000 -p 2222:22 \
  -v $(pwd)/work:/workspace/work \
  --gpus all \
  unsloth/unsloth

Google Colab

URL: llms-txt#google-colab

Contents:

  • Colab Example Code

To install and run Unsloth on Google Colab, follow the steps below:

If you have never used a Colab notebook, a quick primer on the notebook itself:

  1. Play Button at each "cell". Click on this to run that cell's code. You must not skip any cells and you must run every cell in chronological order. If you encounter errors, simply rerun the cell you did not run. Another option is to click CTRL + ENTER if you don't want to click the play button.
  2. Runtime Button in the top toolbar. You can also use this button and hit "Run all" to run the entire notebook in 1 go. This will skip all the customization steps, but is a good first try.
  3. Connect / Reconnect T4 button. T4 is the free GPU Google is providing. It's quite powerful!

The first installation cell looks like below: Remember to click the PLAY button in the brackets [ ]. We grab our open source Github package, and install some other packages.

Colab Example Code

Unsloth example code to fine-tune gpt-oss-20b:

from unsloth import FastLanguageModel, FastModel
import torch
from trl import SFTTrainer, SFTConfig
from datasets import load_dataset
max_seq_length = 2048 # Supports RoPE Scaling internally, so choose any!

---

## RL Reward Hacking

**URL:** llms-txt#rl-reward-hacking

**Contents:**
- :trophy: Reward Hacking Overview

Learn what is Reward Hacking in Reinforcement Learning and how to counter it.

The ultimate goal of RL is to maximize some reward (say speed, revenue, some metric). But RL can **cheat.** When the RL algorithm learns a trick or exploits something to increase the reward, without actually doing the task at end, this is called "**Reward Hacking**".

It's the reason models learn to modify unit tests to pass coding challenges, and these are critical blockers for real world deployment. Some other good examples are from [Wikipedia](https://en.wikipedia.org/wiki/Reward_hacking).

<div align="center"><figure><img src="https://i.pinimg.com/originals/55/e0/1b/55e01b94a9c5546b61b59ae300811c83.gif" alt="" width="188"><figcaption></figcaption></figure></div>

**Can you counter reward hacking? Yes!** In our [free gpt-oss RL notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/gpt-oss-\(20B\)-GRPO.ipynb) we explore how to counter reward hacking in a code generation setting and showcase tangible solutions to common error modes. We saw the model edit the timing function, outsource to other libraries, cache the results, and outright cheat. After countering, the result is our model generates genuinely optimized matrix multiplication kernels, not clever cheats.

## :trophy: Reward Hacking Overview

Some common examples of reward hacking during RL include:

RL learns to use Numpy, Torch, other libraries, which calls optimized CUDA kernels. We can stop the RL algorithm from calling optimized code by inspecting if the generated code imports other non standard Python libraries.

#### Caching & Cheating

RL learns to cache the result of the output and RL learns to find the actual output by inspecting Python global variables.

We can stop the RL algorithm from using cached data by wiping the cache with a large fake matrix. We also have to benchmark carefully with multiple loops and turns.

RL learns to edit the timing function to make it output 0 time as passed. We can stop the RL algorithm from using global or cached variables by restricting it's `locals` and `globals`. We are also going to use `exec` to create the function, so we have to save the output to an empty dict. We also disallow global variable access via `types.FunctionType(f.__code__, {})`\\

---

## Install & Update

**URL:** llms-txt#install-&-update

Learn to install Unsloth locally or online.

Unsloth works on Linux, Windows, NVIDIA, AMD, Google Colab and more. See our [system requirements](https://docs.unsloth.ai/get-started/beginner-start-here/unsloth-requirements).

**Recommended installation method:**

<table data-view="cards"><thead><tr><th data-type="content-ref"></th><th data-hidden data-card-target data-type="content-ref"></th></tr></thead><tbody><tr><td><a href="install-and-update/pip-install">pip-install</a></td><td><a href="install-and-update/pip-install">pip-install</a></td></tr><tr><td><a href="install-and-update/docker">docker</a></td><td></td></tr><tr><td><a href="install-and-update/windows-installation">windows-installation</a></td><td></td></tr><tr><td><a href="install-and-update/updating">updating</a></td><td><a href="install-and-update/updating">updating</a></td></tr><tr><td><a href="install-and-update/amd">amd</a></td><td></td></tr><tr><td><a href="install-and-update/conda-install">conda-install</a></td><td><a href="install-and-update/conda-install">conda-install</a></td></tr><tr><td><a href="install-and-update/google-colab">google-colab</a></td><td><a href="install-and-update/google-colab">google-colab</a></td></tr></tbody></table>

**Examples:**

Example 1 (unknown):
```unknown
pip install unsloth

Saving to vLLM for deployment

URL: llms-txt#saving-to-vllm-for-deployment

Contents:

  • :computer:Installing vLLM
  • :truck:Deploying vLLM models
  • :fire_engine:vLLM Deployment Server Flags, Engine Arguments & Options

Saving models to 16bit for vLLM deployment and serving

To save to 16bit for vLLM, use:

To merge to 4bit to load on HuggingFace, first call merged_4bit. Then use merged_4bit_forced if you are certain you want to merge to 4bit. I highly discourage you, unless you know what you are going to do with the 4bit model (ie for DPO training for eg or for HuggingFace's online inference engine)

To save just the LoRA adapters, either use:

Or just use our builtin function to do that:

:computer:Installing vLLM

For NVIDIA GPUs, use uv and do:

For AMD GPUs, please use then nightly Docker image: rocm/vllm-dev:nightly

For the nightly branch for NVIDIA GPUs, do:

See https://docs.vllm.ai/en/stable/getting_started/installation for more details

:truck:Deploying vLLM models

After saving your finetune, you can simply do:

:fire_engine:vLLM Deployment Server Flags, Engine Arguments & Options

Some important server flags to use are at #vllm-deployment-server-flags-engine-arguments-and-options

Examples:

Example 1 (python):

model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit")
model.push_to_hub_merged("hf/model", tokenizer, save_method = "merged_16bit", token = "")

Example 2 (python):

model.save_pretrained_merged("model", tokenizer, save_method = "merged_4bit")
model.push_to_hub_merged("hf/model", tokenizer, save_method = "merged_4bit", token = "")

Example 3 (python):

model.save_pretrained("model")
tokenizer.save_pretrained("tokenizer")

Example 4 (python):

model.save_pretrained_merged("model", tokenizer, save_method = "lora")
model.push_to_hub_merged("hf/model", tokenizer, save_method = "lora", token = "")

Generate new key pair

URL: llms-txt#generate-new-key-pair

ssh-keygen -t rsa -b 4096 -f ~/.ssh/container_key


Use the exact same config as QAT (convenient function)

URL: llms-txt#use-the-exact-same-config-as-qat-(convenient-function)

model.save_pretrained_torchao( model, "tokenizer", torchao_config = model._torchao_config.base_config, )


Pip Install

URL: llms-txt#pip-install

Contents:

  • Recommended installation:
  • Uninstall + Reinstall
  • Advanced Pip Installation

To install Unsloth locally via Pip, follow the steps below:

Install with pip (recommended) for the latest pip release:

To install the latest main branch of Unsloth:

If you're installing Unsloth in Jupyter, Colab, or other notebooks, be sure to prefix the command with !. This isn't necessary when using a terminal

{% hint style="info" %} Python 3.13 is now supported! {% endhint %}

Uninstall + Reinstall

If you're still encountering dependency issues with Unsloth, many users have resolved them by forcing uninstalling and reinstalling Unsloth:

Advanced Pip Installation

{% hint style="warning" %} Do NOT use this if you have Conda. {% endhint %}

Pip is a bit more complex since there are dependency issues. The pip command is different for torch 2.2,2.3,2.4,2.5 and CUDA versions.

For other torch versions, we support torch211, torch212, torch220, torch230, torch240 and for CUDA versions, we support cu118 and cu121 and cu124. For Ampere devices (A100, H100, RTX3090) and above, use cu118-ampere or cu121-ampere or cu124-ampere.

For example, if you have torch 2.4 and CUDA 12.1, use:

Another example, if you have torch 2.5 and CUDA 12.4, use:

Or, run the below in a terminal to get the optimal pip installation command:

Or, run the below manually in a Python REPL:

Examples:

Example 1 (bash):

pip install unsloth

Example 2 (bash):

pip uninstall unsloth unsloth_zoo -y && pip install --no-deps git+https://github.com/unslothai/unsloth_zoo.git && pip install --no-deps git+https://github.com/unslothai/unsloth.git

Example 3 (bash):

pip install --upgrade --force-reinstall --no-cache-dir --no-deps git+https://github.com/unslothai/unsloth.git
pip install --upgrade --force-reinstall --no-cache-dir --no-deps git+https://github.com/unslothai/unsloth-zoo.git

Example 4 (bash):

pip install --upgrade pip
pip install "unsloth[cu121-torch240] @ git+https://github.com/unslothai/unsloth.git"