Conversation

@void-mckenzie (Contributor) commented on Mar 16, 2025

While the previous release fixes the vLLM component of issue #2008, the process still errors out for custom models because the on-the-fly bnb_config is not passed to the convert_vllm_to_huggingface method in unsloth_zoo's vllm_utils.py.

NOTE: This PR needs to be merged along with unslothai/unsloth-zoo#79, where vllm_utils.py is updated to handle this additional configuration parsing logic.
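
For context, the shape of the fix is simply to thread the on-the-fly bnb_config into the conversion call. A minimal sketch of the call site in unsloth/models/llama.py (hedged: the surrounding variables come from the existing loading path shown in the traceback below, and the exact merged hunk may differ):

# Sketch of the llama.py call site; the traceback below shows the original three-argument call.
# llm, model_config, dtype and bnb_config are already available at this point in from_pretrained.
_, quant_state_dict = get_vllm_state_dict(llm, config = model_config)
model = convert_vllm_to_huggingface(quant_state_dict, model_config, dtype, bnb_config)
model.vllm_engine   = llm
model.fast_generate = model.vllm_engine.generate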

Code Snippet:

from unsloth import FastLanguageModel
import torch
from dotenv import load_dotenv
import os

load_dotenv()
max_seq_length = 1024 # Can increase for longer reasoning traces
lora_rank = 32 # Larger rank = smarter, but slower

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "void-mckenzie/krikri-sft_compound_instruct",
    max_seq_length = max_seq_length,
    load_in_4bit = True, # False for LoRA 16bit
    fast_inference = True, # Enable vLLM fast inference
    max_lora_rank = lora_rank,
    gpu_memory_utilization = 0.8, # Reduce if out of memory
)
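
For reference, the 4-bit settings Unsloth builds on the fly when load_in_4bit = True (the "vLLM Bitsandbytes config using kwargs" line in the logs below) correspond roughly to a standard transformers BitsAndBytesConfig. A hedged approximation, not the exact object Unsloth constructs internally:

import torch
from transformers import BitsAndBytesConfig

# Approximate equivalent of the kwargs printed in the logs below.
bnb_config = BitsAndBytesConfig(
    load_in_4bit              = True,
    bnb_4bit_compute_dtype    = torch.bfloat16,
    bnb_4bit_quant_type       = "fp4",
    bnb_4bit_use_double_quant = False,
    bnb_4bit_quant_storage    = torch.uint8,
)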

Output before fix:

🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
🦥 Unsloth Zoo will now patch everything to make training faster!
INFO 03-16 00:00:28 __init__.py:207] Automatically detected platform cuda.
==((====))==  Unsloth 2025.3.14: Fast Llama patching. Transformers: 4.49.0. vLLM: 0.7.3.
   \\   /|    NVIDIA GeForce RTX 4090. Num GPUs = 1. Max memory: 23.621 GB. Platform: Linux.
O^O/ \_/ \    Torch: 2.5.1+cu124. CUDA: 8.9. CUDA Toolkit: 12.4. Triton: 3.1.0
\        /    Bfloat16 = TRUE. FA [Xformers = 0.0.28.post3. FA2 = False]
 "-____-"     Free license: https://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
Unsloth: vLLM loading void-mckenzie/krikri-sft_compound_instruct with actual GPU utilization = 74.61%
Unsloth: Your GPU has CUDA compute capability 8.9 with VRAM = 23.62 GB.
Unsloth: Using conservativeness = 1.0. Chunked prefill tokens = 1024. Num Sequences = 160.
Unsloth: vLLM's KV Cache can use up to 2.19 GB. Also swap space = 4 GB.
INFO 03-16 00:00:34 config.py:549] This model supports multiple tasks: {'generate', 'reward', 'score', 'classify', 'embed'}. Defaulting to 'generate'.
Unsloth: vLLM Bitsandbytes config using kwargs = {'load_in_8bit': False, 'load_in_4bit': True, 'bnb_4bit_compute_dtype': 'bfloat16', 'bnb_4bit_quant_storage': 'uint8', 'bnb_4bit_quant_type': 'fp4', 'bnb_4bit_use_double_quant': False, 'llm_int8_enable_fp32_cpu_offload': False, 'llm_int8_has_fp16_weight': False, 'llm_int8_skip_modules': [], 'llm_int8_threshold': 6.0}
INFO 03-16 00:00:34 llm_engine.py:234] Initializing a V0 LLM engine (v0.7.3) with config: model='void-mckenzie/krikri-sft_compound_instruct', speculative_config=None, tokenizer='void-mckenzie/krikri-sft_compound_instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=1024, download_dir=None, load_format=LoadFormat.BITSANDBYTES, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=bitsandbytes, enforce_eager=False, kv_cache_dtype=auto,  device_config=cuda:0, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=void-mckenzie/krikri-sft_compound_instruct, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=False, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":0,"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":160}, use_cached_outputs=False, 
INFO 03-16 00:00:38 cuda.py:229] Using Flash Attention backend.
[W316 00:00:38.381537842 CUDAAllocatorConfig.h:28] Warning: expandable_segments not supported on this platform (function operator())
INFO 03-16 00:00:38 model_runner.py:1110] Starting to load model void-mckenzie/krikri-sft_compound_instruct...
INFO 03-16 00:00:38 loader.py:1089] Loading weights with BitsAndBytes quantization.  May take a while ...
INFO 03-16 00:00:38 weight_utils.py:254] Using model weights format ['*.safetensors']
Loading safetensors checkpoint shards:   0% Completed | 0/4 [00:00<?, ?it/s]

INFO 03-16 00:00:42 model_runner.py:1115] Loading model weights took 5.6977 GB
INFO 03-16 00:00:42 punica_selector.py:18] Using PunicaWrapperGPU.
INFO 03-16 00:00:43 worker.py:267] Memory profiling takes 0.77 seconds
INFO 03-16 00:00:43 worker.py:267] the current vLLM instance can use total_gpu_memory (23.62GiB) x gpu_memory_utilization (0.75) = 17.62GiB
INFO 03-16 00:00:43 worker.py:267] model weights take 5.70GiB; non_torch_memory takes 0.09GiB; PyTorch activation peak memory takes 0.87GiB; the rest of the memory reserved for KV Cache is 10.97GiB.
INFO 03-16 00:00:43 executor_base.py:111] # cuda blocks: 5617, # CPU blocks: 2048
INFO 03-16 00:00:43 executor_base.py:116] Maximum concurrency for 1024 tokens per request: 87.77x
INFO 03-16 00:00:45 model_runner.py:1434] Capturing cudagraphs for decoding. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI. If out-of-memory error occurs during cudagraph capture, consider decreasing `gpu_memory_utilization` or switching to eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
Capturing CUDA graph shapes: 100%|██████████| 23/23 [00:07<00:00,  2.98it/s]INFO 03-16 00:00:52 model_runner.py:1562] Graph capturing finished in 8 secs, took 0.58 GiB
INFO 03-16 00:00:52 llm_engine.py:436] init engine (profile, create kv cache, warmup model) took 10.52 seconds

---------------------------------------------------------------------------
UnboundLocalError                         Traceback (most recent call last)
Cell In[1], line 10
      7 max_seq_length = 1024 # Can increase for longer reasoning traces
      8 lora_rank = 32 # Larger rank = smarter, but slower
---> 10 model, tokenizer = FastLanguageModel.from_pretrained(
     11     model_name = "void-mckenzie/krikri-sft_compound_instruct",
     12     max_seq_length = max_seq_length,
     13     load_in_4bit = True, # False for LoRA 16bit
     14     fast_inference = True, # Enable vLLM fast inference
     15     max_lora_rank = lora_rank,
     16     gpu_memory_utilization = 0.8, # Reduce if out of memory
     17     token='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',
     18 )

File ~/anaconda3/envs/llm/lib/python3.12/site-packages/unsloth/models/loader.py:351, in FastLanguageModel.from_pretrained(model_name, max_seq_length, dtype, load_in_4bit, load_in_8bit, full_finetuning, token, device_map, rope_scaling, fix_tokenizer, trust_remote_code, use_gradient_checkpointing, resize_model_vocab, revision, use_exact_model_name, fast_inference, gpu_memory_utilization, float8_kv_cache, random_state, max_lora_rank, disable_log_stats, *args, **kwargs)
    348     pass
    349 pass
--> 351 model, tokenizer = dispatch_model.from_pretrained(
    352     model_name        = model_name,
    353     max_seq_length    = max_seq_length,
    354     dtype             = _get_dtype(dtype),
    355     load_in_4bit      = load_in_4bit,
    356     token             = token,
    357     device_map        = device_map,
    358     rope_scaling      = rope_scaling,
    359     fix_tokenizer     = fix_tokenizer,
    360     model_patcher     = dispatch_model,
    361     tokenizer_name    = tokenizer_name,
    362     trust_remote_code = trust_remote_code,
    363     revision          = revision if not is_peft else None,
    364 
    365     fast_inference    = fast_inference,
    366     gpu_memory_utilization = gpu_memory_utilization,
    367     float8_kv_cache   = float8_kv_cache,
    368     random_state      = random_state,
    369     max_lora_rank     = max_lora_rank,
    370     disable_log_stats = disable_log_stats,
    371     *args, **kwargs,
    372 )
    374 if resize_model_vocab is not None:
    375     model.resize_token_embeddings(resize_model_vocab)

File ~/anaconda3/envs/llm/lib/python3.12/site-packages/unsloth/models/llama.py:1825, in FastLlamaModel.from_pretrained(model_name, max_seq_length, dtype, load_in_4bit, token, device_map, rope_scaling, fix_tokenizer, model_patcher, tokenizer_name, trust_remote_code, fast_inference, gpu_memory_utilization, float8_kv_cache, random_state, max_lora_rank, disable_log_stats, **kwargs)
   1823 # Convert to HF format
   1824 _, quant_state_dict = get_vllm_state_dict(llm, config = model_config)
-> 1825 model = convert_vllm_to_huggingface(quant_state_dict, model_config, dtype)
   1826 model.vllm_engine = llm
   1827 model.fast_generate = model.vllm_engine.generate

File ~/anaconda3/envs/llm/lib/python3.12/site-packages/torch/utils/_contextlib.py:116, in context_decorator.<locals>.decorate_context(*args, **kwargs)
    113 @functools.wraps(func)
    114 def decorate_context(*args, **kwargs):
    115     with ctx_factory():
--> 116         return func(*args, **kwargs)

File ~/anaconda3/envs/llm/lib/python3.12/site-packages/unsloth_zoo/vllm_utils.py:614, in convert_vllm_to_huggingface(quant_state_dict, config, dtype, bnb_config)
    612 quant_state = quant_state_dict[f"{layer_name}.weight.quant_state"]
    613 n_layers = config.num_hidden_layers
--> 614 layer = Linear4bit(0, 0, device = "cuda:0", bias = has_bias, compute_dtype = compute_dtype, **kwargs)
    615 layer.in_features  = quant_state.shape[1]
    616 layer.out_features = quant_state.shape[0]

UnboundLocalError: cannot access local variable 'compute_dtype' where it is not associated with a value
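
The root cause: inside convert_vllm_to_huggingface, compute_dtype is only assigned on the branch where a bnb_config is available, so for custom models (where the config is built on the fly and never passed down) the Linear4bit construction trips over an unbound local. A minimal sketch of the kind of fallback the companion unsloth_zoo change adds (the helper name and exact precedence here are illustrative assumptions, not the merged code):

import torch

def _resolve_compute_dtype(bnb_config, dtype):
    # Prefer the compute dtype from the on-the-fly BitsAndBytesConfig if one was passed down.
    if bnb_config is not None:
        compute_dtype = getattr(bnb_config, "bnb_4bit_compute_dtype", None)
        if compute_dtype is not None:
            return compute_dtype
    # Otherwise fall back to the dtype requested for the model, then float16.
    return dtype if dtype is not None else torch.float16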

Output after fix:

🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
🦥 Unsloth Zoo will now patch everything to make training faster!
INFO 03-16 00:08:17 __init__.py:207] Automatically detected platform cuda.
==((====))==  Unsloth 2025.3.14: Fast Llama patching. Transformers: 4.49.0. vLLM: 0.7.3.
   \\   /|    NVIDIA GeForce RTX 4090. Num GPUs = 1. Max memory: 23.621 GB. Platform: Linux.
O^O/ \_/ \    Torch: 2.5.1+cu124. CUDA: 8.9. CUDA Toolkit: 12.4. Triton: 3.1.0
\        /    Bfloat16 = TRUE. FA [Xformers = 0.0.28.post3. FA2 = False]
 "-____-"     Free license: https://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
Unsloth: vLLM loading void-mckenzie/krikri-sft_compound_instruct with actual GPU utilization = 75.45%
Unsloth: Your GPU has CUDA compute capability 8.9 with VRAM = 23.62 GB.
Unsloth: Using conservativeness = 1.0. Chunked prefill tokens = 1024. Num Sequences = 160.
Unsloth: vLLM's KV Cache can use up to 2.39 GB. Also swap space = 4 GB.
INFO 03-16 00:08:22 config.py:549] This model supports multiple tasks: {'score', 'classify', 'reward', 'generate', 'embed'}. Defaulting to 'generate'.
Unsloth: vLLM Bitsandbytes config using kwargs = {'load_in_8bit': False, 'load_in_4bit': True, 'bnb_4bit_compute_dtype': 'bfloat16', 'bnb_4bit_quant_storage': 'uint8', 'bnb_4bit_quant_type': 'fp4', 'bnb_4bit_use_double_quant': False, 'llm_int8_enable_fp32_cpu_offload': False, 'llm_int8_has_fp16_weight': False, 'llm_int8_skip_modules': [], 'llm_int8_threshold': 6.0}
INFO 03-16 00:08:22 llm_engine.py:234] Initializing a V0 LLM engine (v0.7.3) with config: model='void-mckenzie/krikri-sft_compound_instruct', speculative_config=None, tokenizer='void-mckenzie/krikri-sft_compound_instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=1024, download_dir=None, load_format=LoadFormat.BITSANDBYTES, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=bitsandbytes, enforce_eager=False, kv_cache_dtype=auto,  device_config=cuda:0, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=void-mckenzie/krikri-sft_compound_instruct, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=False, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":0,"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":160}, use_cached_outputs=False, 
INFO 03-16 00:08:23 cuda.py:229] Using Flash Attention backend.
[W316 00:08:23.516938937 CUDAAllocatorConfig.h:28] Warning: expandable_segments not supported on this platform (function operator())
INFO 03-16 00:08:23 model_runner.py:1110] Starting to load model void-mckenzie/krikri-sft_compound_instruct...
INFO 03-16 00:08:23 loader.py:1089] Loading weights with BitsAndBytes quantization.  May take a while ...
INFO 03-16 00:08:24 weight_utils.py:254] Using model weights format ['*.safetensors']
Loading safetensors checkpoint shards:   0% Completed | 0/4 [00:00<?, ?it/s]

INFO 03-16 00:08:26 model_runner.py:1115] Loading model weights took 5.6977 GB
INFO 03-16 00:08:26 punica_selector.py:18] Using PunicaWrapperGPU.
INFO 03-16 00:08:27 worker.py:267] Memory profiling takes 0.77 seconds
INFO 03-16 00:08:27 worker.py:267] the current vLLM instance can use total_gpu_memory (23.62GiB) x gpu_memory_utilization (0.75) = 17.82GiB
INFO 03-16 00:08:27 worker.py:267] model weights take 5.70GiB; non_torch_memory takes 0.08GiB; PyTorch activation peak memory takes 0.87GiB; the rest of the memory reserved for KV Cache is 11.18GiB.
INFO 03-16 00:08:27 executor_base.py:111] # cuda blocks: 5723, # CPU blocks: 2048
INFO 03-16 00:08:27 executor_base.py:116] Maximum concurrency for 1024 tokens per request: 89.42x
INFO 03-16 00:08:29 model_runner.py:1434] Capturing cudagraphs for decoding. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI. If out-of-memory error occurs during cudagraph capture, consider decreasing `gpu_memory_utilization` or switching to eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
Capturing CUDA graph shapes: 100%|██████████| 23/23 [00:08<00:00,  2.68it/s]INFO 03-16 00:08:38 model_runner.py:1562] Graph capturing finished in 9 secs, took 0.59 GiB
INFO 03-16 00:08:38 llm_engine.py:436] init engine (profile, create kv cache, warmup model) took 11.35 seconds
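
With the fix applied, loading completes and the vLLM engine is attached to the model (model.fast_generate points at model.vllm_engine.generate, as seen in the traceback above). A quick hedged usage sketch, assuming the reproduction snippet above has run and vLLM's standard SamplingParams API:

from vllm import SamplingParams

sampling_params = SamplingParams(temperature = 0.7, max_tokens = 128)
outputs = model.fast_generate(["Hello, how are you?"], sampling_params)
print(outputs[0].outputs[0].text)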

@danielhanchen (Contributor) commented:

Thanks! Will add this to nightly for now!

@danielhanchen merged commit b1ec22d into unslothai:nightly on Mar 16, 2025
danielhanchen added a commit that referenced this pull request Mar 18, 2025
danielhanchen added a commit that referenced this pull request Mar 19, 2025
danielhanchen added a commit that referenced this pull request Mar 19, 2025
danielhanchen added a commit that referenced this pull request Mar 22, 2025
danielhanchen added a commit that referenced this pull request Mar 26, 2025
danielhanchen added a commit that referenced this pull request May 1, 2025