Looks like the updated code requires trl>=0.9.3, which isn't compatible with unsloth #4120

@YifanDengWHU

Description

Reminder

  • I have read the README and searched the existing issues.

System Info

  • llamafactory version: 0.7.2.dev0
  • Platform: Linux-4.18.0-517.el8.x86_64-x86_64-with-glibc2.28
  • Python version: 3.9.0
  • PyTorch version: 2.0.0+cu117 (GPU)
  • Transformers version: 4.41.2
  • Datasets version: 2.19.2
  • Accelerate version: 0.30.1
  • PEFT version: 0.11.1
  • TRL version: 0.9.4
  • GPU type: NVIDIA L40
  • Bitsandbytes version: 0.43.1

Reproduction


git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -e .[metrics]

pip install "unsloth[cu118-torch230] @ git+https://github.com/unslothai/unsloth.git"

Yesterday everything worked well, but today I got this error:

Traceback (most recent call last):
File "/var/lib/condor/execute/slot1/dir_677837/mm_molecule/bin/llamafactory-cli", line 5, in <module>
from llamafactory.cli import main
File "/var/lib/condor/execute/slot1/dir_677837/LLaMA-Factory/src/llamafactory/__init__.py", line 3, in <module>
from .cli import VERSION
File "/var/lib/condor/execute/slot1/dir_677837/LLaMA-Factory/src/llamafactory/cli.py", line 7, in <module>
from . import launcher
File "/var/lib/condor/execute/slot1/dir_677837/LLaMA-Factory/src/llamafactory/launcher.py", line 1, in <module>
from llamafactory.train.tuner import run_exp
File "/var/lib/condor/execute/slot1/dir_677837/LLaMA-Factory/src/llamafactory/train/tuner.py", line 9, in <module>
from ..hparams import get_infer_args, get_train_args
File "/var/lib/condor/execute/slot1/dir_677837/LLaMA-Factory/src/llamafactory/hparams/__init__.py", line 6, in <module>
from .parser import get_eval_args, get_infer_args, get_train_args
File "/var/lib/condor/execute/slot1/dir_677837/LLaMA-Factory/src/llamafactory/hparams/parser.py", line 27, in <module>
check_dependencies()
File "/var/lib/condor/execute/slot1/dir_677837/LLaMA-Factory/src/llamafactory/extras/misc.py", line 68, in check_dependencies
require_version("trl>=0.9.3", "To fix: pip install trl>=0.9.3")
File "/var/lib/condor/execute/slot1/dir_677837/mm_molecule/lib/python3.9/site-packages/transformers/utils/versions.py", line 111, in require_version
_compare_versions(op, got_ver, want_ver, requirement, pkg, hint)
File "/var/lib/condor/execute/slot1/dir_677837/mm_molecule/lib/python3.9/site-packages/transformers/utils/versions.py", line 44, in _compare_versions
raise ImportError(
ImportError: trl>=0.9.3 is required for a normal functioning of this module, but found trl==0.8.6.
To fix: pip install trl>=0.9.3
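The failing check boils down to a simple version comparison. A minimal sketch of that logic (an assumption: it mirrors transformers' require_version, which LLaMA-Factory's misc.py calls; it is not the actual implementation):

```python
# Sketch of a trl version gate, modeled on transformers' require_version.
# Uses packaging, which transformers itself depends on for version parsing.
from packaging import version


def require_trl(installed: str, minimum: str = "0.9.3") -> None:
    """Raise ImportError when the installed trl is older than the minimum."""
    if version.parse(installed) < version.parse(minimum):
        raise ImportError(
            f"trl>={minimum} is required for a normal functioning of this module, "
            f"but found trl=={installed}.\nTo fix: pip install trl>={minimum}"
        )


require_trl("0.9.4")       # passes silently
try:
    require_trl("0.8.6")   # reproduces the error in the traceback above
except ImportError as err:
    print(err)
```

The point is that the check only compares version strings; it does not test whether unsloth actually breaks with a newer trl.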

Even if I install trl==0.9.4 manually, unsloth overrides it again:

Collecting trl<0.9.0,>=0.7.9 (from unsloth[cu118-torch230]@ git+https://github.com/unslothai/unsloth.git)
Downloading trl-0.8.6-py3-none-any.whl.metadata (11 kB)

After removing the pip install "unsloth[cu118-torch230] @ git+https://github.com/unslothai/unsloth.git" step, everything works fine, but that also means I can't use unsloth for training.
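One possible workaround (untested; a sketch only, since the exact extras tag and version bounds here are assumptions): install unsloth first, then force a newer trl on top of it so LLaMA-Factory's check passes:

```shell
# Untested workaround sketch: let unsloth install its pinned trl<0.9.0 first,
# then overwrite trl with a version that satisfies LLaMA-Factory's check.
# --no-deps keeps pip from re-resolving unsloth's other dependencies;
# --force-reinstall replaces the trl 0.8.6 that unsloth pulled in.
pip install "unsloth[cu118-torch230] @ git+https://github.com/unslothai/unsloth.git"
pip install --no-deps --force-reinstall "trl>=0.9.3"
```

Whether unsloth itself runs correctly against trl 0.9.x is a separate question; the upstream pin presumably exists for a reason, so this may only move the failure later.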

Expected behavior

It seems that the updated LLaMA-Factory code adds a trl>=0.9.3 version constraint. Is that necessary?

Others

No response


Labels: solved (This problem has been already solved)
