
Commit 93d6399

Update README
1 parent e0c30e1 commit 93d6399

File tree: README.md, README_EN.md, README_JA.md

3 files changed: 6 additions, 3 deletions

README.md

Lines changed: 2 additions & 1 deletion

````diff
@@ -95,9 +95,10 @@ pip install -r requirements.txt
 The XVERSE-13B model can be loaded for inference using the following code:
 
 ```python
+>>> import torch
 >>> from transformers import AutoTokenizer, AutoModelForCausalLM
 >>> tokenizer = AutoTokenizer.from_pretrained("xverse/XVERSE-13B")
->>> model = AutoModelForCausalLM.from_pretrained("xverse/XVERSE-13B", trust_remote_code=True).half().cuda()
+>>> model = AutoModelForCausalLM.from_pretrained("xverse/XVERSE-13B", trust_remote_code=True, torch_dtype=torch.float16, device_map='auto')
 >>> model = model.eval()
 >>> inputs = tokenizer('北京的景点:故宫、天坛、万里长城等。\n深圳的景点:', return_tensors='pt').input_ids
 >>> inputs = inputs.cuda()
````

README_EN.md

Lines changed: 2 additions & 1 deletion

````diff
@@ -96,9 +96,10 @@ pip install -r requirements.txt
 The XVERSE-13B model can be loaded for inference using the following code:
 
 ```python
+>>> import torch
 >>> from transformers import AutoTokenizer, AutoModelForCausalLM
 >>> tokenizer = AutoTokenizer.from_pretrained("xverse/XVERSE-13B")
->>> model = AutoModelForCausalLM.from_pretrained("xverse/XVERSE-13B", trust_remote_code=True).half().cuda()
+>>> model = AutoModelForCausalLM.from_pretrained("xverse/XVERSE-13B", trust_remote_code=True, torch_dtype=torch.float16, device_map='auto')
 >>> model = model.eval()
 >>> inputs = tokenizer('北京的景点:故宫、天坛、万里长城等。\n深圳的景点:', return_tensors='pt').input_ids
 >>> inputs = inputs.cuda()
````

README_JA.md

Lines changed: 2 additions & 1 deletion

````diff
@@ -96,9 +96,10 @@ pip install -r requirements.txt
 The XVERSE-13B model can be loaded for inference using the following code:
 
 ```python
+>>> import torch
 >>> from transformers import AutoTokenizer, AutoModelForCausalLM
 >>> tokenizer = AutoTokenizer.from_pretrained("xverse/XVERSE-13B")
->>> model = AutoModelForCausalLM.from_pretrained("xverse/XVERSE-13B", trust_remote_code=True).half().cuda()
+>>> model = AutoModelForCausalLM.from_pretrained("xverse/XVERSE-13B", trust_remote_code=True, torch_dtype=torch.float16, device_map='auto')
 >>> model = model.eval()
 >>> inputs = tokenizer('北京的景点:故宫、天坛、万里长城等。\n深圳的景点:', return_tensors='pt').input_ids
 >>> inputs = inputs.cuda()
````
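The effect of this change can be sketched with plain `torch` (a minimal illustration, not part of the commit): passing `torch_dtype=torch.float16` to `from_pretrained` lets the weights be created in half precision directly, whereas the old `.half()` pattern first materializes float32 tensors and then casts them, so the float32 copies exist transiently.

```python
import torch

# Old pattern: allocate in float32 first, then cast with .half() —
# the float32 tensor exists before the cast, raising peak memory.
w_old = torch.randn(4, 4).half()

# New pattern: allocate directly in float16, no float32 intermediate.
w_new = torch.randn(4, 4, dtype=torch.float16)

print(w_old.dtype, w_new.dtype)  # both torch.float16
```

`device_map='auto'` additionally delegates device placement to the library (via accelerate) instead of the explicit `.cuda()` call, which is why only the input tensor still needs a manual `.cuda()`.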
