
Commit 6c083a8

update README

1 parent 3ca5efe

2 files changed: +2 additions, −2 deletions


README.md

Lines changed: 1 addition & 1 deletion

@@ -100,7 +100,7 @@ pip install -r requirements.txt
 >>> model = model.eval()
 >>> inputs = tokenizer('北京的景点:故宫、天坛、万里长城等。\n深圳的景点:', return_tensors='pt').input_ids
 >>> inputs = inputs.cuda()
->>> generated_ids = model.generate(inputs, max_new_tokens=64, eos_token_id=tokenizer.eos_token_id)
+>>> generated_ids = model.generate(inputs, max_new_tokens=64, eos_token_id=tokenizer.eos_token_id, repetition_penalty=1.1)
 >>> print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
 ```

README_EN.md

Lines changed: 1 addition & 1 deletion

@@ -101,7 +101,7 @@ The XVERSE-13B model can be loaded for inference using the following code:
 >>> model = model.eval()
 >>> inputs = tokenizer('北京的景点:故宫、天坛、万里长城等。\n深圳的景点:', return_tensors='pt').input_ids
 >>> inputs = inputs.cuda()
->>> generated_ids = model.generate(inputs, max_new_tokens=64, eos_token_id=tokenizer.eos_token_id)
+>>> generated_ids = model.generate(inputs, max_new_tokens=64, eos_token_id=tokenizer.eos_token_id, repetition_penalty=1.1)
 >>> print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
 ```

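Both hunks make the same one-line change: passing repetition_penalty=1.1 to model.generate. In the transformers library this penalty rescales the logits of tokens that were already generated (positive logits are divided by the penalty, negative ones multiplied), which discourages the model from looping on the same tokens. A minimal sketch of that rule, independent of the XVERSE-13B model itself (the function name here is illustrative, not from the repo):

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.1):
    """Rescale logits of already-generated tokens, CTRL-style (as in
    transformers' RepetitionPenaltyLogitsProcessor): positive logits are
    divided by the penalty, negative ones multiplied, so a repeated token
    becomes less likely either way."""
    out = list(logits)
    for tok in set(generated_ids):
        if out[tok] > 0:
            out[tok] /= penalty
        else:
            out[tok] *= penalty
    return out

# Tokens 0 and 1 were already generated; only their logits are rescaled.
scores = apply_repetition_penalty([2.2, -1.0, 0.5], generated_ids=[0, 1])
```

A penalty of 1.0 is a no-op; values slightly above 1.0 (such as the 1.1 added here) are a common default for reducing repetition without distorting the output distribution much.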