effectively replacing proprietary providers' Large Language Models (LLMs) web playgrounds:
- [OpenAI Playground](https://platform.openai.com/playground)
- Other platforms are planned for inclusion in future versions of this plugin.

⚠️ prompter.vim is not primarily designed as a code completion tool,
although you can use it for that purpose.
Instead, this plugin aims to be a general-purpose replacement for web text completion playgrounds,
intended for prompt engineers who want to test and debug natural language prompts.

## Usage

1. Install the plugin and set your environment variables to configure the model and settings
2. Run `:PrompterSetup`
3. Edit your prompt
4. Press `<F12>` to get the LLM completion
5. Enjoy your prompt engineering!

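As a purely illustrative sketch of step 1: the configuration happens through environment variables exported before launching vim. Only `OPENAI_API_KEY` below is a real OpenAI variable name; the `PROMPTER_*` names are hypothetical placeholders, so check the plugin documentation for the actual variable names it reads.

```shell
# Hypothetical sketch of step 1 (all names except OPENAI_API_KEY are
# placeholders; see the plugin docs for the real variable names).
export OPENAI_API_KEY="sk-..."         # your provider API key
export PROMPTER_MODEL="gpt-3.5-turbo"  # model to use (hypothetical variable)
export PROMPTER_TEMPERATURE="0.5"      # sampling temperature (hypothetical)
export PROMPTER_MAX_TOKENS="1500"      # completion length limit (hypothetical)
```

After exporting the variables, open vim and run `:PrompterSetup`.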
Undertaking all of this with web playgrounds is a cumbersome and error-prone process.
The final thought was: what if I could run my completions directly inside my vim editor?


## `text` completion or `chat` completion?

There are two common "completion modes" supported by OpenAI and similar current LLMs:

- **`text` completion**

  Completion mode set to `text` means that the LLM completes
  the given context window prompt text with a completion text (text in -> text out).
  An example of such a model is the `text-davinci-003` OpenAI model.
  To use text completion mode, the model must support that mode through a specific API.

- **`chat` completion**

  Completion mode set to `chat` means that the LLM is fine-tuned for chat "roles"
  (the user says, the assistant says, ...).
  For details, please read [this guide](https://platform.openai.com/docs/guides/gpt/chat-completions-api).
  The context window prompt is in fact made of
  - a "system prompt" and
  - a list of "user" and "assistant" messages.

  An example of such a model is the `gpt-3.5-turbo` OpenAI model.
  To use chat completion mode, the model must support that mode through a specific API.

⚠️ The Prompter.vim plugin is conceived to work as a fast prototyping text completion playground,
avoiding the complications of the chat roles.
So a model that works only in chat mode (such as `gpt-3.5-turbo`) is, behind the scenes, simulated
as a text completion model, by inserting the prompt text you are editing as the "system" role prompt.
See also this
[discussion](https://community.openai.com/t/achieving-text-completion-with-gpt-3-5-or-gpt-4-best-practices-using-azure-deployment/321503).
I'm aware that using a chat-based model as a text-based model, as described above,
is not the optimal usage, but it's a compromise between the simplicity of
having a single text completion playground and the complexity of managing chat roles.
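The trick described above can be sketched roughly like this. This is a minimal illustration, not the plugin's actual code, and the function name `text_prompt_to_chat_payload` is made up for this sketch: the whole edited prompt text becomes the single "system" message of a chat-completion request.

```python
# Minimal sketch (NOT the plugin's actual code) of driving a chat-only
# model as a plain text completer: the entire prompt is sent as the
# single "system" role message, with no user/assistant turns to manage.

def text_prompt_to_chat_payload(buffer_text, model="gpt-3.5-turbo",
                                temperature=0.5, max_tokens=1500):
    """Wrap an edited text prompt into a chat-completion request payload."""
    return {
        "model": model,
        "temperature": temperature,
        "max_tokens": max_tokens,
        "messages": [{"role": "system", "content": buffer_text}],
    }

payload = text_prompt_to_chat_payload("Complete this sentence: the sky is")
# The payload can then be passed to a chat completions endpoint, e.g.
# (untested, pre-1.0 openai package style):
#   response = openai.ChatCompletion.create(**payload)
#   completion_text = response["choices"][0]["message"]["content"]
```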


## 📦 Install
Reports the current plugin version, the list of plugin commands, and the current model.
  ```viml
  messages
  ```
Vim will show the statistics of the last completions. For example, if you just ran 3 completions:
```
Model: azure/gpt-35-turbo completion mode: chat temperature: 0.5 max_tokens: 1500 stop: u:
Latency: 961ms (1.0s) Tokens: 616 (prompt: 577 completion: 39) Throughput: 641 Words: 21 Chars: 134
Model: azure/gpt-35-turbo completion mode: chat temperature: 0.5 max_tokens: 1500 stop: u:
Latency: 368ms (0.4s) Tokens: 648 (prompt: 642 completion: 6) Throughput: 1761 Words: 2 Chars: 15
Model: azure/gpt-35-turbo completion mode: chat temperature: 0.5 max_tokens: 1500 stop: u:
Latency: 4227ms (4.2s) Tokens: 775 (prompt: 660 completion: 115) Throughput: 183 Words: 60 Chars: 377, Lines: 5
```
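Reading the statistics above, the Throughput figure appears to be total tokens divided by latency in seconds. This interpretation is mine, inferred from the numbers shown, not taken from the plugin source:

```python
# Inferred interpretation (not plugin code): Throughput looks like
# total tokens / latency in seconds, rounded to the nearest integer.
samples = [
    (961, 616),   # (latency ms, total tokens) -> reported Throughput: 641
    (368, 648),   # -> reported Throughput: 1761
    (4227, 775),  # -> reported Throughput: 183
]
for latency_ms, tokens in samples:
    throughput = round(tokens / (latency_ms / 1000))
    print(throughput)  # prints 641, then 1761, then 183
```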

- Enabling Soft Wrap
  ```viml
  set wrap linebreak
  ```
The highlight could cover all completions, or it could optionally be disabled.

- [ ] **Streaming support**

  So far, streaming completion has not been taken into consideration.

## Similar projects

- [vim-ai](https://github.com/madox2/vim-ai)
  Very similar to prompter.vim, but focused on code completion, allowing small prompts from the command line.
- [llm.nvim](https://github.com/gsuuon/llm.nvim)
  Neovim only. Pretty similar to prompter.vim in concept, but more oriented to code completion.
- [llm.nvim](https://github.com/huggingface/llm.nvim)
  Neovim only. It works with Hugging Face inference APIs.
- [copilot.vim](https://github.com/github/copilot.vim)
  Neovim plugin for GitHub Copilot.


## 👏 Acknowledgements
Thanks to [David Shapiro](https://github.com/daveshap) for his huge dissemination work on LLMs and generative AI.
I have followed with enthusiasm especially his LLM prompt engineering live-coding [YouTube videos](https://www.youtube.com/@4IR.David.Shapiro)!


## ⭐️ Status / How to contribute