@@ -8,6 +8,14 @@ effectively replacing proprietary providers Large Language Models (LLMs) web pla
88- [ OpenAI Playground] ( https://platform.openai.com/playground )
99- Other platforms planned for inclusion in future versions of this plugin.
1010
11+ ## How it works
12+
13+ 1. Set your environment variables to configure the model and its settings
14+ 2. Run `:PrompterSetup`
15+ 3. Edit your prompt
16+ 4. Press `<F12>` to get the LLM completion
17+ 5. Enjoy your prompt engineering!
18+
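The steps above can be sketched as a shell session. The key and model name here are placeholders; the variable names follow the environment setup sections below:

```shell
# Placeholder values; see the environment variables sections below for the full list
export OPENAI_API_KEY="YOUR OPENAI API KEY"
export OPENAI_COMPLETION_MODE="chat"
export OPENAI_MODEL_NAME_CHAT_COMPLETION="gpt-3.5-turbo"

# then: open vim on your prompt file, run :PrompterSetup, and press <F12>
echo "$OPENAI_COMPLETION_MODE"
```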
1119![ alt text] ( screens/screenshot.1.png )
1220
1321
@@ -25,30 +33,35 @@ effectively replacing proprietary providers Large Language Models (LLMs) web pla
2533 support last completion color highlight.
2634
2735
28- ## Backstory
36+ ## 🙄 Backstory
2937
30- The idea emerged as I was writing some LLM prompts, experimenting with some
31- prompt engineering techniques, using a simple "text completion" approach.
32- You write your text prompt and then request a Large Language Model (LLM) completion.
38+ The idea emerged as I was writing LLM prompts, experimenting with some
39+ prompt engineering techniques, using a simple "text completion" approach where
40+ you write your text prompt corpus and then request a Large Language Model (LLM) completion.
3341
3442My initial approach was to utilize the web playgrounds offered by LLM providers.
3543However, I encountered numerous issues especially while interacting
36- with Azure OpenAI web playgrounds. For reasons I do not yet comprehend, the
37- web interaction on the Azure web playground slow down considerably after a
38- certain point. I suspect a bug within the completion boxes.
39- Furthermore, I am not fond of the Azure web interface for the "chat completion" mode.
44+ with Azure OpenAI web playgrounds.
45+
46+ For reasons I do not yet comprehend,
47+ the web interaction on the Azure web playground slows down considerably after a
48+ certain point. I suspect a bug within the completion boxes.
49+ Furthermore, I am not fond of the Azure web interface for the "chat completion" mode.
4050A total mess! Instead, the original OpenAI playground is better implemented,
4151and I did not encounter the aforementioned issues.
52+
4253Nevertheless, both web playgrounds permit only one prompt per browser tab.
4354Therefore, when dealing with multiple active prompts (developing a composite
44- application composed of nested/chained template prompts), you must maintain
45- multiple playgrounds open in distinct tabs.
55+ application composed of nested/chained template prompts),
56+ you must maintain multiple playgrounds open in distinct tabs.
4657When you achieve certain (intermediate) noteworthy outcomes,
4758you must copy all text boxes and save them in versioned files.
59+
4860Undertaking all of this with web playgrounds is a cumbersome and error-prone process.
61+ The final thought was: what if I could run my completions directly inside my vim editor?
4962
5063
51- ## Completion modes
64+ ## ⚠️ Completion modes: ` text ` versus ` chat `
5265 There are two common "completion modes" provided by OpenAI and similar current LLMs:
5366
5467- ** ` text ` completion**
@@ -67,13 +80,13 @@ There are two common "completion modes" foreseen in OpenAI or similar current LL
6780 An example of such a model setting is the ` gpt3.5-turbo ` OpenAI model.
6881 To use chat completion mode, the model must support that mode through a specific API.
6982
70- 💡 Prompter.vim plugin is conceived to work as text completer fast prototyping playground,
83+ ⚠️ The Prompter.vim plugin is conceived as a text-completion fast-prototyping playground,
7184 avoiding the complications of the chat roles.
7285 So a model that works only in chat mode (such as `gpt3.5-turbo`) is behind the scenes "faked"
7386 into a text completion model, simply inserting the prompt text you are editing as the "system" role prompt.
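A minimal sketch of that "faking" in plain Python (no API call is made; the payload shape follows the OpenAI chat format, and the function name is made up for illustration):

```python
# Sketch: wrap a plain text prompt as a one-message chat request,
# using the "system" role as described above (function name is hypothetical)
def text_prompt_as_chat(prompt, model="gpt-3.5-turbo"):
    return {
        "model": model,
        "messages": [{"role": "system", "content": prompt}],
    }

payload = text_prompt_as_chat("Once upon a time")
print(payload["messages"][0]["role"])  # system
```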
7487
7588
76- ## Install
89+ ## 📦 Install
7790
7790 This plugin is made in Python3. Check that your vim installation supports Python3
7992
@@ -88,7 +101,7 @@ Plug 'solyarisoftware/prompter.vim'
88101```
89102
90103
91- ## Environment Setup
104+ ## 📦 Environment Variables Setup
92105
93106### OpenAI Provider
94107 In the example below, you set the secret API key, set the completion mode to `chat`, and specify the model to be used
@@ -99,12 +112,12 @@ In the example here below, you set the secret API key, the completion mode as `c
99112 export OPENAI_API_KEY="YOUR OPENAI API KEY"
100113
101114export OPENAI_COMPLETION_MODE=" chat"
102-
103115export OPENAI_MODEL_NAME_CHAT_COMPLETION=" gpt-3.5-turbo"
104- export OPENAI_MODEL_TEXT_COMPLETION=" text-davinci-003"
116+
117+ # export OPENAI_COMPLETION_MODE="text"
118+ # export OPENAI_MODEL_TEXT_COMPLETION="text-davinci-003"
105119
106120# OPTIONAL SETTINGS
107- # specify the LLM provider. Default is just "openai"
108121export LLM_PROVIDER=" openai"
109122
110123export OPENAI_TEMPERATURE=0.7
@@ -126,50 +139,94 @@ export AZURE_OPENAI_API_KEY="YOUR AZURE OPENAI API KEY"
126139export AZURE_OPENAI_API_ENDPOINT=" YOUR AZURE OPENAI ENDPOINT"
127140
128141export OPENAI_COMPLETION_MODE=" chat"
129-
130142export AZURE_DEPLOYMENT_NAME_CHAT_COMPLETION=" gpt-35-turbo"
131- export AZURE_DEPLOYMENT_NAME_TEXT_COMPLETION=" text-davinci-003"
143+
144+ # export OPENAI_COMPLETION_MODE="text"
145+ # export AZURE_DEPLOYMENT_NAME_TEXT_COMPLETION="text-davinci-003"
132146
133147# OPTIONAL SETTINGS
134- export OPENAI_TEMPERATURE=0.7
135- export OPENAI_MAX_TOKENS=100
136- export OPENAI_STOP=" a: u: "
148+ export OPENAI_TEMPERATURE=0.5
149+ export OPENAI_MAX_TOKENS=1000
150+ export OPENAI_STOP=" a:"
137151```
138152
153+ 💡 A good idea is to keep all the variables above in a hidden file, e.g. `vi ~/.prompter.vim`,
154+ and load it with `source ~/.prompter.vim`.
155+
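As a sketch of that workflow (written to `/tmp` here only to keep the example self-contained; the real file would live in your home directory), sourcing the file makes the settings available to any vim session started from that shell:

```shell
# Hypothetical settings file; variable names as in the sections above
cat > /tmp/prompter.vim.env <<'EOF'
export OPENAI_COMPLETION_MODE="chat"
export OPENAI_TEMPERATURE=0.7
EOF

source /tmp/prompter.vim.env
echo "$OPENAI_TEMPERATURE"
```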
139156
140- ## Commands
157+ ## 👊 Commands
158+ In vim command mode (`:`), these commands are available:
141159
142160### ` PrompterSetup `
143161
144162 When you enter vim, to activate the Prompter playground environment, first run in command mode:
145163``` viml
146- : PrompterSetup
164+ PrompterSetup
147165```
148166 Following the environment settings, if successful, the command prints the model configuration in the status line:
149167```
150- chat completion model: azure/gpt-35-turbo (temperature: 0.7 max_tokens: 100)
168+ Model: azure/gpt-35-turbo completion mode: chat temperature: 0.7 max_tokens: 100
169+ ```
170+ Explanation of values in the status line report:
171+ ```
172+ temperature preset value ───────────────────────────┐
173+ │
174+ max_tokens preset value ──────────┐ │
175+ │ │
176+ ┌─────┐ ┌────────────┐ ┌────┐ ┌─┴─┐ ┌─┴─┐
177+ Model:│azure│/│gpt-35-turbo│ completion mode:│chat│ temperature:│0.7│ max_tokens:│100│
178+ └──┬──┘ └─────┬──────┘ └──┬─┘ └───┘ └───┘
179+ │ │ │
180+ │ │ └─ chat or text, depending on the model
181+ │ │
182+ │ └── name of the Azure deployment
183+ │
184+ └───────────── name of the LLM provider
151185```
152186
153187### ` PrompterComplete `
154188
155189 Edit your prompt in a vim window, and to run the LLM completion just
156190``` viml
157- : PrompterComplete
191+ PrompterComplete
158192```
159193 the status line reports some statistics:
160194```
161195Latency: 1480ms (1.5s) Tokens: 228 (prompt: 167 completion: 61) Throughput: 154 Words: 28 Chars: 176, Lines: 7
162196```
197+ Explanation of values in the status line report:
198+ ```
199+ ┌─ latency in milliseconds and seconds
200+ │
201+ │ ┌───────────────────────────────── total nr. of tokens
202+ │ │
203+ │ │ ┌──────────────────── nr. of tokens in prompt
204+ │ │ │
205+ │ │ │ ┌──── nr. of tokens in completion
206+ │ │ │ │
207+ ┌─┴───────────┐ ┌─┴─┐ ┌─┴─┐ ┌─┴┐ ┌───┐ ┌──┐ ┌───┐ ┌─┐
208+ Latency:│1480ms (1.5s)│Tokens:│228│(prompt:│167│completion:│61│) Throughput:│154│Words:│28│Chars:│176│ Lines:│7│
209+ └─────────────┘ └───┘ └───┘ └──┘ └─┬─┘ └─┬┘ └─┬─┘ └┬┘
210+ │ │ │ │
211+ │ │ │ │
212+ Latency / Tokens ───────────────────────┘ │ │ │
213+ │ │ │
214+ nr. of words ───────────┘ │ │
215+ │ │
216+ nr. of characters ─────────────────────┘ │
217+ │
218+ nr. of lines ────────────────────────────────┘
219+ ```
163220
164- The statistics reports these magnitudes :
221+ The statistics report these values:
165222 - **Latency**: both in milliseconds and in approximate seconds
166223 - **Tokens**: the total token count, with prompt and completion subtotals
167224 - **Throughput**: the Tokens / Latency ratio, in tokens per second
168225 - **Words**: the number of words generated in the completion
169226 - **Chars**: the number of characters in the completion
170227 - **Lines**: the number of lines generated in the completion
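The Throughput figure in the example status line above can be reproduced from its own numbers (note that 228 / 1.48s matches the reported 154, i.e. total tokens divided by latency in seconds):

```python
# Reproducing the Throughput value from the example status line:
# Latency: 1480ms, Tokens: 228, Throughput: 154
latency_s = 1480 / 1000                       # latency in seconds
total_tokens = 228                            # prompt (167) + completion (61)
throughput = round(total_tokens / latency_s)  # tokens per second
print(throughput)  # 154
```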
171228
172- 💡 By default the command is assigned to the function key ` F12 ` .
229+ 🚀 By default the command is assigned to the function key `F12`,
173230 so you can run the completion with the single keystroke `F12`.
174231
175232### ` PrompterInfo `
@@ -207,15 +264,40 @@ Reports the current plugin version, the list of plugin commands, the current mod
207264 let g:stop = ['x:', 'y:', 'z:']
208265 ```
209266
267+
268+ ## Other useful vim settings
269+
270+ - To read all the statistics printed for your completions:
271+ ``` viml
272+ messages
273+ ```
274+
275+ - To enable soft wrap:
276+ ``` viml
277+ set wrap linebreak nolist
278+ ```
279+
280+ - To see the mapping for a particular key, e.g. `F12`:
281+ ``` viml
282+ map <F12>
283+ ```
284+
210285 - You can assign commands like `:PrompterComplete` to any key mapping of your preference, for example:
211286 ``` vim
212287 map <F2> :PrompterComplete<CR>
213288 ```
214289
290+ - To highlight `{placeholder}` marks in your prompt files:
291+ ``` viml
292+ syntax region CustomBraces start=/{/ end=/}/
293+ highlight link CustomBraces Statement
294+ au BufRead,BufNewFile *.{your-file-extension} set syntax=custom_braces
295+ ```
215296
216- ## Dialogues as part of the text prompt
217297
218- 💡 A technique I'm using to prototype dialog prompts, is to insert a dialog turns block
298+ ## 💡 Dialogues as part of the text prompt
299+
300+ A technique I'm using to prototype dialog prompts is to insert a block of dialog turns,
219301 as in the following example, where the dialog block terminates with the "stop sequence" (e.g. `a:`),
220302 triggering the LLM to complete the assistant role:
221303
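As a sketch of the stop-sequence mechanics in plain Python (no API call; the strings and the `u:` stop tag are hypothetical): the model's raw continuation is cut off at the first occurrence of the stop sequence, so the completion never runs into the next dialog turn.

```python
# Hypothetical raw model continuation after a prompt ending in "a:"
raw_continuation = " Sure, I can help you with that.\nu: thanks a lot"

stop_sequence = "u:"  # would stop generation before the next user turn

# The API truncates at the stop sequence; locally that is equivalent to:
completion = raw_continuation.split(stop_sequence)[0].rstrip()
print(completion)
```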
@@ -258,31 +340,6 @@ These vim commands could be useful:
258340 ```
259341
260342
261- ## Other useful vim settings
262-
263- - To read all statistics print of your completions:
264- ``` viml
265- messages
266- ```
267-
268- - Enabling Soft Wrap
269- ``` viml
270- set wrap linebreak nolist
271- ```
272-
273- - How to see what mapping for a particular key, e.g. ` F12 ` :
274- ``` viml
275- map <F12>
276- ```
277-
278- - mark placeholders
279- ``` viml
280- syntax region CustomBraces start=/{/ end=/}/
281- highlight link CustomBraces Statement
282- au BufRead,BufNewFile *.{your-file-extension} set syntax=custom_braces
283- ```
284-
285-
286343## Features to do in future releases
287344
288345- [ ] ** Support template prompts**
@@ -328,20 +385,27 @@ These vim comands could be useful:
328385- [ ] ** Streaming support**
329386 So far streaming completion is not take in consideration.
330387
388+ ## 👏 Acknowledgements
389+ Thank you to David Shapiro for his dissemination work on LLMs and generative AI.
390+ I have followed with enthusiasm especially his LLM prompt engineering live-coding YouTube videos!
331391
332392## Similar projects
333393
334394- [ vim-ai] ( https://github.com/madox2/vim-ai )
335395
396+ ## 🛠 Tested on
397+
336398
337- ## How to contribute
399+ ## ⭐️ Status / How to contribute
400+
401+ This project is a work-in-progress proof-of-concept alpha version!
338402
339- This project is work-in-progress proof-of-concept alfa version.
340403 I'm not a vimscript expert, so any contribution or suggestion is welcome.
341404 For any proposal or issue (bugs, suggestions, etc.), please submit it here on GitHub issues.
342405 You can also contact me via email (giorgio.robino@gmail.com).
343406
344- ** If you like the project, please ⭐️star this repository to show your support! 🙏**
407+
408+ ** 🙏 IF YOU LIKE THE PROJECT, PLEASE ⭐️STAR THIS REPOSITORY TO SHOW YOUR SUPPORT!**
345409
346410
347411## MIT LICENSE