
Commit f6a7cbe

API Changes
1 parent 02b4689 commit f6a7cbe

14 files changed

Lines changed: 118 additions & 31 deletions


README.md

Lines changed: 10 additions & 1 deletion
@@ -83,6 +83,15 @@ There are three main ways to use PromptWizard:
 - Use ```promptopt_config.yaml``` to set configurations. For example for GSM8k this [file](demos/gsm8k/configs/promptopt_config.yaml) can be used
 - Use ```.env``` to set environment variables. For GSM8k this [file](demos/gsm8k/.env) can be used
 ```
+USE_OPENAI_API_KEY="XXXX"
+# Replace with True/False based on whether or not to use OPENAI API key
+
+# If the first variable is set to True then fill the following two
+OPENAI_API_KEY="XXXX"
+OPENAI_MODEL_NAME ="XXXX"
+
+# If the first variable is set to False then fill the following three
+
 AZURE_OPENAI_ENDPOINT="XXXXX"
 # Replace with your Azure OpenAI Endpoint

@@ -101,7 +110,7 @@ There are three main ways to use PromptWizard:

 #### Running on GSM8k (AQUARAT/SVAMP)

-- Please note that this code requires access to LLMs via API calling, we use AZURE endpoints for this
+- Please note that this code requires access to LLMs via API calling for which we support AZURE endpoints or OPENAI keys
 - Set the AZURE endpoint configurations in [.env](demos/gsm8k/.env)
 - Follow the steps in [demo.ipynb](demos/gsm8k/demo.ipynb) to download the data, run the prompt optimization and carry out inference.

demos/aquarat/.env

Lines changed: 5 additions & 0 deletions
@@ -1,3 +1,8 @@
+USE_OPENAI_API_KEY="False"
+
+OPENAI_API_KEY=""
+OPENAI_MODEL_NAME =""
+
 OPENAI_API_VERSION=""
 AZURE_OPENAI_ENDPOINT=""
 AZURE_OPENAI_DEPLOYMENT_NAME=""
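The same five lines are prepended to every demo `.env` file. For illustration, a tiny stdlib stand-in for a dotenv parser that handles this `KEY="value"` format (including the stray space in `OPENAI_MODEL_NAME =""`); python-dotenv is the usual tool:

```python
def parse_env(text: str) -> dict:
    """Parse KEY="value" lines like the demo .env files.

    Blank lines and #-comments are skipped; quotes are stripped.
    A sketch for illustration only, not what PromptWizard uses.
    """
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env
```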

demos/aquarat/configs/promptopt_config.yaml

Lines changed: 2 additions & 0 deletions
@@ -38,6 +38,8 @@ answer_format: "At the end, wrap only your final option between <ANS_START> and
 seen_set_size: 25
 # Number of examples to be given for few shots
 few_shot_count: 5
+# Number of synthetic training examples to be generated
+num_train_examples: 20
 # Generate synthetic reasoning
 generate_reasoning: true
 # Generate description of an expert which can solve the task at hand

demos/bbh/.env

Lines changed: 5 additions & 0 deletions
@@ -1,3 +1,8 @@
+USE_OPENAI_API_KEY="False"
+
+OPENAI_API_KEY=""
+OPENAI_MODEL_NAME =""
+
 OPENAI_API_VERSION=""
 AZURE_OPENAI_ENDPOINT=""
 AZURE_OPENAI_DEPLOYMENT_NAME=""

demos/bbh/configs/promptopt_config.yaml

Lines changed: 2 additions & 0 deletions
@@ -38,6 +38,8 @@ answer_format : 'For each input word, present the reasoning followed by the extr
 seen_set_size: 25
 # Number of examples to be given for few shots
 few_shot_count: 5
+# Number of synthetic training examples to be generated
+num_train_examples: 20
 # Generate synthetic reasoning
 generate_reasoning: true
 # Generate description of an expert which can solve the task at hand

demos/gsm8k/.env

Lines changed: 5 additions & 0 deletions
@@ -1,3 +1,8 @@
+USE_OPENAI_API_KEY="False"
+
+OPENAI_API_KEY=""
+OPENAI_MODEL_NAME =""
+
 OPENAI_API_VERSION=""
 AZURE_OPENAI_ENDPOINT=""
 AZURE_OPENAI_DEPLOYMENT_NAME=""

demos/gsm8k/configs/promptopt_config.yaml

Lines changed: 2 additions & 0 deletions
@@ -38,6 +38,8 @@ answer_format: "For each question present the reasoning followed by the correct
 seen_set_size: 25
 # Number of examples to be given for few shots
 few_shot_count: 5
+# Number of synthetic training examples to be generated
+num_train_examples: 20
 # Generate synthetic reasoning
 generate_reasoning: true
 # Generate description of an expert which can solve the task at hand

demos/scenarios/.env

Lines changed: 5 additions & 0 deletions
@@ -1,3 +1,8 @@
+USE_OPENAI_API_KEY="False"
+
+OPENAI_API_KEY=""
+OPENAI_MODEL_NAME =""
+
 OPENAI_API_VERSION=""
 AZURE_OPENAI_ENDPOINT=""
 AZURE_OPENAI_DEPLOYMENT_NAME=""
Lines changed: 48 additions & 16 deletions
@@ -1,21 +1,53 @@
-answer_format: For each question present the reasoning followed by the correct answer.
-base_instruction: Lets think step by step.
-few_shot_count: 5
-generate_expert_identity: true
-generate_intent_keywords: true
-generate_reasoning: true
-max_eval_batches: 6
-min_correct_count: 3
+# Specify one or more prompt refinement technique to be used. If you specify more than one prompt refinement techniques,
+# all these technique would run on same seed data. Result, iterations needed & cost incurred for each of these
+# technique would be logged. And winning technique for each data instance and overall would be logged.
+
+# Supported prompt refinement techniques: Basic, RecursiveEval, MedPrompt
+# Uncomment techniques that you want to use
+############################ Critique Task Description Start ############################
+prompt_technique_name: "critique_n_refine"
+# unique_model_id of model defined in llm_config.yaml
+unique_model_id: gpt-4o
+# Number of iterations for conducting <mutation_rounds> rounds of mutation of task description
+# followed by refinement of instructions
 mutate_refine_iterations: 3
-mutation_rounds: 2
-num_train_examples: 40
-prompt_technique_name: critique_n_refine
-questions_batch_size: 1
+# Number of rounds of mutation to be performed when generating different styles
+mutation_rounds: 3
+# Refine instruction post mutation
 refine_instruction: true
+# Number of iterations for refining task description and in context examples for few-shot
 refine_task_eg_iterations: 3
-seen_set_size: 20
+# Number of variations of prompts to generate in given iteration
 style_variation: 5
-task_description: You are a mathematics expert. You will be given a mathematics problem
-  which you need to solve
+# Number of questions to be asked to LLM in a single batch, during training step
+questions_batch_size: 1
+# Number of batches of questions to correctly answered, for a prompt to be considered as performing good
+min_correct_count: 3
+# Max number of mini-batches on which we should evaluate our prompt
+max_eval_batches: 6
+# Number of top best performing prompts to be considered for next iterations
 top_n: 1
-unique_model_id: gpt-4o
+# Description of task. This will be fed to prompt
+task_description: "You are a mathematics expert. You will be given a mathematics problem which you need to solve"
+# Base instruction, in line with your dataset. This will be fed to prompt
+base_instruction: "Lets think step by step."
+# Instruction for specifying answer format
+answer_format: "For each question present the reasoning followed by the correct answer."
+# Number of samples from dataset, set aside as training data. In every iteration we would be drawing
+# `questions_batch_size` examples from training data with replacement.
+seen_set_size: 25
+# Number of examples to be given for few shots
+few_shot_count: 5
+# Number of synthetic training examples to be generated
+num_train_examples: 20
+# Generate synthetic reasoning
+generate_reasoning: true
+# Generate description of an expert which can solve the task at hand
+generate_expert_identity: true
+# Generate keywords that describe the intent of the task
+generate_intent_keywords: false
+############################ Critique Task Description End ############################
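The rewritten config documents the sampling scheme: `seen_set_size` examples are set aside as training data, and every iteration draws `questions_batch_size` of them with replacement, for at most `max_eval_batches` mini-batches. A minimal sketch of that loop under those stated assumptions (names other than the config keys are made up for illustration):

```python
import random

def draw_training_batches(train_set, questions_batch_size, max_eval_batches, seed=0):
    """Draw up to max_eval_batches mini-batches from the seen set,
    sampling with replacement as the config comments describe.
    Illustrative sketch, not PromptWizard's implementation.
    """
    rng = random.Random(seed)
    return [
        [rng.choice(train_set) for _ in range(questions_batch_size)]
        for _ in range(max_eval_batches)
    ]
```

With the values in this config (`seen_set_size: 25`, `questions_batch_size: 1`, `max_eval_batches: 6`) this yields six single-question batches per evaluation round.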

demos/svamp/.env

Lines changed: 5 additions & 0 deletions
@@ -1,3 +1,8 @@
+USE_OPENAI_API_KEY="False"
+
+OPENAI_API_KEY=""
+OPENAI_MODEL_NAME =""
+
 OPENAI_API_VERSION=""
 AZURE_OPENAI_ENDPOINT=""
 AZURE_OPENAI_DEPLOYMENT_NAME=""

0 commit comments
