Conversation
Let me know your opinion on this:

We could do this. We will have a datapoint from each search that should help us do it easily, I believe.
Regarding this, we compare the performance of the neighbor noises w.r.t. the base noise, right?

yes, that's correct
@willisma I went with 2.

{
  "pretrained_model_name_or_path": "black-forest-labs/FLUX.1-dev",
  "torch_dtype": "bf16",
  "pipeline_call_args": {
    "height": 1024,
    "width": 1024,
    "max_sequence_length": 512,
    "guidance_scale": 3.5,
    "num_inference_steps": 50
  },
  "verifier_args": {
    "name": "gemini",
    "max_new_tokens": 800,
    "choice_of_metric": "overall_score"
  },
  "search_args": {
    "search_method": "zero-order",
    "search_rounds": 4,
    "threshold": 0.95,
    "num_neighbors": 4
  }
}

python main.py --prompt="a tiny astronaut hatching from an egg on the moon" --num_prompts=None --pipeline_config_path=configs/flux.1_dev_zero_order.json

If you could take a look and let me know your comments, that would be helpful.
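For context, the zero-order search this config drives can be sketched roughly as below. This is a minimal, hypothetical illustration, not the actual main.py logic: score_fn stands in for the Gemini verifier's overall_score, and the interpolation used to form neighbors is an assumption about how num_neighbors and threshold are applied.

```python
import numpy as np

def zero_order_search(base_noise, score_fn, num_neighbors=4,
                      search_rounds=4, threshold=0.95, rng=None):
    """Hill-climb in noise space: each round, sample neighbors around the
    current pivot noise and adopt the best-scoring one if it improves."""
    rng = np.random.default_rng(rng)
    pivot = base_noise
    best_score = score_fn(pivot)
    for _ in range(search_rounds):
        # Perturb the pivot toward fresh Gaussian noise; a threshold near 1
        # keeps most of the pivot, i.e. takes small steps (assumed semantics).
        neighbors = [
            threshold * pivot
            + np.sqrt(1.0 - threshold**2) * rng.standard_normal(pivot.shape)
            for _ in range(num_neighbors)
        ]
        scores = [score_fn(n) for n in neighbors]
        i = int(np.argmax(scores))
        # Compare neighbors w.r.t. the current base (pivot) noise.
        if scores[i] > best_score:
            pivot, best_score = neighbors[i], scores[i]
    return pivot, best_score
```

With search_rounds=4 and num_neighbors=4 this costs at most 16 verifier calls on top of the initial one, and the returned score can never be worse than the base noise's score.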
main.py
Outdated

    print("Using the best noise from the previous round.")
    prev_dp = best_datapoint_per_round[previous_round]
    noises = {int(prev_dp["best_noise_seed"]): prev_dp["best_noise"]}
else:
I'm a little confused about this and the should_generate_noise flag; what's the difference between the two?

