Hi, thanks a lot for this demo repo.
It'd be great if we didn't need to use our OpenAI key for running it.
Could you consider taking a model name as input and passing it to a litellm.completion() call instead of the openai one? It accepts the same format.
That'll allow us to plug in our own LLMs (over 100 options!): https://litellm.vercel.app/docs
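For reference, here's a minimal sketch of the kind of change being requested (hedged: `build_messages` and `chat` are hypothetical helper names, and this assumes the `litellm` package is installed; litellm's `completion()` accepts OpenAI-style chat messages and routes by model name):

```python
def build_messages(prompt: str):
    # OpenAI-style chat payload; litellm accepts the same shape.
    return [{"role": "user", "content": prompt}]

def chat(model: str, prompt: str) -> str:
    # Imported lazily so the module loads even without litellm installed.
    import litellm

    # litellm.completion() mirrors the OpenAI chat-completion format and
    # dispatches to whichever provider the model name implies
    # (e.g. "gpt-3.5-turbo", "claude-3-haiku-20240307", "ollama/llama2").
    response = litellm.completion(model=model, messages=build_messages(prompt))
    return response.choices[0].message.content
```

Callers would then pick the backend purely via the model string, keeping the rest of the demo code unchanged.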