An open-source, multi-model AI chat playground built with Next.js App Router. Switch between providers and models, compare outputs side-by-side, and use optional web search and image attachments.
## Features

- Multiple providers: Gemini and OpenRouter (DeepSeek R1, Llama 3.3, Qwen, Mistral, Moonshot, Reka, Sarvam, etc.)
- Selectable model catalog: choose up to 5 models to run
- Per-message web search toggle
- Image attachment support (Gemini)
- Conversation sharing via shareable links
- Clean UI: keyboard submit, streaming-friendly API normalization
## Tech stack

- Next.js 14 (App Router, TypeScript)
- Tailwind CSS
- API routes for provider calls
- Docker containerization support
## Getting started

1. Install dependencies

```bash
npm i
```

2. Configure environment

Copy the example environment file:

```bash
cp .env.example .env
```

Then set the environment variables you plan to use. You can also enter keys at runtime in the app's Settings.
```bash
# OpenRouter (wide catalog of community models)
OPENROUTER_API_KEY=your_openrouter_key

# Google Gemini (Gemini 2.5 Flash/Pro)
GEMINI_API_KEY=your_gemini_key

# Unstable Inference endpoint (custom provider)
```
3. Run the dev server

```bash
npm run dev
# open http://localhost:3000
```

## Environment variables

Set only the variables you need; the rest can be provided per-request from the UI:

- `OPENROUTER_API_KEY`: required for OpenRouter models.
- `GEMINI_API_KEY`: required for Gemini models with images/web search.
- `OLLAMA_URL`: base URL for the Ollama API (e.g., `http://localhost:11434` or `http://host.docker.internal:11434`)
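As a rough illustration of the per-request override, a route handler might resolve a key along these lines (the function and parameter names here are hypothetical, not taken from the actual route code):

```typescript
// Hypothetical sketch (not the actual route code): resolve an API key
// from a per-request override, falling back to a server env variable.
function resolveApiKey(requestKey: string | undefined, envVar: string): string {
  const key = requestKey?.trim() || process.env[envVar];
  if (!key) {
    throw new Error(`Missing API key: set it in Settings or export ${envVar}`);
  }
  return key;
}
```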
## Ollama integration

Open-Fiesta supports local Ollama models. To use Ollama:

1. Configure Ollama
   - Ensure Ollama is running and accessible:

     ```bash
     ollama serve
     ```

   - Make sure Ollama accepts external connections by setting:

     ```bash
     export OLLAMA_HOST=0.0.0.0:11434
     ```

2. Add Ollama models
   - Go to the "Custom Models" section in the app (wrench icon).
   - Add Ollama models by entering the model name (e.g., "llama3", "mistral", "gemma").
   - The system validates that the model exists in your Ollama instance.

3. Docker networking
   - If running Open-Fiesta in Docker, use `http://host.docker.internal:11434` as the Ollama URL.
   - This lets the Docker container reach Ollama running on your host machine.

4. Select and use
   - Select your Ollama models in the model picker.
   - Start chatting with your locally running models.
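The validation step above could look roughly like the following sketch, which queries Ollama's `GET /api/tags` endpoint (the endpoint is real Ollama API; the function names are illustrative and the app's actual implementation may differ):

```typescript
// Hypothetical sketch of the Ollama model check. Ollama's GET /api/tags
// endpoint returns the models pulled locally.
type OllamaTags = { models: { name: string }[] };

// Match either the exact tagged name ("llama3:latest") or the base name ("llama3").
function hasModel(tags: OllamaTags, name: string): boolean {
  return tags.models.some((m) => m.name === name || m.name.split(":")[0] === name);
}

async function validateOllamaModel(baseUrl: string, name: string): Promise<boolean> {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) return false;
  const tags = (await res.json()) as OllamaTags;
  return hasModel(tags, name);
}
```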
## Docker

This project includes comprehensive Docker support for both development and production.

### Development

- Hot reload enabled for instant code changes
- Volume mounting for live code updates
- Includes all development dependencies

### Production

- Multi-stage build for an optimized image size (~100MB)
- Non-root user for sound security practices
- Environment variable configuration support
Available scripts:

- `npm run docker:build`: build the production Docker image
- `npm run docker:run`: run the production container
- `npm run docker:dev`: start the development environment with Docker Compose
- `npm run docker:prod`: start the production environment with Docker Compose
## Project structure

- `app/` – UI and API routes
  - `api/openrouter/route.ts` – normalizes responses across OpenRouter models; strips reasoning and cleans up DeepSeek R1 output to plain text
  - `api/gemini/route.ts`, `api/gemini-pro/route.ts`
  - `shared/[encodedData]/` – shared conversation viewer
- `components/` – UI components (chat box, model selector, etc.)
  - `shared/` – components for shared conversation display
- `lib/` – model catalog and client helpers
  - `sharing/` – conversation sharing utilities
- `Dockerfile` – production container definition
- `Dockerfile.dev` – development container definition
- `docker-compose.yml` – multi-container setup
- `.dockerignore` – files to exclude from Docker builds
Open-Fiesta post-processes DeepSeek R1 outputs to remove reasoning tags and convert Markdown to plain text for readability while preserving content.
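As a rough sketch of this kind of post-processing (assuming R1's reasoning arrives in `<think>` tags; the actual cleanup in `api/openrouter/route.ts` may be more involved):

```typescript
// Hypothetical sketch: DeepSeek R1 typically wraps its chain of thought
// in <think>...</think> tags; removing them leaves the final answer.
function stripReasoning(text: string): string {
  return text.replace(/<think>[\s\S]*?<\/think>/g, "").trim();
}
```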
## Contributing

We welcome contributions of all kinds: bug fixes, features, docs, and examples.
1. Set up
   - Fork this repo and clone your fork.
   - Start the dev server with `npm run dev`.
2. Branching
   - Create a feature branch from `main`: `feat/<short-name>` or `fix/<short-name>`.
3. Coding standards
   - TypeScript, Next.js App Router.
   - Run linters and build locally: `npm run lint`, `npm run build`.
   - Keep changes focused and small. Prefer clear names and minimal dependencies.
4. UI/UX
   - Reuse components in `components/` where possible.
   - Keep props typed and avoid unnecessary state.
5. APIs & models
   - OpenRouter logic lives in `app/api/openrouter/`.
   - Gemini logic lives in `app/api/gemini/` and `app/api/gemini-pro/`.
   - If adding models/providers, update `lib/models.ts` or `lib/customModels.ts` and ensure the UI reflects new options.
6. Docker changes
   - When modifying dependencies, update both `Dockerfile` and `Dockerfile.dev` if needed.
   - Test both development and production Docker builds.
7. Commit & PR
   - Write descriptive commits (imperative mood): `fix: …`, `feat: …`, `docs: …`.
   - Open a PR to `main` with:
     - What/why, screenshots for UI changes, and testing notes.
     - A checklist confirming `npm run lint` and `npm run build` pass.
     - Confirmation that both traditional and Docker setups were tested, if applicable.
     - Links to related issues, if any.
8. Issue reporting
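As a rough illustration of the kind of entry the model catalog in `lib/models.ts` might hold (the `ModelEntry` shape below is entirely hypothetical; check the real file before contributing a new model):

```typescript
// Hypothetical shape of a catalog entry; consult lib/models.ts for the
// real structure before adding a model or provider.
type ModelEntry = {
  id: string; // provider-side model identifier
  label: string; // display name shown in the model picker
  provider: "openrouter" | "gemini" | "ollama";
  supportsImages?: boolean;
};

const example: ModelEntry = {
  id: "deepseek/deepseek-r1",
  label: "DeepSeek R1",
  provider: "openrouter",
};
```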
Thank you for helping improve Open‑Fiesta!
## License

This project is licensed under the MIT License. See LICENSE for details.
## Acknowledgements

- Model access via OpenRouter and Google
