A modern, user-friendly web interface for interacting with locally installed LLMs via Ollama. This project provides an intuitive chat interface that lets you communicate with various language models running on your local machine.
- 🚀 Clean, modern chat interface
- 💬 Real-time streaming responses
- 🔄 Dynamic model switching
- 🎨 Markdown and code syntax highlighting
- 📱 Responsive design for all devices
- Install Ollama and download your preferred models:

  Install Ollama from https://ollama.ai/ or run the following command:

  ```bash
  curl -fsSL https://ollama.com/install.sh | sh
  ```

  Then pull your desired models, for example:

  ```bash
  ollama pull gemma3
  ```
- Clone and set up the project:

  ```bash
  git clone git@github.com:khokonm/ollama-gui.git
  cd ollama-gui
  npm install
  ```
- Start the development server:

  ```bash
  npm run dev
  ```
- Open http://localhost:3000 in your browser
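Under the hood, model listing comes from Ollama's local REST API. The sketch below shows one way a client can read the documented `/api/tags` endpoint (which returns `{ "models": [{ "name": ... }] }`) to populate a model dropdown; the helper and variable names are illustrative, not this project's actual code.

```typescript
// Sketch: list locally installed models from Ollama's REST API.
// The /api/tags endpoint and its { models: [{ name }] } response shape
// are part of Ollama's documented API; everything else here is an
// illustrative assumption, not code from this repository.

interface OllamaTagsResponse {
  models: { name: string }[];
}

// Pure helper: extract model names from a /api/tags response body.
function extractModelNames(body: OllamaTagsResponse): string[] {
  return body.models.map((m) => m.name);
}

// In the running app this would be fed by a real request, e.g.:
//   const res = await fetch("http://localhost:11434/api/tags");
//   const names = extractModelNames(await res.json());

// Example with a hard-coded response:
const sample: OllamaTagsResponse = {
  models: [{ name: "gemma3" }, { name: "llama3.2" }],
};
console.log(extractModelNames(sample)); // → ["gemma3", "llama3.2"]
```

Keeping the JSON-shaping logic in a pure helper like this makes it easy to unit-test without a running Ollama server.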
- Select your preferred model from the dropdown menu in the header
- Type your message in the input field
- Press Enter or click the Send button to chat with the model
- The model's responses stream in real time, with proper formatting for code and Markdown
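Streaming works because Ollama's `/api/chat` endpoint, when called with `"stream": true`, returns newline-delimited JSON (NDJSON): one JSON object per line, each carrying a partial assistant message. The sketch below shows how such chunks can be accumulated into the full reply; the helper names are illustrative, not this project's actual code.

```typescript
// Sketch: accumulate a streamed chat reply from Ollama's /api/chat
// endpoint. With "stream": true, each NDJSON line looks roughly like
// {"message":{"role":"assistant","content":"..."},"done":false}.
// Helper names are illustrative assumptions, not this repository's code.

interface ChatChunk {
  message?: { role: string; content: string };
  done: boolean;
}

// Pure helper: join the partial contents of a sequence of NDJSON lines.
function accumulateChunks(ndjsonLines: string[]): string {
  let text = "";
  for (const line of ndjsonLines) {
    if (!line.trim()) continue; // skip blank lines between chunks
    const chunk: ChatChunk = JSON.parse(line);
    if (chunk.message) text += chunk.message.content;
  }
  return text;
}

// In the app, the lines would come from a streamed fetch, e.g.:
//   const res = await fetch("http://localhost:11434/api/chat", {
//     method: "POST",
//     body: JSON.stringify({
//       model: "gemma3",
//       messages: [{ role: "user", content: "Hello" }],
//       stream: true,
//     }),
//   });
// ...then read res.body with a ReadableStream reader, split on "\n",
// and append each chunk to the UI as it arrives.

// Example with hard-coded chunks:
const lines = [
  '{"message":{"role":"assistant","content":"Hel"},"done":false}',
  '{"message":{"role":"assistant","content":"lo!"},"done":true}',
];
console.log(accumulateChunks(lines)); // → "Hello!"
```

Rendering each accumulated partial through the Markdown pipeline is what gives the live formatting described above.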
This project is built with:
- Next.js - React framework
- React Markdown - Markdown rendering
- Syntax Highlighter - Code highlighting
Contributions are welcome! Please feel free to submit a Pull Request.
MIT