# local-llm-integration

Here are 35 public repositories matching this topic...

🚀 A powerful Flutter-based AI chat application that lets you run LLMs directly on your mobile device or connect to local model servers. Features offline model execution, Ollama/LLMStudio integration, and a beautiful modern UI. Privacy-focused, cross-platform, and fully open source.

  • Updated Nov 19, 2025
  • Dart
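
The app itself is Dart/Flutter, but the "connect to local model servers" part boils down to calling Ollama's HTTP API. As a minimal sketch in Python (the page's other listed language), assuming Ollama is running on its default port 11434 and a model named `llama3` has already been pulled:

```python
import requests

def chat(prompt: str, model: str = "llama3") -> str:
    """Send one user message to a local Ollama server and return the reply."""
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": model,  # assumed to be pulled already via `ollama pull`
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,  # one complete JSON response instead of a stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

if __name__ == "__main__":
    print(chat("Explain on-device LLM inference in one sentence."))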

An advanced, fully local, and GPU-accelerated RAG pipeline. Features a sophisticated LLM-based preprocessing engine, state-of-the-art Parent Document Retriever with RAG Fusion, and a modular, Hydra-configurable architecture. Built with LangChain, Ollama, and ChromaDB for 100% private, high-performance document Q&A.

  • Updated Aug 11, 2025
  • Python
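
At its core this is a retrieve-then-generate loop. A minimal sketch of that loop, using ChromaDB for retrieval and Ollama's generate endpoint directly (the repo layers LangChain, RAG Fusion, and Hydra configuration on top of the same idea; the collection contents and model name here are made up):

```python
import chromadb
import requests

client = chromadb.Client()  # in-memory instance; a real pipeline persists to disk
docs = client.get_or_create_collection("docs")
docs.add(
    ids=["d1", "d2"],
    documents=[
        "Ollama serves local models over an HTTP API on port 11434.",
        "ChromaDB stores embeddings and supports similarity search.",
    ],
)

question = "How do I query a local model?"
hits = docs.query(query_texts=[question], n_results=2)  # similarity search
context = "\n".join(hits["documents"][0])

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # assumed local model
        "prompt": f"Answer using only this context:\n{context}\n\nQ: {question}",
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["response"])
```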

LLM Router is a service that can be deployed on-premises or in the cloud. It adds a layer between any application and the LLM provider: in real time it controls traffic, distributes load across providers of a given LLM, and analyzes outgoing requests from a security perspective (masking, anonymization, prohibited content).

  • Updated Jan 6, 2026
  • Python
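
The router's two core jobs described above, masking sensitive content before a request leaves the trusted perimeter and distributing load across providers of the same model, fit in a short sketch. The provider URLs and the single email regex are illustrative placeholders, not the project's configuration:

```python
import itertools
import re

import requests

# Hypothetical internal endpoints serving the same model.
PROVIDERS = itertools.cycle([
    "http://llm-a.internal:11434/api/generate",
    "http://llm-b.internal:11434/api/generate",
])

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask(text: str) -> str:
    """Anonymize PII before the prompt is forwarded to a provider."""
    return EMAIL.sub("[EMAIL]", text)

def route(prompt: str, model: str = "llama3") -> str:
    """Forward a masked prompt to the next provider in round-robin order."""
    url = next(PROVIDERS)  # simple load distribution between providers
    resp = requests.post(
        url,
        json={"model": model, "prompt": mask(prompt), "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]
```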
