
🚀 Generative AI & RAG with LangChain 1.x

This repository contains end-to-end implementations of Generative AI and Retrieval-Augmented Generation (RAG) systems built using LangChain 1.x, following the modern runnable-based pipeline approach instead of deprecated chain abstractions.
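To illustrate the runnable idea (this is a conceptual sketch, not the repository's actual code): LangChain 1.x composes pipeline steps with the `|` operator. The pattern can be mimicked in plain Python with a minimal `Runnable` class, where `prompt`, `model`, and `parser` below are hypothetical stand-ins:

```python
# Minimal sketch of runnable-style composition (illustrative only;
# LangChain's real Runnable interface also supports batch, stream, async, etc.).
class Runnable:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # piping two runnables yields a new runnable that chains them
        return Runnable(lambda value: other.invoke(self.invoke(value)))

# stand-ins for a prompt template, a chat model, and an output parser
prompt = Runnable(lambda topic: f"Tell me a fact about {topic}.")
model = Runnable(lambda text: {"content": text.upper()})
parser = Runnable(lambda msg: msg["content"])

chain = prompt | model | parser
print(chain.invoke("RAG"))  # TELL ME A FACT ABOUT RAG.
```

The pipe operator is what replaces the deprecated chain classes: each step is an independent, testable unit, and the composed pipeline is itself a runnable.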

The project demonstrates a complete real-world GenAI workflow — from data ingestion to embeddings, vector stores, retrievers, and context-aware LLM responses, using both OpenAI and open-source models (Ollama, HuggingFace).


🧠 What You Will Learn

  • Modern LangChain 1.x runnable pipelines
  • Retrieval-Augmented Generation (RAG) from scratch
  • Vector similarity search using FAISS & ChromaDB
  • Context-aware LLM generation
  • Working with both cloud (OpenAI) and local (Ollama) models
  • Debuggable, production-ready GenAI architecture
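The retrieval step at the heart of RAG can be sketched without any framework. In this toy example, a character-frequency "embedding" stands in for a real embedding model (OpenAI, Ollama, or HuggingFace), and the documents are invented:

```python
import math

def embed(text):
    # Toy embedding: normalized character-frequency vector.
    # A real system would call an embedding model instead.
    vocab = "abcdefghijklmnopqrstuvwxyz"
    counts = [text.lower().count(ch) for ch in vocab]
    norm = math.sqrt(sum(c * c for c in counts)) or 1.0
    return [c / norm for c in counts]

def cosine(u, v):
    # vectors are already unit-normalized, so the dot product is the cosine
    return sum(a * b for a, b in zip(u, v))

docs = [
    "FAISS performs fast vector similarity search.",
    "Streamlit builds simple chat interfaces.",
    "Ollama runs local large language models.",
]
index = [(doc, embed(doc)) for doc in docs]  # an in-memory "vector store"

def retrieve(query, k=1):
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# the retrieved document becomes the context injected into the LLM prompt
print(retrieve("similarity search with vectors"))
```

FAISS and ChromaDB do exactly this at scale, replacing the linear scan with approximate nearest-neighbor indexes.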

🛠️ Tech Stack

  • LangChain 1.x
  • OpenAI (GPT-4 / GPT-4o-mini)
  • Ollama (Local LLMs & Embeddings)
  • FAISS & ChromaDB (Vector Stores)
  • HuggingFace Embeddings
  • Python
  • Streamlit (Chat UI)
  • LangSmith (Tracing & Observability)

✨ Key Highlights

  • ✅ Uses pure LangChain 1.x runnable pipelines
  • ❌ No deprecated langchain.chains APIs
  • ✅ Explicit retrieval and context injection
  • ✅ Supports cloud + local LLMs
  • ✅ Clean, debuggable, production-style code
  • ✅ Streamlit-based chatbot interface
  • ✅ LangSmith tracing for observability
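LangSmith tracing is typically switched on through environment variables rather than code changes; a minimal configuration sketch (the project name is a placeholder):

```shell
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="<your-langsmith-api-key>"
export LANGCHAIN_PROJECT="llm-projects"   # placeholder project name
```

With these set, runnable invocations are traced automatically and appear in the LangSmith dashboard under the configured project.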

📌 Author

Nikhil Shukla
Generative AI | LangChain | RAG | LLM Engineering
