
Streamlit chatbot with Agentic RAG and Information Extractor

To run the chatbot, install the necessary libraries from requirements.txt.
The solution uses OpenAI and Mistral AI with LlamaIndex for the Agentic RAG, and Google Gemini-1.5-Flash for inferring text from images.
Sign up for OpenAI, Mistral AI, and Google Gemini (Generative AI Studio), and get an API key for each.
Set the environment variables:
export OPENAI_API_KEY=
export MISTRAL_API_KEY=
export GOOGLE_API_KEY=
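Before launching the app, it can help to confirm all three keys are actually set. This small check is not part of the repo; it is a sketch whose key names simply match the export lines above.

```python
# Sanity check (illustrative, not from the repo): verify the three API keys
# named in the export lines above are present in the environment.
import os

REQUIRED_KEYS = ["OPENAI_API_KEY", "MISTRAL_API_KEY", "GOOGLE_API_KEY"]

def missing_keys(env=os.environ):
    """Return the names of any required API keys that are unset or empty."""
    return [k for k in REQUIRED_KEYS if not env.get(k)]
```

Calling missing_keys() at startup and printing the result gives a clearer error than a failed API call later.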

Put your PDF files in a folder named "data".

Run multiExtractImages.py, giving the PDF file path and an output directory for each PDF. This extracts the images from the PDFs and stores them in the output folders.
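For reference, the extraction step above can be sketched with PyMuPDF (`pip install pymupdf`). This is an assumption about how multiExtractImages.py works, not its actual code; the function names and filename scheme are illustrative.

```python
# Hedged sketch of PDF image extraction, assuming PyMuPDF (fitz).
import os

def image_output_path(output_dir, pdf_name, page_num, img_index, ext="png"):
    """Build a predictable output filename (the naming scheme is an assumption)."""
    return os.path.join(output_dir, f"{pdf_name}_p{page_num}_i{img_index}.{ext}")

def extract_images(pdf_path, output_dir):
    """Extract embedded images from one PDF into output_dir."""
    import fitz  # PyMuPDF; imported lazily so the helper above needs no extra install
    os.makedirs(output_dir, exist_ok=True)
    doc = fitz.open(pdf_path)
    pdf_name = os.path.splitext(os.path.basename(pdf_path))[0]
    saved = []
    for page_num, page in enumerate(doc):
        for img_index, img in enumerate(page.get_images(full=True)):
            pix = fitz.Pixmap(doc, img[0])
            if pix.n >= 5:  # CMYK image: convert to RGB before saving
                pix = fitz.Pixmap(fitz.csRGB, pix)
            path = image_output_path(output_dir, pdf_name, page_num, img_index)
            pix.save(path)
            saved.append(path)
    return saved
```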

Edit chatbot.py and make sure the same image output folders are specified on lines 149, 150, and 151: msg = img_ext.get_response(query,"data_output")

Make sure the same data folder (where the PDF files are located) is specified on line 12: agent=multiDocAgent.get_agent("data")
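A common failure mode here is pointing the agent at an empty or wrong folder. A small stdlib-only check (illustrative, not from the repo) can validate the data folder before get_agent is called:

```python
# Illustrative pre-flight check: fail fast if the data folder is missing or
# contains no PDFs, before building the agent.
from pathlib import Path

def find_pdfs(data_dir):
    """Return the PDF files the agent will index, raising early on a bad folder."""
    folder = Path(data_dir)
    if not folder.is_dir():
        raise FileNotFoundError(f"data folder not found: {data_dir}")
    pdfs = sorted(folder.glob("*.pdf"))
    if not pdfs:
        raise ValueError(f"no PDF files in {data_dir}")
    return pdfs
```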

Run the chatbot using: streamlit run chatbot.py
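The overall shape of a Streamlit chat app like chatbot.py can be sketched as follows. Everything here is illustrative: the titles, helper names, and the placeholder where the RAG agent would be called are assumptions, not the repo's actual code.

```python
# Minimal Streamlit chat-loop sketch (illustrative, not the repo's chatbot.py).
def add_turn(history, role, content):
    """Pure helper: append one chat turn to the session history list."""
    history.append({"role": role, "content": content})
    return history

def main():
    import streamlit as st  # imported lazily; launch with `streamlit run this_file.py`
    st.title("Agentic RAG chatbot")
    if "messages" not in st.session_state:
        st.session_state.messages = []
    for msg in st.session_state.messages:          # replay the conversation so far
        with st.chat_message(msg["role"]):
            st.write(msg["content"])
    if query := st.chat_input("Ask about your PDFs"):
        add_turn(st.session_state.messages, "user", query)
        # answer = agent.chat(query)  # hypothetical: the RAG agent would answer here
        answer = "(agent response goes here)"
        add_turn(st.session_state.messages, "assistant", answer)
        st.rerun()

if __name__ == "__main__":
    main()
```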

To test the Agentic RAG:
Use cmd: python testAgent.py
Specify the correct data folder, and input a query.

To test the image-based RAG using Gemini 1.5 Flash:
Use cmd: python testImg.py
Specify the correct image file path on line 57, and input a query.
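The Gemini image-inference step can be sketched with the google-generativeai package (`pip install google-generativeai`) and Pillow. This is an assumption about testImg.py's approach, not its actual code; the prompt wording and function names are invented for illustration, and running it requires a valid GOOGLE_API_KEY.

```python
# Hedged sketch of image Q&A with Gemini 1.5 Flash (illustrative, not testImg.py).
import os

def build_prompt(query):
    """Pure helper: wrap the user query with an instruction (wording is an assumption)."""
    return f"Answer the question using only the image content. Question: {query}"

def ask_gemini(image_path, query):
    """Send the image and query to Gemini; needs GOOGLE_API_KEY to be set."""
    import google.generativeai as genai  # lazy import so build_prompt needs no API key
    from PIL import Image
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")
    response = model.generate_content([Image.open(image_path), build_prompt(query)])
    return response.text
```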

Ref: https://youtu.be/LiVOx0Rp4oI
