Introduction
R2R supports RAG with local LLMs through the Ollama library. You may follow the instructions on their official website to install Ollama outside of the R2R Docker.

To run R2R with default local LLM settings, execute `r2r serve --docker --config-name=local_llm`.
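For example, on Linux the Ollama website provides a one-line install script (macOS and Windows users can download an installer from ollama.com instead):

```shell
# Install Ollama on Linux using the official install script from ollama.com
curl -fsSL https://ollama.com/install.sh | sh

# Confirm the CLI is available
ollama --version
```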
Preparing Local LLMs
Next, make sure that you have all the necessary LLMs installed:
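Models can be pulled with the Ollama CLI. The embedding model below is the one named by the default `local_llm` configuration; the chat model is only an illustrative choice, so substitute whichever model your configuration specifies:

```shell
# Embedding model used by the default local_llm configuration
ollama pull mxbai-embed-large

# A chat model for completions (illustrative choice; use the model your config names)
ollama pull llama3

# List installed models to confirm
ollama list
```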
Configuration

R2R uses a TOML configuration file for managing settings, which you can read about here. For local setup, we'll use the default `local_llm` configuration. This can be customized to your needs by setting up a standalone project.
Local Configuration Details
The `local_llm` configuration file (`core/configs/local_llm.toml`) uses `ollama` and the model `mxbai-embed-large` to run embeddings. We have excluded media file parsers as they are not yet supported locally.

We are still working on adding local multimodal RAG features. Your feedback would be appreciated.
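As an illustration, the embedding-related portion of such a file might look like the sketch below. The section and key names here are assumptions for illustration only; consult the actual `core/configs/local_llm.toml` shipped with R2R for the authoritative schema:

```toml
# Hypothetical sketch of a local_llm-style TOML config.
# Key names are assumptions, not the authoritative R2R schema.
[embedding]
provider = "ollama"              # use the local Ollama server for embeddings
base_model = "mxbai-embed-large" # embedding model named in these docs

[completion]
provider = "ollama"              # route chat completions through Ollama as well
```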