Secure and Private: On-Premise Invoice Processing with LangChain and Ollama RAG

Ollama is a desktop tool that runs LLMs locally on your machine. This tutorial explains how I implemented a pipeline with LangChain and Ollama for on-premise invoice processing. Running an LLM on-premise provides significant advantages in terms of security and privacy: invoice data never leaves your infrastructure. Ollama works similarly to Docker; you can think of it as Docker for LLMs. You can pull and run multiple LLMs, which allows you to switch between models without changing the RAG pipeline.
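
To make the idea concrete, here is a minimal sketch of such a LangChain + Ollama RAG pipeline. The file path, the model names (mistral, nomic-embed-text), and the choice of PyPDFLoader and Chroma are illustrative assumptions, not the exact setup from this tutorial:

from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.llms import Ollama
from langchain_community.vectorstores import Chroma
from langchain.chains import RetrievalQA

# Load the invoice PDF (path and loader are illustrative; requires pypdf)
docs = PyPDFLoader("invoices/invoice-001.pdf").load()

# Embed and index the invoice locally; no data leaves the machine
vectorstore = Chroma.from_documents(
    docs, OllamaEmbeddings(model="nomic-embed-text")
)

# Any model previously fetched with `ollama pull` can be plugged in here;
# swapping "mistral" for another model leaves the rest of the pipeline intact
llm = Ollama(model="mistral")

# Retrieval-augmented QA over the indexed invoice
qa = RetrievalQA.from_chain_type(llm=llm, retriever=vectorstore.as_retriever())
answer = qa.invoke({"query": "What is the invoice total and the due date?"})
print(answer["result"])

Note how the Docker-like workflow shows up in the single `model` parameter: pulling a different model with Ollama and changing that one string is enough to try another LLM, while the loading, embedding, and retrieval steps stay untouched.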

