Using Local LLMs (Ollama)

Ollama lets you run LLMs on your own machine, so your content stays private and translation works without an internet connection.

Install Ollama

  1. Download and install Ollama from https://ollama.ai
  2. Start Ollama so the local service is running in the background
  3. Pull a model, for example:
```bash
ollama pull llama3
```
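
To confirm that the service is running and the model downloaded correctly, you can query Ollama directly. A minimal check, assuming a default install where Ollama listens on localhost port 11434:

```bash
# List locally available models; llama3 should appear after the pull
ollama list

# The HTTP API should also respond on the default port (11434)
curl http://localhost:11434/api/tags
```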

Use in Supervertaler

Once Ollama is installed and running, Supervertaler can use it as a provider.
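
Clients such as Supervertaler typically talk to Ollama through its local HTTP API. To see what that interaction looks like, and to sanity-check the setup independently of Supervertaler, you can send a prompt to the API yourself. A minimal sketch, assuming the default endpoint and the llama3 model pulled above:

```bash
# Send a single translation prompt to the local Ollama API.
# "stream": false returns the whole response as one JSON object.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Translate to Dutch: The invoice is attached.",
  "stream": false
}'
```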

INFO

Local models vary widely in quality. For best results, test a few models on your typical content before settling on one.
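
A quick way to compare candidates is to run the same representative sentence through each model from the command line. A minimal sketch; llama3 and mistral are example model names, so substitute whichever models you have pulled:

```bash
# Run one representative sentence through each candidate model.
# llama3 and mistral are placeholders for models you have pulled.
for model in llama3 mistral; do
  echo "=== $model ==="
  ollama run "$model" "Translate to German: Please review the attached contract."
done
```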