How to run Elephas offline with Ollama

Running the Elephas knowledge assistant 100% offline with Ollama

Elephas offers a privacy-friendly offline mode that lets you run it 100% locally.

Here are the steps:

Install Ollama on your Mac

Visit https://ollama.com/download/mac and click “Download for Mac”.

You will get a DMG file; double-click it to install Ollama, and it will be added to your Applications folder. Once installed, Ollama runs in the background.
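To confirm Ollama is running, you can query its local API from Terminal (Ollama listens on port 11434 by default):

curl http://localhost:11434
# should respond with: Ollama is running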

 

Let’s say you want to try DeepSeek-R1-Distill-Qwen-7B. Open Terminal and run:


ollama run deepseek-r1:7b

Then, if you want to index files into Super Brain, you need to pull an embedding model such as nomic-embed-text:

 
ollama pull nomic-embed-text
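To confirm both models were downloaded, you can list everything Ollama has on disk (ollama list is part of the standard Ollama CLI):

ollama list
# the output should include deepseek-r1:7b and nomic-embed-text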

Install the AI Deck client to manage Ollama models

This step is optional: AI Deck gives you a nice UI for managing Ollama models.

Install AI Deck from the App Store.

You can download, start, and stop models using AI Deck, which is built by the Elephas team.

 
 

Download a chat model

There are many decent local models; some recommended ones are listed below, and you can pull each of them with a single command (see the commands after this list):

  • Llama 3.2
  • Mistral
  • DeepSeek
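
The tags below are the names these models are published under in the Ollama library at the time of writing; check https://ollama.com/library if a pull fails:

ollama pull llama3.2
ollama pull mistral
ollama pull deepseek-r1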

Embedding model

Embedding is the process of converting text into mathematical vectors so that pieces of text can be compared by similarity.

nomic-embed-text-1.5 is a popular local embedding model; the Ollama library hosts it under the name nomic-embed-text. We covered pulling it in the first step of this article:

ollama pull nomic-embed-text
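If you want to sanity-check the embedding model outside Elephas, Ollama exposes a local REST endpoint for embeddings (the example below uses Ollama’s /api/embeddings endpoint; the prompt text is just a placeholder):

curl http://localhost:11434/api/embeddings -d '{"model": "nomic-embed-text", "prompt": "Elephas runs fully offline"}'
# returns a JSON object with an "embedding" array of numbers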

Using a local model in Elephas

Now, go to Elephas Settings > Offline AI and click the Settings icon next to Ollama.

By default, Elephas picks up the local models loaded in Ollama, so you should see the active models listed here.
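
If a model doesn’t show up, you can check from Terminal what Ollama has downloaded and what is currently loaded in memory (both are standard Ollama CLI commands):

ollama list   # models downloaded on disk
ollama ps     # models currently loaded in memory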

 
 

Now you can choose this model as your chat model or text-editing model under Model Settings.

 
 

To use the local embedding model, set up a new brain and choose the downloaded embedding model (nomic-embed-text in our case) in the brain’s settings.

 

Then, when you add files to the brain, Elephas will use that model to index them.

 
💡 You can’t change the indexing (embedding) model of a brain after it’s created, so choose it when you set up the brain.

 