Running Elephas offline with LM Studio (Apple Silicon)
How to Take Elephas Offline: Let's Dive In!
Elephas now offers multiple ways to run AI models offline, giving you complete control over your data privacy and performance. Choose the option that best fits your needs.
NEW: Elephas Built-in Offline Models
The simplest way to go offline! Elephas now comes with pre-integrated offline AI models that require no technical setup. Just select, download with one click, and start using immediately. You need an Apple Silicon Mac (M1 or later) to use this feature.
The primary way to leverage Elephas offline models is through Brain creation.

Perfect for users who want instant offline AI without any technical setup!
💡 For detailed information about Elephas offline models, check out our comprehensive Elephas Inbuilt Offline Models guide.
LM Studio Integration
There are two external apps you can use for offline AI models:
- LM Studio (supports only M1, M2, and M3 Macs)
- Jan.ai (supports both Apple Silicon and Intel Macs)
In this section, we will focus specifically on LM Studio setup and integration.
Why LM Studio?
LM Studio offers a straightforward solution: download AI models to your local system for enhanced data security, bypassing risky unknown endpoints. Note, however, that it is available only for Apple Silicon (M1, M2, M3) Mac users.
Installing LM Studio (M1, M2, and M3)
Visit the LM Studio website and download the app to set it up on your Mac.

Configuring AI models
After installing LM Studio, go to LM Studio → Search, search for the model “Llama-3-8B-Instruct-32k-v0.1-GGUF”, and download it.

There are many AI models available in LM Studio; we suggest using the quantised Llama 3 model from here.
Running Local Server
Select the downloaded model.

Click the “Start Server” button.
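Once the server is running, LM Studio serves an OpenAI-compatible API on your machine, by default at http://localhost:1234/v1. Here is a minimal sketch for checking that the server is up before pointing Elephas at it (the port is LM Studio's default and may differ if you changed it in the server settings):

```python
import json
import urllib.request

def check_lm_studio(base_url="http://localhost:1234/v1"):
    """List the models served by the local LM Studio server, or None if unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/models", timeout=3) as resp:
            data = json.load(resp)
        return [m["id"] for m in data.get("data", [])]
    except OSError:
        # Server not running (or listening on a different port).
        return None

models = check_lm_studio()
print(models if models is not None else "LM Studio server not reachable")
```

If the server is running, this prints the identifiers of the models you loaded; otherwise it reports that nothing is reachable.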

How to connect with Elephas
Now go to Elephas → Preferences → AI Providers → Custom AI, and enter your localhost URL.
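For reference, Elephas talks to LM Studio through its OpenAI-style chat completions endpoint. The sketch below shows the base URL you enter and the shape of a request the local server accepts (the model name is simply the one downloaded earlier; the port is LM Studio's default):

```python
import json

# Base URL to enter in Elephas → Preferences → AI Providers → Custom AI.
BASE_URL = "http://localhost:1234/v1"

# An OpenAI-style chat completion request body.
payload = {
    "model": "Llama-3-8B-Instruct-32k-v0.1-GGUF",  # the model loaded in LM Studio
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise this note in one sentence."},
    ],
    "temperature": 0.7,
}

print(f"POST {BASE_URL}/chat/completions")
print(json.dumps(payload, indent=2))
```

You never need to craft this request yourself; Elephas does it for you once the Custom AI URL is set.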


How to use it in Elephas
Now go to Elephas → Preferences → Model Settings, pick a feature, and select any of the available LM Studio models.


Super Chat
In Super Chat as well, you can select the LM Studio models.

Super Brain
Starting from version 10.x, you can index Super Brain files using offline embedding models. Check out LM Studio’s Text Embeddings documentation.
Make sure to choose the embedding model before starting the server.
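Indexing works through the matching /v1/embeddings endpoint on the same local server. As a sketch, this is the request shape involved (the model identifier below is an example embedding model from LM Studio's catalogue; substitute whichever one you loaded):

```python
import json

BASE_URL = "http://localhost:1234/v1"  # LM Studio's default local server address

# OpenAI-style embeddings request body; "input" may be a string or a list of strings.
payload = {
    "model": "nomic-ai/nomic-embed-text-v1.5-GGUF",  # example; pick your embedding model
    "input": ["First chunk of a Super Brain file.", "Second chunk."],
}

print(f"POST {BASE_URL}/embeddings")
print(json.dumps(payload, indent=2))
```

Each input string comes back as a vector, which is what Super Brain stores for offline search over your files.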

When creating a brain, choose the new local model.

Performance may vary depending on your machine's hardware.
Need help? We're Here for You!
Contact us at support@elephas.app.