Running Elephas offline with LM Studio (Apple Silicon)
How to Run Elephas Offline: Let's Dive In!
There are two apps you can use:
- LM Studio (supports only M1, M2, and M3 Macs)
- Jan.ai (supports both Apple Silicon and Intel Macs)
In this post, we will cover LM Studio.
Why LM Studio
LM Studio offers a straightforward solution: download AI models to your local system for enhanced data security, bypassing risky unknown endpoints.
However, it is available only for Apple Silicon (M1, M2, M3) Mac users.
Installing LM Studio (M1, M2, and M3)
Visit the LM Studio website and download the app to set it up on your Mac.
Configuring AI models
After installing LM Studio on your system, open LM Studio → Search, look up the model “Llama-3-8B-Instruct-32k-v0.1-GGUF”, and download it.
There are many AI models available in LM Studio; we suggest using the quantized Llama 3 model mentioned above.
Running the Local Server
- Select the downloaded model
- Click the “Start Server” button
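Once the server is running, you can sanity-check it before touching Elephas. Here is a minimal sketch in Python, assuming LM Studio's default OpenAI-compatible endpoint at http://localhost:1234/v1 and the model identifier from the step above (yours may differ):

```python
# Minimal check that the LM Studio local server answers a chat request.
# Assumes the default port 1234; adjust the URL if you changed it.
import json
import urllib.request

payload = {
    # Model identifier as shown in LM Studio; substitute your own.
    "model": "Llama-3-8B-Instruct-32k-v0.1-GGUF",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "temperature": 0.7,
}

req = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
    # The reply text lives in the first choice's message content.
    print(body["choices"][0]["message"]["content"])
```

If this prints a reply, the server is up and Elephas can use the same URL.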
How to connect LM Studio with Elephas
Now go to Elephas → Preferences → AI Providers → Custom AI, and enter your localhost URL.
Then tap Refresh Models.
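To see what Refresh Models should pick up, you can list the models the server exposes yourself. A quick sketch, again assuming the default base URL:

```python
# List the models the local server advertises; these are the entries
# Elephas's "Refresh Models" should discover.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:1234/v1/models") as resp:
    data = json.load(resp)

for model in data.get("data", []):
    print(model["id"])
```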
How to use it in Elephas
Now go to Elephas → Preferences → General, where you can pick a feature and select any of the available LM Studio models.
Super Chat
In Super Chat as well, you can select the LM Studio models.
Super Brain
Previously, Super Brain indexing required an internet connection because it depended on our backend AI engine. From version 10.x onward, you can index Super Brain files using offline indexing models. Check out LM Studio's Text Embeddings documentation.
- Make sure to choose the embedding model before starting the server.
- When creating a brain, choose the new local model.

Performance may vary depending on your machine's hardware.
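For the curious, offline indexing relies on the same local server's embeddings endpoint. A minimal sketch, assuming the default base URL and an example embedding model identifier (substitute whichever embedding model you actually loaded in LM Studio):

```python
# Request an embedding vector from the LM Studio local server.
import json
import urllib.request

payload = {
    # Example identifier only; use the embedding model you loaded.
    "model": "nomic-ai/nomic-embed-text-v1.5-GGUF",
    "input": "Elephas indexes Super Brain files with local embeddings.",
}

req = urllib.request.Request(
    "http://localhost:1234/v1/embeddings",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
    vector = body["data"][0]["embedding"]
    print(f"Embedding dimension: {len(vector)}")
```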
Need Help? We're Here for You!
Contact us at support@elephas.app.