Super Brain configuration

Super brain indexing model

When you add documents to your brain, Elephas translates them into a mathematical representation (an embedding vector). The model you choose determines the quality of the results.

There are three models available:

Small

Small is now the default. It produces 1536-dimensional vectors and is suitable for most professional work.

Machine name: text-embedding-3-small

Large

(Available only when your own OpenAI key is set up)

If you need the highest accuracy when brainstorming with your Super Brain, pick this model. It has a higher vector dimension (3072) and can pick up even small nuances when returning answers.

Machine name: text-embedding-3-large

Ada

This used to be OpenAI's default model and is now deprecated. Use it only with old brains, for example, those created before Feb 2024.

Machine name: text-embedding-ada-002
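As an illustration of this "mathematical format": each model maps text to a fixed-length vector, and the brain retrieves the documents whose vectors point in a direction similar to your question's vector. A minimal sketch of that comparison using cosine similarity, with hypothetical 4-dimensional toy vectors standing in for real 1536-dimensional embeddings:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: 1.0 means identical direction, ~0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical toy embeddings; real models like text-embedding-3-small
# produce 1536-dimensional vectors.
question = [0.9, 0.1, 0.0, 0.2]
doc_about_topic = [0.8, 0.2, 0.1, 0.3]
doc_unrelated = [0.0, 0.1, 0.9, 0.0]

print(cosine_similarity(question, doc_about_topic))  # high (close to 1)
print(cosine_similarity(question, doc_unrelated))    # low (close to 0)
```

Higher-dimensional models (such as Large) leave more room for these vectors to encode fine distinctions, which is why they can pick up smaller nuances.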

💡
Elephas moved to the Small model in Feb 2024.

Context Token Size

When you ask a question against a brain, Elephas internally fetches relevant content from across the documents in that brain, then sends it to the AI along with your question. The amount of information included with each message is controlled by “Context Token Size”. By default, it is set to 2867.

For English text, for example, setting it to 2867 typically results in around 12-15 contexts being included.

If you want to save cost and your answer does not need many excerpts (say, the top 5 excerpts contain the answer), set it to a lower value, such as 500-1000. On the other hand, if you want to include the maximum number of excerpts, for example when generating some sort of summary, set it to a higher value.
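A rough back-of-the-envelope estimate can help you pick a value. Assuming each excerpt averages a few hundred tokens (the 220 below is a hypothetical figure chosen to match the 12-15 contexts observed at the default of 2867; actual excerpt sizes depend on how your documents were chunked), you can estimate how many excerpts fit in a given budget:

```python
def estimate_contexts(token_budget, avg_excerpt_tokens=220):
    # Rough estimate of how many excerpts fit in the context budget.
    # avg_excerpt_tokens is an assumed average, not an Elephas guarantee.
    return token_budget // avg_excerpt_tokens

print(estimate_contexts(2867))  # ~13, within the 12-15 range at the default
print(estimate_contexts(750))   # ~3, a cost-saving setting
```

Working backwards the same way: if you expect the answer to live in the top 5 excerpts, a budget of roughly 5 × 220 ≈ 1100 tokens is plenty.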

You can verify which contexts were included in the context panel in the right sidebar of your chat.

💡
For each brain, set a value according to your use case.