## What is Velixar?

Velixar is a memory layer for AI. It gives your LLM-powered applications persistent, semantic memory, so they remember users, learn preferences, and retrieve relevant context automatically.

### Quickstart

Store your first memory in under 2 minutes.

### API Reference

Full endpoint documentation with examples.

### Python SDK

`pip install velixar`

### JavaScript SDK

`npm install velixar`
## Why Velixar?

LLMs are stateless. Every conversation starts from zero. Velixar fixes that.

| Without Velixar | With Velixar |
|---|---|
| User repeats preferences every session | Preferences recalled automatically |
| No context between conversations | Full conversation history available |
| Generic responses for every user | Personalized responses from memory |
| Manual RAG pipeline management | Semantic search built in |
## How it works

### Store

Send memories to the API: facts, preferences, context, decisions. Each memory is embedded and indexed for semantic search.

### Recall

Query with natural language. Velixar returns the most relevant memories, ranked by semantic similarity.
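The store/recall loop can be sketched with a toy in-memory store. This is not the Velixar SDK: the class name, methods, and ranking (plain word-overlap similarity standing in for vector embeddings) are illustrative assumptions only.

```python
# Illustrative only: a toy stand-in for the store/recall flow.
# Velixar embeds memories with a vector model; this sketch ranks by
# Jaccard word overlap to stay dependency-free and runnable.

def _tokens(text):
    return set(text.lower().split())

class ToyMemoryStore:
    def __init__(self):
        self.memories = []

    def store(self, text):
        # In the real service this would also embed and index the memory.
        self.memories.append(text)

    def recall(self, query, top_k=3):
        # Score every memory against the query, highest similarity first.
        q = _tokens(query)
        scored = [
            (len(q & _tokens(m)) / len(q | _tokens(m)), m)
            for m in self.memories
        ]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [m for score, m in scored[:top_k] if score > 0]

store = ToyMemoryStore()
store.store("User prefers dark mode in all apps")
store.store("User's favorite language is Python")
store.store("Meeting notes: launch moved to Friday")
print(store.recall("what theme does the user prefer?", top_k=1))
# → ['User prefers dark mode in all apps']
```

The point is the shape of the API, not the scoring: write memories as they occur, then ask in natural language and get back only the relevant ones.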
## Key features
- Semantic search — Find memories by meaning, not just keywords
- Tiered storage — Pin critical facts (tier 0), keep session context (tier 1), store general semantic memories (tier 2), and share org-wide knowledge (tier 3)
- Auto-extraction — Automatically extract memories from conversations with deduplication
- Conflict resolution — New information supersedes old, and stale memories are cleaned up
- Multi-tenant — Isolate memories per user, per workspace, or share across an org
- SDKs — Python and JavaScript clients, plus LangChain integration
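One way to picture the tiered-storage feature is recall that always surfaces lower tiers first. The class and merge policy below are assumptions for illustration, not Velixar's actual implementation; only the tier numbering (0 = pinned facts through 3 = org-wide) comes from the feature list above.

```python
# Hypothetical sketch of tier-ordered recall. Tier numbers follow the
# feature list (0 = pinned critical facts ... 3 = org-wide); the policy
# of concatenating tiers in ascending order is an assumption.

from collections import defaultdict

class TieredStore:
    def __init__(self):
        self.tiers = defaultdict(list)  # tier number -> list of memories

    def store(self, text, tier=2):
        self.tiers[tier].append(text)

    def recall_all(self):
        # Lower tiers first, so pinned facts always lead the context.
        results = []
        for tier in sorted(self.tiers):
            results.extend(self.tiers[tier])
        return results

ts = TieredStore()
ts.store("Org policy: no PII in logs", tier=3)
ts.store("User's name is Ada", tier=0)
ts.store("Current topic: database migration", tier=1)
print(ts.recall_all())
# → ["User's name is Ada", 'Current topic: database migration', 'Org policy: no PII in logs']
```

In practice a tiered layout like this lets you cap context size by trimming from the highest tier down, while tier-0 facts survive every trim.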