## How it works

### Extraction triggers
Extraction runs on two triggers:
- Session end — `POST /api/session/end` fires extraction on the full conversation
- Periodic — every 10 messages, as a fallback
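The two triggers above can be sketched as a message counter plus a session-end hook. This is an illustrative sketch, not Velixar's internal wiring; the class and method names are hypothetical.

```python
PERIODIC_INTERVAL = 10  # fallback: extract every 10 messages

class ExtractionTriggers:
    """Hypothetical sketch of the two extraction triggers."""

    def __init__(self):
        self.messages = []

    def on_message(self, message: str) -> bool:
        """Periodic trigger: fires on every 10th message as a fallback."""
        self.messages.append(message)
        return len(self.messages) % PERIODIC_INTERVAL == 0

    def on_session_end(self) -> bool:
        """Session-end trigger: POST /api/session/end runs extraction
        over the full conversation (here: any non-empty history)."""
        return len(self.messages) > 0
```

Over a 20-message session, the periodic trigger fires twice (at messages 10 and 20), and the session-end trigger fires once more when the session closes.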
### LLM extracts memories
An LLM analyzes the conversation and pulls out discrete facts, preferences, and decisions with a salience score.
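A minimal sketch of what handling the LLM's structured output might look like. The JSON field names and the `Memory` type here are illustrative assumptions, not Velixar's actual schema.

```python
import json
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    type: str        # e.g. "fact", "preference", "decision"
    salience: float  # 0.0-1.0, how notable the memory is

def parse_extraction(llm_response: str) -> list[Memory]:
    """Parse a JSON array of extracted memories from the LLM response.
    Assumes the model was prompted to return [{text, type, salience}, ...]."""
    items = json.loads(llm_response)
    return [Memory(i["text"], i["type"], float(i["salience"])) for i in items]
```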
### Entity extraction
Entities (people, tools, projects, concepts) and their relationships are extracted from each memory and added to the knowledge graph. This enables graph traversal and structural queries alongside vector search.
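To illustrate why graph traversal complements vector search, here is a minimal in-memory knowledge graph of entities and typed relationships. The structure and names are assumptions for illustration; Velixar's actual storage layer is not shown in this section.

```python
from collections import defaultdict

class KnowledgeGraph:
    """Toy adjacency-set graph: entity -> {(relation, entity), ...}."""

    def __init__(self):
        self.edges = defaultdict(set)

    def add_relation(self, subj: str, relation: str, obj: str):
        self.edges[subj].add((relation, obj))

    def neighbors(self, entity: str) -> set[str]:
        """Structural query: all entities one hop away, regardless of relation."""
        return {obj for _, obj in self.edges[entity]}

g = KnowledgeGraph()
g.add_relation("User", "uses_editor", "Cursor")
g.add_relation("User", "prefers", "TypeScript")
```

A structural query like `g.neighbors("User")` answers "what is connected to the user?" without any embedding lookup — something pure vector search cannot do directly.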
### Dedup & conflict resolution
- Semantic dedup: If a new memory has cosine similarity > 0.85 with an existing one, it’s skipped
- Conflict resolution: If the new memory contradicts an old one, the old memory is deleted and the new one is stored
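The semantic dedup rule can be sketched as a cosine-similarity check against existing memory embeddings. Only the 0.85 threshold comes from the docs above; the function names and embedding shape are illustrative.

```python
import math

SIMILARITY_THRESHOLD = 0.85  # dedup cutoff from the rule above

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def should_store(new_emb: list[float], existing_embs: list[list[float]]) -> bool:
    """Skip the new memory if it is a near-duplicate (> 0.85 cosine
    similarity) of any existing memory."""
    return all(cosine(new_emb, e) <= SIMILARITY_THRESHOLD for e in existing_embs)
```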
### Example
A user says during a chat:

> "Actually, I switched from VS Code to Cursor last week. And I prefer TypeScript over JavaScript now."

Velixar extracts:
| Memory | Type | Salience | Tier |
|---|---|---|---|
| "User's primary editor is Cursor (switched from VS Code)" | preference | 0.85 | 0 |
| "User prefers TypeScript over JavaScript" | preference | 0.82 | 0 |