Velixar complements traditional RAG by adding personal context on top of your document retrieval.

The pattern

User query
    ├── RAG pipeline → relevant documents
    └── Velixar      → relevant personal memories

        Combined context → LLM → response
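The two lookups are independent, so they can run in parallel rather than back to back. A minimal sketch, assuming a rag_search(query) function from your existing pipeline (the sequential version is spelled out under Implementation below):

# Sketch only: fetch both branches concurrently, since they are independent.
# Assumes a rag_search(query) helper from your existing RAG pipeline; the
# Velixar endpoint and parameters match the implementation below.
from concurrent.futures import ThreadPoolExecutor

import requests

def fetch_memories(user_id: str, query: str) -> list[dict]:
    resp = requests.get(
        "https://api.velixarai.com/v1/memory/search",
        headers={"Authorization": "Bearer vlx_your_key"},
        params={"q": query, "user_id": user_id, "limit": 5},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("memories", [])

def gather_context(user_id: str, query: str, rag_search):
    with ThreadPoolExecutor(max_workers=2) as pool:
        docs = pool.submit(rag_search, query)                     # RAG branch
        memories = pool.submit(fetch_memories, user_id, query)    # Velixar branch
        return docs.result(), memories.result()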

Implementation

import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer(user_id: str, query: str, rag_results: list[str]) -> str:
    # Fetch personal memories relevant to this query
    resp = requests.get(
        "https://api.velixarai.com/v1/memory/search",
        headers={"Authorization": "Bearer vlx_your_key"},
        params={"q": query, "user_id": user_id, "limit": 5},
        timeout=10,
    )
    resp.raise_for_status()
    memories = resp.json().get("memories", [])

    # Format both sources as bullet lists for the prompt
    memory_context = "\n".join(f"- {m['content']}" for m in memories)
    doc_context = "\n".join(f"- {doc}" for doc in rag_results)

    # Send the combined context to the LLM
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "system",
            "content": f"""Answer using these sources:

Documents:
{doc_context}

Personal context about this user:
{memory_context}"""
        }, {
            "role": "user",
            "content": query
        }]
    )
    return response.choices[0].message.content
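
Example call, with made-up document snippets standing in for your RAG pipeline's output:

docs = [
    "Invoices are generated on the 1st of each month.",
    "Enterprise plans include priority support.",
]
print(answer(user_id="user_123", query="When is my next invoice due?", rag_results=docs))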

Why this works

RAG alone                       RAG + Velixar
Same answer for every user      Personalized to each user
No memory of past questions     Remembers what was asked before
Generic document retrieval      Documents + personal context
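
The second row assumes each exchange is written back to memory after answering. A hypothetical sketch; the write endpoint and payload below are assumptions rather than the documented API, so check the memory API reference for the actual call:

import requests

def remember(user_id: str, query: str, reply: str) -> None:
    # Hypothetical write endpoint and payload shape; the real API may differ.
    requests.post(
        "https://api.velixarai.com/v1/memory",
        headers={"Authorization": "Bearer vlx_your_key"},
        json={"user_id": user_id, "content": f"Asked: {query} / Answered: {reply}"},
        timeout=10,
    )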