Q&A

Ask natural language questions about your files. Roset uses Retrieval Augmented Generation (RAG) to find relevant documents via vector search, then generates an answer with source citations.

How It Works

  1. Your question is embedded using OpenAI text-embedding-3-small
  2. Vector search finds the most relevant document chunks
  3. The matched content is sent as context to gpt-4o-mini
  4. You get an answer with citations back to specific files
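The four steps above can be sketched end to end. This is a toy illustration, not Roset's implementation: the `embed` function below is a bag-of-words stand-in for the real call to OpenAI text-embedding-3-small, and `top_k` plays the role of the vector search in step 2.

```python
import math

def embed(text):
    # Toy bag-of-words "embedding" (word -> count). The real step 1
    # calls OpenAI text-embedding-3-small and gets a dense vector back.
    counts = {}
    for word in text.lower().split():
        word = word.strip(".,()?")
        counts[word] = counts.get(word, 0) + 1
    return counts

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k(question, chunks, k=5):
    # Step 2: rank stored chunks by similarity to the question embedding.
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "Payment shall be made within thirty (30) days of the invoice date.",
    "The office is closed on public holidays.",
]
context = top_k("What are the payment terms?", chunks, k=1)
# Step 3 would send `context` to gpt-4o-mini as grounding; step 4
# returns the generated answer together with citations.
```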

Quick Start
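A minimal request can be built from the parameters documented below. Note the base URL and the `/v1/qa` path here are placeholders for illustration; check your Roset deployment for the actual endpoint.

```python
import json
import urllib.request

# Placeholder base URL; substitute your deployment's address.
BASE_URL = "https://api.roset.example"

def build_request(question, space=None, top_k=5, stream=False):
    # Assemble the JSON body from the documented parameters.
    body = {"question": question, "topK": top_k, "stream": stream}
    if space is not None:
        body["space"] = space
    return body

def ask(api_key, question, **kwargs):
    # POST the question. The /v1/qa path is an assumption, not a
    # documented endpoint.
    req = urllib.request.Request(
        BASE_URL + "/v1/qa",
        data=json.dumps(build_request(question, **kwargs)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

body = build_request("What are the payment terms?", space="contracts", top_k=3)
```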

Parameters

| Parameter  | Type    | Default    | Description                           |
| ---------- | ------- | ---------- | ------------------------------------- |
| `question` | string  | required   | The question to ask                   |
| `space`    | string  | all spaces | Scope Q&A to a specific space         |
| `topK`     | number  | 5          | Number of context documents (max 10)  |
| `stream`   | boolean | false      | Stream the response via SSE           |

Response

```json
{
  "answer": "The payment terms are net 30 days from the invoice date...",
  "sources": [
    {
      "fileId": "abc-123",
      "filename": "contract.pdf",
      "snippet": "Payment shall be made within thirty (30) days...",
      "score": 0.89
    }
  ],
  "question": "What are the payment terms?"
}
```

Streaming

When `stream: true`, the response is an SSE stream with three event payloads: `chunk` events carrying answer text, a `sources` event with the citations, and a final `[DONE]` sentinel:

```
data: {"type": "chunk", "content": "The payment"}
data: {"type": "chunk", "content": " terms are"}
data: {"type": "chunk", "content": " net 30 days..."}
data: {"type": "sources", "sources": [...]}
data: [DONE]
```

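A client can consume this stream by concatenating `chunk` events, capturing the `sources` event, and stopping at `[DONE]`. A sketch over the sample events above (any real client would read these lines from the HTTP response body):

```python
import json

def consume_sse(lines):
    # Accumulate "chunk" events into the answer, capture "sources",
    # and stop at the [DONE] sentinel.
    answer_parts, sources = [], []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank lines and comments between events
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        event = json.loads(payload)
        if event["type"] == "chunk":
            answer_parts.append(event["content"])
        elif event["type"] == "sources":
            sources = event["sources"]
    return "".join(answer_parts), sources

stream = [
    'data: {"type": "chunk", "content": "The payment"}',
    'data: {"type": "chunk", "content": " terms are"}',
    'data: {"type": "chunk", "content": " net 30 days..."}',
    'data: {"type": "sources", "sources": [{"filename": "contract.pdf"}]}',
    'data: [DONE]',
]
answer, sources = consume_sse(stream)
```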
Note

Q&A requires an OpenAI API key for both embedding and answer generation. Configure one via the console or PUT /v1/org/provider-keys.

Next Steps