Q&A
Ask natural language questions about your files. Roset uses Retrieval Augmented Generation (RAG) to find relevant documents via vector search, then generates an answer with source citations.
How It Works
- Your question is embedded using OpenAI text-embedding-3-small
- Vector search finds the most relevant document chunks
- The matched content is sent as context to gpt-4o-mini
- You get an answer with citations back to specific files
Quick Start
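This page doesn't show the request shape, so here is a minimal TypeScript sketch of a Q&A call. The base URL, the /v1/qa path, and the Bearer auth header are assumptions, not confirmed by these docs — check the API reference for the real values.

```typescript
// Hypothetical sketch — endpoint path and auth scheme are assumptions.
const BASE_URL = "https://api.example.com"; // replace with your Roset instance

interface QaRequest {
  question: string;  // required
  space?: string;    // scope to one space (default: all spaces)
  topK?: number;     // context documents, max 10 (default 5)
  stream?: boolean;  // SSE streaming (default false)
}

// Build the fetch arguments for a Q&A call.
function buildQaRequest(
  apiKey: string,
  body: QaRequest
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  return {
    url: `${BASE_URL}/v1/qa`, // assumed path
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`, // assumed auth scheme
      },
      body: JSON.stringify(body),
    },
  };
}

// Usage (network call commented out so the sketch stays self-contained):
// const { url, init } = buildQaRequest(process.env.ROSET_API_KEY!, {
//   question: "What are the payment terms?",
// });
// const res = await fetch(url, init);
// const data = await res.json();
// console.log(data.answer, data.sources);
```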
Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| question | string | required | The question to ask |
| space | string | all spaces | Scope Q&A to a specific space |
| topK | number | 5 | Number of context documents (max 10) |
| stream | boolean | false | Stream the response via SSE |
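The defaults and limits in the table can be enforced client-side before sending a request. This is a sketch only — clamping topK is an assumption; the server may instead reject out-of-range values.

```typescript
// Mirrors the parameter table above; clamping behavior is an assumption.
interface QaParams {
  question: string;  // required
  space?: string;    // default: all spaces
  topK?: number;     // default 5, max 10
  stream?: boolean;  // default false
}

function normalizeParams(p: QaParams): QaParams {
  if (!p.question || !p.question.trim()) {
    throw new Error("question is required");
  }
  return {
    question: p.question,
    space: p.space,
    topK: Math.min(Math.max(p.topK ?? 5, 1), 10), // clamp to [1, 10]
    stream: p.stream ?? false,
  };
}
```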
Response
```json
{
  "answer": "The payment terms are net 30 days from the invoice date...",
  "sources": [
    {
      "fileId": "abc-123",
      "filename": "contract.pdf",
      "snippet": "Payment shall be made within thirty (30) days...",
      "score": 0.89
    }
  ],
  "question": "What are the payment terms?"
}
```
Streaming
When stream: true, the response is an SSE stream with three event types:
```
data: {"type": "chunk", "content": "The payment"}
data: {"type": "chunk", "content": " terms are"}
data: {"type": "chunk", "content": " net 30 days..."}
data: {"type": "sources", "sources": [...]}
data: [DONE]
```
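The stream above can be consumed line by line. The following TypeScript sketch parses the three event types, assuming the `data:` framing shown here; function names are illustrative, not part of the SDK.

```typescript
// Sketch: parse SSE lines into the three event types shown above.
type QaEvent =
  | { type: "chunk"; content: string }
  | { type: "sources"; sources: unknown[] };

function parseSseLine(line: string): QaEvent | "done" | null {
  if (!line.startsWith("data: ")) return null; // skip blanks and keep-alives
  const payload = line.slice("data: ".length);
  if (payload === "[DONE]") return "done";
  return JSON.parse(payload) as QaEvent;
}

// Accumulate the streamed answer from raw SSE text.
function collectAnswer(sseText: string): string {
  let answer = "";
  for (const line of sseText.split("\n")) {
    const evt = parseSseLine(line);
    if (evt === null || evt === "done") continue;
    if (evt.type === "chunk") answer += evt.content;
  }
  return answer;
}
```

In a real client you would read the response body incrementally (e.g. via a stream reader) rather than buffering the whole text first.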
Note
Q&A requires an OpenAI API key for both embedding and answer generation. Configure one via the console or PUT /v1/org/provider-keys.
Next Steps
- Search — Lower-level search without LLM answer generation
- TypeScript SDK — Full SDK reference