# API Reference

## Query & Search

API endpoints for semantic, keyword, and hybrid search.

### Single Collection Query

`POST /v1/collections/{collection_name}/query`

Perform semantic search against a collection. The query is embedded using the collection's configured embedding model.
Request body:

```json
{
  "query": "What are the main findings about climate change?",
  "top_k": 10,
  "filters": { "author": "Smith" },
  "min_score": 0.5,
  "search_mode": "semantic",
  "rerank": true
}
```

| Field | Type | Required | Default | Constraints |
|---|---|---|---|---|
| query | string | yes | — | Natural language query text |
| top_k | integer | no | Collection default | 1–1,000 |
| filters | object | no | — | Metadata filters (operators) |
| min_score | float | no | Collection default | Minimum similarity score |
| search_mode | string | no | Collection default | semantic, keyword, hybrid |
| rerank | boolean | no | Collection setting | Override reranking |
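The `filters` object accepts Mongo-style operator clauses alongside plain equality matches; `$gte` is the only operator shown in this reference (see the batch example below), so the full operator set is an assumption. A minimal client-side sketch of how such filters evaluate against chunk metadata:

```python
# Illustrative sketch of Mongo-style metadata filter evaluation.
# Only plain equality and the $gte operator (the one operator shown in
# this reference) are implemented; the server's operator set may differ.

def matches(metadata: dict, filters: dict) -> bool:
    """Return True if a chunk's metadata satisfies every filter clause."""
    for field, condition in filters.items():
        value = metadata.get(field)
        if isinstance(condition, dict):
            # Operator form, e.g. {"$gte": 2024}
            for op, operand in condition.items():
                if op == "$gte":
                    if value is None or value < operand:
                        return False
                else:
                    raise ValueError(f"unsupported operator: {op}")
        elif value != condition:
            # Plain form is exact equality, e.g. {"author": "Smith"}
            return False
    return True

print(matches({"author": "Smith", "year": 2025}, {"year": {"$gte": 2024}}))  # True
print(matches({"author": "Jones"}, {"author": "Smith"}))                     # False
```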
Response 200:

```json
{
  "results": [
    {
      "id": "chunk_abc123",
      "text": "The study found significant increases in global temperatures...",
      "score": 0.892,
      "document_id": "660e8400-e29b-41d4-a716-446655440000",
      "chunk_index": 3,
      "metadata": { "author": "Smith", "page": 5 }
    }
  ],
  "query": "What are the main findings about climate change?",
  "collection": "research_papers",
  "total": 2
}
```

Result fields:

| Field | Type | Description |
|---|---|---|
| id | string | Chunk/vector ID |
| text | string | The chunk text content |
| score | float | Similarity score (higher = more relevant) |
| document_id | string | Source document UUID |
| chunk_index | integer | Position within the source document |
| metadata | object | Chunk and document metadata |
Errors: 404 — Collection not found
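The request fields and constraints above can be enforced client-side before sending. A minimal sketch, assuming the field names and limits from the table (the helper itself is hypothetical, not part of the API):

```python
# Hypothetical helper that validates a single-collection query payload
# against the documented constraints: query is required, top_k is 1-1000.
# Optional fields (filters, min_score, search_mode, rerank) pass through.

def build_query_payload(query, top_k=None, **options):
    if not query:
        raise ValueError("query is required")
    payload = {"query": query}
    if top_k is not None:
        if not 1 <= top_k <= 1000:
            raise ValueError("top_k must be between 1 and 1000")
        payload["top_k"] = top_k
    payload.update(options)
    return payload

payload = build_query_payload("climate change findings", top_k=10, rerank=True)
print(payload)
```

The resulting dict is what you would send as the JSON body of the POST request.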
### Multi-Collection Query

`POST /v1/query`

Search across multiple collections. Results are merged and sorted by score.
Request body:

```json
{
  "query": "machine learning",
  "collections": ["docs", "papers"],
  "top_k": 20,
  "search_mode": "hybrid"
}
```

| Field | Type | Required | Description |
|---|---|---|---|
| query | string | yes | Search query |
| collections | string[] | yes | Collection names to search |
| top_k | number | no | Max results per collection |
| filters | object | no | Metadata filters (operators) |
| min_score | number | no | Minimum similarity score |
| search_mode | string | no | semantic, keyword, hybrid |
| rerank | boolean | no | Override collection reranking |
Each result includes a collection field indicating its source.
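The documented merge behavior (combine per-collection results, tag each with its source, sort by score descending) can be sketched as follows; the sample data is illustrative:

```python
# Sketch of the documented multi-collection merge: per-collection result
# lists are combined, each result is tagged with its source collection,
# and the merged list is sorted by score, highest first.

def merge_results(results_by_collection: dict) -> list:
    merged = []
    for collection, results in results_by_collection.items():
        for r in results:
            merged.append({**r, "collection": collection})
    return sorted(merged, key=lambda r: r["score"], reverse=True)

merged = merge_results({
    "docs":   [{"id": "a", "score": 0.91}, {"id": "b", "score": 0.55}],
    "papers": [{"id": "c", "score": 0.78}],
})
print([r["id"] for r in merged])  # ['a', 'c', 'b']
```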
```bash
curl -X POST http://localhost:6100/v1/query \
  -H "Authorization: Bearer $BIGRAG_API_SECRET" \
  -H "Content-Type: application/json" \
  -d '{"query":"machine learning","collections":["docs","papers"],"top_k":20}'
```

### Batch Query

`POST /v1/batch/query`

Run up to 20 independent queries in parallel.
```json
{
  "queries": [
    { "collection": "docs", "query": "authentication", "top_k": 5 },
    { "collection": "papers", "query": "neural networks", "top_k": 10, "search_mode": "hybrid", "filters": { "year": { "$gte": 2024 } } }
  ]
}
```

Each query item supports: collection, query, top_k, filters, min_score, search_mode, and rerank.
Response contains an array of result sets matching the input order.
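One way to respect the 20-query limit on the client side is to split a longer list of query items into multiple valid payloads. A sketch, assuming the payload shape shown above (the chunking helper itself is hypothetical):

```python
# Hypothetical client-side helper that enforces the documented limit of
# 20 queries per /v1/batch/query request by splitting a long list of
# query items into multiple payloads, preserving input order.

MAX_BATCH = 20

def to_batch_payloads(query_items: list) -> list:
    """Split query items into payloads of at most MAX_BATCH queries each."""
    return [
        {"queries": query_items[i:i + MAX_BATCH]}
        for i in range(0, len(query_items), MAX_BATCH)
    ]

items = [{"collection": "docs", "query": f"q{i}", "top_k": 5} for i in range(45)]
payloads = to_batch_payloads(items)
print([len(p["queries"]) for p in payloads])  # [20, 20, 5]
```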
### Collection Analytics

`GET /v1/collections/{name}/analytics`

Query statistics for a collection.
Response 200:

```json
{
  "collection": "docs",
  "period_24h": {
    "query_count": 142,
    "avg_latency_ms": 45.2,
    "avg_score": 0.82,
    "avg_result_count": 8.3
  },
  "period_7d": {
    "query_count": 1203,
    "avg_latency_ms": 48.1,
    "avg_score": 0.79,
    "avg_result_count": 7.9
  },
  "period_30d": {
    "query_count": 4521,
    "avg_latency_ms": 46.7,
    "avg_score": 0.80,
    "avg_result_count": 8.1
  },
  "top_queries": [
    { "query": "authentication flow", "count": 23 }
  ]
}
```
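Because the three windows cover different spans, their raw `query_count` values are not directly comparable. A small post-processing sketch that normalizes each period to a per-day rate (field names follow the response above; the normalization itself is a client-side assumption, not part of the API):

```python
# Illustrative post-processing of the analytics response: normalize each
# period's query_count to queries per day so the windows are comparable.
# Values are taken from the sample response above.

analytics = {
    "period_24h": {"query_count": 142},
    "period_7d":  {"query_count": 1203},
    "period_30d": {"query_count": 4521},
}

PERIOD_DAYS = {"period_24h": 1, "period_7d": 7, "period_30d": 30}

rates = {
    period: analytics[period]["query_count"] / days
    for period, days in PERIOD_DAYS.items()
}
print({p: round(r, 1) for p, r in rates.items()})
# {'period_24h': 142.0, 'period_7d': 171.9, 'period_30d': 150.7}
```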