Evaluate

POST /api/sre/llm/v1/evaluate
Example request:

curl --request POST \
  --url https://developer.synq.io/api/sre/llm/v1/evaluate \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "outputSchema": {},
  "systemPrompt": "<string>",
  "messages": [
    {
      "content": "<string>",
      "role": "MESSAGE_ROLE_UNSPECIFIED"
    }
  ],
  "modelType": "MODEL_TYPE_UNSPECIFIED"
}
'
Example response (200):

{
  "output": {},
  "metrics": {
    "inputTokens": 123,
    "outputTokens": 123,
    "totalTokens": 123,
    "latencyMs": 123,
    "model": "<string>",
    "cacheWriteTokens": 123,
    "cacheReadTokens": 123
  }
}

Authorizations

Authorization · string · header · required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Body

application/json

Request to run an LLM evaluation with structured output.

outputSchema · object · required

JSON schema defining the structure of the expected output. The LLM will produce output conforming to this schema.
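
As an illustrative sketch, a schema asking the model to classify an incident could look like the following; the property names here are invented for the example, not part of the API:

{
  "type": "object",
  "properties": {
    "severity": { "type": "string", "enum": ["low", "medium", "high"] },
    "summary": { "type": "string" }
  },
  "required": ["severity", "summary"]
}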

systemPrompt · string · required

Main system prompt providing instructions to the LLM. Keep it constant across requests, since it is cached for efficiency.

messages · Message[]

History of messages in the conversation. Must contain at least one message. The last message is used as the final request to the LLM.

Minimum array length: 1
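
A minimal history might look like the sketch below. The concrete message role values are not listed in this reference, so MESSAGE_ROLE_USER and MESSAGE_ROLE_ASSISTANT are assumptions based on the enum naming pattern:

"messages": [
  {
    "content": "Summarize the failed pipeline run from last night.",
    "role": "MESSAGE_ROLE_USER"
  },
  {
    "content": "The run failed on model orders_daily with a unique-key violation.",
    "role": "MESSAGE_ROLE_ASSISTANT"
  },
  {
    "content": "Classify the incident using the provided schema.",
    "role": "MESSAGE_ROLE_USER"
  }
]
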
modelType · enum<string>

Type of model to use for the evaluation. Defaults to MODEL_TYPE_SUMMARY if not specified or set to MODEL_TYPE_UNSPECIFIED.

Available options:
MODEL_TYPE_UNSPECIFIED,
MODEL_TYPE_SUMMARY,
MODEL_TYPE_THINKING
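
For example, a harder reasoning task could select the thinking model; per the default rule above, omitting the field or sending MODEL_TYPE_UNSPECIFIED falls back to MODEL_TYPE_SUMMARY:

"modelType": "MODEL_TYPE_THINKING"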

Response

200 - application/json

Success

Response from the LLM evaluation.

output · object

Structured output from the LLM conforming to the provided schema.

metrics · LlmResponseMetrics

Metrics about the LLM response.
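
Since the response is plain JSON, the metrics can be pulled out with standard tooling. A quick sketch using curl and jq, where request.json is a hypothetical file holding a body like the examples above:

# request.json is assumed to contain a request body like the examples above
curl --request POST \
  --url https://developer.synq.io/api/sre/llm/v1/evaluate \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data @request.json \
  | jq '{model: .metrics.model, totalTokens: .metrics.totalTokens, latencyMs: .metrics.latencyMs}'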