Chat Completion Object
The Chat Completions API returns two object types: non-streaming responses return a ChatCompletion object, and streaming responses return ChatCompletionChunk objects.
Chat Completion Object
The complete response object returned by non-streaming requests.
```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1709123456,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 15,
    "total_tokens": 27
  },
  "system_fingerprint": "fp_abc123"
}
```
Field Descriptions
| Field | Type | Description |
|---|---|---|
| `id` | string | Unique identifier for this completion |
| `object` | string | Always `chat.completion` |
| `created` | integer | Unix timestamp of creation time |
| `model` | string | The model name actually used |
| `choices` | array | List of completion results; length is determined by the `n` request parameter |
| `choices[].index` | integer | Index of the result in the list |
| `choices[].message` | object | The message generated by the model |
| `choices[].message.role` | string | Always `assistant` |
| `choices[].message.content` | string \| null | Text content of the message |
| `choices[].message.tool_calls` | array \| null | List of tools the model requests to call |
| `choices[].finish_reason` | string | Stop reason: `stop`, `length`, `tool_calls`, or `content_filter` |
| `usage` | object | Token usage statistics |
| `usage.prompt_tokens` | integer | Number of input tokens |
| `usage.completion_tokens` | integer | Number of output tokens |
| `usage.total_tokens` | integer | Total number of tokens |
| `system_fingerprint` | string \| null | System fingerprint |
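As a quick sketch of reading these fields, the following parses the example response above with Python's standard library (the raw JSON string stands in for an actual HTTP response body):

```python
import json

# Raw response body matching the non-streaming example in this section
raw = """{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1709123456,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Hello! How can I help you?"},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 12, "completion_tokens": 15, "total_tokens": 27},
  "system_fingerprint": "fp_abc123"
}"""

completion = json.loads(raw)

# The assistant's reply lives in choices[0].message.content
reply = completion["choices"][0]["message"]["content"]
finish = completion["choices"][0]["finish_reason"]
total = completion["usage"]["total_tokens"]
```

When `n > 1` was requested, each result appears as a separate entry in `choices`, so production code should index or iterate `choices` rather than assume a single element.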
Chat Completion Chunk Object (Streaming)
During streaming requests, the server returns ChatCompletionChunk objects in SSE (Server-Sent Events) format, chunk by chunk.
```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion.chunk",
  "created": 1709123456,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "delta": {
        "role": "assistant",
        "content": "Hello"
      },
      "finish_reason": null
    }
  ]
}
```
Field Descriptions
| Field | Type | Description |
|---|---|---|
| `id` | string | Unique identifier for this completion, shared across all chunks |
| `object` | string | Always `chat.completion.chunk` |
| `created` | integer | Unix timestamp of creation time |
| `model` | string | The model name actually used |
| `choices` | array | List of incremental results |
| `choices[].index` | integer | Index of the result in the list |
| `choices[].delta` | object | Incremental message content |
| `choices[].delta.role` | string | Appears only in the first chunk; value is `assistant` |
| `choices[].delta.content` | string | Incremental text content |
| `choices[].delta.tool_calls` | array | Incremental tool calls |
| `choices[].finish_reason` | string \| null | Stop reason in the last chunk; `null` in all others |
| `usage` | object \| null | Included only in the final chunk, and only when `stream_options.include_usage` is `true` |
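Because each chunk carries only a fragment of the message, clients typically merge the deltas as they arrive. A minimal sketch (not the official SDK; `accumulate` and its input shape are illustrative) that rebuilds a complete message from parsed chunk objects:

```python
def accumulate(chunks):
    """Merge delta fields across parsed chunk dicts into one message dict."""
    role, parts, finish_reason = None, [], None
    for chunk in chunks:
        choice = chunk["choices"][0]
        delta = choice["delta"]
        if "role" in delta:
            role = delta["role"]  # role appears only in the first chunk
        if delta.get("content"):
            parts.append(delta["content"])
        if choice["finish_reason"] is not None:
            finish_reason = choice["finish_reason"]  # set only in the last chunk
    return {"role": role, "content": "".join(parts), "finish_reason": finish_reason}

# Chunks mirroring the SSE example in this section
chunks = [
    {"choices": [{"index": 0, "delta": {"role": "assistant", "content": ""}, "finish_reason": None}]},
    {"choices": [{"index": 0, "delta": {"content": "Hello"}, "finish_reason": None}]},
    {"choices": [{"index": 0, "delta": {"content": "!"}, "finish_reason": None}]},
    {"choices": [{"index": 0, "delta": {}, "finish_reason": "stop"}]},
]
message = accumulate(chunks)
```

Note that a delta may omit `content` entirely (as in the final chunk), so the merge must treat every delta field as optional.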
Streaming responses follow the SSE protocol. Each event is prefixed with `data: `, and the stream ends with `data: [DONE]`:
```
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1709123456,"model":"gpt-4o","choices":[{"index":0,"delta":{"role":"assistant","content":""},"finish_reason":null}]}

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1709123456,"model":"gpt-4o","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1709123456,"model":"gpt-4o","choices":[{"index":0,"delta":{"content":"!"},"finish_reason":null}]}

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1709123456,"model":"gpt-4o","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}

data: [DONE]
```
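Parsing this wire format comes down to stripping the `data: ` prefix, skipping blank keep-alive lines, and stopping at the `[DONE]` sentinel. A hedged sketch (the `iter_sse_chunks` helper is illustrative, not part of any SDK):

```python
import json

def iter_sse_chunks(lines):
    """Yield parsed chunk dicts from raw SSE lines, stopping at [DONE]."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # ignore blank keep-alive lines and SSE comments
        payload = line[len("data: "):]
        if payload == "[DONE]":
            return  # end-of-stream sentinel, not JSON
        yield json.loads(payload)

# The example stream from this section, as raw lines
stream = [
    'data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1709123456,"model":"gpt-4o","choices":[{"index":0,"delta":{"role":"assistant","content":""},"finish_reason":null}]}',
    '',
    'data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1709123456,"model":"gpt-4o","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}',
    '',
    'data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1709123456,"model":"gpt-4o","choices":[{"index":0,"delta":{"content":"!"},"finish_reason":null}]}',
    '',
    'data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1709123456,"model":"gpt-4o","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}',
    '',
    'data: [DONE]',
]
chunks = list(iter_sse_chunks(stream))
text = "".join(c["choices"][0]["delta"].get("content", "") for c in chunks)
```

The `[DONE]` line must be checked before attempting `json.loads`, since it is a plain sentinel string rather than a JSON document.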
Crazyrouter is fully compatible with the OpenAI Chat Completion object format and supports all standard fields. Additional fields returned by upstream models are also passed through to the client.