**Pricing:** $1.16 per 1M tokens ($0.00000116 per token). Billed monthly, metered.
This API provides access to the DeepSeek R1 language model, allowing you to generate responses to user messages with configurable parameters and real-time web search capabilities.
| Parameter | Type | Description | Example |
|-----------|------|-------------|---------|
| `messages` | array | An array of message objects representing the conversation | `[{"role": "user", "content": "Hello"}]` |
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `max_tokens` | integer | 2048 | Maximum number of tokens to generate in the response |
| `temperature` | float | 0.7 | Controls randomness in response generation (0.0 - 1.0) |
| `top_p` | float | 0.9 | Controls diversity of token selection (0.0 - 1.0) |
| `stream` | boolean | false | Whether to stream the response |
| `search_web` | boolean | false | Enable web search to enhance the response |
Each element of `messages` is an object of the form:

```json
{
  "role": "user" | "assistant" | "system",
  "content": "Message text here"
}
```
- `user`: Messages from the end-user
- `assistant`: Previous AI responses (optional, for context)
- `system`: System-level instructions (optional)

`model_name` could be one of the following (depending on your active plan):

- `deepseek-r1-distill-llama-70b-websearch`
```bash
curl -X POST "https://deadlock.p.nadles.com/{model_name}/completion" \
  -H "Content-Type: application/json" \
  -H "x-billing-token: {your_api_token}" \
  -d '{
    "messages": [
      {
        "role": "user",
        "content": "just respond with a short hi, nothing more"
      }
    ],
    "max_tokens": 2048,
    "temperature": 0.7,
    "top_p": 0.9,
    "stream": false,
    "search_web": false
  }'
```
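The same request can be assembled in Python. This is a sketch, not an official client: the endpoint, `x-billing-token` header, and parameter defaults are taken from this documentation, while the `build_request` helper is hypothetical.

```python
# Sketch: assemble a completion request for this API (hypothetical helper).
# Endpoint, header, and defaults mirror the curl example above.
BASE_URL = "https://deadlock.p.nadles.com"

def build_request(model_name, api_token, messages, **overrides):
    """Return (url, headers, payload) for a /completion call."""
    defaults = {
        "max_tokens": 2048,
        "temperature": 0.7,
        "top_p": 0.9,
        "stream": False,
        "search_web": False,
    }
    # Explicit keyword arguments override the documented defaults.
    payload = {"messages": messages, **defaults, **overrides}
    url = f"{BASE_URL}/{model_name}/completion"
    headers = {
        "Content-Type": "application/json",
        "x-billing-token": api_token,
    }
    return url, headers, payload

url, headers, payload = build_request(
    "deepseek-r1-distill-llama-70b-websearch",
    "{your_api_token}",
    [{"role": "user", "content": "just respond with a short hi, nothing more"}],
    search_web=True,  # example of overriding a default
)
# Send with e.g.: requests.post(url, headers=headers, json=payload)
```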
```json
{
  "model_response": {
    "choices": [
      {
        "finish_reason": "stop",
        "index": 0,
        "logprobs": null,
        "message": {
          "content": "content text here",
          "name": null,
          "role": "assistant",
          "tool_calls": null
        }
      }
    ],
    "created": 1738346112,
    "id": "chatcmpl-f90e0bbff53d4b9390dc7e44bf67d044",
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
    "object": "chat.completion"
  },
  "summaries": [
    {
      "error": null,
      "status": 200,
      "summary": "summary text here",
      "url": "source_url_here"
    }
  ],
  "token_info": {
    "prompt_tokens": 431,
    "response_tokens": 814,
    "total_tokens": 1245
  }
}
```
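A minimal sketch of pulling the useful fields out of this response shape. Field names follow the example above; the `parse_completion` helper itself is hypothetical.

```python
# Sketch: extract the assistant text, web-search source URLs, and token
# usage from the documented response structure.
def parse_completion(resp: dict):
    text = resp["model_response"]["choices"][0]["message"]["content"]
    # summaries is only populated when search_web was enabled.
    sources = [s["url"] for s in resp.get("summaries", []) if s.get("error") is None]
    tokens = resp["token_info"]["total_tokens"]
    return text, sources, tokens
```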
| Field | Type | Description |
|-------|------|-------------|
| `model_response.choices[].finish_reason` | string | Reason for response completion |
| `model_response.choices[].index` | integer | Index of the choice |
| `model_response.choices[].logprobs` | object \| null | Log probabilities of tokens |
| `model_response.choices[].message.content` | string | Generated response text |
| `model_response.choices[].message.name` | string \| null | Name identifier |
| `model_response.choices[].message.role` | string | Role of the message sender |
| `model_response.choices[].message.tool_calls` | array \| null | Tool call information |
| `model_response.created` | integer | Timestamp of creation |
| `model_response.id` | string | Unique response identifier |
| `model_response.model` | string | Model name |
| `model_response.object` | string | Object type |
| `summaries` | array | Web search results |
| `summaries[].error` | string \| null | Error information |
| `summaries[].status` | integer | HTTP status code |
| `summaries[].summary` | string | Search result summary |
| `summaries[].url` | string | Source URL |
| `token_info.prompt_tokens` | integer | Number of tokens in prompt |
| `token_info.response_tokens` | integer | Number of tokens in response |
| `token_info.total_tokens` | integer | Total tokens used |
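Given the metered rate stated at the top of this page ($1.16 per 1M tokens, i.e. $0.00000116 per token), `token_info.total_tokens` can be turned into a per-call cost estimate. This is a client-side sketch for budgeting only; actual billing is handled by the platform, and the assumption that one flat rate applies to both prompt and response tokens comes from the single documented price.

```python
# Estimate the cost of one call from token_info, assuming the documented
# flat rate of $0.00000116 per token applies to all tokens.
PRICE_PER_TOKEN = 1.16 / 1_000_000  # $1.16 per 1M tokens

def estimate_cost(token_info: dict) -> float:
    """Return the estimated USD cost of a single completion call."""
    return token_info["total_tokens"] * PRICE_PER_TOKEN

# For the example response above (1245 total tokens):
cost = estimate_cost({"prompt_tokens": 431, "response_tokens": 814, "total_tokens": 1245})
```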
Send your access key (available after subscription) in the `X-Billing-Token` header with each API call (HTTP header names are case-insensitive, so `x-billing-token`, as in the curl example, works equally well).