Applicable terms and conditions

By continuing you agree to the following terms and conditions:

  • LLM models with web search - Terms of Service
  • Nadles Terms of Service
  • Nadles Privacy Policy
DeepSeek-R1-Distill-Llama-70B+WebSearch [$1.16 per 1M tokens]
by duckhosting.lol
Product details
Pricing
Pay per use
Details
$1.16 per 1M tokens
Billed monthly, metered.
$0.00000116 each
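The per-token price above follows directly from the listed rate. A minimal sketch of the cost arithmetic (the rate is taken from this listing; the function name is ours):

```python
# Pay-per-use rate from the listing: $1.16 per 1M tokens,
# i.e. $0.00000116 per token.
PRICE_PER_TOKEN = 1.16 / 1_000_000

def estimate_cost(total_tokens: int) -> float:
    """Estimated charge in USD for a given total token count."""
    return total_tokens * PRICE_PER_TOKEN

# A 1,245-token call (prompt + response) costs roughly $0.00144.
cost = estimate_cost(1_245)
```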
API

LLM models with web search

Terms of Service
  • Live Web Search Capabilities
  • Support: Available on Discord https://discord.com/invite/RhXgJrYTQu

Authentication

  • Requires an API key in the request headers -> -H "x-billing-token: {your_api_token}"
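In code, the two headers every request needs can be assembled like this (a sketch; the helper name is ours, and "YOUR_API_TOKEN" is a placeholder for the access key shown on your subscription page):

```python
# Headers required on every call to the API: a JSON content type
# plus the billing token that identifies your subscription.
def build_headers(api_token: str) -> dict:
    return {
        "Content-Type": "application/json",
        "x-billing-token": api_token,  # required on every call
    }

headers = build_headers("YOUR_API_TOKEN")
```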

Request Body Parameters

Required Parameters

Parameter | Type  | Description                                               | Example
messages  | array | An array of message objects representing the conversation | [{"role": "user", "content": "Hello"}]

Optional Parameters

Parameter   | Type    | Default | Description
max_tokens  | integer | 2048    | Maximum number of tokens to generate in the response
temperature | float   | 0.7     | Controls randomness in response generation (0.0 - 1.0)
top_p       | float   | 0.9     | Controls diversity of token selection (0.0 - 1.0)
stream      | boolean | false   | Whether to stream the response
search_web  | boolean | false   | Enable web search to enhance the response
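A request body with these defaults can be built as follows (a sketch, not part of any official client; the helper name is ours, and any keyword you pass overrides the documented default):

```python
# Build a request body using the documented defaults; keyword
# arguments override individual defaults.
def build_payload(messages: list, **overrides) -> dict:
    payload = {
        "max_tokens": 2048,    # documented default
        "temperature": 0.7,    # 0.0 - 1.0
        "top_p": 0.9,          # 0.0 - 1.0
        "stream": False,
        "search_web": False,   # set True to enable web search
    }
    payload.update(overrides)
    payload["messages"] = messages  # the only required parameter
    return payload

body = build_payload([{"role": "user", "content": "Hello"}], search_web=True)
```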

Message Object Structure

{
  "role": "user" | "assistant" | "system",
  "content": "Message text here"
}

Roles

  • user: Messages from the end-user
  • assistant: Previous AI responses (optional, for context)
  • system: System-level instructions (optional)
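Putting the three roles together, a conversation with a system instruction and prior context looks like this (illustrative content only):

```python
# A messages array using all three roles: a system instruction,
# one prior exchange for context, and the new user turn.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "Paris."},
    {"role": "user", "content": "And of Italy?"},
]
```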

Example Request

model_name could be one of (depending on your active plan):
-> deepseek-r1-distill-llama-70b-websearch
-> deepseek-v3

curl -X POST "https://deadlock.p.nadles.com/{model_name}/completion" \
  -H "Content-Type: application/json" \
  -H "x-billing-token: {your_api_token}" \
  -d '{
    "messages": [
      {
        "role": "user",
        "content": "just respond with a short hi, nothing more"
      }
    ],
    "max_tokens": 2048,
    "temperature": 0.7,
    "top_p": 0.9,
    "stream": false,
    "search_web": false
  }'
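The same request can be built with nothing but the Python standard library. This mirrors the curl example above; the URL and `{your_api_token}` are the placeholders from that example, and the actual send is left commented out:

```python
import json
import urllib.request

# Mirror of the curl example above, using only the standard library.
url = "https://deadlock.p.nadles.com/deepseek-r1-distill-llama-70b-websearch/completion"
body = {
    "messages": [
        {"role": "user", "content": "just respond with a short hi, nothing more"}
    ],
    "max_tokens": 2048,
    "temperature": 0.7,
    "top_p": 0.9,
    "stream": False,
    "search_web": False,
}
req = urllib.request.Request(
    url,
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "x-billing-token": "{your_api_token}",  # substitute your access key
    },
    method="POST",
)
# response = urllib.request.urlopen(req)  # uncomment to actually send
```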

Example Response

{
  "model_response": {
    "choices": [
      {
        "finish_reason": "stop",
        "index": 0,
        "logprobs": null,
        "message": {
          "content": "content text here",
          "name": null,
          "role": "assistant",
          "tool_calls": null
        }
      }
    ],
    "created": 1738346112,
    "id": "chatcmpl-f90e0bbff53d4b9390dc7e44bf67d044",
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
    "object": "chat.completion"
  },
  "summaries": [
    {
      "error": null,
      "status": 200,
      "summary": "summary text here",
      "url": "source_url_here"
    }
  ],
  "token_info": {
    "prompt_tokens": 431,
    "response_tokens": 814,
    "total_tokens": 1245
  }
}


Response Fields

Field                                    | Type           | Description
model_response.choices[].finish_reason   | string         | Reason for response completion
model_response.choices[].index           | integer        | Index of the choice
model_response.choices[].logprobs        | object or null | Log probabilities of tokens
model_response.choices[].message.content | string         | Generated response text
model_response.choices[].message.name    | string or null | Name identifier
model_response.choices[].message.role    | string         | Role of the message sender
model_response.choices[].message.tool_calls | array or null | Tool call information
model_response.created                   | integer        | Timestamp of creation
model_response.id                        | string         | Unique response identifier
model_response.model                     | string         | Model name
model_response.object                    | string         | Object type
summaries                                | array          | Web search results
summaries[].error                        | string or null | Error information
summaries[].status                       | integer        | HTTP status code
summaries[].summary                      | string         | Search result summary
summaries[].url                          | string         | Source URL
token_info.prompt_tokens                 | integer        | Number of tokens in prompt
token_info.response_tokens               | integer        | Number of tokens in response
token_info.total_tokens                  | integer        | Total tokens used
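The fields most callers need can be extracted as shown below. This sketch uses the example response from above as sample data (the field paths come from the table; the variable names are ours):

```python
# Pulling the commonly needed fields out of a response, using the
# example payload from the docs as sample data.
sample = {
    "model_response": {
        "choices": [{
            "finish_reason": "stop", "index": 0, "logprobs": None,
            "message": {"content": "content text here", "name": None,
                        "role": "assistant", "tool_calls": None},
        }],
        "created": 1738346112,
        "id": "chatcmpl-f90e0bbff53d4b9390dc7e44bf67d044",
        "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
        "object": "chat.completion",
    },
    "summaries": [{"error": None, "status": 200,
                   "summary": "summary text here", "url": "source_url_here"}],
    "token_info": {"prompt_tokens": 431, "response_tokens": 814,
                   "total_tokens": 1245},
}

# Generated text, successful web-search sources, and total token usage.
text = sample["model_response"]["choices"][0]["message"]["content"]
sources = [s["url"] for s in sample["summaries"] if s["error"] is None]
tokens_used = sample["token_info"]["total_tokens"]
```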

Pricing

  • Pricing is based on token usage
Authentication

Send your access key (available after subscription) in the "X-Billing-Token" header with each API call.

Endpoints
  • Generate AI response with optional web search
    POST /deepseek-r1-distill-llama-70b-websearch/completion
  • Get API usage statistics
    GET /usage
  • Health check endpoint
    GET /ping
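For reference, the three endpoints above can be captured in a small lookup table (a sketch; the base URL is taken from the request example, and the names are ours):

```python
# Endpoint table for this API; base URL taken from the request example.
BASE = "https://deadlock.p.nadles.com"

ENDPOINTS = {
    "completion": ("POST", "/deepseek-r1-distill-llama-70b-websearch/completion"),
    "usage":      ("GET",  "/usage"),
    "ping":       ("GET",  "/ping"),
}

def endpoint_url(name: str) -> str:
    """Return the full URL for a named endpoint."""
    _method, path = ENDPOINTS[name]
    return BASE + path
```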
Quotas
  • Tokens: No limit
    POST /deepseek-r1-distill-llama-70b-websearch/completion
    GET /usage
    GET /ping
Rate limits
  • 60 calls per minute across the 3 endpoints
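A client can stay under this limit with a simple sliding-window counter (an illustrative sketch on our side, not part of the API; the class name is ours):

```python
import time
from collections import deque

# Client-side guard for the 60-calls-per-minute limit: a sliding-window
# counter that reports whether another call may be sent right now.
class RateLimiter:
    def __init__(self, max_calls=60, window=60.0):
        self.max_calls = max_calls
        self.window = window          # seconds
        self.calls = deque()          # timestamps of recent calls

    def allow(self, now=None):
        """Record and permit a call if under the limit; else refuse."""
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False
```

Call `allow()` before each request and back off when it returns `False`; the `now` parameter exists only to make the class testable without real waiting.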
Powered by Nadles