Execute Endpoint Synchronously
Example request:

curl --request POST \
  --url https://api.tensorone.ai/v2/endpoints/{endpointId}/runsync \
  --header 'Authorization: Bearer <api-key>' \
  --header 'Content-Type: application/json' \
  --data '{
  "input": {}
}'

Example response:

{
  "output": "<any>",
  "executionTime": 123,
  "status": "completed"
}
Execute your serverless endpoint synchronously with input data and receive the results immediately. This is the primary way to run inference on your deployed AI models.

Path Parameters

  • endpointId: The unique identifier of the endpoint to execute

Request Body

The request body should contain an input object with your model-specific data:
{
    "input": {
        "prompt": "Explain quantum computing in simple terms",
        "max_tokens": 100,
        "temperature": 0.7
    }
}

Example Usage

Text Generation

curl -X POST "https://api.tensorone.ai/v2/endpoints/ep_text_model/runsync" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input": {
      "prompt": "Write a haiku about AI",
      "max_tokens": 50
    }
  }'

Image Generation

curl -X POST "https://api.tensorone.ai/v2/endpoints/ep_image_model/runsync" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input": {
      "prompt": "A serene mountain landscape at sunset",
      "width": 512,
      "height": 512,
      "steps": 30
    }
  }'
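The shape of the output for image endpoints depends on the model behind the endpoint. Assuming the worker returns a base64-encoded image string in the output field (a common convention, not something this API guarantees), the result could be saved like this:

import base64
import requests

response = requests.post(
    "https://api.tensorone.ai/v2/endpoints/ep_image_model/runsync",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={"input": {"prompt": "A serene mountain landscape at sunset",
                    "width": 512, "height": 512, "steps": 30}},
    timeout=300,
)
result = response.json()

# Assumption: the worker returns the image as a base64 string in "output".
# Adjust this to match your model's actual output format.
with open("landscape.png", "wb") as f:
    f.write(base64.b64decode(result["output"]))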

Response

Returns the execution results:
{
    "output": "Artificial minds learn,\nProcessing data with care,\nFuture unfolds bright.",
    "executionTime": 1.23,
    "status": "completed",
    "metadata": {
        "model": "llama-2-7b",
        "tokensUsed": 47,
        "cost": 0.0023
    }
}

Response Fields

  • output: The result from your model (format varies by model type)
  • executionTime: Time taken to execute in seconds
  • status: Execution status (completed or failed)
  • metadata: Additional information about the execution
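A small sketch of how a caller might inspect these fields; the field names follow the example response above, and handle_result is an illustrative helper, not part of any SDK.

def handle_result(result: dict) -> str:
    """Inspect a /runsync response and return the model output."""
    if result.get("status") != "completed":
        raise RuntimeError(f"Execution did not complete: {result}")

    print(f"Finished in {result['executionTime']:.2f} s")

    # metadata carries optional extra information; keys vary by endpoint
    meta = result.get("metadata", {})
    if "tokensUsed" in meta:
        print(f"Tokens used: {meta['tokensUsed']}, cost: ${meta.get('cost', 0)}")

    return result["output"]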

Error Handling

Common error responses:

400 Bad Request

{
    "error": "INVALID_INPUT",
    "message": "Missing required field: prompt",
    "details": {
        "field": "input.prompt",
        "reason": "This field is required"
    }
}

404 Not Found

{
    "error": "ENDPOINT_NOT_FOUND",
    "message": "Endpoint ep_invalid does not exist"
}

500 Internal Server Error

{
    "error": "EXECUTION_FAILED",
    "message": "Model execution failed",
    "details": {
        "reason": "Out of memory error"
    }
}
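Putting the three cases together, one way a client might distinguish them is by HTTP status code and the error field. This is a sketch under the assumption that error bodies are JSON as shown above; the exception types are arbitrary choices.

import requests

def run_sync_checked(url: str, api_key: str, payload: dict) -> dict:
    resp = requests.post(
        url,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"input": payload},
        timeout=300,
    )
    if resp.status_code == 400:
        err = resp.json()
        raise ValueError(f"{err['error']}: {err['message']}")          # fix the input, do not retry
    if resp.status_code == 404:
        raise LookupError(resp.json()["message"])                      # wrong endpoint ID
    if resp.status_code >= 500:
        raise RuntimeError(resp.json().get("message", "Server error")) # may be transient
    resp.raise_for_status()
    return resp.json()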

Best Practices

  • Input Validation: Always validate your input data before sending requests
  • Error Handling: Implement retry logic for transient failures (see the sketch after this list)
  • Timeouts: Set appropriate request timeouts (default: 300 seconds)
  • Rate Limiting: Respect rate limits to avoid throttling
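As one way to apply the retry and timeout advice above, here is a sketch of exponential backoff around the synchronous call. The retry counts and delays are illustrative choices, not platform requirements.

import time
import requests

def run_sync_with_retry(url: str, api_key: str, payload: dict,
                        max_retries: int = 3, timeout: int = 300) -> dict:
    """Retry the synchronous call on transient failures (5xx responses, timeouts)."""
    for attempt in range(max_retries + 1):
        try:
            resp = requests.post(
                url,
                headers={"Authorization": f"Bearer {api_key}"},
                json={"input": payload},
                timeout=timeout,
            )
        except requests.exceptions.RequestException:
            resp = None                        # network error or timeout: retry
        if resp is not None and resp.status_code < 500:
            resp.raise_for_status()            # 4xx errors are not retried
            return resp.json()
        if attempt < max_retries:
            time.sleep(2 ** attempt)           # 1 s, 2 s, 4 s backoff
    raise RuntimeError("runsync failed after retries")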
For long-running tasks, consider using asynchronous execution or streaming responses through our SDK libraries.
Execution costs are charged per request. Monitor your usage to avoid unexpected bills.

Authorizations

  • Authorization (string, header, required): API key authentication. Use 'Bearer YOUR_API_KEY' format.

Path Parameters

  • endpointId (string, required): Unique identifier of the endpoint.

Body

  • application/json (object): Input data for the endpoint execution.

Response

  • Execution completed successfully. The response is of type object.