Managing Tensor One Resources via Python SDK

The Tensor One Python SDK provides functions for programmatically managing platform resources, including endpoints, templates, and GPU allocations.

Get Endpoints

Retrieve all available endpoint configurations:
import tensorone
import os

tensorone.api_key = os.getenv("TENSORONE_API_KEY")

# Fetch all endpoints
endpoints = tensorone.get_endpoints()
print(endpoints)
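
The call returns a collection of endpoint records. A small helper can index them by id for quick lookup; note that the field names used below (`id`, `name`) are assumptions about the response shape, not documented SDK guarantees, so verify them against a real response:

```python
def index_endpoints(endpoints):
    """Map endpoint id -> endpoint record for quick lookup.

    Assumes each record is a dict with an "id" key (hypothetical
    field name; adjust if the SDK returns a different shape).
    """
    return {ep["id"]: ep for ep in endpoints}

# Example with a hypothetical response shape:
sample = [
    {"id": "ep-123", "name": "inference-a"},
    {"id": "ep-456", "name": "inference-b"},
]
by_id = index_endpoints(sample)
print(by_id["ep-123"]["name"])  # -> inference-a
```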

Create a Template

Create a reusable template by specifying a name and a Docker image:
import tensorone
import os

tensorone.api_key = os.getenv("TENSORONE_API_KEY")

try:
    new_template = tensorone.create_template(
        name="test",
        image_name="tensorone/base:0.1.0"
    )
    print(new_template)

except tensorone.error.QueryError as err:
    print(err)
    print(err.query)
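
Resource-creation calls can fail transiently (network errors, rate limits). A generic retry wrapper, sketched below in plain Python and not part of the SDK, can be applied to calls like `tensorone.create_template`; in practice you would narrow the caught exception to the SDK's own error classes:

```python
import time

def with_retries(fn, attempts=3, delay=1.0):
    """Call fn(), retrying on failure up to `attempts` times.

    Sleeps `delay` seconds between attempts and re-raises the last
    error if every attempt fails. Purely illustrative; narrow the
    except clause to tensorone.error.QueryError in real code.
    """
    last_err = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as err:
            last_err = err
            if i < attempts - 1:
                time.sleep(delay)
    raise last_err

# Usage sketch against the SDK call above:
# template = with_retries(lambda: tensorone.create_template(
#     name="test", image_name="tensorone/base:0.1.0"))
```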

Create an Endpoint

Create a serverless endpoint using a custom template and GPU configuration:
import tensorone
import os

tensorone.api_key = os.getenv("TENSORONE_API_KEY")

try:
    # Create a serverless template
    new_template = tensorone.create_template(
        name="test",
        image_name="tensorone/base:0.4.4",
        is_serverless=True
    )
    print(new_template)

    # Create an endpoint using the template
    new_endpoint = tensorone.create_endpoint(
        name="test",
        template_id=new_template["id"],
        gpu_ids="AMPERE_16",
        workers_min=0,
        workers_max=1,
    )
    print(new_endpoint)

except tensorone.error.QueryError as err:
    print(err)
    print(err.query)
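
Before calling `create_endpoint`, it can help to validate the scaling bounds locally. The checks below are a plain-Python sketch of the constraints implied by `workers_min` and `workers_max` (with `workers_min=0` enabling scale-to-zero), not documented SDK behavior:

```python
def validate_worker_bounds(workers_min, workers_max):
    """Check that worker bounds form a sane autoscaling range.

    workers_min=0 allows scale-to-zero; workers_max must be at
    least workers_min. Raises ValueError on invalid input.
    """
    if workers_min < 0:
        raise ValueError("workers_min must be >= 0")
    if workers_max < workers_min:
        raise ValueError("workers_max must be >= workers_min")
    return True

print(validate_worker_bounds(0, 1))  # -> True
```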

Get Available GPUs

List all GPUs available for allocation:
import tensorone
import json
import os

tensorone.api_key = os.getenv("TENSORONE_API_KEY")

gpus = tensorone.get_gpus()
print(json.dumps(gpus, indent=2))
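
The returned list can be filtered client-side, for example to keep only GPUs with at least a given amount of memory. The `memoryInGb` field name below is an assumption about the response shape — check an actual payload before relying on it:

```python
def gpus_with_min_memory(gpus, min_gb):
    """Filter GPU records by memory size.

    Assumes each record is a dict with a "memoryInGb" key
    (hypothetical field name; verify against a real response).
    """
    return [g for g in gpus if g.get("memoryInGb", 0) >= min_gb]

# Example with a hypothetical response shape:
sample = [
    {"id": "AMPERE_16", "memoryInGb": 16},
    {"id": "NVIDIA A100 80GB PCIe", "memoryInGb": 80},
]
print(gpus_with_min_memory(sample, 40))
```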

Get GPU by ID

Fetch detailed information about a specific GPU:
import tensorone
import json
import os

tensorone.api_key = os.getenv("TENSORONE_API_KEY")

gpu = tensorone.get_gpu("NVIDIA A100 80GB PCIe")
print(json.dumps(gpu, indent=2))
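
For logging, the detailed record can be reduced to a one-line summary. The field names used here (`id`, `memoryInGb`) are again assumptions about the response shape:

```python
def summarize_gpu(gpu):
    """Format a short, log-friendly summary of a GPU record.

    Assumes "id" and "memoryInGb" keys (hypothetical names);
    missing fields fall back to placeholders.
    """
    return f'{gpu.get("id", "unknown")} ({gpu.get("memoryInGb", "?")} GB)'

print(summarize_gpu({"id": "NVIDIA A100 80GB PCIe", "memoryInGb": 80}))
# -> NVIDIA A100 80GB PCIe (80 GB)
```
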

These SDK utilities enable efficient management and scaling of compute infrastructure for serverless, GPU-intensive, and custom container workloads within Tensor One.