Tensor One Clusters

Tensor One Clusters are containerized instances intended for application execution. Note: To guarantee compatibility across supported architectures when building container images for Tensor One, use the following build flag:
--platform linux/arm64,linux/amd64
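
For example, a complete multi-architecture build and push with Docker Buildx might look like this (the image tag below is a placeholder; substitute your own registry and repository):

# Build the image for both supported architectures and push it to your registry.
docker buildx build \
  --platform linux/arm64,linux/amd64 \
  -t registry.example.com/my-app:latest \
  --push .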

Understanding Cluster Configuration and Components

A Cluster is a containerized server instance that you create to use specific hardware resources. Each Cluster is assigned a unique, dynamically generated identifier (e.g., 2s56cp0pof1rmt).

A typical Cluster includes:

Container Volume

  • Holds the operating system and temporary storage
  • This storage is volatile and will be lost upon Cluster shutdown or reboot

Disk Volume

  • Provides persistent storage, preserved throughout the Cluster’s lifespan
  • Storage remains available even after a Cluster restart or shutdown

Ubuntu Linux Container

  • Runs any software compatible with Ubuntu Linux

Resource Allocation

  • Dedicated vCPU and RAM for container processes

Optional GPUs or CPUs

  • Attach specialized resources like CUDA-enabled GPUs for AI/ML workloads

Pre-configured Templates

  • Automatically installs common tools and applies settings on creation
  • Enables one-click access to frequently used environments

Proxy Connection for Web Access

  • Accessible via proxy URLs in the format:
    https://[Cluster-id]-[port-number].proxy.tensorone.ai
  • Example (see the sample request below):
    https://2s56cp0pof1rmt-7860.proxy.tensorone.ai
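
Assuming a web service is listening on the exposed port inside the Cluster (port 7860 in the example above), it can be reached through the proxy with an ordinary HTTP request:

# Reach a web service exposed on port 7860 of the example Cluster through the proxy.
curl https://2s56cp0pof1rmt-7860.proxy.tensorone.ai/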

Customizing Your Cluster

When creating a Cluster, you can customize the following options:
  • GPU Type and Quantity
  • System Disk Size
  • Start Commands
  • Environment Variables
  • HTTP/TCP Port Exposure
  • Persistent Storage Options
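
As a minimal sketch of how start commands, environment variables, and port exposure fit together, assuming variables defined at creation are injected into the container environment and the start command runs inside it, a start command for a Cluster exposing JupyterLab over HTTP on port 8888 might look like this (the variable name JUPYTER_TOKEN is hypothetical):

# Hypothetical start command: launch JupyterLab on an exposed HTTP port,
# reading its access token from an environment variable set at Cluster creation.
jupyter lab --ip=0.0.0.0 --port=8888 --no-browser --ServerApp.token="$JUPYTER_TOKEN"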