Getting started with projects

Develop and deploy your project entirely on Tensor One's infrastructure.


Overview

Tensor One projects enable you to develop and deploy endpoints entirely on Tensor One's infrastructure. That means you can get a worker up and running without knowing Docker or needing to structure handler code. This Dockerless workflow also streamlines the development process: you don't need to rebuild and push container images or edit your endpoint to use the new image each time you change your code.

Get started

In this tutorial, we'll explore how to get the IP address of the machine your code is running on and deploy your code to the Tensor One platform. You will get the IP address of your local machine, the development server, and the Serverless Endpoint's server.

By the end, you'll have a solid understanding of how to set up a project environment, interact with your code, and deploy your code to a Serverless Endpoint on the Tensor One platform.

While this project is scoped to getting the IP address of the machine your code is running on, you can use the Tensor One platform to deploy any code you want. For larger projects, bundling large packages into a Docker image and making code changes requires multiple steps. With a Tensor One development server, you can make changes to your code and test them in a live environment without having to rebuild a Docker image or redeploy your code to the Tensor One platform.

This tutorial takes advantage of that workflow: you'll make updates to your code and test them in a live environment.

Let's get started by setting up the project environment.

Prerequisites

Before we begin, you'll need the following:

  • tensoronecli
  • Python 3.8 or later

Step 1. Set up the project environment

In this first step, you'll set up your project environment using the tensoronecli.

Set your API key in the tensoronecli configuration file.

tensoronecli config --apiKey $(API_KEY)

Next, use the tensoronecli project create command to create a new directory and files for your project.

tensoronecli project create

Select the Hello World project and follow the prompts on the screen.
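After the prompts complete, the CLI scaffolds a project directory. The exact layout may vary by CLI version; the files sketched below are the ones this tutorial touches:

```text
my_ip/
├── src/
│   └── handler.py      # handler code you'll edit in Step 2
└── requirements.txt    # Python dependencies for the worker (Step 4)
```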

Step 2. Write the code

Next, you'll write the code to get the IP address of the machine your code is running on.

Use httpbin to retrieve the IP address and test the code locally.

Change directories to the project directory and open the src/handler.py file in your text editor.

cd my_ip

The current code is boilerplate. Replace it with the following:

from tensoroneGPU import tensoroneClient
import requests

def get_my_ip(job):
    # Query httpbin for the public IP of the machine this code runs on
    response = requests.get('https://httpbin.org/ip')
    return response.json()['origin']

tensoroneClient.serverless.start({"handler": get_my_ip})

This uses httpbin to get the IP address of the machine your code is running on.
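To see why the handler indexes into ['origin']: httpbin.org/ip returns a small JSON body whose origin field holds the caller's public IP. A quick offline sketch (the address below is a documentation placeholder, not real output):

```python
import json

# The response body from https://httpbin.org/ip looks like this
# (203.0.113.7 is a placeholder address, not a real result):
sample_body = '{"origin": "203.0.113.7"}'

# response.json()['origin'] in the handler does the equivalent of:
ip = json.loads(sample_body)["origin"]
print(ip)  # 203.0.113.7
```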

Run this code locally to get the IP address of your machine, for example:

python3 src/handler.py --test_input '{"input": {"prompt": ""}}'
INFO  | test_input set, using test_input as job input.
DEBUG | Retrieved local job: {'input': {'prompt': ''}, 'id': 'local_test'}
INFO  | local_test | Started.
DEBUG | local_test | Handler output: 174.21.174.xx
DEBUG | local_test | run_job return: {'output': '174.21.174.xx'}
INFO  | Job local_test completed successfully.
INFO  | Job result: {'output': '174.21.174.xx'}
INFO  | Local testing complete, exiting.
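The flow in those logs can be sketched in a few lines. This is a hypothetical reimplementation for illustration only, not the real tensoroneGPU test harness; echo_handler stands in for get_my_ip so the sketch runs offline:

```python
import json

def run_local_test(handler, test_input_json):
    # Parse the --test_input JSON, attach a local job id, and invoke the handler
    job = json.loads(test_input_json)
    job["id"] = "local_test"
    return {"output": handler(job)}

def echo_handler(job):
    # Stand-in handler: echoes the prompt instead of calling httpbin
    return job["input"]["prompt"]

result = run_local_test(echo_handler, '{"input": {"prompt": "hello"}}')
print(result)  # {'output': 'hello'}
```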

This testing environment works for smaller projects, but for larger projects, you will want to use the tensoronecli to deploy your code to run on the Tensor One platform.

In the next step, you'll see how to deploy your code to the Tensor One platform.

Step 3. Run a development server

Now let's run the code you've written using Tensor One's development server. You'll start a development server using the tensoronecli project dev command.

Tensor One provides a development server that allows you to quickly make changes to your code and test these changes in a live environment. You don't need to rebuild a Docker image or redeploy your code to the Tensor One platform just because you made a small change or added a new dependency.

To run a development server, use the tensoronecli project dev command and select a Network volume.

tensoronecli project dev

This starts a development server on a Cluster. The logs show the status of your Cluster as well as the port number your Cluster is running on.

The development server watches for changes in your code and automatically updates the Cluster with changes to your code and files like requirements.txt.

When the Cluster is running, you should see the following logs:

Connect to the API server at:

[lug43rcd07ug47] > https://lug43rcd07ug47-8080.proxy.tpu.one
[lug43rcd07ug47]
[lug43rcd07ug47] Synced venv to network volume
[lug43rcd07ug47] --- Starting Serverless Worker | Version 1.6.2 ---

Here, [lug43rcd07ug47] is your Worker ID, and https://lug43rcd07ug47-8080.proxy.tpu.one is the URL for accessing your Cluster on the exposed 8080 port. You can interact with this URL as you would any other Endpoint.

Step 4. Interact with your code

In this step, you'll interact with your code by running a curl command to fetch the IP address from the development server. You'll learn how to include dependencies in your project and how to use the Tensor One API to run your code.

You might have noticed that the function that gets the IP address uses the third-party dependency requests, which is not part of Python's standard library and therefore not included in the Tensor One environment by default.

To include this dependency, you need to add it to the requirements.txt file in the root of your project.

tensoroneGPU
requests

When you save your file, notice that the development server automatically updates the Cluster with the dependencies.

During this sync, your Cluster is unable to receive requests. Wait until you see the following logs:

Restarted API server with PID: 701
--- Starting Serverless Worker | Version 1.6.2 ---
INFO | Starting API server.

Now you can interact with your code.

While the Cluster is still running, create a new terminal session and run the following command:

curl -X 'POST' \
'https://${YOUR_ENDPOINT}-8080.proxy.tpu.one/runsync' \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
"input": {}
}'

This command uses the runsync method on the Tensor One API to run your code synchronously.
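If you prefer Python over curl, you can build the same request with the standard library. This is a hypothetical helper, not part of tensoroneGPU; the URL shape and headers follow the curl example above:

```python
import json
import urllib.request

def build_runsync_request(endpoint_url, payload, api_key=None):
    # Mirror the curl example: POST a JSON body of {"input": ...} to /runsync
    headers = {"Content-Type": "application/json", "accept": "application/json"}
    if api_key:
        headers["authorization"] = api_key
    return urllib.request.Request(
        endpoint_url,
        data=json.dumps({"input": payload}).encode(),
        headers=headers,
        method="POST",
    )

# To actually send the request (requires the dev server to be running):
# req = build_runsync_request("https://lug43rcd07ug47-8080.proxy.tpu.one/runsync", {})
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```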

The previous command returns a response:

{
"id": "test-9613c9be-3fed-401f-8cda-6b5f354417f8",
"status": "COMPLETED",
"output": "69.30.85.70"
}

The output is the IP address of the Cluster your code is running on, not your local machine. Even though you're editing code locally, it executes on a Cluster.

Now, what if you wanted this function to run as a Serverless Endpoint, so that instead of keeping the Cluster running all the time, it only spins up when you send a request?

In the next step, you'll learn to deploy your code to the Serverless platform and get the IP address of that machine.

Step 5. Deploy your code

Now that you've tested your code in the development environment, you'll deploy it to the Tensor One platform using the tensoronecli project deploy command. This makes your code available as a Serverless Endpoint.

Stop the development server by pressing Ctrl + C in the terminal.

To deploy your code to the Tensor One platform, use the Tensor One CLI project deploy command.

tensoronecli project deploy

Select your network volume and wait for your Endpoint to deploy.

After deployment, you will see the following logs:

The following URLs are available:

  • https://api.tpu.one/v2/${YOUR_ENDPOINT}/runsync
  • https://api.tpu.one/v2/${YOUR_ENDPOINT}/run
  • https://api.tpu.one/v2/${YOUR_ENDPOINT}/health

Note: You can follow the logs to see the status of your deployment. You may notice the Cluster being created first, followed by the Endpoint.

Step 6. Interact with your Endpoint

Finally, you'll interact with your Endpoint by running a curl command to fetch the IP address from the deployed Serverless function. You'll confirm that your code runs just as it did in the development environment.

When the deployment completes, you can interact with your Endpoint as you would any other Endpoint.

Replace the development server URL with your new Endpoint URL and add your API key.

Then, run the following command:

curl -X 'POST' \
'https://api.tpu.one/v2/${YOUR_ENDPOINT}/runsync' \
-H 'accept: application/json' \
-H 'authorization: ${YOUR_API_KEY}' \
-H 'Content-Type: application/json' \
-d '{
"input": {}
}'

The previous command returns a response:

{
"delayTime": 249,
"executionTime": 88,
"id": "sync-b2188a79-3f9f-4b99-b4d1-18273db3f428-u1",
"output": "69.30.85.69",
"status": "COMPLETED"
}

The output is the IP address of the Cluster your code is running on.
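If you're calling the Endpoint from code, check status before reading output. The sketch below reuses the field names from the sample response above:

```python
import json

# Sample response copied from the output above
sample = '''{
  "delayTime": 249,
  "executionTime": 88,
  "id": "sync-b2188a79-3f9f-4b99-b4d1-18273db3f428-u1",
  "output": "69.30.85.69",
  "status": "COMPLETED"
}'''

result = json.loads(sample)
if result["status"] == "COMPLETED":
    print("worker IP:", result["output"])  # worker IP: 69.30.85.69
```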

Conclusion

In this tutorial, you've learned how to get the IP address of the machine your code is running on and deploy your code to the Tensor One platform. You've also learned how to set up a project environment, run a development server, and interact with your code using the Tensor One API. With this knowledge, you can now use this code as a Serverless Endpoint or continue developing your project, testing, and deploying it to the Tensor One platform.
