GET /instances
Get active and pending instances.
curl --request GET \
  --url https://api.shadeform.ai/v1/instances \
  --header 'X-API-KEY: <api-key>'
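The same request can be sketched with Python's standard library; the endpoint URL and the X-API-KEY header come from the curl example above, and the placeholder API key is yours to fill in:

```python
import json
import urllib.request

API_KEY = "<api-key>"  # replace with your Shadeform API key

# Build the GET request with the X-API-KEY authorization header.
req = urllib.request.Request(
    "https://api.shadeform.ai/v1/instances",
    headers={"X-API-KEY": API_KEY},
    method="GET",
)

# Sending it returns the JSON body shown below:
# with urllib.request.urlopen(req) as resp:
#     instances = json.loads(resp.read())["instances"]
```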
{
  "instances": [
    {
      "id": "d290f1ee-6c54-4b01-90e6-d701748f0851",
      "cloud": "hyperstack",
      "region": "canada-1",
      "shade_instance_type": "A6000",
      "cloud_instance_type": "gpu_1x_a6000",
      "cloud_assigned_id": "13b057d7-e266-4869-985f-760fe75a78b3",
      "shade_cloud": true,
      "name": "cool-gpu-server",
      "configuration": {
        "memory_in_gb": 12,
        "storage_in_gb": 256,
        "vcpus": 6,
        "num_gpus": 1,
        "gpu_type": "A100",
        "interconnect": "pcie",
        "vram_per_gpu_in_gb": 48,
        "os": "ubuntu_22_shade_os"
      },
      "ip": "1.0.0.1",
      "ssh_user": "shadeform",
      "ssh_port": 22,
      "status": "active",
      "cost_estimate": "103.4",
      "hourly_price": 210,
      "launch_configuration": {
        "type": "docker",
        "docker_configuration": {
          "image": "vllm/vllm-openai:latest",
          "args": "--model mistralai/Mistral-7B-v0.1",
          "shared_memory_in_gb": 8,
          "envs": [
            {
              "name": "HUGGING_FACE_HUB_TOKEN",
              "value": "hugging_face_api_token"
            }
          ],
          "port_mappings": [
            {
              "host_port": 80,
              "container_port": 8000
            }
          ],
          "volume_mounts": [
            {
              "host_path": "~/.cache/huggingface",
              "container_path": "/root/.cache/huggingface"
            }
          ]
        },
        "script_configuration": {
          "base64_script": "IyEvYmluL2Jhc2gKCiMgRW5kbGVzcyBsb29wCndoaWxlIHRydWUKZG8KICAgICMgRmV0Y2ggYSBjYXQgZmFjdCB3aXRoIGEgbWF4aW11bSBsZW5ndGggb2YgMTQwIGNoYXJhY3RlcnMKICAgIGN1cmwgLS1uby1wcm9ncmVzcy1tZXRlciBodHRwczovL2NhdGZhY3QubmluamEvZmFjdD9tYXhfbGVuZ3RoPTE0MAoKICAgICMgUHJpbnQgYSBuZXdsaW5lIGZvciByZWFkYWJpbGl0eQogICAgZWNobwoKICAgICMgU2xlZXAgZm9yIDMgc2Vjb25kcyBiZWZvcmUgdGhlIG5leHQgaXRlcmF0aW9uCiAgICBzbGVlcCAzCmRvbmUKCgo="
        }
      },
      "port_mappings": [
        {
          "internal_port": 8000,
          "external_port": 80
        }
      ],
      "created_at": "2016-08-29T09:12:33.001Z",
      "deleted_at": "2016-08-29T09:12:33.001Z"
    }
  ]
}
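As a quick sketch of walking this response, the helper below pulls the SSH connection details out of each returned instance; the field names are taken from the example payload above:

```python
def ssh_targets(payload):
    """Return (name, user@ip, port) for each instance in a /instances response."""
    targets = []
    for inst in payload.get("instances", []):
        targets.append((inst["name"], f'{inst["ssh_user"]}@{inst["ip"]}', inst["ssh_port"]))
    return targets

# A trimmed-down copy of the example response:
sample = {
    "instances": [
        {"name": "cool-gpu-server", "ssh_user": "shadeform", "ip": "1.0.0.1", "ssh_port": 22}
    ]
}
print(ssh_targets(sample))  # [('cool-gpu-server', 'shadeform@1.0.0.1', 22)]
```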
Authorizations
X-API-KEY (header, required): your Shadeform API key.
Response
The unique identifier for the instance. Used as the {id} path parameter in the /instances/{id}/info and /instances/{id}/delete APIs.
"d290f1ee-6c54-4b01-90e6-d701748f0851"
Specifies the underlying cloud provider. See this explanation for more details.
Available options: aws, azure, lambdalabs, tensordock, runpod, latitude, jarvislabs, oblivus, paperspace, datacrunch, massedcompute, vultr
"hyperstack"
Specifies the region.
"canada-1"
The Shadeform standardized instance type. See this explanation for more details.
"A6000"
The instance type for the underlying cloud provider. See this explanation for more details.
"gpu_1x_a6000"
The unique identifier of the instance issued by the underlying cloud provider.
"13b057d7-e266-4869-985f-760fe75a78b3"
Specifies if the instance is launched in Shade Cloud or in a linked cloud account.
true
The name of the instance.
"cool-gpu-server"
The amount of memory for the instance in gigabytes. Note that this is not VRAM which is determined by GPU type and the number of GPUs.
12
The amount of storage for the instance. If this storage is too low for the instance type, please email support@shadeform.ai as the storage may be adjustable.
256
The number of vCPUs for the instance.
6
The number of GPUs for the instance.
1
The type of GPU for the instance.
"A100"
The type of GPU interconnect.
"pcie"
The video memory per GPU for the instance in gigabytes.
48
The operating system of the instance.
"ubuntu_22_shade_os"
The public IP address of the instance. In select cases, it may also be the DNS.
"1.0.0.1"
The SSH user used to SSH into the instance.
"shadeform"
The SSH port of the instance. In most cases, this will be port 22 but for some clouds, this may be a different port.
22
The status of the instance.
Available options: pending, active, deleted
"active"
The cost incurred by the instance. This is only the cost billed via Shadeform. If the instance is deployed in your own cloud account, then all billing is through your cloud provider.
"103.4"
The timestamp of when the instance was created in UTC.
"2016-08-29T09:12:33.001Z"
The timestamp of when the instance was deleted in UTC.
"2016-08-29T09:12:33.001Z"
The hourly price of the instance in cents.
210
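Because hourly_price is in cents while cost_estimate is a dollar string, the unit conversion is easy to get wrong; a minimal sketch:

```python
def hourly_price_dollars(hourly_price_cents):
    """Convert the hourly_price field (cents) to dollars per hour."""
    return hourly_price_cents / 100

# The example instance's hourly_price of 210 is $2.10/hour:
print(hourly_price_dollars(210))  # 2.1

# A hypothetical running-cost estimate in dollars for a given uptime:
def cost_for_hours(hourly_price_cents, hours):
    return hourly_price_dollars(hourly_price_cents) * hours
```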
Defines automatic actions after the instance becomes active.
Specifies the type of launch configuration. See Launch Configuration for more details.
Available options: docker, script
"docker"
May only be used if launch_configuration.type is 'docker'. Use docker_configuration to automatically pull and run a docker image. See this tutorial for examples.
Specifies the docker image to be pulled and run on the instance at startup.
"vllm/vllm-openai:latest"
Specifies the container arguments passed into the image at runtime.
"--model mistralai/Mistral-7B-v0.1"
Describes the amount of shared memory allocated for the container. Equivalent to using the --shm-size flag in the docker cli. If shared_memory_in_gb is not specified, then the container will use the host IPC namespace, which is the equivalent of --ipc=host.
8
List of environment variable name-value pairs that will be passed to the docker container.
Environment variables for the container image.
List of port mappings between the host instance and the docker container. Equivalent of -p flag for docker run command.
Maps the public instance port to a port on the container.
List of volume mounts between the host instance and the docker container. Equivalent of -v flag for docker run command.
Mounts the host volume to a container file path.
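To make the mapping from these fields to docker run flags concrete, this hypothetical helper assembles the equivalent command line from a docker_configuration block. It is a sketch of the documented equivalences (-e, -p, -v, --shm-size, --ipc=host), not necessarily the exact command Shadeform runs:

```python
import shlex

def docker_run_command(cfg):
    """Assemble a docker run command equivalent to a docker_configuration block."""
    parts = ["docker", "run", "-d"]
    if "shared_memory_in_gb" in cfg:
        parts += ["--shm-size", f'{cfg["shared_memory_in_gb"]}g']  # --shm-size flag
    else:
        parts += ["--ipc=host"]  # host IPC namespace when shared memory is unset
    for env in cfg.get("envs", []):                      # envs -> -e NAME=value
        parts += ["-e", f'{env["name"]}={env["value"]}']
    for pm in cfg.get("port_mappings", []):              # port_mappings -> -p host:container
        parts += ["-p", f'{pm["host_port"]}:{pm["container_port"]}']
    for vm in cfg.get("volume_mounts", []):              # volume_mounts -> -v host:container
        parts += ["-v", f'{vm["host_path"]}:{vm["container_path"]}']
    parts.append(cfg["image"])                           # image, then container args
    if cfg.get("args"):
        parts += shlex.split(cfg["args"])
    return " ".join(parts)
```

With the example docker_configuration above, this yields docker run -d --shm-size 8g -e HUGGING_FACE_HUB_TOKEN=... -p 80:8000 -v ~/.cache/huggingface:/root/.cache/huggingface vllm/vllm-openai:latest --model mistralai/Mistral-7B-v0.1.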
May only be used if launch_configuration.type is 'script'. Configures a startup script to run automatically after the instance is active. See this tutorial for examples.
A startup script that is base64 encoded.
"IyEvYmluL2Jhc2gKCiMgRW5kbGVzcyBsb29wCndoaWxlIHRydWUKZG8KICAgICMgRmV0Y2ggYSBjYXQgZmFjdCB3aXRoIGEgbWF4aW11bSBsZW5ndGggb2YgMTQwIGNoYXJhY3RlcnMKICAgIGN1cmwgLS1uby1wcm9ncmVzcy1tZXRlciBodHRwczovL2NhdGZhY3QubmluamEvZmFjdD9tYXhfbGVuZ3RoPTE0MAoKICAgICMgUHJpbnQgYSBuZXdsaW5lIGZvciByZWFkYWJpbGl0eQogICAgZWNobwoKICAgICMgU2xlZXAgZm9yIDMgc2Vjb25kcyBiZWZvcmUgdGhlIG5leHQgaXRlcmF0aW9uCiAgICBzbGVlcCAzCmRvbmUKCgo="
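To produce a base64_script value, encode the raw script bytes with standard base64; a minimal sketch:

```python
import base64

script = "#!/bin/bash\necho hello\n"
encoded = base64.b64encode(script.encode()).decode()

# Decoding recovers the original script:
assert base64.b64decode(encoded).decode() == script
print(encoded)  # IyEvYmluL2Jhc2gKZWNobyBoZWxsbwo=
```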