Serve Model Using vLLM
Model deployment is a very common GPU use case. With Shadeform, it's easy to deploy models directly onto the most affordable GPUs on the market with just a few commands.
In this guide, we will deploy Mistral-7B-v0.1 with vLLM onto an A6000.
Setup
This guide builds on our other guides for finding the best GPU and for deploying GPU containers.
We have a Python notebook ready to go for deploying this model, which you can find here.
The requirements are simple: in a Python environment, install `requests` (and optionally `openai`), for example with `pip install requests openai`.
Then, in `basic_serving_vllm.ipynb`, you will need to input your Shadeform API key.
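As a rough sketch of what that setup looks like (the variable names here are illustrative, and we are assuming the API key is passed in an `X-API-KEY` header as described in Shadeform's API reference):

```python
import requests

# Paste your Shadeform API key here (found in the Shadeform console).
SHADEFORM_API_KEY = "<your-shadeform-api-key>"

# Base URL and headers reused for every Shadeform API call in this guide.
BASE_URL = "https://api.shadeform.ai/v1"
HEADERS = {
    "X-API-KEY": SHADEFORM_API_KEY,  # header name assumed; see the API reference
    "Content-Type": "application/json",
}
```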
Serving a Model
Once we've chosen an instance, we deploy a model serving container with a request payload like the one sketched below.
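The notebook contains the exact payload; the version here is only an illustrative sketch. The Shadeform-specific field names (`shade_instance_type`, `launch_configuration`, `docker_configuration`, `port_mappings`, and so on), the cloud and region values, and the response field used for the instance ID are assumptions to be checked against the Create Instance API Reference, and the Hugging Face token is a placeholder:

```python
create_payload = {
    "cloud": "hyperstack",            # example cloud offering an A6000
    "region": "canada-1",             # example region; pick whatever is available
    "shade_instance_type": "A6000",   # the GPU we chose for Mistral-7B
    "shade_cloud": True,
    "name": "vllm-mistral-7b",
    "launch_configuration": {
        "type": "docker",
        "docker_configuration": {
            # vLLM's OpenAI-compatible server image
            "image": "vllm/vllm-openai:latest",
            # Serve Mistral-7B-v0.1 (vLLM listens on port 8000 by default)
            "args": "--model mistralai/Mistral-7B-v0.1",
            "envs": [
                {"name": "HUGGING_FACE_HUB_TOKEN", "value": "<your-hf-token>"}
            ],
            "port_mappings": [
                {"host_port": 8000, "container_port": 8000}
            ],
        },
    },
}

# Create the instance; the response should include an instance ID we can poll later.
create_resp = requests.post(
    f"{BASE_URL}/instances/create", headers=HEADERS, json=create_payload
)
create_resp.raise_for_status()
instance_id = create_resp.json()["id"]  # response field assumed; see the API reference
```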
Once we make the request, Shadeform will provision the machine and deploy a Docker container based on the image, arguments, and environment variables we selected. This might take 5-10 minutes depending on the machine and the size of the model weights you choose. For more information on the API fields, check out the Create Instance API Reference.
We can see that this will deploy an OpenAI-compatible server with vLLM serving Mistral-7B-v0.1.
Checking on our model server
There are three main steps we need to wait for: provisioning the VM, downloading the image, and spinning up vLLM.
This cell will print the IP address once the VM has provisioned. After that, the image needs to download, and vLLM needs to download the model weights and spin up, which should take a few more minutes.
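A minimal sketch of that polling cell, assuming the `instance_id` from the create call above and an `/instances/{id}/info` endpoint that returns the IP (both the path and the response field are assumptions; check Shadeform's API reference):

```python
import time

# Poll the instance until Shadeform reports an IP address.
ip_address = None
while ip_address is None:
    info = requests.get(
        f"{BASE_URL}/instances/{instance_id}/info", headers=HEADERS
    ).json()
    ip_address = info.get("ip")  # field name assumed; see the API reference
    if ip_address is None:
        time.sleep(30)

print(f"Instance is up at {ip_address}")
```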
Watching via the notebook
Once the model is ready, this code will output the model list and a response to our query. We can use either `requests` or OpenAI's Python client.
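As an illustration (assuming vLLM's default port 8000 and the standard OpenAI-compatible routes), the query cell looks roughly like this:

```python
from openai import OpenAI

base_url = f"http://{ip_address}:8000/v1"

# List the models vLLM is serving.
print(requests.get(f"{base_url}/models").json())

# Query with the plain requests library...
completion = requests.post(
    f"{base_url}/completions",
    json={
        "model": "mistralai/Mistral-7B-v0.1",
        "prompt": "San Francisco is a",
        "max_tokens": 64,
    },
).json()
print(completion["choices"][0]["text"])

# ...or with the OpenAI client pointed at our server (vLLM ignores the key,
# so any placeholder value works).
client = OpenAI(base_url=base_url, api_key="EMPTY")
response = client.completions.create(
    model="mistralai/Mistral-7B-v0.1",
    prompt="San Francisco is a",
    max_tokens=64,
)
print(response.choices[0].text)
```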
Watching with the Shadeform UI
Or once we’ve made the request, we can watch the logs under Running Instances. Once it is ready to serve it should look something like this:
Happy Serving!