This is an example for Cortex release 0.19 and may not deploy correctly on other releases of Cortex.
This example shows how to deploy a realtime text generation API using a GPT-2 model from Hugging Face's transformers library.
Create a Python file named predictor.py.
Define a Predictor class with a constructor that loads and initializes the model.
Add a predict function that will accept a payload and return the generated text.
```python
# predictor.py

import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel


class PythonPredictor:
    def __init__(self, config):
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        print(f"using device: {self.device}")
        self.tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
        self.model = GPT2LMHeadModel.from_pretrained("gpt2").to(self.device)

    def predict(self, payload):
        input_length = len(payload["text"].split())
        tokens = self.tokenizer.encode(payload["text"], return_tensors="pt").to(self.device)
        prediction = self.model.generate(tokens, max_length=input_length + 20, do_sample=True)
        return self.tokenizer.decode(prediction[0])
```
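If you want to sanity-check the predictor before deploying, you can exercise the class directly in Python. The snippet below is only an illustration (the file name `local_test.py` and the empty config dict are assumptions, not part of the Cortex workflow); it instantiates `PythonPredictor` and calls `predict` with the same payload shape the API will receive:

```python
# local_test.py -- illustrative sanity check, not required by Cortex
from predictor import PythonPredictor

# __init__ in this example does not read its config argument, so an empty dict suffices
predictor = PythonPredictor(config={})

# mimic the JSON body that the deployed API will receive
print(predictor.predict({"text": "machine learning is"}))
```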
Here are the complete Predictor docs.
Create a requirements.txt file to specify the dependencies needed by predictor.py. Cortex will automatically install them into your runtime once you deploy:
```text
# requirements.txt

torch
transformers==3.0.*
```
Create a cortex.yaml file and add the configuration below. A RealtimeAPI provides a runtime for inference and makes your predictor.py implementation available as a web service that can serve real-time predictions:
```yaml
# cortex.yaml

- name: text-generator
  kind: RealtimeAPI
  predictor:
    type: python
    path: predictor.py
```
Here are the complete API configuration docs.
cortex deploy takes your Predictor implementation along with the configuration from cortex.yaml and creates a web API:
```bash
$ cortex deploy

creating text-generator (RealtimeAPI)
```
Monitor the status of your API using cortex get:
```bash
$ cortex get --watch

env     realtime api     status     last update   avg request   2XX
local   text-generator   updating   8s            -             -
```
Show additional information for your API (e.g. its endpoint) using cortex get <api_name>:
```bash
$ cortex get text-generator

status   last update   avg request   2XX
live     1m            -             -

endpoint: http://localhost:8888
...
```
You can also stream logs from your API:
```bash
$ cortex logs text-generator
...
```
Once your API is live, use curl to test your API (it will take a few seconds to generate the text):
```bash
$ curl http://localhost:8888 \
    -X POST -H "Content-Type: application/json" \
    -d '{"text": "machine learning is"}'

"machine learning is ..."
```
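If you prefer to call the API from Python instead of curl, a minimal client might look like the sketch below. It assumes the `requests` package is installed (not part of this example's requirements.txt) and uses the local endpoint shown above:

```python
# client.py -- a minimal client sketch, assuming the `requests` package is installed
import requests

# endpoint reported by `cortex get text-generator` above
response = requests.post(
    "http://localhost:8888",
    json={"text": "machine learning is"},
)
print(response.text)
```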
Cortex can automatically provision infrastructure on your AWS account and deploy your models as production-ready web services:
```bash
$ cortex cluster up
```
This creates a Cortex cluster in your AWS account, which will take approximately 15 minutes.
After your cluster is created, you can deploy your model to your cluster by using the same code and configuration as before:
```bash
$ cortex deploy --env aws

creating text-generator (RealtimeAPI)
```
Note that the --env flag specifies the name of the CLI environment to use. CLI environments contain the information necessary to connect to your cluster. The default environment is local, and when the cluster was created, a new environment named aws was created to point to the cluster. You can change the default environment with cortex env default <env_name>.
Monitor the status of your APIs using cortex get:
```bash
$ cortex get --watch

env     realtime api     status   up-to-date   requested   last update   avg request   2XX
aws     text-generator   live     1            1           1m            -             -
local   text-generator   live     1            1           17m           3.1285 s      1
```
The output above indicates that one replica of your API was requested and is available to serve predictions. Cortex will automatically launch more replicas if the load increases and will spin down replicas if there is unused capacity.
Show additional information for your API (e.g. its endpoint) using cortex get <api_name>:
```bash
$ cortex get text-generator --env aws

status   up-to-date   requested   last update   avg request   2XX
live     1            1           17m           -             -

metrics dashboard: https://us-west-2.console.aws.amazon.com/cloudwatch/home#dashboards:name=cortex

endpoint: https://***.execute-api.us-west-2.amazonaws.com/text-generator
...
```
Use your new endpoint to make requests to your API on AWS:
```bash
$ curl https://***.execute-api.us-west-2.amazonaws.com/text-generator \
    -X POST -H "Content-Type: application/json" \
    -d '{"text": "machine learning is"}'

"machine learning is ..."
```
When you make a change to your predictor.py or your cortex.yaml, you can update your API by re-running cortex deploy.
Let's modify predictor.py to set the length of the generated text based on a query parameter:
```python
# predictor.py

import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel


class PythonPredictor:
    def __init__(self, config):
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        print(f"using device: {self.device}")
        self.tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
        self.model = GPT2LMHeadModel.from_pretrained("gpt2").to(self.device)

    def predict(self, payload, query_params):  # this line is updated
        input_length = len(payload["text"].split())
        output_length = int(query_params.get("length", 20))  # this line is added
        tokens = self.tokenizer.encode(payload["text"], return_tensors="pt").to(self.device)
        prediction = self.model.generate(tokens, max_length=input_length + output_length, do_sample=True)  # this line is updated
        return self.tokenizer.decode(prediction[0])
```
Run cortex deploy to perform a rolling update of your API:
```bash
$ cortex deploy --env aws

updating text-generator (RealtimeAPI)
```
You can track the status of your API using cortex get:
```bash
$ cortex get --env aws --watch

realtime api     status     up-to-date   stale   requested   last update   avg request   2XX
text-generator   updating   0            1       1           29s           -             -
```
As your new implementation is initializing, the old implementation will continue to be used to respond to prediction requests. Eventually the API's status will become "live" (with one up-to-date replica), and traffic will be routed to the updated version.
Try your new code:
```bash
$ curl https://***.execute-api.us-west-2.amazonaws.com/text-generator?length=30 \
    -X POST -H "Content-Type: application/json" \
    -d '{"text": "machine learning is"}'

"machine learning is ..."
```
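The same request can be made from Python. The sketch below again assumes the `requests` package; it passes length as a query parameter, which arrives in predict as query_params, and you would substitute your API's real endpoint for the placeholder URL:

```python
# client.py -- illustrative sketch; replace the placeholder URL with your API's endpoint
import requests

response = requests.post(
    "https://***.execute-api.us-west-2.amazonaws.com/text-generator",
    params={"length": 30},  # arrives in predict() as query_params["length"]
    json={"text": "machine learning is"},
)
print(response.text)
```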
If your cortex cluster is using GPU instances (configured during cluster creation), you can run your text generator API on GPUs. Add the compute field to your API configuration:
```yaml
# cortex.yaml

- name: text-generator
  kind: RealtimeAPI
  predictor:
    type: python
    path: predictor.py
  compute:
    gpu: 1
```
Run cortex deploy to update your API with this configuration:
```bash
$ cortex deploy --env aws

updating text-generator (RealtimeAPI)
```
You can use cortex get to check the status of your API, and once it's live, prediction requests should be faster.
In development environments, you may wish to disable rolling updates since rolling updates require additional cluster resources. For example, a rolling update of a GPU-based API will require at least two GPUs, which can require a new instance to spin up if your cluster only has one instance. To disable rolling updates, set max_surge to 0 in the update_strategy configuration:
```yaml
# cortex.yaml

- name: text-generator
  kind: RealtimeAPI
  predictor:
    type: python
    path: predictor.py
  compute:
    gpu: 1
  update_strategy:
    max_surge: 0
```
Run cortex delete to delete each API:
```bash
$ cortex delete text-generator --env local

deleting text-generator

$ cortex delete text-generator --env aws

deleting text-generator
```
Running cortex delete will free up cluster resources and allow Cortex to scale down to the minimum number of instances you specified during cluster creation. It will not spin down your cluster.
Deploy another one of our examples.
See our exporting guide for how to export your model to use in an API.
Try the batch API tutorial to learn how to deploy batch APIs in Cortex.
See our traffic splitter example for how to deploy multiple APIs and set up a traffic splitter.
See uninstall if you'd like to spin down your cluster.