Once you've exported your model, implemented a Predictor, and configured your API, you're ready to deploy a Batch API.
The cortex deploy command collects your configuration and source code and deploys your API on your cluster:
```bash
$ cortex deploy

created image-classifier (BatchAPI)
```
APIs are declarative, so to update your API, you can modify your source code and/or configuration and run cortex deploy again.
After deploying a Batch API, you can use cortex get <api_name> to display the Batch API's endpoint, which you can use to make the following requests:
- Submit a batch job
- Get the status of a job
- Stop a job
You can find documentation for the Batch API endpoint here.
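If you prefer to script these operations rather than use the CLI, the endpoint can be called with any HTTP client. The sketch below submits a job with Python's requests library; the endpoint URL is the placeholder from the examples below, the bucket and item values are hypothetical, and the payload fields (workers, item_list, batch_size) are assumptions, so check the endpoint documentation for the exact request schema.

```python
# A minimal sketch of submitting a batch job over HTTP, assuming the endpoint
# reported by `cortex get <api_name>` accepts a POST with a JSON job spec.
# The field names below (workers, item_list, batch_size) are illustrative;
# see the Batch API endpoint documentation for the exact schema.
import requests

endpoint = "http://***.amazonaws.com/image-classifier"  # from `cortex get image-classifier`

job_spec = {
    "workers": 2,  # assumed field: number of workers to run the job
    "item_list": {  # assumed field: items to partition into batches
        "items": [f"s3://my-bucket/images/{i}.jpg" for i in range(240)],  # hypothetical items
        "batch_size": 10,
    },
}

response = requests.post(endpoint, json=job_spec)
response.raise_for_status()
print(response.json())  # the response contains the job ID used in the commands below
```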
The cortex get command displays the status of all of your APIs:
```bash
$ cortex get

env   batch api          running jobs   latest job id                          last update
aws   image-classifier   1              69d9c0013c2d0d97 (submitted 30s ago)   46s
```
cortex get <api_name> shows additional information about a specific Batch API and a summary of all currently running and recently submitted jobs:
```bash
$ cortex get image-classifier

job id             status                    progress   failed   start time                 duration
69d9c0013c2d0d97   running                   1/24       0        29 Jul 2020 14:38:01 UTC   30s
69da5b1f8cd3b2d3   completed with failures   15/16      1        29 Jul 2020 13:38:01 UTC   5m20s
69da5bc32feb6aa0   succeeded                 40/40      0        29 Jul 2020 12:38:01 UTC   10m21s
69da5bd5b2f87258   succeeded                 34/34      0        29 Jul 2020 11:38:01 UTC   8m54s

endpoint: http://***.amazonaws.com/image-classifier
...
```
Appending the --watch flag will re-run the cortex get command every 2 seconds.
Once a job has been submitted to your Batch API (see here), you can use the Job ID from the job submission response to get the job's status, stream its logs, and stop it using the CLI.
After submitting a job, you can use the cortex get <api_name> <job_id> command to show information about the job:
```bash
$ cortex get image-classifier 69d9c0013c2d0d97

job id: 69d9c0013c2d0d97
status: running

start time: 29 Jul 2020 14:38:01 UTC
end time:   -
duration:   32s

batch stats
total   succeeded   failed   avg time per batch
24      1           0        20s

worker stats
requested   running   failed   succeeded
2           2         0        0

job endpoint: https://***.amazonaws.com/image-classifier/69d9c0013c2d0d97
```
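The job endpoint shown at the bottom of this output can also be queried directly. The sketch below assumes a GET request to the job endpoint returns the job's status as JSON, mirroring what cortex get prints; the URL is copied from the example above, and the response schema is described in the Batch API endpoint documentation.

```python
# A sketch of polling a job's status over HTTP instead of the CLI, assuming
# a GET on the job endpoint returns the job's status as JSON.
import time

import requests

# Job endpoint copied from the `cortex get` output above (a placeholder URL).
job_endpoint = "https://***.amazonaws.com/image-classifier/69d9c0013c2d0d97"


def print_job_status(endpoint: str, polls: int = 5, interval: float = 10.0) -> None:
    """Fetch and print the job's status JSON a few times, similar to `cortex get --watch`."""
    for _ in range(polls):
        response = requests.get(endpoint)
        response.raise_for_status()
        print(response.json())  # status, batch stats, and worker stats (see the endpoint docs)
        time.sleep(interval)


print_job_status(job_endpoint)
```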
You can use cortex logs <api_name> <job_id> to stream logs from a job:
```bash
$ cortex logs image-classifier 69d9c0013c2d0d97

started enqueuing batches
partitioning 240 items found in job submission into 24 batches of size 10
completed enqueuing a total of 24 batches
spinning up workers...
2020-07-30 16:50:30.147522:cortex:pid-1:INFO:downloading the project code
2020-07-30 16:50:30.268987:cortex:pid-1:INFO:downloading the python serving image
...
```
You can use cortex delete <api_name> <job_id> to stop a running job:
```bash
$ cortex delete image-classifier 69d9c0013c2d0d97

stopped job 69d9c0013c2d0d97
```
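If you need to stop a job from code rather than the CLI, the sketch below assumes the job endpoint also accepts an HTTP DELETE to stop the job (the HTTP counterpart of cortex delete <api_name> <job_id>); confirm the exact method and URL against the Batch API endpoint documentation.

```python
# A sketch of stopping a running job over HTTP. The assumption that a DELETE
# on the job endpoint stops the job mirrors `cortex delete <api_name> <job_id>`.
import requests

# Job endpoint copied from the `cortex get` output above (a placeholder URL).
job_endpoint = "https://***.amazonaws.com/image-classifier/69d9c0013c2d0d97"

response = requests.delete(job_endpoint)
response.raise_for_status()
print(response.text)  # confirmation that the job was stopped
```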
Use the cortex delete command to delete your API:
```bash
$ cortex delete my-api

deleting my-api
```
- Tutorial provides a step-by-step walkthrough of deploying an image classification batch API
- CLI documentation lists all CLI commands
- Examples demonstrate how to deploy models from common ML libraries