We recommend running your development environment on a cloud instance, e.g. an AWS EC2 instance or a GCP VM, since the workflow involves frequent pushes to a Docker registry. There are a variety of ways to develop on a remote VM; feel free to reach out on gitter and we can point you in the right direction based on your operating system and editor preferences.
- Go (>=1.14)
- Docker
- eksctl
- kubectl
- aws-cli
Also, please install the VS Code YAML extension and enable auto-formatting for YAML files.
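For example, assuming the Red Hat "YAML" extension, a settings fragment like the following enables auto-formatting on save (a minimal sketch; adjust to your setup):

```json
// .vscode/settings.json (example; assumes the Red Hat "YAML" VS Code extension)
{
  "yaml.format.enable": true,
  "[yaml]": {
    "editor.formatOnSave": true
  }
}
```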
To install Go on Linux, run:

```bash
wget https://dl.google.com/go/go1.14.7.linux-amd64.tar.gz && \
  sudo tar -xvf go1.14.7.linux-amd64.tar.gz && \
  sudo mv go /usr/local && \
  rm go1.14.7.linux-amd64.tar.gz
```
To install Docker on Ubuntu, run:

```bash
sudo apt install docker.io && \
  sudo systemctl start docker && \
  sudo systemctl enable docker && \
  sudo groupadd docker && \
  sudo gpasswd -a $USER docker
```
To install eksctl, run:

```bash
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp && \
  sudo mv /tmp/eksctl /usr/local/bin
```
To install kubectl on Linux, run:

```bash
curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl && \
  chmod +x ./kubectl && \
  sudo mv ./kubectl /usr/local/bin/kubectl
```
Follow these instructions to install aws-cli (v1).
E.g. to install it globally, run:
```bash
sudo python -m pip install awscli
aws configure
```
Clone the project:
```bash
git clone https://github.com/cortexlabs/cortex.git
cd cortex
```
Run the tests:
```bash
make test
```
Create a config directory in the repo's root directory:
```bash
mkdir dev/config
```
Next, create dev/config/build.sh. Add the following content to it (you may use a different region for REGISTRY_REGION):
```bash
export CORTEX_VERSION="master"
export REGISTRY_REGION="us-west-2"
```
Create the AWS Elastic Container Registry:
```bash
make registry-create
```
Take note of the registry URL; it will be needed shortly.
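For reference, ECR registry URLs follow a predictable pattern, so you can reconstruct yours from your AWS account ID and region. A sketch with a hypothetical account ID:

```bash
# illustration only: ECR registry URLs follow this pattern
ACCOUNT_ID="123456789012"  # hypothetical AWS account ID; use your own
REGION="us-west-2"         # should match REGISTRY_REGION
REGISTRY_URL="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"
echo "$REGISTRY_URL"
```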
Create the S3 buckets:
```bash
aws s3 mb s3://cortex-cluster-<your_name>
aws s3 mb s3://cortex-cli-<your_name>  # if you'll be uploading your compiled CLI
```
Update dev/config/build.sh. Paste the following config, and update CLI_BUCKET_NAME, CLI_BUCKET_REGION, REGISTRY_URL (the registry URL noted in the previous step), and REGISTRY_REGION accordingly:
```bash
export CORTEX_VERSION="master"
export REGISTRY_REGION="us-west-2"
export REGISTRY_URL="XXXXXXXX.dkr.ecr.us-west-2.amazonaws.com"

# optional, only used for dev/build_cli.sh
export CLI_BUCKET_NAME="cortex-cli-<your_name>"
export CLI_BUCKET_REGION="us-west-2"
```
Create dev/config/cluster.yaml. Paste the following config, and update bucket, region, and all registry URLs accordingly:
```yaml
instance_type: m5.large
min_instances: 2
max_instances: 5
bucket: cortex-cluster-<your_name>
region: us-west-2
cluster_name: cortex

image_operator: XXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/cortexlabs/operator:latest
image_manager: XXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/cortexlabs/manager:latest
image_downloader: XXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/cortexlabs/downloader:latest
image_request_monitor: XXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/cortexlabs/request-monitor:latest
image_cluster_autoscaler: XXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/cortexlabs/cluster-autoscaler:latest
image_metrics_server: XXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/cortexlabs/metrics-server:latest
image_inferentia: XXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/cortexlabs/inferentia:latest
image_neuron_rtd: XXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/cortexlabs/neuron-rtd:latest
image_nvidia: XXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/cortexlabs/nvidia:latest
image_fluentd: XXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/cortexlabs/fluentd:latest
image_statsd: XXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/cortexlabs/statsd:latest
image_istio_proxy: XXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/cortexlabs/istio-proxy:latest
image_istio_pilot: XXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/cortexlabs/istio-pilot:latest
```
Add this to your bash profile (e.g. ~/.bash_profile, ~/.profile or ~/.bashrc):
```bash
export CORTEX_DEV_DEFAULT_PREDICTOR_IMAGE_REGISTRY="XXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/cortexlabs"  # set the default image registry for APIs
export CORTEX_TELEMETRY_SENTRY_DSN="https://[email protected]/1848098"  # redirect error reporting to our dev environment
export CORTEX_TELEMETRY_SEGMENT_WRITE_KEY="0WvoJyCey9z1W2EW7rYTPJUMRYat46dl"  # redirect analytics to our dev environment

export AWS_ACCESS_KEY_ID=""
export AWS_SECRET_ACCESS_KEY=""

alias cortex-dev='<path/to/cortex>/bin/cortex'  # replace <path/to/cortex> with the path to the cortex repo that you cloned
```
Refresh your bash profile:
```bash
. ~/.bash_profile  # or: . ~/.bashrc
```
Build and push all Cortex images:
```bash
make registry-all
```
Build the Cortex CLI:
```bash
make cli  # the binary will be placed in <path/to/cortex>/bin/cortex
cortex-dev version  # should show "master"
```
If you wish to parallelize the build process, install the GNU parallel utility. Once installed, set the NUM_BUILD_PROCS environment variable to the desired number of parallel jobs. For ease of use, export NUM_BUILD_PROCS in dev/config/build.sh.
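For example, a line like this in dev/config/build.sh enables parallel builds (the value is illustrative; `nproc` reports the number of available CPU cores):

```bash
# in dev/config/build.sh — run image builds in parallel (value shown is illustrative)
export NUM_BUILD_PROCS=$(nproc)
```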
Start Cortex:
```bash
make cluster-up
```
Tear down the Cortex cluster:
```bash
make cluster-down
```
Deploy an example:

```bash
cortex deploy examples/pytorch/iris-classifier --env aws
```
If you're making changes in the operator and want faster iterations, you can run an off-cluster operator.
- Run `make tools` to install the necessary dependencies to run the operator
- Run `make operator-stop` to stop the in-cluster operator
- Run `make devstart` to run the off-cluster operator (which rebuilds the CLI and restarts the operator when files change)
If you want to switch back to the in-cluster operator:
- Press ctrl+c to stop your off-cluster operator
- Run `make cluster-configure` to install the operator in your cluster
1. `make cluster-up`
2. `make devstart`
3. Make changes
4. `make registry-dev`
5. Test your changes with projects in `examples` or your own
See the `Makefile` for additional dev commands.
Feel free to chat with us if you have any questions.