Cortex looks for a file named `dependencies.sh` in the top level Cortex project directory (i.e. the directory which contains `cortex.yaml`). `dependencies.sh` is executed with the bash shell during the initialization of each replica (before Python packages are installed from `requirements.txt` or `conda-packages.txt`). Typical use cases include installing system packages required by your Predictor, building Python packages from source, etc. An example project layout, an example `dependencies.sh` which installs the `tree` utility, and a `predictor.py` which calls it are sketched at the end of this section.

If you need more control over the runtime environment, you can build a custom Docker image. Based on the Predictor and compute type specified in your API configuration, choose one of the following images as the base for your custom image:

* `cortexlabs/python-predictor-cpu-slim:0.22.1`
* `cortexlabs/python-predictor-gpu-slim:0.22.1-cuda10.1` (also available in cuda10.0, cuda10.2, and cuda11.0)
* `cortexlabs/python-predictor-inf-slim:0.22.1`
* `cortexlabs/tensorflow-predictor-slim:0.22.1`
* `cortexlabs/onnx-predictor-cpu-slim:0.22.1`
* `cortexlabs/onnx-predictor-gpu-slim:0.22.1`

Note the `-slim` suffix: Cortex's default API images are not `-slim`, since they have additional dependencies installed to cover common use cases. If you are building your own Docker image, starting with a `-slim` Predictor image will result in a smaller image size. A sample Dockerfile which installs three packages (`tree` is a system package, and `pandas` and `rdkit` are Python packages) is sketched below as well.

If you are using the TensorFlow Predictor, two containers run together to serve predictions: one runs your Predictor code (`cortexlabs/tensorflow-predictor`), and the other runs TensorFlow Serving to load the SavedModel (`cortexlabs/tensorflow-serving-gpu` or `cortexlabs/tensorflow-serving-cpu`). There is a second available field, `tensorflow_serving_image`, that can be used to override the TensorFlow Serving image. Both of the default serving images are based on the official TensorFlow Serving image (`tensorflow/serving`). Unless a different version of TensorFlow Serving is required, the TensorFlow Serving image shouldn't have to be overridden, since it is only used to load the SavedModel and does not run your Predictor code.
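An example project layout, with `dependencies.sh` alongside `cortex.yaml` (a sketch; the file names other than `cortex.yaml` and `dependencies.sh` are illustrative):

```text
./my-classifier/
├── cortex.yaml
├── predictor.py
├── requirements.txt
└── dependencies.sh
```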
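Here is an example `dependencies.sh` which installs the `tree` utility (a sketch; it assumes the Predictor images are Debian-based, so `apt-get` is available):

```bash
# dependencies.sh

# install the tree system package during replica initialization
apt-get update && apt-get install -y tree
```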
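The `tree` utility can now be called inside your `predictor.py`. A minimal sketch using `subprocess`, following the Python Predictor's class interface (the `predict()` body is a placeholder):

```python
# predictor.py

import subprocess


class PythonPredictor:
    def __init__(self, config):
        # the system package installed by dependencies.sh is now on the PATH
        subprocess.run(["tree"])

    def predict(self, payload):
        # placeholder prediction logic
        return "ok"
```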
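The sample Dockerfile below inherits from the Python CPU `-slim` image and installs the three packages mentioned above (a sketch; the cache-cleanup steps are conventional rather than required, and conda is assumed to be available since the images support `conda-packages.txt`):

```dockerfile
# Dockerfile

FROM cortexlabs/python-predictor-cpu-slim:0.22.1

# system package
RUN apt-get update \
    && apt-get install -y tree \
    && apt-get clean && rm -rf /var/lib/apt/lists/*

# Python packages
RUN pip install --no-cache-dir pandas \
    && conda install -y conda-forge::rdkit \
    && conda clean -a
```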
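Finally, custom images are referenced from your API configuration. A hedged sketch of the relevant `cortex.yaml` fields (the API name and registry URL are illustrative, and `tensorflow_serving_image` applies only to the TensorFlow Predictor):

```yaml
# cortex.yaml

- name: my-api
  predictor:
    type: tensorflow
    path: predictor.py
    image: <repository_url>/tensorflow-predictor:custom  # your custom Predictor image
    tensorflow_serving_image: <repository_url>/tensorflow-serving-cpu:custom  # optional override
```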