Transformers run both when transforming data before model training and when responding to prediction requests. You may define transformers in both a PySpark context and a Python context. The PySpark implementation is optional but recommended for large-scale data processing.
```python
def transform_spark(data, columns, args, transformed_column):
    """Transform a column in a PySpark context.

    This function is optional (recommended for large-scale data processing).

    Args:
        data: A dataframe including all of the raw columns.

        columns: A dict with the same structure as the transformer's input
            columns specifying the names of the dataframe's columns that
            contain the input columns.

        args: A dict with the same structure as the transformer's input args
            containing the runtime values of the args.

        transformed_column: The name of the column containing the transformed
            data that is to be appended to the dataframe.

    Returns:
        The original 'data' dataframe with an added column with the name of the
        transformed_column arg containing the transformed data.
    """
    pass


def transform_python(sample, args):
    """Transform a single data sample outside of a PySpark context.

    This function is required.

    Args:
        sample: A dict with the same structure as the transformer's input
            columns containing a data sample to transform.

        args: A dict with the same structure as the transformer's input args
            containing the runtime values of the args.

    Returns:
        The transformed value.
    """
    pass


def reverse_transform_python(transformed_value, args):
    """Reverse transform a single data sample outside of a PySpark context.

    This function is optional, and only relevant for certain one-to-one
    transformers.

    Args:
        transformed_value: The transformed data value.

        args: A dict with the same structure as the transformer's input args
            containing the runtime values of the args.

    Returns:
        The raw data value that corresponds to the transformed value.
    """
    pass
```
For example, a transformer that normalizes a numeric column:

```python
def transform_spark(data, columns, args, transformed_column):
    return data.withColumn(
        transformed_column,
        (data[columns["num"]] - args["mean"]) / args["stddev"],
    )


def transform_python(sample, args):
    return (sample["num"] - args["mean"]) / args["stddev"]


def reverse_transform_python(transformed_value, args):
    return args["mean"] + (transformed_value * args["stddev"])
```
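As a quick sanity check, the Python functions of the normalization example can be exercised directly outside of a PySpark context. The `args` values below are hypothetical runtime values, standing in for whatever Cortex would pass at prediction time:

```python
def transform_python(sample, args):
    # Normalize: subtract the mean, divide by the standard deviation.
    return (sample["num"] - args["mean"]) / args["stddev"]


def reverse_transform_python(transformed_value, args):
    # Invert the normalization to recover the raw value.
    return args["mean"] + (transformed_value * args["stddev"])


# Hypothetical runtime args and sample (for illustration only).
args = {"mean": 10.0, "stddev": 2.0}
sample = {"num": 14.0}

transformed = transform_python(sample, args)            # (14 - 10) / 2 = 2.0
original = reverse_transform_python(transformed, args)  # 10 + 2.0 * 2 = 14.0
```

Because this is a one-to-one transformer, `reverse_transform_python` recovers the original raw value exactly.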
The following packages have been pre-installed and can be used in your implementations:
```text
pyspark==2.4.0
boto3==1.9.78
msgpack==0.6.1
numpy>=1.13.3,<2
requirements-parser==0.2.0
packaging==19.0.0
```
You can install additional PyPI packages and import your own Python packages. See Python Packages for more details.