
AsIsCategoricalBucketer

Bases: BaseBucketer

The AsIsCategoricalBucketer treats unique values as categories.

Support: numerical: false, categorical: true, supervised: false

It will assign each unique value a bucket number in order of appearance. If new data contains new, unknown labels, they will be replaced by 'Other'.

This bucketer is useful when you have data that is already sufficiently bucketed, but you would like to be able to bucket new data in the same way.

Example:

from skorecard import datasets
from skorecard.bucketers import AsIsCategoricalBucketer

X, y = datasets.load_uci_credit_card(return_X_y=True)
bucketer = AsIsCategoricalBucketer(variables=['EDUCATION'])
bucketer.fit_transform(X)
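
To illustrate how unseen labels are handled, here is a minimal sketch on made-up data (the column values below, including 'phd', are hypothetical and not part of the UCI dataset):

import pandas as pd
from skorecard.bucketers import AsIsCategoricalBucketer

X_train = pd.DataFrame({"EDUCATION": ["university", "high school", "university"]})
X_new = pd.DataFrame({"EDUCATION": ["high school", "phd"]})  # 'phd' was never seen during fit

bucketer = AsIsCategoricalBucketer(variables=["EDUCATION"])
bucketer.fit(X_train)
bucketer.transform(X_new)  # 'phd' is placed in the 'Other' bucket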
Source code in skorecard/bucketers/bucketers.py
class AsIsCategoricalBucketer(BaseBucketer):
    """
    The `AsIsCategoricalBucketer` treats unique values as categories.

    Support: ![badge](https://img.shields.io/badge/numerical-false-red) ![badge](https://img.shields.io/badge/categorical-true-green) ![badge](https://img.shields.io/badge/supervised-false-blue)

    It will assign each unique value a bucket number in order of appearance.
    If new data contains new, unknown labels, they will be replaced by 'Other'.

    This bucketer is useful when you have data that is already sufficiently bucketed,
    but you would like to be able to bucket new data in the same way.

    Example:

    ```python
    from skorecard import datasets
    from skorecard.bucketers import AsIsCategoricalBucketer

    X, y = datasets.load_uci_credit_card(return_X_y=True)
    bucketer = AsIsCategoricalBucketer(variables=['EDUCATION'])
    bucketer.fit_transform(X)
    ```
    """  # noqa

    def __init__(
        self, variables=[], specials={}, missing_treatment="separate", remainder="passthrough", get_statistics=True
    ):
        """Init the class.

        Args:
            variables (list): The features to bucket. Uses all features if not defined.
            specials: (nested) dictionary of special values that require their own binning.
                The dictionary has the following format:
                 {"<column name>" : {"name of special bucket" : <list with 1 or more values>}}
                For every feature that needs a special value, a dictionary must be passed as value.
                This dictionary contains a name of a bucket (key) and an array of unique values that should be put
                in that bucket.
                When special values are defined, they are not considered in the fitting procedure.
            missing_treatment: Defines how we treat the missing values present in the data.
                If a string, it must be one of the following options:
                    separate: Missing values get put in a separate 'Other' bucket: `-1`
                    most_risky: Missing values are put into the bucket containing the largest percentage of Class 1.
                    least_risky: Missing values are put into the bucket containing the largest percentage of Class 0.
                    most_frequent: Missing values are put into the most common bucket.
                    neutral: Missing values are put into the bucket with WoE closest to 0.
                    similar: Missing values are put into the bucket with WoE closest to the bucket with only missing values.
                    passthrough: Leaves missing values untouched.
                If a dict, it must be of the following format:
                    {"<column name>": <bucket_number>}
                    This bucket number is where we will put the missing values.
            remainder: How we want the non-specified columns to be transformed. It must be in ["passthrough", "drop"].
                passthrough (Default): all columns that were not specified in "variables" will be passed through.
                drop: all remaining columns that were not specified in "variables" will be dropped.
        """  # noqa
        self.variables = variables
        self.specials = specials
        self.missing_treatment = missing_treatment
        self.remainder = remainder
        self.get_statistics = get_statistics

    @property
    def variables_type(self):
        """
        Signals variables type supported by this bucketer.
        """
        return "categorical"

    def _get_feature_splits(self, feature, X, y, X_unfiltered=None):
        """
        Finds the splits for a single feature.

        X and y have already been preprocessed, and have specials removed.

        Args:
            feature (str): Name of the feature.
            X (pd.Series): df with single column of feature to bucket
            y (np.ndarray): array with target
            X_unfiltered (pd.Series): df with single column of feature to bucket before any filtering was applied

        Returns:
            splits, right (tuple): The splits (dict or array), and whether right=True or False.
        """
        unq = X.unique().tolist()
        mapping = dict(zip(unq, range(0, len(unq))))

        # Note that right is set to True, but this is not used at all for categoricals
        return (mapping, True)
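
The heart of _get_feature_splits is the order-of-appearance mapping. A minimal sketch of that logic on a hypothetical column (the values are made up for illustration):

import pandas as pd

X = pd.Series(["graduate", "high school", "graduate", "other"])
unq = X.unique().tolist()                  # ['graduate', 'high school', 'other'] -- order of appearance
mapping = dict(zip(unq, range(len(unq))))
print(mapping)                             # {'graduate': 0, 'high school': 1, 'other': 2}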

variables_type property

Signals variables type supported by this bucketer.

__init__(variables=[], specials={}, missing_treatment='separate', remainder='passthrough', get_statistics=True)

Init the class.

Parameters:

variables (list, default: []):
    The features to bucket. Uses all features if not defined.

specials (dict, default: {}):
    (nested) dictionary of special values that require their own binning.
    The dictionary has the following format:
        {"<column name>": {"name of special bucket": <list with 1 or more values>}}
    For every feature that needs a special value, a dictionary must be passed as value.
    This dictionary contains a name of a bucket (key) and an array of unique values that
    should be put in that bucket. When special values are defined, they are not considered
    in the fitting procedure.

missing_treatment (str or dict, default: 'separate'):
    Defines how we treat the missing values present in the data.
    If a string, it must be one of the following options:
        separate: Missing values get put in a separate 'Other' bucket: -1
        most_risky: Missing values are put into the bucket containing the largest percentage of Class 1.
        least_risky: Missing values are put into the bucket containing the largest percentage of Class 0.
        most_frequent: Missing values are put into the most common bucket.
        neutral: Missing values are put into the bucket with WoE closest to 0.
        similar: Missing values are put into the bucket with WoE closest to the bucket with only missing values.
        passthrough: Leaves missing values untouched.
    If a dict, it must be of the following format:
        {"<column name>": <bucket_number>}
        This bucket number is where we will put the missing values.

remainder (str, default: 'passthrough'):
    How we want the non-specified columns to be transformed. It must be in ["passthrough", "drop"].
    passthrough (default): all columns that were not specified in "variables" will be passed through.
    drop: all remaining columns that were not specified in "variables" will be dropped.
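
As a sketch of the specials and dict-based missing_treatment formats described above (the column name, bucket name, and values are illustrative only):

from skorecard.bucketers import AsIsCategoricalBucketer

bucketer = AsIsCategoricalBucketer(
    variables=["EDUCATION"],
    # put the value 0 in its own named bucket; it is excluded from fitting
    specials={"EDUCATION": {"special: zero": [0]}},
    # send missing values in EDUCATION to bucket number 1
    missing_treatment={"EDUCATION": 1},
)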