
Functions

General Functions

pyjanitor's general-purpose data cleaning functions.

NOTE: Instructions for future contributors:

  1. Place the source code of the functions in a file named after the function.
  2. Place utility functions in the same file.
  3. If you use a utility function from another source file, please refactor it out to janitor.functions.utils.
  4. Import the function into this file so that it shows up in the top-level API.
  5. Sort the imports in alphabetical order.
  6. Try to group related functions together (e.g., see convert_date.py).
  7. Never import utils.

DropLabel dataclass

Helper class for removing labels within the select syntax.

label can be any of the types supported in the select, select_rows and select_columns functions. An array of integers not matching the labels is returned.

New in version 0.24.0

Parameters:

    label (Any): Label(s) to be dropped from the index. (required)
Source code in janitor/functions/utils.py
@dataclass
class DropLabel:
    """Helper class for removing labels within the `select` syntax.

    `label` can be any of the types supported in the `select`,
    `select_rows` and `select_columns` functions.
    An array of integers not matching the labels is returned.

    !!! info "New in version 0.24.0"

    Args:
        label: Label(s) to be dropped from the index.
    """

    label: Any
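The "array of integers not matching the labels" behaviour can be illustrated with a small plain-Python sketch. The helper name here is hypothetical; this is the positional-complement idea, not the library's implementation:

```python
# Hypothetical sketch: given an axis's labels and the label(s) to drop,
# return the integer positions that do NOT match -- the positional
# complement that DropLabel is documented to produce.
def positions_not_matching(index_labels, drop):
    drop = set(drop) if isinstance(drop, (list, tuple, set)) else {drop}
    return [i for i, label in enumerate(index_labels) if label not in drop]

print(positions_not_matching(["a", "b", "c", "d"], "b"))  # [0, 2, 3]
```

In the library itself you would pass `DropLabel("b")` to `select_columns`, and the matching label is excluded from the selection rather than selected.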

col

Helper class for column selection within an expression.

Parameters:

    column (Hashable): The name of the column to be selected. (required)

Raises:

    TypeError: If the column parameter is not hashable.

New in version 0.25.0

Warning

col is currently considered experimental. The implementation and parts of the API may change without warning.

Source code in janitor/functions/utils.py
class col:
    """Helper class for column selection within an expression.

    Args:
        column (Hashable): The name of the column to be selected.

    Raises:
        TypeError: If the `column` parameter is not hashable.

    !!! info "New in version 0.25.0"

    !!! warning

        `col` is currently considered experimental.
        The implementation and parts of the API
        may change without warning.

    """

    def __init__(self, column: Hashable):
        self.cols = column
        check("column", self.cols, [Hashable])
        self.join_args = None

    def __gt__(self, other):
        """Implements the greater-than comparison operator (`>`).

        Args:
            other (col): The other `col` object to compare to.

        Returns:
            col: The current `col` object.
        """
        self.join_args = (self.cols, other.cols, ">")
        return self

    def __ge__(self, other):
        """Implements the greater-than-or-equal-to comparison operator (`>=`).

        Args:
            other (col): The other `col` object to compare to.

        Returns:
            col: The current `col` object.
        """
        self.join_args = (self.cols, other.cols, ">=")
        return self

    def __lt__(self, other):
        """Implements the less-than comparison operator (`<`).

        Args:
            other (col): The other `col` object to compare to.

        Returns:
            col: The current `col` object.
        """
        self.join_args = (self.cols, other.cols, "<")
        return self

    def __le__(self, other):
        """Implements the less-than-or-equal-to comparison operator (`<=`).

        Args:
            other (col): The other `col` object to compare to.

        Returns:
            col: The current `col` object.
        """
        self.join_args = (self.cols, other.cols, "<=")
        return self

    def __ne__(self, other):
        """Implements the not-equal-to comparison operator (`!=`).

        Args:
            other (col): The other `col` object to compare to.

        Returns:
            col: The current `col` object.
        """
        self.join_args = (self.cols, other.cols, "!=")
        return self

    def __eq__(self, other):
        """Implements the equal-to comparison operator (`==`).

        Args:
            other (col): The other `col` object to compare to.

        Returns:
            col: The current `col` object.
        """
        self.join_args = (self.cols, other.cols, "==")
        return self
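The pattern above, in which each comparison operator records its operands rather than evaluating them, can be sketched with a minimal stand-in class (illustrative only, not the janitor implementation):

```python
# Minimal stand-in for the operator-recording pattern: comparing two
# Col objects does not return a boolean; it stores
# (left column, right column, operator) for later use by a join routine.
class Col:
    def __init__(self, column):
        self.cols = column
        self.join_args = None

    def __gt__(self, other):
        self.join_args = (self.cols, other.cols, ">")
        return self

    def __eq__(self, other):
        self.join_args = (self.cols, other.cols, "==")
        return self

expr = Col("start") > Col("end")
print(expr.join_args)  # ('start', 'end', '>')
```

Overloading `__eq__` this way is what makes the class experimental territory: the instances no longer behave like ordinary comparable values.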

__eq__(other)

Implements the equal-to comparison operator (==).

Parameters:

    other (col): The other col object to compare to. (required)

Returns:

    col: The current col object.

Source code in janitor/functions/utils.py
def __eq__(self, other):
    """Implements the equal-to comparison operator (`==`).

    Args:
        other (col): The other `col` object to compare to.

    Returns:
        col: The current `col` object.
    """
    self.join_args = (self.cols, other.cols, "==")
    return self

__ge__(other)

Implements the greater-than-or-equal-to comparison operator (>=).

Parameters:

    other (col): The other col object to compare to. (required)

Returns:

    col: The current col object.

Source code in janitor/functions/utils.py
def __ge__(self, other):
    """Implements the greater-than-or-equal-to comparison operator (`>=`).

    Args:
        other (col): The other `col` object to compare to.

    Returns:
        col: The current `col` object.
    """
    self.join_args = (self.cols, other.cols, ">=")
    return self

__gt__(other)

Implements the greater-than comparison operator (>).

Parameters:

    other (col): The other col object to compare to. (required)

Returns:

    col: The current col object.

Source code in janitor/functions/utils.py
def __gt__(self, other):
    """Implements the greater-than comparison operator (`>`).

    Args:
        other (col): The other `col` object to compare to.

    Returns:
        col: The current `col` object.
    """
    self.join_args = (self.cols, other.cols, ">")
    return self

__le__(other)

Implements the less-than-or-equal-to comparison operator (<=).

Parameters:

    other (col): The other col object to compare to. (required)

Returns:

    col: The current col object.

Source code in janitor/functions/utils.py
def __le__(self, other):
    """Implements the less-than-or-equal-to comparison operator (`<=`).

    Args:
        other (col): The other `col` object to compare to.

    Returns:
        col: The current `col` object.
    """
    self.join_args = (self.cols, other.cols, "<=")
    return self

__lt__(other)

Implements the less-than comparison operator (<).

Parameters:

    other (col): The other col object to compare to. (required)

Returns:

    col: The current col object.

Source code in janitor/functions/utils.py
def __lt__(self, other):
    """Implements the less-than comparison operator (`<`).

    Args:
        other (col): The other `col` object to compare to.

    Returns:
        col: The current `col` object.
    """
    self.join_args = (self.cols, other.cols, "<")
    return self

__ne__(other)

Implements the not-equal-to comparison operator (!=).

Parameters:

    other (col): The other col object to compare to. (required)

Returns:

    col: The current col object.

Source code in janitor/functions/utils.py
def __ne__(self, other):
    """Implements the not-equal-to comparison operator (`!=`).

    Args:
        other (col): The other `col` object to compare to.

    Returns:
        col: The current `col` object.
    """
    self.join_args = (self.cols, other.cols, "!=")
    return self

convert_excel_date(df, column_name)

Convert Excel's serial date format into Python datetime format.

This method mutates the original DataFrame.

Implementation is also from Stack Overflow.

Examples:

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({"date": [39690, 39690, 37118]})
>>> df
    date
0  39690
1  39690
2  37118
>>> df.convert_excel_date('date')
        date
0 2008-08-30
1 2008-08-30
2 2001-08-15

Parameters:

    df (DataFrame): A pandas DataFrame. (required)
    column_name (Hashable): A column name. (required)

Raises:

    ValueError: If there are non-numeric values in the column.

Returns:

    DataFrame: A pandas DataFrame with corrected dates.

Source code in janitor/functions/convert_date.py
@pf.register_dataframe_method
@deprecated_alias(column="column_name")
def convert_excel_date(
    df: pd.DataFrame, column_name: Hashable
) -> pd.DataFrame:
    """Convert Excel's serial date format into Python datetime format.

    This method mutates the original DataFrame.

    Implementation is also from
    [Stack Overflow](https://stackoverflow.com/questions/38454403/convert-excel-style-date-with-pandas).

    Examples:
        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame({"date": [39690, 39690, 37118]})
        >>> df
            date
        0  39690
        1  39690
        2  37118
        >>> df.convert_excel_date('date')
                date
        0 2008-08-30
        1 2008-08-30
        2 2001-08-15

    Args:
        df: A pandas DataFrame.
        column_name: A column name.

    Raises:
        ValueError: If there are non-numeric values in the column.

    Returns:
        A pandas DataFrame with corrected dates.
    """  # noqa: E501

    if not is_numeric_dtype(df[column_name]):
        raise ValueError(
            "There are non-numeric values in the column. "
            "All values must be numeric."
        )

    df[column_name] = pd.TimedeltaIndex(
        df[column_name], unit="d"
    ) + dt.datetime(
        1899, 12, 30
    )  # noqa: W503
    return df
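The conversion arithmetic can be checked with the standard library alone. This is an equivalent sketch of the expression used above, not the pyjanitor function:

```python
import datetime as dt

# Excel's 1900 date system counts days from an epoch that, once Excel's
# historical leap-year quirk is accounted for, works out to 1899-12-30.
EXCEL_EPOCH = dt.datetime(1899, 12, 30)

def excel_serial_to_datetime(serial):
    """Convert one Excel serial day count to a datetime (sketch)."""
    return EXCEL_EPOCH + dt.timedelta(days=serial)

print(excel_serial_to_datetime(39690))  # 2008-08-30 00:00:00
```

The pandas version above does the same thing vectorized, via `pd.TimedeltaIndex(..., unit="d")` added to that epoch.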

convert_matlab_date(df, column_name)

Convert Matlab's serial date number into Python datetime format.

Implementation is also from Stack Overflow.

This method mutates the original DataFrame.

Examples:

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({"date": [737125.0, 737124.815863, 737124.4985, 737124]})
>>> df
            date
0  737125.000000
1  737124.815863
2  737124.498500
3  737124.000000
>>> df.convert_matlab_date('date')
                        date
0 2018-03-06 00:00:00.000000
1 2018-03-05 19:34:50.563200
2 2018-03-05 11:57:50.399999
3 2018-03-05 00:00:00.000000

Parameters:

    df (DataFrame): A pandas DataFrame. (required)
    column_name (Hashable): A column name. (required)

Returns:

    DataFrame: A pandas DataFrame with corrected dates.

Source code in janitor/functions/convert_date.py
@pf.register_dataframe_method
@deprecated_alias(column="column_name")
def convert_matlab_date(
    df: pd.DataFrame, column_name: Hashable
) -> pd.DataFrame:
    """Convert Matlab's serial date number into Python datetime format.

    Implementation is also from
    [Stack Overflow](https://stackoverflow.com/questions/13965740/converting-matlabs-datenum-format-to-python).

    This method mutates the original DataFrame.

    Examples:
        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame({"date": [737125.0, 737124.815863, 737124.4985, 737124]})
        >>> df
                    date
        0  737125.000000
        1  737124.815863
        2  737124.498500
        3  737124.000000
        >>> df.convert_matlab_date('date')
                                date
        0 2018-03-06 00:00:00.000000
        1 2018-03-05 19:34:50.563200
        2 2018-03-05 11:57:50.399999
        3 2018-03-05 00:00:00.000000

    Args:
        df: A pandas DataFrame.
        column_name: A column name.

    Returns:
        A pandas DataFrame with corrected dates.
    """  # noqa: E501
    days = pd.Series([dt.timedelta(v % 1) for v in df[column_name]])
    df[column_name] = (
        df[column_name].astype(int).apply(dt.datetime.fromordinal)
        + days
        - dt.timedelta(days=366)
    )
    return df
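The same arithmetic, per value, in the standard library (an equivalent sketch, not the library function): the integer part of a datenum is a proleptic ordinal shifted by 366 days, because Matlab's day 1 falls in the year 0, and the fractional part is the time of day.

```python
import datetime as dt

def datenum_to_datetime(datenum):
    """Convert one Matlab serial date number to a datetime (sketch)."""
    whole = int(datenum)          # whole days since Matlab's epoch
    frac = datenum - whole        # fraction of a day = time of day
    return (
        dt.datetime.fromordinal(whole)
        + dt.timedelta(days=frac)
        - dt.timedelta(days=366)  # shift from Matlab's epoch to Python's
    )

print(datenum_to_datetime(737125.0))  # 2018-03-06 00:00:00
```

Note the floating-point time-of-day fractions round to microseconds, which is why `737124.4985` renders as `11:57:50.399999` in the example above.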

convert_unix_date(df, column_name)

Convert unix epoch time into Python datetime format.

Note that this ignores the local time zone and converts all timestamps to naive datetimes based on UTC!

This method mutates the original DataFrame.

Examples:

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({"date": [1651510462, 53394822, 1126233195]})
>>> df
         date
0  1651510462
1    53394822
2  1126233195
>>> df.convert_unix_date('date')
                 date
0 2022-05-02 16:54:22
1 1971-09-10 23:53:42
2 2005-09-09 02:33:15

Parameters:

    df (DataFrame): A pandas DataFrame. (required)
    column_name (Hashable): A column name. (required)

Returns:

    DataFrame: A pandas DataFrame with corrected dates.

Source code in janitor/functions/convert_date.py
@pf.register_dataframe_method
@deprecated_alias(column="column_name")
def convert_unix_date(df: pd.DataFrame, column_name: Hashable) -> pd.DataFrame:
    """Convert unix epoch time into Python datetime format.

    Note that this ignores the local time zone and converts all
    timestamps to naive datetimes based on UTC!

    This method mutates the original DataFrame.

    Examples:
        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame({"date": [1651510462, 53394822, 1126233195]})
        >>> df
                 date
        0  1651510462
        1    53394822
        2  1126233195
        >>> df.convert_unix_date('date')
                         date
        0 2022-05-02 16:54:22
        1 1971-09-10 23:53:42
        2 2005-09-09 02:33:15

    Args:
        df: A pandas DataFrame.
        column_name: A column name.

    Returns:
        A pandas DataFrame with corrected dates.
    """

    try:
        df[column_name] = pd.to_datetime(df[column_name], unit="s")
    except OutOfBoundsDatetime:  # Indicates time is in milliseconds.
        df[column_name] = pd.to_datetime(df[column_name], unit="ms")
    return df
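The seconds branch is equivalent to this standard-library sketch: interpret the value as seconds since the Unix epoch in UTC, then drop the timezone to get the naive datetimes shown above.

```python
import datetime as dt

def unix_to_naive_utc(seconds):
    """Seconds since the Unix epoch -> naive UTC datetime (sketch)."""
    aware = dt.datetime.fromtimestamp(seconds, tz=dt.timezone.utc)
    return aware.replace(tzinfo=None)

print(unix_to_naive_utc(1651510462))  # 2022-05-02 16:54:22
```

When a value is too large for the seconds unit, pandas raises `OutOfBoundsDatetime`, which the function above takes as a signal that the timestamps are in milliseconds and retries with `unit="ms"`.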

fill_direction(df, **kwargs)

Provide a method-chainable function for filling missing values in selected columns.

It is a wrapper for pd.Series.ffill and pd.Series.bfill, and pairs the column name with one of up, down, updown, or downup.

Note

This function will be deprecated in a 1.x release. Please use pd.DataFrame.assign instead.

Examples:

>>> import pandas as pd
>>> import janitor as jn
>>> df = pd.DataFrame(
...    {
...        'col1': [1, 2, 3, 4],
...        'col2': [None, 5, 6, 7],
...        'col3': [8, 9, 10, None],
...        'col4': [None, None, 11, None],
...        'col5': [None, 12, 13, None]
...    }
... )
>>> df
   col1  col2  col3  col4  col5
0     1   NaN   8.0   NaN   NaN
1     2   5.0   9.0   NaN  12.0
2     3   6.0  10.0  11.0  13.0
3     4   7.0   NaN   NaN   NaN
>>> df.fill_direction(
... col2 = 'up',
... col3 = 'down',
... col4 = 'downup',
... col5 = 'updown'
... )
   col1  col2  col3  col4  col5
0     1   5.0   8.0  11.0  12.0
1     2   5.0   9.0  11.0  12.0
2     3   6.0  10.0  11.0  13.0
3     4   7.0  10.0  11.0  13.0

Parameters:

    df (DataFrame): A pandas DataFrame. (required)
    **kwargs (Any): Key-value pairs of columns and directions. Directions can be one of down, up, updown (fill up then down), or downup (fill down then up). (default: {})

Raises:

    ValueError: If the direction supplied is not one of down, up, updown, or downup.

Returns:

    DataFrame: A pandas DataFrame with modified column(s).

Source code in janitor/functions/fill.py
@pf.register_dataframe_method
@refactored_function(
    message=(
        "This function will be deprecated in a 1.x release. "
        "Please use `pd.DataFrame.assign` instead."
    )
)
def fill_direction(df: pd.DataFrame, **kwargs: Any) -> pd.DataFrame:
    """Provide a method-chainable function for filling missing values
    in selected columns.

    It is a wrapper for `pd.Series.ffill` and `pd.Series.bfill`,
    and pairs the column name with one of `up`, `down`, `updown`,
    and `downup`.

    !!!note

        This function will be deprecated in a 1.x release.
        Please use `pd.DataFrame.assign` instead.

    Examples:
        >>> import pandas as pd
        >>> import janitor as jn
        >>> df = pd.DataFrame(
        ...    {
        ...        'col1': [1, 2, 3, 4],
        ...        'col2': [None, 5, 6, 7],
        ...        'col3': [8, 9, 10, None],
        ...        'col4': [None, None, 11, None],
        ...        'col5': [None, 12, 13, None]
        ...    }
        ... )
        >>> df
           col1  col2  col3  col4  col5
        0     1   NaN   8.0   NaN   NaN
        1     2   5.0   9.0   NaN  12.0
        2     3   6.0  10.0  11.0  13.0
        3     4   7.0   NaN   NaN   NaN
        >>> df.fill_direction(
        ... col2 = 'up',
        ... col3 = 'down',
        ... col4 = 'downup',
        ... col5 = 'updown'
        ... )
           col1  col2  col3  col4  col5
        0     1   5.0   8.0  11.0  12.0
        1     2   5.0   9.0  11.0  12.0
        2     3   6.0  10.0  11.0  13.0
        3     4   7.0  10.0  11.0  13.0

    Args:
        df: A pandas DataFrame.
        **kwargs: Key-value pairs of columns and directions.
            Directions can be one of `down`, `up`, `updown`
            (fill up then down), or `downup` (fill down then up).

    Raises:
        ValueError: If direction supplied is not one of `down`, `up`,
            `updown`, or `downup`.

    Returns:
        A pandas DataFrame with modified column(s).
    """  # noqa: E501

    if not kwargs:
        return df

    fill_types = {fill.name for fill in _FILLTYPE}
    for column_name, fill_type in kwargs.items():
        check("column_name", column_name, [str])
        check("fill_type", fill_type, [str])
        if fill_type.upper() not in fill_types:
            raise ValueError(
                "fill_type should be one of up, down, updown, or downup."
            )

    check_column(df, kwargs)

    new_values = {}
    for column_name, fill_type in kwargs.items():
        direction = _FILLTYPE[f"{fill_type.upper()}"].value
        if len(direction) == 1:
            direction = methodcaller(direction[0])
            output = direction(df[column_name])
        else:
            direction = [methodcaller(entry) for entry in direction]
            output = _chain_func(df[column_name], *direction)
        new_values[column_name] = output

    return df.assign(**new_values)
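Per the deprecation note, the same result is available in plain pandas: each direction maps onto `Series.bfill` ("up"), `Series.ffill` ("down"), or a chain of the two, inside `pd.DataFrame.assign`. A sketch with the example data from above:

```python
import pandas as pd

df = pd.DataFrame({
    "col1": [1, 2, 3, 4],
    "col2": [None, 5, 6, 7],
    "col3": [8, 9, 10, None],
    "col4": [None, None, 11, None],
})
out = df.assign(
    col2=df["col2"].bfill(),          # "up"
    col3=df["col3"].ffill(),          # "down"
    col4=df["col4"].ffill().bfill(),  # "downup": fill down, then up
)
print(out["col4"].tolist())  # [11.0, 11.0, 11.0, 11.0]
```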

fill_empty(df, column_names, value)

Fill NaN values in specified columns with a given value.

Super sugary syntax that wraps pandas.DataFrame.fillna.

This method mutates the original DataFrame.

Note

This function will be deprecated in a 1.x release. Please use jn.impute instead.

Examples:

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame(
...        {
...            'col1': [1, 2, 3],
...            'col2': [None, 4, None ],
...            'col3': [None, 5, 6]
...        }
...    )
>>> df
   col1  col2  col3
0     1   NaN   NaN
1     2   4.0   5.0
2     3   NaN   6.0
>>> df.fill_empty(column_names = 'col2', value = 0)
   col1  col2  col3
0     1   0.0   NaN
1     2   4.0   5.0
2     3   0.0   6.0
>>> df.fill_empty(column_names = ['col2', 'col3'], value = 0)
   col1  col2  col3
0     1   0.0   0.0
1     2   4.0   5.0
2     3   0.0   6.0

Parameters:

    df (DataFrame): A pandas DataFrame. (required)
    column_names (Union[str, Iterable[str], Hashable]): A column name or an iterable (list or tuple) of column names. If a single column name is passed in, then only that column will be filled; if a list or tuple is passed in, then those columns will all be filled with the same value. (required)
    value (Any): The value that replaces the NaN values. (required)

Returns:

    DataFrame: A pandas DataFrame with NaN values filled.

Source code in janitor/functions/fill.py
@pf.register_dataframe_method
@refactored_function(
    message="This function will be deprecated in a 1.x release. "
    "Kindly use `jn.impute` instead."
)
@deprecated_alias(columns="column_names")
def fill_empty(
    df: pd.DataFrame,
    column_names: Union[str, Iterable[str], Hashable],
    value: Any,
) -> pd.DataFrame:
    """Fill `NaN` values in specified columns with a given value.

    Super sugary syntax that wraps `pandas.DataFrame.fillna`.

    This method mutates the original DataFrame.

    !!!note

        This function will be deprecated in a 1.x release.
        Please use [`jn.impute`][janitor.functions.impute.impute] instead.

    Examples:
        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame(
        ...        {
        ...            'col1': [1, 2, 3],
        ...            'col2': [None, 4, None ],
        ...            'col3': [None, 5, 6]
        ...        }
        ...    )
        >>> df
           col1  col2  col3
        0     1   NaN   NaN
        1     2   4.0   5.0
        2     3   NaN   6.0
        >>> df.fill_empty(column_names = 'col2', value = 0)
           col1  col2  col3
        0     1   0.0   NaN
        1     2   4.0   5.0
        2     3   0.0   6.0
        >>> df.fill_empty(column_names = ['col2', 'col3'], value = 0)
           col1  col2  col3
        0     1   0.0   0.0
        1     2   4.0   5.0
        2     3   0.0   6.0

    Args:
        df: A pandas DataFrame.
        column_names: A column name or an iterable (list
            or tuple) of column names. If a single column name is passed in,
            then only that column will be filled; if a list or tuple is passed
            in, then those columns will all be filled with the same value.
        value: The value that replaces the `NaN` values.

    Returns:
        A pandas DataFrame with `NaN` values filled.
    """

    check_column(df, column_names)
    return _fill_empty(df, column_names, value=value)
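As the deprecation note suggests, this is thin sugar over `pandas.DataFrame.fillna` restricted to the chosen columns. The plain-pandas equivalent, sketched with the example data from above:

```python
import pandas as pd

df = pd.DataFrame({
    "col1": [1, 2, 3],
    "col2": [None, 4, None],
    "col3": [None, 5, 6],
})
# Fill NaN in a subset of columns, leaving the rest untouched.
df[["col2", "col3"]] = df[["col2", "col3"]].fillna(0)
print(df["col2"].tolist())  # [0.0, 4.0, 0.0]
```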

filter_column_isin(df, column_name, iterable, complement=False)

Filter a dataframe for values in a column that exist in the given iterable.

This method does not mutate the original DataFrame.

Assumes exact matching; fuzzy matching not implemented.

Examples:

Filter the dataframe to retain rows for which names are exactly James or John.

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({"names": ["Jane", "Jeremy", "John"], "foo": list("xyz")})
>>> df
    names foo
0    Jane   x
1  Jeremy   y
2    John   z
>>> df.filter_column_isin(column_name="names", iterable=["James", "John"])
  names foo
2  John   z

This is the method-chaining alternative to:

df = df[df["names"].isin(["James", "John"])]

If complement=True, then we will only get rows for which the names are neither James nor John.

Parameters:

    df (DataFrame): A pandas DataFrame. (required)
    column_name (Hashable): The column on which to filter. (required)
    iterable (Iterable): An iterable. Could be a list, tuple, or another pandas Series. (required)
    complement (bool): Whether to return the complement of the selection or not. (default: False)

Raises:

    ValueError: If iterable does not have a length of 1 or greater.

Returns:

    DataFrame: A filtered pandas DataFrame.

Source code in janitor/functions/filter.py
@pf.register_dataframe_method
@deprecated_alias(column="column_name")
def filter_column_isin(
    df: pd.DataFrame,
    column_name: Hashable,
    iterable: Iterable,
    complement: bool = False,
) -> pd.DataFrame:
    """Filter a dataframe for values in a column that exist in the given iterable.

    This method does not mutate the original DataFrame.

    Assumes exact matching; fuzzy matching not implemented.

    Examples:
        Filter the dataframe to retain rows for which `names`
        are exactly `James` or `John`.

        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame({"names": ["Jane", "Jeremy", "John"], "foo": list("xyz")})
        >>> df
            names foo
        0    Jane   x
        1  Jeremy   y
        2    John   z
        >>> df.filter_column_isin(column_name="names", iterable=["James", "John"])
          names foo
        2  John   z

        This is the method-chaining alternative to:

        ```python
        df = df[df["names"].isin(["James", "John"])]
        ```

        If `complement=True`, then we will only get rows for which the names
        are neither `James` nor `John`.

    Args:
        df: A pandas DataFrame.
        column_name: The column on which to filter.
        iterable: An iterable. Could be a list, tuple, another pandas
            Series.
        complement: Whether to return the complement of the selection or
            not.

    Raises:
        ValueError: If `iterable` does not have a length of `1`
            or greater.

    Returns:
        A filtered pandas DataFrame.
    """  # noqa: E501
    if len(iterable) == 0:
        raise ValueError(
            "`iterable` kwarg must be given an iterable of length 1 "
            "or greater."
        )
    criteria = df[column_name].isin(iterable)

    if complement:
        return df[~criteria]
    return df[criteria]
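Spelled out in plain pandas, including the complement case mentioned above:

```python
import pandas as pd

df = pd.DataFrame({"names": ["Jane", "Jeremy", "John"], "foo": list("xyz")})
mask = df["names"].isin(["James", "John"])
kept = df[mask]      # rows whose names are exactly James or John
dropped = df[~mask]  # what complement=True returns
print(kept["names"].tolist())  # ['John']
```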

filter_date(df, column_name, start_date=None, end_date=None, years=None, months=None, days=None, column_date_options=None, format=None)

Filter a date-based column based on certain criteria.

This method does not mutate the original DataFrame.

Dates may be finicky and this function builds on top of the magic from the pandas to_datetime function that is able to parse dates well.

Additional options to parse the date type of your column may be found at the official pandas documentation.

Examples:

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({
...     "a": range(5, 9),
...     "dt": ["2021-11-12", "2021-12-15", "2022-01-03", "2022-01-09"],
... })
>>> df
   a          dt
0  5  2021-11-12
1  6  2021-12-15
2  7  2022-01-03
3  8  2022-01-09
>>> df.filter_date("dt", start_date="2021-12-01", end_date="2022-01-05")
   a         dt
1  6 2021-12-15
2  7 2022-01-03
>>> df.filter_date("dt", years=[2021], months=[12])
   a         dt
1  6 2021-12-15

Note

This method will cast your column to a Timestamp!

Note

This only affects the format of the start_date and end_date parameters. If there's an issue with the format of the DataFrame being parsed, you would pass {'format': your_format} to column_date_options.

Parameters:

    df (DataFrame): The dataframe to filter on. (required)
    column_name (Hashable): The column on which to filter. (required)
    start_date (Optional[date]): The beginning date to use to filter the DataFrame. (default: None)
    end_date (Optional[date]): The end date to use to filter the DataFrame. (default: None)
    years (Optional[List]): The years to use to filter the DataFrame. (default: None)
    months (Optional[List]): The months to use to filter the DataFrame. (default: None)
    days (Optional[List]): The days to use to filter the DataFrame. (default: None)
    column_date_options (Optional[Dict]): Special options to use when parsing the date column in the original DataFrame. The options may be found in the official pandas documentation. (default: None)
    format (Optional[str]): If you're using a format for start_date or end_date that is not recognized natively by pandas' to_datetime function, you may supply the format yourself using Python's strftime codes. (default: None)

Returns:

    DataFrame: A filtered pandas DataFrame.

Source code in janitor/functions/filter.py
@pf.register_dataframe_method
@deprecated_alias(column="column_name", start="start_date", end="end_date")
def filter_date(
    df: pd.DataFrame,
    column_name: Hashable,
    start_date: Optional[dt.date] = None,
    end_date: Optional[dt.date] = None,
    years: Optional[List] = None,
    months: Optional[List] = None,
    days: Optional[List] = None,
    column_date_options: Optional[Dict] = None,
    format: Optional[str] = None,  # skipcq: PYL-W0622
) -> pd.DataFrame:
    """Filter a date-based column based on certain criteria.

    This method does not mutate the original DataFrame.

    Dates may be finicky and this function builds on top of the *magic* from
    the pandas `to_datetime` function that is able to parse dates well.

    Additional options to parse the date type of your column may be found at
    the official pandas [documentation][datetime].

    [datetime]: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_datetime.html

    Examples:
        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame({
        ...     "a": range(5, 9),
        ...     "dt": ["2021-11-12", "2021-12-15", "2022-01-03", "2022-01-09"],
        ... })
        >>> df
           a          dt
        0  5  2021-11-12
        1  6  2021-12-15
        2  7  2022-01-03
        3  8  2022-01-09
        >>> df.filter_date("dt", start_date="2021-12-01", end_date="2022-01-05")
           a         dt
        1  6 2021-12-15
        2  7 2022-01-03
        >>> df.filter_date("dt", years=[2021], months=[12])
           a         dt
        1  6 2021-12-15

    !!!note

        This method will cast your column to a Timestamp!

    !!!note

        This only affects the format of the `start_date` and `end_date`
        parameters. If there's an issue with the format of the DataFrame being
        parsed, you would pass `{'format': your_format}` to `column_date_options`.

    Args:
        df: The dataframe to filter on.
        column_name: The date-based column on which to filter.
        start_date: The beginning date to use to filter the DataFrame.
        end_date: The end date to use to filter the DataFrame.
        years: The years to use to filter the DataFrame.
        months: The months to use to filter the DataFrame.
        days: The days to use to filter the DataFrame.
        column_date_options: Special options to use when parsing the date
            column in the original DataFrame. The options may be found at the
            official Pandas documentation.
        format: If you're using a format for `start_date` or `end_date`
            that is not recognized natively by pandas' `to_datetime` function, you
            may supply the format yourself. Python date and time formats may be
            found [here](http://strftime.org/).

    Returns:
        A filtered pandas DataFrame.
    """  # noqa: E501

    def _date_filter_conditions(conditions):
        """Taken from: https://stackoverflow.com/a/13616382."""
        return reduce(np.logical_and, conditions)

    if column_date_options is None:
        column_date_options = {}
    df[column_name] = pd.to_datetime(df[column_name], **column_date_options)

    _filter_list = []

    if start_date:
        start_date = pd.to_datetime(start_date, format=format)
        _filter_list.append(df[column_name] >= start_date)

    if end_date:
        end_date = pd.to_datetime(end_date, format=format)
        _filter_list.append(df[column_name] <= end_date)

    if years:
        _filter_list.append(df[column_name].dt.year.isin(years))

    if months:
        _filter_list.append(df[column_name].dt.month.isin(months))

    if days:
        _filter_list.append(df[column_name].dt.day.isin(days))

    if start_date and end_date and start_date > end_date:
        warnings.warn(
            f"Your start date of {start_date} is after your end date of "
            f"{end_date}. Is this intended?"
        )

    return df.loc[_date_filter_conditions(_filter_list), :]
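To make the split between `format` and `column_date_options` concrete, here is a minimal sketch in plain pandas (not janitor's implementation) of what `filter_date` does when both the column and the bounds use non-standard, day-first date strings. The data and formats are illustrative only:

```python
import pandas as pd

df = pd.DataFrame({
    "a": [5, 6, 7],
    "dt": ["12/11/2021", "15/12/2021", "03/01/2022"],  # day-first strings
})

# Parse the column itself -- this is what `column_date_options` controls ...
df["dt"] = pd.to_datetime(df["dt"], format="%d/%m/%Y")

# ... and parse the bounds -- this is what the `format` parameter controls.
start = pd.to_datetime("01-12-2021", format="%d-%m-%Y")
end = pd.to_datetime("05-01-2022", format="%d-%m-%Y")

# Combine the conditions, as filter_date does internally.
filtered = df.loc[(df["dt"] >= start) & (df["dt"] <= end)]
```

Here `filtered` retains the rows dated 2021-12-15 and 2022-01-03, i.e. `a` values 6 and 7.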

filter_on(df, criteria, complement=False)

Return a dataframe filtered on a particular criteria.

This method does not mutate the original DataFrame.

This is super-sugary syntax that wraps the pandas .query() API, enabling users to use strings to quickly specify filters for filtering their dataframe. The intent is that filter_on as a verb better matches the intent of a pandas user than the verb query.

This is intended to be the method-chaining equivalent of the following:

df = df[df["score"] < 3]

Note

This function will be deprecated in a 1.x release. Please use pd.DataFrame.query instead.

Examples:

Filter students who failed an exam (scored less than 50).

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({
...     "student_id": ["S1", "S2", "S3"],
...     "score": [40, 60, 85],
... })
>>> df
  student_id  score
0         S1     40
1         S2     60
2         S3     85
>>> df.filter_on("score < 50", complement=False)
  student_id  score
0         S1     40

Credit to Brant Peterson for the name.

Parameters:

Name Type Description Default
df DataFrame

A pandas DataFrame.

required
criteria str

A filtering criteria that returns an array or Series of booleans, on which pandas can filter.

required
complement bool

Whether to return the complement of the filter or not. If set to True, then the rows for which the criteria is False are retained instead.

False

Returns:

Type Description
DataFrame

A filtered pandas DataFrame.

Source code in janitor/functions/filter.py
@pf.register_dataframe_method
@refactored_function(
    message=(
        "This function will be deprecated in a 1.x release. "
        "Please use `pd.DataFrame.query` instead."
    )
)
def filter_on(
    df: pd.DataFrame,
    criteria: str,
    complement: bool = False,
) -> pd.DataFrame:
    """Return a dataframe filtered on a particular criteria.

    This method does not mutate the original DataFrame.

    This is super-sugary syntax that wraps the pandas `.query()` API, enabling
    users to use strings to quickly specify filters for filtering their
    dataframe. The intent is that `filter_on` as a verb better matches the
    intent of a pandas user than the verb `query`.

    This is intended to be the method-chaining equivalent of the following:

    ```python
    df = df[df["score"] < 3]
    ```

    !!!note

        This function will be deprecated in a 1.x release.
        Please use `pd.DataFrame.query` instead.


    Examples:
        Filter students who failed an exam (scored less than 50).

        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame({
        ...     "student_id": ["S1", "S2", "S3"],
        ...     "score": [40, 60, 85],
        ... })
        >>> df
          student_id  score
        0         S1     40
        1         S2     60
        2         S3     85
        >>> df.filter_on("score < 50", complement=False)
          student_id  score
        0         S1     40

    Credit to Brant Peterson for the name.

    Args:
        df: A pandas DataFrame.
        criteria: A filtering criteria that returns an array or Series of
            booleans, on which pandas can filter.
        complement: Whether to return the complement of the filter or not.
            If set to True, then the rows for which the criteria is False are
            retained instead.

    Returns:
        A filtered pandas DataFrame.
    """

    warnings.warn(
        "This function will be deprecated in a 1.x release. "
        "Kindly use `pd.DataFrame.query` instead.",
        DeprecationWarning,
        stacklevel=find_stack_level(),
    )

    if complement:
        return df.query(f"not ({criteria})")
    return df.query(criteria)
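Since `filter_on` is slated for deprecation, a direct migration to `pd.DataFrame.query` looks like this, including the `complement=True` case (the data is illustrative):

```python
import pandas as pd

df = pd.DataFrame({"student_id": ["S1", "S2", "S3"], "score": [40, 60, 85]})

# filter_on("score < 50") equivalent:
failed = df.query("score < 50")

# filter_on("score < 50", complement=True) equivalent -- negate the criteria:
passed = df.query("not (score < 50)")
```

`failed` keeps only S1; `passed` keeps S2 and S3.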

filter_string(df, column_name, search_string, complement=False, case=True, flags=0, na=None, regex=True)

Filter a string-based column according to whether it contains a substring.

This is super sugary syntax that builds on top of pandas.Series.str.contains. It is meant to be the method-chaining equivalent of the following:

df = df[df[column_name].str.contains(search_string)]

This method does not mutate the original DataFrame.

Examples:

Retain rows whose column values contain a particular substring.

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({"a": range(3, 6), "b": ["bear", "peeL", "sail"]})
>>> df
   a     b
0  3  bear
1  4  peeL
2  5  sail
>>> df.filter_string(column_name="b", search_string="ee")
   a     b
1  4  peeL
>>> df.filter_string(column_name="b", search_string="L", case=False)
   a     b
1  4  peeL
2  5  sail

Filter names that do not contain '.' (disable regex mode).

>>> import pandas as pd
>>> import janitor
>>> df = pd.Series(["JoseChen", "Brian.Salvi"], name="Name").to_frame()
>>> df
          Name
0     JoseChen
1  Brian.Salvi
>>> df.filter_string(column_name="Name", search_string=".", regex=False, complement=True)
       Name
0  JoseChen

Parameters:

Name Type Description Default
df DataFrame

A pandas DataFrame.

required
column_name Hashable

The column to filter. The column should contain strings.

required
search_string str

A regex pattern or a (sub-)string to search.

required
complement bool

Whether to return the complement of the filter or not. If set to True, then the rows for which the string search fails are retained instead.

False
case bool

If True, case sensitive.

True
flags int

Flags to pass through to the re module, e.g. re.IGNORECASE.

0
na Any

Fill value for missing values. The default depends on dtype of the array. For object-dtype, numpy.nan is used. For StringDtype, pandas.NA is used.

None
regex bool

If True, assumes search_string is a regular expression. If False, treats the search_string as a literal string.

True

Returns:

Type Description
DataFrame

A filtered pandas DataFrame.

Source code in janitor/functions/filter.py
@pf.register_dataframe_method
@deprecated_alias(column="column_name")
def filter_string(
    df: pd.DataFrame,
    column_name: Hashable,
    search_string: str,
    complement: bool = False,
    case: bool = True,
    flags: int = 0,
    na: Any = None,
    regex: bool = True,
) -> pd.DataFrame:
    """Filter a string-based column according to whether it contains a substring.

    This is super sugary syntax that builds on top of `pandas.Series.str.contains`.
    It is meant to be the method-chaining equivalent of the following:

    ```python
    df = df[df[column_name].str.contains(search_string)]
    ```

    This method does not mutate the original DataFrame.

    Examples:
        Retain rows whose column values contain a particular substring.

        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame({"a": range(3, 6), "b": ["bear", "peeL", "sail"]})
        >>> df
           a     b
        0  3  bear
        1  4  peeL
        2  5  sail
        >>> df.filter_string(column_name="b", search_string="ee")
           a     b
        1  4  peeL
        >>> df.filter_string(column_name="b", search_string="L", case=False)
           a     b
        1  4  peeL
        2  5  sail

        Filter names that do not contain `'.'` (disable regex mode).

        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.Series(["JoseChen", "Brian.Salvi"], name="Name").to_frame()
        >>> df
                  Name
        0     JoseChen
        1  Brian.Salvi
        >>> df.filter_string(column_name="Name", search_string=".", regex=False, complement=True)
               Name
        0  JoseChen

    Args:
        df: A pandas DataFrame.
        column_name: The column to filter. The column should contain strings.
        search_string: A regex pattern or a (sub-)string to search.
        complement: Whether to return the complement of the filter or not. If
            set to True, then the rows for which the string search fails are retained
            instead.
        case: If True, case sensitive.
        flags: Flags to pass through to the re module, e.g. re.IGNORECASE.
        na: Fill value for missing values. The default depends on dtype of
            the array. For object-dtype, `numpy.nan` is used. For `StringDtype`,
            `pandas.NA` is used.
        regex: If True, assumes `search_string` is a regular expression. If False,
            treats the `search_string` as a literal string.

    Returns:
        A filtered pandas DataFrame.
    """  # noqa: E501

    criteria = df[column_name].str.contains(
        pat=search_string,
        case=case,
        flags=flags,
        na=na,
        regex=regex,
    )

    if complement:
        return df[~criteria]

    return df[criteria]
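Because `filter_string` is thin sugar over `pandas.Series.str.contains`, the same filtering can be sketched directly in pandas; this illustrative example combines `case=False`, `regex=False`, and the complement:

```python
import pandas as pd

df = pd.DataFrame({"a": [3, 4, 5], "b": ["bear", "peeL", "sail"]})

# filter_string(column_name="b", search_string="EE", case=False,
#               regex=False, complement=True) equivalent:
mask = df["b"].str.contains("EE", case=False, regex=False)
kept = df[~mask]
```

Only "peeL" contains "ee" case-insensitively, so `kept` retains the "bear" and "sail" rows.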

get_columns(group, label)

Helper function for selecting columns on a grouped object, using the select syntax.

New in version 0.25.0

Parameters:

Name Type Description Default
group Union[DataFrameGroupBy, SeriesGroupBy]

A Pandas GroupBy object.

required
label

column(s) to select.

required

Returns:

Type Description

A pandas groupby object.

Source code in janitor/functions/utils.py
def get_columns(group: Union[DataFrameGroupBy, SeriesGroupBy], label):
    """
    Helper function for selecting columns on a grouped object,
    using the
    [`select`][janitor.functions.select.select] syntax.

    !!! info "New in version 0.25.0"

    Args:
        group: A Pandas GroupBy object.
        label: column(s) to select.

    Returns:
        A pandas groupby object.
    """
    check("groupby object", group, [DataFrameGroupBy, SeriesGroupBy])
    label = get_index_labels(label, group.obj, axis="columns")
    label = label if is_scalar(label) else list(label)
    return group[label]
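In plain pandas terms, `get_columns` resolves a selection spec against the grouped frame's columns and then indexes the GroupBy object with the result. A minimal sketch (the glob-style matching here is a simplified stand-in for the full `select` syntax):

```python
import pandas as pd

df = pd.DataFrame({
    "grp": ["a", "a", "b"],
    "x_1": [1, 2, 3],
    "x_2": [4, 5, 6],
    "y": [7, 8, 9],
})
group = df.groupby("grp")

# Resolve a glob-like "x_*" selection to concrete labels first,
# then index the GroupBy object with the resulting list.
labels = [c for c in df.columns if c.startswith("x_")]
sums = group[labels].sum()
```

`sums` is a per-group frame over just `x_1` and `x_2`.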

get_index_labels(arg, df, axis)

Convenience function to get actual labels from column/index

New in version 0.25.0

Parameters:

Name Type Description Default
arg

Valid inputs include: an exact column name to look for, a shell-style glob string (e.g. *_thing_*), a regular expression, a callable, or variable arguments of all the aforementioned. A sequence of booleans is also acceptable. A dictionary can be used for selection on a MultiIndex on different levels.

required
df DataFrame

The pandas DataFrame object.

required
axis Literal['index', 'columns']

Should be either index or columns.

required

Returns:

Type Description
Index

A pandas Index.

Source code in janitor/functions/utils.py
def get_index_labels(
    arg, df: pd.DataFrame, axis: Literal["index", "columns"]
) -> pd.Index:
    """Convenience function to get actual labels from column/index

    !!! info "New in version 0.25.0"

    Args:
        arg: Valid inputs include: an exact column name to look for,
            a shell-style glob string (e.g. `*_thing_*`),
            a regular expression,
            a callable,
            or variable arguments of all the aforementioned.
            A sequence of booleans is also acceptable.
            A dictionary can be used for selection
            on a MultiIndex on different levels.
        df: The pandas DataFrame object.
        axis: Should be either `index` or `columns`.

    Returns:
        A pandas Index.
    """
    assert axis in {"index", "columns"}
    index = getattr(df, axis)
    return index[_select_index(arg, df, axis)]
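As a rough illustration of what resolving a selection argument to actual labels means, here is one of the supported cases (a regular expression) replicated in plain pandas; the column names are hypothetical:

```python
import pandas as pd

df = pd.DataFrame(columns=["Sepal.Length", "Sepal.Width", "Species"])

# Resolve a regex selection to concrete column labels, analogous to
# get_index_labels(re.compile(r"^Sepal"), df, axis="columns")
labels = df.columns[df.columns.str.match(r"^Sepal")]
```

`labels` is a pandas Index holding the two matching column names.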

get_join_indices(df, right, conditions, keep='all', use_numba=False, force=False)

Convenience function to return the matching indices from an inner join.

New in version 0.27.0

Parameters:

Name Type Description Default
df DataFrame

A pandas DataFrame.

required
right Union[DataFrame, Series]

Named Series or DataFrame to join to.

required
conditions list[tuple[str]]

List of arguments of tuple(s) of the form (left_on, right_on, op), where left_on is the column label from df, right_on is the column label from right, while op is the operator. The col class is also supported. The operator can be any of ==, !=, <=, <, >=, >. For multiple conditions, the and(&) operator is used to combine the results of the individual conditions.

required
use_numba bool

Use numba, if installed, to accelerate the computation.

False
keep Literal['first', 'last', 'all']

Choose whether to return the first match, last match or all matches.

'all'
force bool

If True, force the non-equi join conditions to execute before the equi join.

False

Returns:

Type Description
tuple[ndarray, ndarray]

A tuple of indices for the rows in the dataframes that match.

Source code in janitor/functions/conditional_join.py
def get_join_indices(
    df: pd.DataFrame,
    right: Union[pd.DataFrame, pd.Series],
    conditions: list[tuple[str]],
    keep: Literal["first", "last", "all"] = "all",
    use_numba: bool = False,
    force: bool = False,
) -> tuple[np.ndarray, np.ndarray]:
    """Convenience function to return the matching indices from an inner join.

    !!! info "New in version 0.27.0"

    Args:
        df: A pandas DataFrame.
        right: Named Series or DataFrame to join to.
        conditions: List of arguments of tuple(s) of the form
            `(left_on, right_on, op)`, where `left_on` is the column
            label from `df`, `right_on` is the column label from `right`,
            while `op` is the operator.
            The `col` class is also supported. The operator can be any of
            `==`, `!=`, `<=`, `<`, `>=`, `>`. For multiple conditions,
            the and(`&`) operator is used to combine the results
            of the individual conditions.
        use_numba: Use numba, if installed, to accelerate the computation.
        keep: Choose whether to return the first match, last match or all matches.
        force: If `True`, force the non-equi join conditions
            to execute before the equi join.

    Returns:
        A tuple of indices for the rows in the dataframes that match.
    """
    return _conditional_join_compute(
        df=df,
        right=right,
        conditions=conditions,
        how="inner",
        sort_by_appearance=False,
        df_columns=None,
        right_columns=None,
        keep=keep,
        use_numba=use_numba,
        indicator=False,
        force=force,
        return_matching_indices=True,
    )
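To see what "matching indices from an inner join" means concretely, the simplest case -- a single equi-join condition such as `("id", "id", "==")` -- can be replicated in plain pandas by merging on materialized row positions. This is only an illustrative sketch, not the library's (far more general) implementation:

```python
import pandas as pd

left = pd.DataFrame({"id": [1, 2, 3]})
right = pd.DataFrame({"id": [2, 3, 3]})

# Carry each frame's row positions through the merge, then read them back.
m = pd.merge(
    left.reset_index(), right.reset_index(),
    on="id", suffixes=("_left", "_right"),
)
left_idx = m["index_left"].to_numpy()
right_idx = m["index_right"].to_numpy()
```

For this data, left row 1 matches right row 0, and left row 2 matches right rows 1 and 2, so the pairs are `(1, 0)`, `(2, 1)`, `(2, 2)`.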

patterns(regex_pattern)

This function converts a string into a compiled regular expression.

It can be used to select columns in the index or columns_names arguments of pivot_longer function.

Warning

This function is deprecated. Kindly use re.compile instead.

Parameters:

Name Type Description Default
regex_pattern Union[str, Pattern]

String to be converted to compiled regular expression.

required

Returns:

Type Description
Pattern

A compiled regular expression from the provided regex_pattern.

Source code in janitor/functions/utils.py
def patterns(regex_pattern: Union[str, Pattern]) -> Pattern:
    """This function converts a string into a compiled regular expression.

    It can be used to select columns in the index or columns_names
    arguments of `pivot_longer` function.

    !!!warning

        This function is deprecated. Kindly use `re.compile` instead.

    Args:
        regex_pattern: String to be converted to compiled regular
            expression.

    Returns:
        A compiled regular expression from the provided `regex_pattern`.
    """
    warnings.warn(
        "This function is deprecated. Kindly use `re.compile` instead.",
        DeprecationWarning,
        stacklevel=find_stack_level(),
    )
    check("regular expression", regex_pattern, [str, Pattern])

    return re.compile(regex_pattern)
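Migrating off the deprecated helper is a one-line change: pass a compiled regular expression directly wherever `patterns(...)` was used. The column names below are illustrative:

```python
import re

# before: patterns(r"^Sepal")
# after:  re.compile(r"^Sepal")
pattern = re.compile(r"^Sepal")
matches = [c for c in ["Sepal.Length", "Sepal.Width", "Species"] if pattern.match(c)]
```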

pivot_longer(df, index=None, column_names=None, names_to=None, values_to='value', column_level=None, names_sep=None, names_pattern=None, names_transform=None, dropna=False, sort_by_appearance=False, ignore_index=True)

Unpivots a DataFrame from wide to long format.

This method does not mutate the original DataFrame.

It is modeled after the pivot_longer function in R's tidyr package, and also takes inspiration from R's data.table package.

This function is useful to massage a DataFrame into a format where one or more columns are considered measured variables, and all other columns are considered as identifier variables.

All measured variables are unpivoted (and typically duplicated) along the row axis.

Column selection in index and column_names is possible using the select syntax.

Examples:

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame(
...     {
...         "Sepal.Length": [5.1, 5.9],
...         "Sepal.Width": [3.5, 3.0],
...         "Petal.Length": [1.4, 5.1],
...         "Petal.Width": [0.2, 1.8],
...         "Species": ["setosa", "virginica"],
...     }
... )
>>> df
   Sepal.Length  Sepal.Width  Petal.Length  Petal.Width    Species
0           5.1          3.5           1.4          0.2     setosa
1           5.9          3.0           5.1          1.8  virginica

Replicate pandas' melt:

>>> df.pivot_longer(index = 'Species')
     Species      variable  value
0     setosa  Sepal.Length    5.1
1  virginica  Sepal.Length    5.9
2     setosa   Sepal.Width    3.5
3  virginica   Sepal.Width    3.0
4     setosa  Petal.Length    1.4
5  virginica  Petal.Length    5.1
6     setosa   Petal.Width    0.2
7  virginica   Petal.Width    1.8
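The default behaviour above can be reproduced directly with pandas' own `melt`, which is useful as a mental model before the more powerful `names_to`/`names_pattern` options come into play:

```python
import pandas as pd

df = pd.DataFrame({
    "Sepal.Length": [5.1, 5.9],
    "Sepal.Width": [3.5, 3.0],
    "Petal.Length": [1.4, 5.1],
    "Petal.Width": [0.2, 1.8],
    "Species": ["setosa", "virginica"],
})

# pandas equivalent of df.pivot_longer(index="Species")
melted = df.melt(id_vars="Species")
```

`melted` has 8 rows and the three columns `Species`, `variable`, `value`.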

Convenient, flexible column selection in the index via the select syntax:

>>> from pandas.api.types import is_string_dtype
>>> df.pivot_longer(index = is_string_dtype)
     Species      variable  value
0     setosa  Sepal.Length    5.1
1  virginica  Sepal.Length    5.9
2     setosa   Sepal.Width    3.5
3  virginica   Sepal.Width    3.0
4     setosa  Petal.Length    1.4
5  virginica  Petal.Length    5.1
6     setosa   Petal.Width    0.2
7  virginica   Petal.Width    1.8

Split the column labels into parts:

>>> df.pivot_longer(
...     index = 'Species',
...     names_to = ('part', 'dimension'),
...     names_sep = '.',
...     sort_by_appearance = True,
... )
     Species   part dimension  value
0     setosa  Sepal    Length    5.1
1     setosa  Sepal     Width    3.5
2     setosa  Petal    Length    1.4
3     setosa  Petal     Width    0.2
4  virginica  Sepal    Length    5.9
5  virginica  Sepal     Width    3.0
6  virginica  Petal    Length    5.1
7  virginica  Petal     Width    1.8

Retain parts of the column names as headers:

>>> df.pivot_longer(
...     index = 'Species',
...     names_to = ('part', '.value'),
...     names_sep = '.',
...     sort_by_appearance = True,
... )
     Species   part  Length  Width
0     setosa  Sepal     5.1    3.5
1     setosa  Petal     1.4    0.2
2  virginica  Sepal     5.9    3.0
3  virginica  Petal     5.1    1.8

Split the column labels based on regex:

>>> df = pd.DataFrame({"id": [1], "new_sp_m5564": [2], "newrel_f65": [3]})
>>> df
   id  new_sp_m5564  newrel_f65
0   1             2           3
>>> df.pivot_longer(
...     index = 'id',
...     names_to = ('diagnosis', 'gender', 'age'),
...     names_pattern = r"new_?(.+)_(.)(\d+)",
... )
   id diagnosis gender   age  value
0   1        sp      m  5564      2
1   1       rel      f    65      3

Split the column labels for the above dataframe using named groups in names_pattern:

>>> df.pivot_longer(
...     index = 'id',
...     names_pattern = r"new_?(?P<diagnosis>.+)_(?P<gender>.)(?P<age>\d+)",
... )
    id diagnosis gender   age  value
0   1        sp      m  5564      2
1   1       rel      f    65      3

Convert the dtypes of specific columns with names_transform:

>>> result = (df
...          .pivot_longer(
...              index = 'id',
...              names_to = ('diagnosis', 'gender', 'age'),
...              names_pattern = r"new_?(.+)_(.)(\d+)",
...              names_transform = {'gender': 'category', 'age':'int'})
... )
>>> result.dtypes
id           int64
diagnosis   object
gender    category
age          int64
value        int64
dtype: object

Use multiple .value to reshape dataframe:

>>> df = pd.DataFrame(
...     [
...         {
...             "x_1_mean": 10,
...             "x_2_mean": 20,
...             "y_1_mean": 30,
...             "y_2_mean": 40,
...             "unit": 50,
...         }
...     ]
... )
>>> df
   x_1_mean  x_2_mean  y_1_mean  y_2_mean  unit
0        10        20        30        40    50
>>> df.pivot_longer(
...     index="unit",
...     names_to=(".value", "time", ".value"),
...     names_pattern=r"(x|y)_([0-9])(_mean)",
... )
   unit time  x_mean  y_mean
0    50    1      10      30
1    50    2      20      40

Replicate the above with named groups in names_pattern - use _ instead of .value:

>>> df.pivot_longer(
...     index="unit",
...     names_pattern=r"(?P<_>x|y)_(?P<time>[0-9])(?P<__>_mean)",
... )
   unit time  x_mean  y_mean
0    50    1      10      30
1    50    2      20      40

Convenient, flexible column selection in the column_names via select syntax:

>>> df.pivot_longer(
...     column_names="*mean",
...     names_to=(".value", "time", ".value"),
...     names_pattern=r"(x|y)_([0-9])(_mean)",
... )
   unit time  x_mean  y_mean
0    50    1      10      30
1    50    2      20      40
>>> df.pivot_longer(
...     column_names=slice("x_1_mean", "y_2_mean"),
...     names_to=(".value", "time", ".value"),
...     names_pattern=r"(x|y)_([0-9])(_mean)",
... )
   unit time  x_mean  y_mean
0    50    1      10      30
1    50    2      20      40

Reshape dataframe by passing a sequence to names_pattern:

>>> df = pd.DataFrame({'hr1': [514, 573],
...                    'hr2': [545, 526],
...                    'team': ['Red Sox', 'Yankees'],
...                    'year1': [2007, 2007],
...                    'year2': [2008, 2008]})
>>> df
   hr1  hr2     team  year1  year2
0  514  545  Red Sox   2007   2008
1  573  526  Yankees   2007   2008
>>> df.pivot_longer(
...     index = 'team',
...     names_to = ['year', 'hr'],
...     names_pattern = ['year', 'hr']
... )
      team   hr  year
0  Red Sox  514  2007
1  Yankees  573  2007
2  Red Sox  545  2008
3  Yankees  526  2008

Reshape above dataframe by passing a dictionary to names_pattern:

>>> df.pivot_longer(
...     index = 'team',
...     names_pattern = {"year":"year", "hr":"hr"}
... )
      team   hr  year
0  Red Sox  514  2007
1  Yankees  573  2007
2  Red Sox  545  2008
3  Yankees  526  2008

Multiple values_to:

>>> df = pd.DataFrame(
...         {
...             "City": ["Houston", "Austin", "Hoover"],
...             "State": ["Texas", "Texas", "Alabama"],
...             "Name": ["Aria", "Penelope", "Niko"],
...             "Mango": [4, 10, 90],
...             "Orange": [10, 8, 14],
...             "Watermelon": [40, 99, 43],
...             "Gin": [16, 200, 34],
...             "Vodka": [20, 33, 18],
...         },
...         columns=[
...             "City",
...             "State",
...             "Name",
...             "Mango",
...             "Orange",
...             "Watermelon",
...             "Gin",
...             "Vodka",
...         ],
...     )
>>> df
      City    State      Name  Mango  Orange  Watermelon  Gin  Vodka
0  Houston    Texas      Aria      4      10          40   16     20
1   Austin    Texas  Penelope     10       8          99  200     33
2   Hoover  Alabama      Niko     90      14          43   34     18
>>> df.pivot_longer(
...         index=["City", "State"],
...         column_names=slice("Mango", "Vodka"),
...         names_to=("Fruit", "Drink"),
...         values_to=("Pounds", "Ounces"),
...         names_pattern=["M|O|W", "G|V"],
...     )
      City    State       Fruit  Pounds  Drink  Ounces
0  Houston    Texas       Mango       4    Gin    16.0
1   Austin    Texas       Mango      10    Gin   200.0
2   Hoover  Alabama       Mango      90    Gin    34.0
3  Houston    Texas      Orange      10  Vodka    20.0
4   Austin    Texas      Orange       8  Vodka    33.0
5   Hoover  Alabama      Orange      14  Vodka    18.0
6  Houston    Texas  Watermelon      40   None     NaN
7   Austin    Texas  Watermelon      99   None     NaN
8   Hoover  Alabama  Watermelon      43   None     NaN

Replicate the above transformation with a nested dictionary passed to names_pattern - the outer keys in the names_pattern dictionary are passed to names_to, while the inner keys are passed to values_to:

>>> df.pivot_longer(
...     index=["City", "State"],
...     column_names=slice("Mango", "Vodka"),
...     names_pattern={
...         "Fruit": {"Pounds": "M|O|W"},
...         "Drink": {"Ounces": "G|V"},
...     },
... )
      City    State       Fruit  Pounds  Drink  Ounces
0  Houston    Texas       Mango       4    Gin    16.0
1   Austin    Texas       Mango      10    Gin   200.0
2   Hoover  Alabama       Mango      90    Gin    34.0
3  Houston    Texas      Orange      10  Vodka    20.0
4   Austin    Texas      Orange       8  Vodka    33.0
5   Hoover  Alabama      Orange      14  Vodka    18.0
6  Houston    Texas  Watermelon      40   None     NaN
7   Austin    Texas  Watermelon      99   None     NaN
8   Hoover  Alabama  Watermelon      43   None     NaN

Version Changed

  • 0.24.0
    • Added dropna parameter.
  • 0.24.1
    • names_pattern can accept a dictionary.
    • named groups supported in names_pattern.

Parameters:

Name Type Description Default
df DataFrame

A pandas DataFrame.

required
index Optional[Union[list, tuple, str, Pattern]]

Name(s) of columns to use as identifier variables. Should be either a single column name, or a list/tuple of column names. index should be a list of tuples if the columns are a MultiIndex.

None
column_names Optional[Union[list, tuple, str, Pattern]]

Name(s) of columns to unpivot. Should be either a single column name or a list/tuple of column names. column_names should be a list of tuples if the columns are a MultiIndex.

None
names_to Optional[Union[list, tuple, str]]

Name of new column as a string that will contain what were previously the column names in column_names. The default is variable if no value is provided. It can also be a list/tuple of strings that will serve as new column names, if names_sep or names_pattern is provided. If .value is in names_to, new column names will be extracted from part of the existing column names and will override values_to.

None
values_to Optional[str]

Name of new column as a string that will contain what were previously the values of the columns in column_names. values_to can also be a list/tuple and requires that names_pattern is also a list/tuple.

'value'
column_level Optional[Union[int, str]]

If columns are a MultiIndex, then use this level to unpivot the DataFrame. Provided for compatibility with pandas' melt, and applies only if neither names_sep nor names_pattern is provided.

None
names_sep Optional[Union[str, Pattern]]

Determines how the column name is broken up, if names_to contains multiple values. It takes the same specification as pandas' str.split method, and can be a string or regular expression. names_sep does not work with MultiIndex columns.

None
names_pattern Optional[Union[list, tuple, str, Pattern]]

Determines how the column name is broken up. It can be a regular expression containing matching groups. Under the hood it is processed with pandas' str.extract function. If it is a single regex, the number of groups must match the length of names_to. Named groups are supported, if names_to is None. _ is used instead of .value as a placeholder in named groups. _ can be overloaded for multiple .value calls - _, __, ___, ... names_pattern can also be a list/tuple of regular expressions, or a list/tuple of strings; the strings will be treated as regular expressions. Under the hood it is processed with pandas' str.contains function. For a list/tuple of regular expressions, names_to must also be a list/tuple and the lengths of both arguments must match. names_pattern can also be a dictionary, where the keys are the new column names, while the values can be a regular expression or a string which will be evaluated as a regular expression. Alternatively, a nested dictionary can be used, where the sub key(s) are associated with values_to. Please have a look at the examples for usage. names_pattern does not work with MultiIndex columns.

None
names_transform Optional[Union[str, Callable, dict]]

Use this option to change the types of columns that have been transformed to rows. This does not apply to the values' columns. Accepts any argument that is acceptable by pd.astype.

None
dropna bool

Determines whether or not to drop nulls from the values columns. Default is False.

False
sort_by_appearance Optional[bool]

Boolean value that determines the final look of the DataFrame. If True, the unpivoted DataFrame will be stacked in order of first appearance.

False
ignore_index Optional[bool]

If True, the original index is ignored. If False, the original index is retained and the index labels will be repeated as necessary.

True

Returns:

Type Description
DataFrame

A pandas DataFrame that has been unpivoted from wide to long format.
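
Since a single-regex `names_pattern` is processed with pandas' `str.extract` under the hood, the splitting step can be previewed on its own. A minimal sketch (the column labels and group names below are illustrative, matching the examples further down):

```python
import pandas as pd

# A single regex with three groups splits each column label into three parts,
# mirroring what names_pattern does for names_to = ('diagnosis', 'gender', 'age').
cols = pd.Index(["new_sp_m5564", "newrel_f65"])
parts = cols.str.extract(r"new_?(.+)_(.)(\d+)")
parts.columns = ["diagnosis", "gender", "age"]
print(parts)
```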

Source code in janitor/functions/pivot.py
@pf.register_dataframe_method
def pivot_longer(
    df: pd.DataFrame,
    index: Optional[Union[list, tuple, str, Pattern]] = None,
    column_names: Optional[Union[list, tuple, str, Pattern]] = None,
    names_to: Optional[Union[list, tuple, str]] = None,
    values_to: Optional[str] = "value",
    column_level: Optional[Union[int, str]] = None,
    names_sep: Optional[Union[str, Pattern]] = None,
    names_pattern: Optional[Union[list, tuple, str, Pattern]] = None,
    names_transform: Optional[Union[str, Callable, dict]] = None,
    dropna: bool = False,
    sort_by_appearance: Optional[bool] = False,
    ignore_index: Optional[bool] = True,
) -> pd.DataFrame:
    """Unpivots a DataFrame from *wide* to *long* format.

    This method does not mutate the original DataFrame.

    It is modeled after the `pivot_longer` function in R's tidyr package,
    and also takes inspiration from R's data.table package.

    This function is useful to massage a DataFrame into a format where
    one or more columns are considered measured variables, and all other
    columns are considered as identifier variables.

    All measured variables are *unpivoted* (and typically duplicated) along the
    row axis.

    Column selection in `index` and `column_names` is possible using the
    [`select`][janitor.functions.select.select] syntax.

    Examples:
        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame(
        ...     {
        ...         "Sepal.Length": [5.1, 5.9],
        ...         "Sepal.Width": [3.5, 3.0],
        ...         "Petal.Length": [1.4, 5.1],
        ...         "Petal.Width": [0.2, 1.8],
        ...         "Species": ["setosa", "virginica"],
        ...     }
        ... )
        >>> df
           Sepal.Length  Sepal.Width  Petal.Length  Petal.Width    Species
        0           5.1          3.5           1.4          0.2     setosa
        1           5.9          3.0           5.1          1.8  virginica

        Replicate pandas' melt:
        >>> df.pivot_longer(index = 'Species')
             Species      variable  value
        0     setosa  Sepal.Length    5.1
        1  virginica  Sepal.Length    5.9
        2     setosa   Sepal.Width    3.5
        3  virginica   Sepal.Width    3.0
        4     setosa  Petal.Length    1.4
        5  virginica  Petal.Length    5.1
        6     setosa   Petal.Width    0.2
        7  virginica   Petal.Width    1.8

        Convenient, flexible column selection in the `index` via the
        [`select`][janitor.functions.select.select] syntax:
        >>> from pandas.api.types import is_string_dtype
        >>> df.pivot_longer(index = is_string_dtype)
             Species      variable  value
        0     setosa  Sepal.Length    5.1
        1  virginica  Sepal.Length    5.9
        2     setosa   Sepal.Width    3.5
        3  virginica   Sepal.Width    3.0
        4     setosa  Petal.Length    1.4
        5  virginica  Petal.Length    5.1
        6     setosa   Petal.Width    0.2
        7  virginica   Petal.Width    1.8

        Split the column labels into parts:
        >>> df.pivot_longer(
        ...     index = 'Species',
        ...     names_to = ('part', 'dimension'),
        ...     names_sep = '.',
        ...     sort_by_appearance = True,
        ... )
             Species   part dimension  value
        0     setosa  Sepal    Length    5.1
        1     setosa  Sepal     Width    3.5
        2     setosa  Petal    Length    1.4
        3     setosa  Petal     Width    0.2
        4  virginica  Sepal    Length    5.9
        5  virginica  Sepal     Width    3.0
        6  virginica  Petal    Length    5.1
        7  virginica  Petal     Width    1.8

        Retain parts of the column names as headers:
        >>> df.pivot_longer(
        ...     index = 'Species',
        ...     names_to = ('part', '.value'),
        ...     names_sep = '.',
        ...     sort_by_appearance = True,
        ... )
             Species   part  Length  Width
        0     setosa  Sepal     5.1    3.5
        1     setosa  Petal     1.4    0.2
        2  virginica  Sepal     5.9    3.0
        3  virginica  Petal     5.1    1.8

        Split the column labels based on regex:
        >>> df = pd.DataFrame({"id": [1], "new_sp_m5564": [2], "newrel_f65": [3]})
        >>> df
           id  new_sp_m5564  newrel_f65
        0   1             2           3
        >>> df.pivot_longer(
        ...     index = 'id',
        ...     names_to = ('diagnosis', 'gender', 'age'),
        ...     names_pattern = r"new_?(.+)_(.)(\\d+)",
        ... )
           id diagnosis gender   age  value
        0   1        sp      m  5564      2
        1   1       rel      f    65      3

        Split the column labels for the above dataframe using named groups in `names_pattern`:
        >>> df.pivot_longer(
        ...     index = 'id',
        ...     names_pattern = r"new_?(?P<diagnosis>.+)_(?P<gender>.)(?P<age>\\d+)",
        ... )
           id diagnosis gender   age  value
        0   1        sp      m  5564      2
        1   1       rel      f    65      3

        Convert the dtypes of specific columns with `names_transform`:
        >>> result = (df
        ...          .pivot_longer(
        ...              index = 'id',
        ...              names_to = ('diagnosis', 'gender', 'age'),
        ...              names_pattern = r"new_?(.+)_(.)(\\d+)",
        ...              names_transform = {'gender': 'category', 'age':'int'})
        ... )
        >>> result.dtypes
        id           int64
        diagnosis   object
        gender    category
        age          int64
        value        int64
        dtype: object

        Use multiple `.value` to reshape dataframe:
        >>> df = pd.DataFrame(
        ...     [
        ...         {
        ...             "x_1_mean": 10,
        ...             "x_2_mean": 20,
        ...             "y_1_mean": 30,
        ...             "y_2_mean": 40,
        ...             "unit": 50,
        ...         }
        ...     ]
        ... )
        >>> df
           x_1_mean  x_2_mean  y_1_mean  y_2_mean  unit
        0        10        20        30        40    50
        >>> df.pivot_longer(
        ...     index="unit",
        ...     names_to=(".value", "time", ".value"),
        ...     names_pattern=r"(x|y)_([0-9])(_mean)",
        ... )
           unit time  x_mean  y_mean
        0    50    1      10      30
        1    50    2      20      40

        Replicate the above with named groups in `names_pattern` - use `_` instead of `.value`:
        >>> df.pivot_longer(
        ...     index="unit",
        ...     names_pattern=r"(?P<_>x|y)_(?P<time>[0-9])(?P<__>_mean)",
        ... )
           unit time  x_mean  y_mean
        0    50    1      10      30
        1    50    2      20      40

        Convenient, flexible column selection in the `column_names` via
        [`select`][janitor.functions.select.select] syntax:
        >>> df.pivot_longer(
        ...     column_names="*mean",
        ...     names_to=(".value", "time", ".value"),
        ...     names_pattern=r"(x|y)_([0-9])(_mean)",
        ... )
           unit time  x_mean  y_mean
        0    50    1      10      30
        1    50    2      20      40

        >>> df.pivot_longer(
        ...     column_names=slice("x_1_mean", "y_2_mean"),
        ...     names_to=(".value", "time", ".value"),
        ...     names_pattern=r"(x|y)_([0-9])(_mean)",
        ... )
           unit time  x_mean  y_mean
        0    50    1      10      30
        1    50    2      20      40

        Reshape dataframe by passing a sequence to `names_pattern`:
        >>> df = pd.DataFrame({'hr1': [514, 573],
        ...                    'hr2': [545, 526],
        ...                    'team': ['Red Sox', 'Yankees'],
        ...                    'year1': [2007, 2007],
        ...                    'year2': [2008, 2008]})
        >>> df
           hr1  hr2     team  year1  year2
        0  514  545  Red Sox   2007   2008
        1  573  526  Yankees   2007   2008
        >>> df.pivot_longer(
        ...     index = 'team',
        ...     names_to = ['year', 'hr'],
        ...     names_pattern = ['year', 'hr']
        ... )
              team   hr  year
        0  Red Sox  514  2007
        1  Yankees  573  2007
        2  Red Sox  545  2008
        3  Yankees  526  2008


        Reshape above dataframe by passing a dictionary to `names_pattern`:
        >>> df.pivot_longer(
        ...     index = 'team',
        ...     names_pattern = {"year":"year", "hr":"hr"}
        ... )
              team   hr  year
        0  Red Sox  514  2007
        1  Yankees  573  2007
        2  Red Sox  545  2008
        3  Yankees  526  2008

        Multiple values_to:
        >>> df = pd.DataFrame(
        ...         {
        ...             "City": ["Houston", "Austin", "Hoover"],
        ...             "State": ["Texas", "Texas", "Alabama"],
        ...             "Name": ["Aria", "Penelope", "Niko"],
        ...             "Mango": [4, 10, 90],
        ...             "Orange": [10, 8, 14],
        ...             "Watermelon": [40, 99, 43],
        ...             "Gin": [16, 200, 34],
        ...             "Vodka": [20, 33, 18],
        ...         },
        ...         columns=[
        ...             "City",
        ...             "State",
        ...             "Name",
        ...             "Mango",
        ...             "Orange",
        ...             "Watermelon",
        ...             "Gin",
        ...             "Vodka",
        ...         ],
        ...     )
        >>> df
              City    State      Name  Mango  Orange  Watermelon  Gin  Vodka
        0  Houston    Texas      Aria      4      10          40   16     20
        1   Austin    Texas  Penelope     10       8          99  200     33
        2   Hoover  Alabama      Niko     90      14          43   34     18
        >>> df.pivot_longer(
        ...         index=["City", "State"],
        ...         column_names=slice("Mango", "Vodka"),
        ...         names_to=("Fruit", "Drink"),
        ...         values_to=("Pounds", "Ounces"),
        ...         names_pattern=["M|O|W", "G|V"],
        ...     )
              City    State       Fruit  Pounds  Drink  Ounces
        0  Houston    Texas       Mango       4    Gin    16.0
        1   Austin    Texas       Mango      10    Gin   200.0
        2   Hoover  Alabama       Mango      90    Gin    34.0
        3  Houston    Texas      Orange      10  Vodka    20.0
        4   Austin    Texas      Orange       8  Vodka    33.0
        5   Hoover  Alabama      Orange      14  Vodka    18.0
        6  Houston    Texas  Watermelon      40   None     NaN
        7   Austin    Texas  Watermelon      99   None     NaN
        8   Hoover  Alabama  Watermelon      43   None     NaN

        Replicate the above transformation with a nested dictionary passed to `names_pattern`
        - the outer keys in the `names_pattern` dictionary are passed to `names_to`,
        while the inner keys are passed to `values_to`:
        >>> df.pivot_longer(
        ...     index=["City", "State"],
        ...     column_names=slice("Mango", "Vodka"),
        ...     names_pattern={
        ...         "Fruit": {"Pounds": "M|O|W"},
        ...         "Drink": {"Ounces": "G|V"},
        ...     },
        ... )
              City    State       Fruit  Pounds  Drink  Ounces
        0  Houston    Texas       Mango       4    Gin    16.0
        1   Austin    Texas       Mango      10    Gin   200.0
        2   Hoover  Alabama       Mango      90    Gin    34.0
        3  Houston    Texas      Orange      10  Vodka    20.0
        4   Austin    Texas      Orange       8  Vodka    33.0
        5   Hoover  Alabama      Orange      14  Vodka    18.0
        6  Houston    Texas  Watermelon      40   None     NaN
        7   Austin    Texas  Watermelon      99   None     NaN
        8   Hoover  Alabama  Watermelon      43   None     NaN

    !!! abstract "Version Changed"

        - 0.24.0
            - Added `dropna` parameter.
        - 0.24.1
            - `names_pattern` can accept a dictionary.
            - named groups supported in `names_pattern`.

    Args:
        df: A pandas DataFrame.
        index: Name(s) of columns to use as identifier variables.
            Should be either a single column name, or a list/tuple of
            column names.
            `index` should be a list of tuples if the columns are a MultiIndex.
        column_names: Name(s) of columns to unpivot. Should be either
            a single column name or a list/tuple of column names.
            `column_names` should be a list of tuples
            if the columns are a MultiIndex.
        names_to: Name of new column as a string that will contain
            what were previously the column names in `column_names`.
            The default is `variable` if no value is provided. It can
            also be a list/tuple of strings that will serve as new column
            names, if `names_sep` or `names_pattern` is provided.
            If `.value` is in `names_to`, new column names will be extracted
            from parts of the existing column names, overriding `values_to`.
        values_to: Name of new column as a string that will contain what
            were previously the values of the columns in `column_names`.
            `values_to` can also be a list/tuple
            and requires that `names_pattern` is also a list/tuple.
        column_level: If columns are a MultiIndex, then use this level to
            unpivot the DataFrame. Provided for compatibility with pandas' melt,
            and applies only if neither `names_sep` nor `names_pattern` is
            provided.
        names_sep: Determines how the column name is broken up, if
            `names_to` contains multiple values. It takes the same
            specification as pandas' `str.split` method, and can be a string
            or regular expression. `names_sep` does not work with MultiIndex
            columns.
        names_pattern: Determines how the column name is broken up.
            It can be a regular expression containing matching groups.
            Under the hood it is processed with pandas' `str.extract` function.
            If it is a single regex, the number of groups must match
            the length of `names_to`.
            Named groups are supported if `names_to` is None. `_` is used
            instead of `.value` as a placeholder in named groups.
            `_` can be overloaded for multiple `.value`
            calls - `_`, `__`, `___`, ...
            `names_pattern` can also be a list/tuple of regular expressions.
            It can also be a list/tuple of strings;
            the strings will be treated as regular expressions.
            Under the hood it is processed with pandas' `str.contains` function.
            For a list/tuple of regular expressions,
            `names_to` must also be a list/tuple and the lengths of both
            arguments must match.
            `names_pattern` can also be a dictionary, where the keys are
            the new column names, while the values can be a regular expression
            or a string which will be evaluated as a regular expression.
            Alternatively, a nested dictionary can be used, where the sub
            key(s) are associated with `values_to`. Please have a look
            at the examples for usage.
            `names_pattern` does not work with MultiIndex columns.
        names_transform: Use this option to change the types of columns that
            have been transformed to rows. This does not apply to the values columns.
            Accepts any argument that is acceptable by `pd.astype`.
        dropna: Determines whether or not to drop nulls
            from the values columns. Default is `False`.
        sort_by_appearance: Boolean value that determines
            the final look of the DataFrame. If `True`, the unpivoted DataFrame
            will be stacked in order of first appearance.
        ignore_index: If `True`,
            the original index is ignored. If `False`, the original index
            is retained and the index labels will be repeated as necessary.

    Returns:
        A pandas DataFrame that has been unpivoted from wide to long
            format.
    """  # noqa: E501

    # this code builds on the wonderful work of @benjaminjack’s PR
    # https://github.com/benjaminjack/pyjanitor/commit/e3df817903c20dd21634461c8a92aec137963ed0

    return _computations_pivot_longer(
        df=df,
        index=index,
        column_names=column_names,
        column_level=column_level,
        names_to=names_to,
        values_to=values_to,
        names_sep=names_sep,
        names_pattern=names_pattern,
        names_transform=names_transform,
        dropna=dropna,
        sort_by_appearance=sort_by_appearance,
        ignore_index=ignore_index,
    )
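
For the simplest case (no `names_sep` or `names_pattern`), `pivot_longer` behaves like pandas' `melt`, as the first example above shows. A minimal pandas-only sketch of that equivalence:

```python
import pandas as pd

df = pd.DataFrame(
    {
        "Sepal.Length": [5.1, 5.9],
        "Sepal.Width": [3.5, 3.0],
        "Species": ["setosa", "virginica"],
    }
)
# Equivalent to df.pivot_longer(index='Species') in the default case.
long_df = df.melt(id_vars="Species")
print(long_df)
```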

pivot_wider(df, index=None, names_from=None, values_from=None, flatten_levels=True, names_sep='_', names_glue=None, reset_index=True, names_expand=False, index_expand=False)

Reshapes data from long to wide form.

Note

This function will be deprecated in a 1.x release. Please use pd.DataFrame.pivot instead.

The number of columns is increased, while the number of rows is decreased. It is the inverse of the pivot_longer method, and is a wrapper around the pd.DataFrame.pivot method.

This method does not mutate the original DataFrame.

Column selection in index, names_from and values_from is possible using the select syntax.

A ValueError is raised if the combination of the index and names_from is not unique.

By default, values from values_from are always at the top level if the columns are not flattened. If flattened, the values from values_from are usually at the start of each label in the columns.
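
Since the note above recommends `pd.DataFrame.pivot` as the replacement, the equivalent pandas-only steps can be sketched as follows (including a manual flattening of the resulting MultiIndex columns, roughly what `flatten_levels=True` produces):

```python
import pandas as pd

df = pd.DataFrame(
    {
        "dep": [5.5, 5.5, 6.1, 6.1],
        "step": [1, 2, 1, 2],
        "a": [20, 25, 22, 18],
        "b": [30, 37, 19, 29],
    }
)
wide = df.pivot(index="dep", columns="step")
# Flatten the (value, step) MultiIndex columns with a separator, as names_sep would.
wide.columns = [f"{value}_{step}" for value, step in wide.columns]
wide = wide.reset_index()
print(wide)
```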

Examples:

>>> import pandas as pd
>>> import janitor
>>> df = [{'dep': 5.5, 'step': 1, 'a': 20, 'b': 30},
...       {'dep': 5.5, 'step': 2, 'a': 25, 'b': 37},
...       {'dep': 6.1, 'step': 1, 'a': 22, 'b': 19},
...       {'dep': 6.1, 'step': 2, 'a': 18, 'b': 29}]
>>> df = pd.DataFrame(df)
>>> df
   dep  step   a   b
0  5.5     1  20  30
1  5.5     2  25  37
2  6.1     1  22  19
3  6.1     2  18  29

Pivot and flatten columns:

>>> df.pivot_wider(
...     index = "dep",
...     names_from = "step",
... )
   dep  a_1  a_2  b_1  b_2
0  5.5   20   25   30   37
1  6.1   22   18   19   29

Modify columns with names_sep:

>>> df.pivot_wider(
...     index = "dep",
...     names_from = "step",
...     names_sep = "",
... )
   dep  a1  a2  b1  b2
0  5.5  20  25  30  37
1  6.1  22  18  19  29

Modify columns with names_glue:

>>> df.pivot_wider(
...     index = "dep",
...     names_from = "step",
...     names_glue = "{_value}_step{step}",
... )
   dep  a_step1  a_step2  b_step1  b_step2
0  5.5       20       25       30       37
1  6.1       22       18       19       29

Expand columns to expose implicit missing values - this applies only to categorical columns:

>>> weekdays = ("Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun")
>>> daily = pd.DataFrame(
...     {
...         "day": pd.Categorical(
...             values=("Tue", "Thu", "Fri", "Mon"), categories=weekdays
...         ),
...         "value": (2, 3, 1, 5),
...     },
... index=[0, 0, 0, 0],
... )
>>> daily
   day  value
0  Tue      2
0  Thu      3
0  Fri      1
0  Mon      5
>>> daily.pivot_wider(names_from='day', values_from='value')
   Tue  Thu  Fri  Mon
0    2    3    1    5
>>> (daily
... .pivot_wider(
...     names_from='day',
...     values_from='value',
...     names_expand=True)
... )
   Mon  Tue  Wed  Thu  Fri  Sat  Sun
0    5    2  NaN    3    1  NaN  NaN

Expand the index to expose implicit missing values - this applies only to categorical columns:

>>> daily = daily.assign(letter = list('ABBA'))
>>> daily
   day  value letter
0  Tue      2      A
0  Thu      3      B
0  Fri      1      B
0  Mon      5      A
>>> daily.pivot_wider(index='day',names_from='letter',values_from='value')
   day    A    B
0  Tue  2.0  NaN
1  Thu  NaN  3.0
2  Fri  NaN  1.0
3  Mon  5.0  NaN
>>> (daily
... .pivot_wider(
...     index='day',
...     names_from='letter',
...     values_from='value',
...     index_expand=True)
... )
   day    A    B
0  Mon  5.0  NaN
1  Tue  2.0  NaN
2  Wed  NaN  NaN
3  Thu  NaN  3.0
4  Fri  NaN  1.0
5  Sat  NaN  NaN
6  Sun  NaN  NaN

Version Changed

  • 0.24.0
    • Added reset_index, names_expand and index_expand parameters.

Parameters:

Name Type Description Default
df DataFrame

A pandas DataFrame.

required
index Optional[Union[list, str]]

Name(s) of columns to use as identifier variables. It should be either a single column name, or a list of column names. If index is not provided, the DataFrame's index is used.

None
names_from Optional[Union[list, str]]

Name(s) of column(s) to use to make the new DataFrame's columns. Should be either a single column name, or a list of column names.

None
values_from Optional[Union[list, str]]

Name(s) of column(s) that will be used for populating the new DataFrame's values. If values_from is not specified, all remaining columns will be used.

None
flatten_levels Optional[bool]

If False, the DataFrame stays as a MultiIndex.

True
names_sep str

If names_from or values_from contain multiple variables, this will be used to join the values into a single string to use as a column name. Default is _. Applicable only if flatten_levels is True.

'_'
names_glue str

A string to control the output of the flattened columns. It offers more flexibility in creating custom column names, and uses python's str.format_map under the hood. Simply create the string template, using the column labels in names_from, and special _value as a placeholder for values_from. Applicable only if flatten_levels is True.

None
reset_index bool

Determines whether to restore index as a column/columns. Applicable only if index is provided, and flatten_levels is True.

True
names_expand bool

Expand columns to show all the categories. Applies only if names_from is a categorical column.

False
index_expand bool

Expand the index to show all the categories. Applies only if index is a categorical column.

False

Returns:

Type Description
DataFrame

A pandas DataFrame that has been unpivoted from long to wide form.
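
Because `names_glue` relies on Python's `str.format_map`, its templating can be previewed without a DataFrame. A small sketch using the template from the `names_glue` example above:

```python
# Each flattened column label is produced by filling the template with the
# names_from values plus the special _value placeholder for values_from.
template = "{_value}_step{step}"
labels = [
    template.format_map({"_value": value, "step": step})
    for value in ("a", "b")
    for step in (1, 2)
]
print(labels)
```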

Source code in janitor/functions/pivot.py
@pf.register_dataframe_method
@refactored_function(
    message=(
        "This function will be deprecated in a 1.x release. "
        "Please use `pd.DataFrame.pivot` instead."
    )
)
def pivot_wider(
    df: pd.DataFrame,
    index: Optional[Union[list, str]] = None,
    names_from: Optional[Union[list, str]] = None,
    values_from: Optional[Union[list, str]] = None,
    flatten_levels: Optional[bool] = True,
    names_sep: str = "_",
    names_glue: str = None,
    reset_index: bool = True,
    names_expand: bool = False,
    index_expand: bool = False,
) -> pd.DataFrame:
    """Reshapes data from *long* to *wide* form.

    !!!note

        This function will be deprecated in a 1.x release.
        Please use `pd.DataFrame.pivot` instead.

    The number of columns is increased, while the number
    of rows is decreased. It is the inverse of the
    [`pivot_longer`][janitor.functions.pivot.pivot_longer]
    method, and is a wrapper around the `pd.DataFrame.pivot` method.

    This method does not mutate the original DataFrame.

    Column selection in `index`, `names_from` and `values_from`
    is possible using the
    [`select`][janitor.functions.select.select] syntax.

    A ValueError is raised if the combination
    of the `index` and `names_from` is not unique.

    By default, values from `values_from` are always
    at the top level if the columns are not flattened.
    If flattened, the values from `values_from` are usually
    at the start of each label in the columns.

    Examples:
        >>> import pandas as pd
        >>> import janitor
        >>> df = [{'dep': 5.5, 'step': 1, 'a': 20, 'b': 30},
        ...       {'dep': 5.5, 'step': 2, 'a': 25, 'b': 37},
        ...       {'dep': 6.1, 'step': 1, 'a': 22, 'b': 19},
        ...       {'dep': 6.1, 'step': 2, 'a': 18, 'b': 29}]
        >>> df = pd.DataFrame(df)
        >>> df
           dep  step   a   b
        0  5.5     1  20  30
        1  5.5     2  25  37
        2  6.1     1  22  19
        3  6.1     2  18  29

        Pivot and flatten columns:
        >>> df.pivot_wider( # doctest: +SKIP
        ...     index = "dep",
        ...     names_from = "step",
        ... )
           dep  a_1  a_2  b_1  b_2
        0  5.5   20   25   30   37
        1  6.1   22   18   19   29

        Modify columns with `names_sep`:
        >>> df.pivot_wider( # doctest: +SKIP
        ...     index = "dep",
        ...     names_from = "step",
        ...     names_sep = "",
        ... )
           dep  a1  a2  b1  b2
        0  5.5  20  25  30  37
        1  6.1  22  18  19  29

        Modify columns with `names_glue`:
        >>> df.pivot_wider( # doctest: +SKIP
        ...     index = "dep",
        ...     names_from = "step",
        ...     names_glue = "{_value}_step{step}",
        ... )
           dep  a_step1  a_step2  b_step1  b_step2
        0  5.5       20       25       30       37
        1  6.1       22       18       19       29

        Expand columns to expose implicit missing values
        - this applies only to categorical columns:
        >>> weekdays = ("Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun")
        >>> daily = pd.DataFrame(
        ...     {
        ...         "day": pd.Categorical(
        ...             values=("Tue", "Thu", "Fri", "Mon"), categories=weekdays
        ...         ),
        ...         "value": (2, 3, 1, 5),
        ...     },
        ... index=[0, 0, 0, 0],
        ... )
        >>> daily
           day  value
        0  Tue      2
        0  Thu      3
        0  Fri      1
        0  Mon      5
        >>> daily.pivot_wider(names_from='day', values_from='value') # doctest: +SKIP
           Tue  Thu  Fri  Mon
        0    2    3    1    5
        >>> (daily # doctest: +SKIP
        ... .pivot_wider(
        ...     names_from='day',
        ...     values_from='value',
        ...     names_expand=True)
        ... )
           Mon  Tue  Wed  Thu  Fri  Sat  Sun
        0    5    2  NaN    3    1  NaN  NaN

        Expand the index to expose implicit missing values
        - this applies only to categorical columns:
        >>> daily = daily.assign(letter = list('ABBA'))
        >>> daily
           day  value letter
        0  Tue      2      A
        0  Thu      3      B
        0  Fri      1      B
        0  Mon      5      A
        >>> daily.pivot_wider(index='day',names_from='letter',values_from='value') # doctest: +SKIP
           day    A    B
        0  Tue  2.0  NaN
        1  Thu  NaN  3.0
        2  Fri  NaN  1.0
        3  Mon  5.0  NaN
        >>> (daily # doctest: +SKIP
        ... .pivot_wider(
        ...     index='day',
        ...     names_from='letter',
        ...     values_from='value',
        ...     index_expand=True)
        ... )
           day    A    B
        0  Mon  5.0  NaN
        1  Tue  2.0  NaN
        2  Wed  NaN  NaN
        3  Thu  NaN  3.0
        4  Fri  NaN  1.0
        5  Sat  NaN  NaN
        6  Sun  NaN  NaN


    !!! abstract "Version Changed"

        - 0.24.0
            - Added `reset_index`, `names_expand` and `index_expand` parameters.

    Args:
        df: A pandas DataFrame.
        index: Name(s) of columns to use as identifier variables.
            It should be either a single column name, or a list of column names.
            If `index` is not provided, the DataFrame's index is used.
        names_from: Name(s) of column(s) to use to make the new
            DataFrame's columns. Should be either a single column name,
            or a list of column names.
        values_from: Name(s) of column(s) that will be used for populating
            the new DataFrame's values.
            If `values_from` is not specified, all remaining columns
            will be used.
        flatten_levels: If `False`, the DataFrame stays as a MultiIndex.
        names_sep: If `names_from` or `values_from` contain multiple
            variables, this will be used to join the values into a single string
            to use as a column name. Default is `_`.
            Applicable only if `flatten_levels` is `True`.
        names_glue: A string to control the output of the flattened columns.
            It offers more flexibility in creating custom column names,
            and uses python's `str.format_map` under the hood.
            Simply create the string template,
            using the column labels in `names_from`,
            and special `_value` as a placeholder for `values_from`.
            Applicable only if `flatten_levels` is `True`.
        reset_index: Determines whether to restore `index`
            as a column/columns. Applicable only if `index` is provided,
            and `flatten_levels` is `True`.
        names_expand: Expand columns to show all the categories.
            Applies only if `names_from` is a categorical column.
        index_expand: Expand the index to show all the categories.
            Applies only if `index` is a categorical column.

    Returns:
        A pandas DataFrame that has been unpivoted from long to wide form.
    """  # noqa: E501

    # no need for an explicit copy --> df = df.copy()
    # `pd.pivot` creates one
    return _computations_pivot_wider(
        df,
        index,
        names_from,
        values_from,
        flatten_levels,
        names_sep,
        names_glue,
        reset_index,
        names_expand,
        index_expand,
    )
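The `names_glue` templating described above can be emulated in plain pandas: pivot to a wide frame, then format each column label through a template mapping, much as `str.format_map` is used under the hood. This is a minimal pandas-only sketch (the frame, the `"{var}_val"` template, and the column names are illustrative, not from the docs above):

```python
import pandas as pd

df = pd.DataFrame({
    "name": ["a", "a", "b", "b"],
    "var": ["x", "y", "x", "y"],
    "val": [1, 2, 3, 4],
})

# Long-to-wide reshape: `var` values become columns, `val` fills the cells.
wide = df.pivot(index="name", columns="var", values="val")

# Emulate a names_glue-style template by formatting each column label
# through a mapping, as `str.format_map` does.
template = "{var}_val"
wide.columns = [template.format_map({"var": c}) for c in wide.columns]

# Restore `name` as a regular column (the reset_index=True behaviour).
wide = wide.reset_index()
```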

rename_column(df, old_column_name, new_column_name)

Rename a column.

This method does not mutate the original DataFrame.

Note

This function will be deprecated in a 1.x release. Please use pd.DataFrame.rename instead.

This is just syntactic sugar/a convenience function for renaming one column at a time. If you are convinced that there are multiple columns in need of changing, then use the pandas.DataFrame.rename method.

Examples:

Change the name of column 'a' to 'a_new'.

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({"a": list(range(3)), "b": list("abc")})
>>> df.rename_column(old_column_name='a', new_column_name='a_new')
   a_new  b
0      0  a
1      1  b
2      2  c

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `df` | `DataFrame` | The pandas DataFrame object. | *required* |
| `old_column_name` | `str` | The old column name. | *required* |
| `new_column_name` | `str` | The new column name. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `DataFrame` | A pandas DataFrame with renamed columns. |

Source code in janitor/functions/rename_columns.py
@pf.register_dataframe_method
@refactored_function(
    message=(
        "This function will be deprecated in a 1.x release. "
        "Please use `pd.DataFrame.rename` instead."
    )
)
@deprecated_alias(old="old_column_name", new="new_column_name")
def rename_column(
    df: pd.DataFrame,
    old_column_name: str,
    new_column_name: str,
) -> pd.DataFrame:
    """Rename a column in place.

    This method does not mutate the original DataFrame.

    !!!note

        This function will be deprecated in a 1.x release.
        Please use `pd.DataFrame.rename` instead.

    This is just syntactic sugar/a convenience function for renaming one column at a time.
    If you are convinced that there are multiple columns in need of changing,
    then use the `pandas.DataFrame.rename` method.

    Examples:
        Change the name of column 'a' to 'a_new'.

        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame({"a": list(range(3)), "b": list("abc")})
        >>> df.rename_column(old_column_name='a', new_column_name='a_new')
           a_new  b
        0      0  a
        1      1  b
        2      2  c

    Args:
        df: The pandas DataFrame object.
        old_column_name: The old column name.
        new_column_name: The new column name.

    Returns:
        A pandas DataFrame with renamed columns.
    """  # noqa: E501

    check_column(df, [old_column_name])

    return df.rename(columns={old_column_name: new_column_name})
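Since the note above recommends `pd.DataFrame.rename` as the replacement, here is a minimal sketch of the equivalent call. Like `rename_column`, it returns a new DataFrame and leaves the original untouched; the mapping can carry one or many columns:

```python
import pandas as pd

df = pd.DataFrame({"a": range(3), "b": list("abc")})

# `rename` takes a {old: new} mapping and returns a new DataFrame.
renamed = df.rename(columns={"a": "a_new"})
```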

select_columns(df, *args, invert=False)

Method-chainable selection of columns.

It accepts a string, shell-like glob strings (*string*), regex, slice, array-like object, or a list of the previous options.

Selection on a MultiIndex on a level, or multiple levels, is possible with a dictionary.

This method does not mutate the original DataFrame.

Optional ability to invert selection of columns available as well.

Note

The preferred option when selecting columns or rows in a Pandas DataFrame is with .loc or .iloc methods. select_columns is primarily for convenience.

Note

This function will be deprecated in a 1.x release. Please use jn.select instead.

Examples:

>>> import pandas as pd
>>> import janitor
>>> from numpy import nan
>>> pd.set_option("display.max_columns", None)
>>> pd.set_option("display.expand_frame_repr", False)
>>> pd.set_option("max_colwidth", None)
>>> data = {'name': ['Cheetah','Owl monkey','Mountain beaver',
...                  'Greater short-tailed shrew','Cow'],
...         'genus': ['Acinonyx', 'Aotus', 'Aplodontia', 'Blarina', 'Bos'],
...         'vore': ['carni', 'omni', 'herbi', 'omni', 'herbi'],
...         'order': ['Carnivora','Primates','Rodentia','Soricomorpha','Artiodactyla'],
...         'conservation': ['lc', nan, 'nt', 'lc', 'domesticated'],
...         'sleep_total': [12.1, 17.0, 14.4, 14.9, 4.0],
...         'sleep_rem': [nan, 1.8, 2.4, 2.3, 0.7],
...         'sleep_cycle': [nan, nan, nan, 0.133333333, 0.666666667],
...         'awake': [11.9, 7.0, 9.6, 9.1, 20.0],
...         'brainwt': [nan, 0.0155, nan, 0.00029, 0.423],
...         'bodywt': [50.0, 0.48, 1.35, 0.019, 600.0]}
>>> df = pd.DataFrame(data)
>>> df
                         name       genus   vore         order  conservation  sleep_total  sleep_rem  sleep_cycle  awake  brainwt   bodywt
0                     Cheetah    Acinonyx  carni     Carnivora            lc         12.1        NaN          NaN   11.9      NaN   50.000
1                  Owl monkey       Aotus   omni      Primates           NaN         17.0        1.8          NaN    7.0  0.01550    0.480
2             Mountain beaver  Aplodontia  herbi      Rodentia            nt         14.4        2.4          NaN    9.6      NaN    1.350
3  Greater short-tailed shrew     Blarina   omni  Soricomorpha            lc         14.9        2.3     0.133333    9.1  0.00029    0.019
4                         Cow         Bos  herbi  Artiodactyla  domesticated          4.0        0.7     0.666667   20.0  0.42300  600.000

Explicit label selection:

>>> df.select_columns('name', 'order')
                         name         order
0                     Cheetah     Carnivora
1                  Owl monkey      Primates
2             Mountain beaver      Rodentia
3  Greater short-tailed shrew  Soricomorpha
4                         Cow  Artiodactyla

Selection via globbing:

>>> df.select_columns("sleep*", "*wt")
   sleep_total  sleep_rem  sleep_cycle  brainwt   bodywt
0         12.1        NaN          NaN      NaN   50.000
1         17.0        1.8          NaN  0.01550    0.480
2         14.4        2.4          NaN      NaN    1.350
3         14.9        2.3     0.133333  0.00029    0.019
4          4.0        0.7     0.666667  0.42300  600.000

Selection via regex:

>>> import re
>>> df.select_columns(re.compile(r"o.+er"))
          order  conservation
0     Carnivora            lc
1      Primates           NaN
2      Rodentia            nt
3  Soricomorpha            lc
4  Artiodactyla  domesticated

Selection via slicing:

>>> df.select_columns(slice('name','order'), slice('sleep_total','sleep_cycle'))
                         name       genus   vore         order  sleep_total  sleep_rem  sleep_cycle
0                     Cheetah    Acinonyx  carni     Carnivora         12.1        NaN          NaN
1                  Owl monkey       Aotus   omni      Primates         17.0        1.8          NaN
2             Mountain beaver  Aplodontia  herbi      Rodentia         14.4        2.4          NaN
3  Greater short-tailed shrew     Blarina   omni  Soricomorpha         14.9        2.3     0.133333
4                         Cow         Bos  herbi  Artiodactyla          4.0        0.7     0.666667

Selection via callable:

>>> from pandas.api.types import is_numeric_dtype
>>> df.select_columns(is_numeric_dtype)
   sleep_total  sleep_rem  sleep_cycle  awake  brainwt   bodywt
0         12.1        NaN          NaN   11.9      NaN   50.000
1         17.0        1.8          NaN    7.0  0.01550    0.480
2         14.4        2.4          NaN    9.6      NaN    1.350
3         14.9        2.3     0.133333    9.1  0.00029    0.019
4          4.0        0.7     0.666667   20.0  0.42300  600.000
>>> df.select_columns(lambda f: f.isna().any())
   conservation  sleep_rem  sleep_cycle  brainwt
0            lc        NaN          NaN      NaN
1           NaN        1.8          NaN  0.01550
2            nt        2.4          NaN      NaN
3            lc        2.3     0.133333  0.00029
4  domesticated        0.7     0.666667  0.42300

Exclude columns with the invert parameter:

>>> df.select_columns(is_numeric_dtype, invert=True)
                         name       genus   vore         order  conservation
0                     Cheetah    Acinonyx  carni     Carnivora            lc
1                  Owl monkey       Aotus   omni      Primates           NaN
2             Mountain beaver  Aplodontia  herbi      Rodentia            nt
3  Greater short-tailed shrew     Blarina   omni  Soricomorpha            lc
4                         Cow         Bos  herbi  Artiodactyla  domesticated

Exclude columns with the DropLabel class:

>>> from janitor import DropLabel
>>> df.select_columns(DropLabel(slice("name", "awake")), "conservation")
   brainwt   bodywt  conservation
0      NaN   50.000            lc
1  0.01550    0.480           NaN
2      NaN    1.350            nt
3  0.00029    0.019            lc
4  0.42300  600.000  domesticated

Selection on MultiIndex columns:

>>> d = {'num_legs': [4, 4, 2, 2],
...      'num_wings': [0, 0, 2, 2],
...      'class': ['mammal', 'mammal', 'mammal', 'bird'],
...      'animal': ['cat', 'dog', 'bat', 'penguin'],
...      'locomotion': ['walks', 'walks', 'flies', 'walks']}
>>> df = pd.DataFrame(data=d)
>>> df = df.set_index(['class', 'animal', 'locomotion']).T
>>> df
class      mammal                bird
animal        cat   dog   bat penguin
locomotion  walks walks flies   walks
num_legs        4     4     2       2
num_wings       0     0     2       2

Selection with a scalar:

>>> df.select_columns('mammal')
class      mammal
animal        cat   dog   bat
locomotion  walks walks flies
num_legs        4     4     2
num_wings       0     0     2

Selection with a tuple:

>>> df.select_columns(('mammal','bat'))
class      mammal
animal        bat
locomotion  flies
num_legs        2
num_wings       2

Selection within a level is possible with a dictionary, where the key is either a level name or number:

>>> df.select_columns({'animal':'cat'})
class      mammal
animal        cat
locomotion  walks
num_legs        4
num_wings       0
>>> df.select_columns({1:["bat", "cat"]})
class      mammal
animal        bat   cat
locomotion  flies walks
num_legs        2     4
num_wings       2     0

Selection on multiple levels:

>>> df.select_columns({"class":"mammal", "locomotion":"flies"})
class      mammal
animal        bat
locomotion  flies
num_legs        2
num_wings       2

Selection with a regex on a level:

>>> df.select_columns({"animal":re.compile(".+t$")})
class      mammal
animal        cat   bat
locomotion  walks flies
num_legs        4     2
num_wings       0     2

Selection with a callable on a level:

>>> df.select_columns({"animal":lambda f: f.str.endswith('t')})
class      mammal
animal        cat   bat
locomotion  walks flies
num_legs        4     2
num_wings       0     2

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `df` | `DataFrame` | A pandas DataFrame. | *required* |
| `*args` | `Any` | Valid inputs include: an exact column name to look for, a shell-style glob string (e.g. `*_thing_*`), a regular expression, a callable, or variable arguments of all the aforementioned. A sequence of booleans is also acceptable. A dictionary can be used for selection on a MultiIndex on different levels. | `()` |
| `invert` | `bool` | Whether or not to invert the selection. This will result in the selection of the complement of the columns provided. | `False` |

Returns:

| Type | Description |
| --- | --- |
| `DataFrame` | A pandas DataFrame with the specified columns selected. |

Source code in janitor/functions/select.py
@pf.register_dataframe_method
@refactored_function(
    message=(
        "This function will be deprecated in a 1.x release. "
        "Please use `jn.select` instead."
    )
)
def select_columns(
    df: pd.DataFrame,
    *args: Any,
    invert: bool = False,
) -> pd.DataFrame:
    """Method-chainable selection of columns.

    It accepts a string, shell-like glob strings `(*string*)`,
    regex, slice, array-like object, or a list of the previous options.

    Selection on a MultiIndex on a level, or multiple levels,
    is possible with a dictionary.

    This method does not mutate the original DataFrame.

    Optional ability to invert selection of columns available as well.

    !!!note

        The preferred option when selecting columns or rows in a Pandas DataFrame
        is with `.loc` or `.iloc` methods.
        `select_columns` is primarily for convenience.

    !!!note

        This function will be deprecated in a 1.x release.
        Please use `jn.select` instead.

    Examples:
        >>> import pandas as pd
        >>> import janitor
        >>> from numpy import nan
        >>> pd.set_option("display.max_columns", None)
        >>> pd.set_option("display.expand_frame_repr", False)
        >>> pd.set_option("max_colwidth", None)
        >>> data = {'name': ['Cheetah','Owl monkey','Mountain beaver',
        ...                  'Greater short-tailed shrew','Cow'],
        ...         'genus': ['Acinonyx', 'Aotus', 'Aplodontia', 'Blarina', 'Bos'],
        ...         'vore': ['carni', 'omni', 'herbi', 'omni', 'herbi'],
        ...         'order': ['Carnivora','Primates','Rodentia','Soricomorpha','Artiodactyla'],
        ...         'conservation': ['lc', nan, 'nt', 'lc', 'domesticated'],
        ...         'sleep_total': [12.1, 17.0, 14.4, 14.9, 4.0],
        ...         'sleep_rem': [nan, 1.8, 2.4, 2.3, 0.7],
        ...         'sleep_cycle': [nan, nan, nan, 0.133333333, 0.666666667],
        ...         'awake': [11.9, 7.0, 9.6, 9.1, 20.0],
        ...         'brainwt': [nan, 0.0155, nan, 0.00029, 0.423],
        ...         'bodywt': [50.0, 0.48, 1.35, 0.019, 600.0]}
        >>> df = pd.DataFrame(data)
        >>> df
                                 name       genus   vore         order  conservation  sleep_total  sleep_rem  sleep_cycle  awake  brainwt   bodywt
        0                     Cheetah    Acinonyx  carni     Carnivora            lc         12.1        NaN          NaN   11.9      NaN   50.000
        1                  Owl monkey       Aotus   omni      Primates           NaN         17.0        1.8          NaN    7.0  0.01550    0.480
        2             Mountain beaver  Aplodontia  herbi      Rodentia            nt         14.4        2.4          NaN    9.6      NaN    1.350
        3  Greater short-tailed shrew     Blarina   omni  Soricomorpha            lc         14.9        2.3     0.133333    9.1  0.00029    0.019
        4                         Cow         Bos  herbi  Artiodactyla  domesticated          4.0        0.7     0.666667   20.0  0.42300  600.000

        Explicit label selection:
        >>> df.select_columns('name', 'order')
                                 name         order
        0                     Cheetah     Carnivora
        1                  Owl monkey      Primates
        2             Mountain beaver      Rodentia
        3  Greater short-tailed shrew  Soricomorpha
        4                         Cow  Artiodactyla

        Selection via globbing:
        >>> df.select_columns("sleep*", "*wt")
           sleep_total  sleep_rem  sleep_cycle  brainwt   bodywt
        0         12.1        NaN          NaN      NaN   50.000
        1         17.0        1.8          NaN  0.01550    0.480
        2         14.4        2.4          NaN      NaN    1.350
        3         14.9        2.3     0.133333  0.00029    0.019
        4          4.0        0.7     0.666667  0.42300  600.000

        Selection via regex:
        >>> import re
        >>> df.select_columns(re.compile(r"o.+er"))
                  order  conservation
        0     Carnivora            lc
        1      Primates           NaN
        2      Rodentia            nt
        3  Soricomorpha            lc
        4  Artiodactyla  domesticated

        Selection via slicing:
        >>> df.select_columns(slice('name','order'), slice('sleep_total','sleep_cycle'))
                                 name       genus   vore         order  sleep_total  sleep_rem  sleep_cycle
        0                     Cheetah    Acinonyx  carni     Carnivora         12.1        NaN          NaN
        1                  Owl monkey       Aotus   omni      Primates         17.0        1.8          NaN
        2             Mountain beaver  Aplodontia  herbi      Rodentia         14.4        2.4          NaN
        3  Greater short-tailed shrew     Blarina   omni  Soricomorpha         14.9        2.3     0.133333
        4                         Cow         Bos  herbi  Artiodactyla          4.0        0.7     0.666667

        Selection via callable:
        >>> from pandas.api.types import is_numeric_dtype
        >>> df.select_columns(is_numeric_dtype)
           sleep_total  sleep_rem  sleep_cycle  awake  brainwt   bodywt
        0         12.1        NaN          NaN   11.9      NaN   50.000
        1         17.0        1.8          NaN    7.0  0.01550    0.480
        2         14.4        2.4          NaN    9.6      NaN    1.350
        3         14.9        2.3     0.133333    9.1  0.00029    0.019
        4          4.0        0.7     0.666667   20.0  0.42300  600.000
        >>> df.select_columns(lambda f: f.isna().any())
           conservation  sleep_rem  sleep_cycle  brainwt
        0            lc        NaN          NaN      NaN
        1           NaN        1.8          NaN  0.01550
        2            nt        2.4          NaN      NaN
        3            lc        2.3     0.133333  0.00029
        4  domesticated        0.7     0.666667  0.42300

        Exclude columns with the `invert` parameter:
        >>> df.select_columns(is_numeric_dtype, invert=True)
                                 name       genus   vore         order  conservation
        0                     Cheetah    Acinonyx  carni     Carnivora            lc
        1                  Owl monkey       Aotus   omni      Primates           NaN
        2             Mountain beaver  Aplodontia  herbi      Rodentia            nt
        3  Greater short-tailed shrew     Blarina   omni  Soricomorpha            lc
        4                         Cow         Bos  herbi  Artiodactyla  domesticated

        Exclude columns with the `DropLabel` class:
        >>> from janitor import DropLabel
        >>> df.select_columns(DropLabel(slice("name", "awake")), "conservation")
           brainwt   bodywt  conservation
        0      NaN   50.000            lc
        1  0.01550    0.480           NaN
        2      NaN    1.350            nt
        3  0.00029    0.019            lc
        4  0.42300  600.000  domesticated

        Selection on MultiIndex columns:
        >>> d = {'num_legs': [4, 4, 2, 2],
        ...      'num_wings': [0, 0, 2, 2],
        ...      'class': ['mammal', 'mammal', 'mammal', 'bird'],
        ...      'animal': ['cat', 'dog', 'bat', 'penguin'],
        ...      'locomotion': ['walks', 'walks', 'flies', 'walks']}
        >>> df = pd.DataFrame(data=d)
        >>> df = df.set_index(['class', 'animal', 'locomotion']).T
        >>> df
        class      mammal                bird
        animal        cat   dog   bat penguin
        locomotion  walks walks flies   walks
        num_legs        4     4     2       2
        num_wings       0     0     2       2

        Selection with a scalar:
        >>> df.select_columns('mammal')
        class      mammal
        animal        cat   dog   bat
        locomotion  walks walks flies
        num_legs        4     4     2
        num_wings       0     0     2

        Selection with a tuple:
        >>> df.select_columns(('mammal','bat'))
        class      mammal
        animal        bat
        locomotion  flies
        num_legs        2
        num_wings       2

        Selection within a level is possible with a dictionary,
        where the key is either a level name or number:
        >>> df.select_columns({'animal':'cat'})
        class      mammal
        animal        cat
        locomotion  walks
        num_legs        4
        num_wings       0
        >>> df.select_columns({1:["bat", "cat"]})
        class      mammal
        animal        bat   cat
        locomotion  flies walks
        num_legs        2     4
        num_wings       2     0

        Selection on multiple levels:
        >>> df.select_columns({"class":"mammal", "locomotion":"flies"})
        class      mammal
        animal        bat
        locomotion  flies
        num_legs        2
        num_wings       2

        Selection with a regex on a level:
        >>> df.select_columns({"animal":re.compile(".+t$")})
        class      mammal
        animal        cat   bat
        locomotion  walks flies
        num_legs        4     2
        num_wings       0     2

        Selection with a callable on a level:
        >>> df.select_columns({"animal":lambda f: f.str.endswith('t')})
        class      mammal
        animal        cat   bat
        locomotion  walks flies
        num_legs        4     2
        num_wings       0     2

    Args:
        df: A pandas DataFrame.
        *args: Valid inputs include: an exact column name to look for,
            a shell-style glob string (e.g. `*_thing_*`),
            a regular expression,
            a callable,
            or variable arguments of all the aforementioned.
            A sequence of booleans is also acceptable.
            A dictionary can be used for selection
            on a MultiIndex on different levels.
        invert: Whether or not to invert the selection.
            This will result in the selection
            of the complement of the columns provided.

    Returns:
        A pandas DataFrame with the specified columns selected.
    """  # noqa: E501

    return _select(df, columns=list(args), invert=invert)
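The note above points to `.loc` as the preferred selection tool. Two of the convenience styles shown in the examples (globbing and regex) map onto plain pandas directly: globbing via `fnmatch` over the column labels, and regex via the built-in `DataFrame.filter`. A minimal sketch (the toy frame is illustrative):

```python
from fnmatch import fnmatch

import pandas as pd

df = pd.DataFrame({
    "sleep_total": [1.0],
    "sleep_rem": [2.0],
    "bodywt": [3.0],
    "name": ["cat"],
})

# Glob-style selection with .loc: match labels against a shell pattern.
globbed = df.loc[:, [c for c in df.columns if fnmatch(c, "sleep*")]]

# Regex selection is built into pandas via DataFrame.filter.
by_regex = df.filter(regex=r"wt$")
```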

select_rows(df, *args, invert=False)

Method-chainable selection of rows.

It accepts a string, shell-like glob strings (*string*), regex, slice, array-like object, or a list of the previous options.

Selection on a MultiIndex on a level, or multiple levels, is possible with a dictionary.

This method does not mutate the original DataFrame.

Optional ability to invert selection of rows available as well.

New in version 0.24.0

Note

The preferred option when selecting columns or rows in a Pandas DataFrame is with .loc or .iloc methods, as they are generally performant. select_rows is primarily for convenience.

Note

This function will be deprecated in a 1.x release. Please use jn.select instead.

Examples:

>>> import pandas as pd
>>> import janitor
>>> df = {"col1": [1, 2], "foo": [3, 4], "col2": [5, 6]}
>>> df = pd.DataFrame.from_dict(df, orient='index')
>>> df
      0  1
col1  1  2
foo   3  4
col2  5  6
>>> df.select_rows("col*")
      0  1
col1  1  2
col2  5  6

More examples can be found in the select_columns section.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `df` | `DataFrame` | A pandas DataFrame. | *required* |
| `*args` | `Any` | Valid inputs include: an exact index name to look for, a shell-style glob string (e.g. `*_thing_*`), a regular expression, a callable, or variable arguments of all the aforementioned. A sequence of booleans is also acceptable. A dictionary can be used for selection on a MultiIndex on different levels. | `()` |
| `invert` | `bool` | Whether or not to invert the selection. This will result in the selection of the complement of the rows provided. | `False` |

Returns:

| Type | Description |
| --- | --- |
| `DataFrame` | A pandas DataFrame with the specified rows selected. |

Source code in janitor/functions/select.py
@pf.register_dataframe_method
@refactored_function(
    message=(
        "This function will be deprecated in a 1.x release. "
        "Please use `jn.select` instead."
    )
)
def select_rows(
    df: pd.DataFrame,
    *args: Any,
    invert: bool = False,
) -> pd.DataFrame:
    """Method-chainable selection of rows.

    It accepts a string, shell-like glob strings `(*string*)`,
    regex, slice, array-like object, or a list of the previous options.

    Selection on a MultiIndex on a level, or multiple levels,
    is possible with a dictionary.

    This method does not mutate the original DataFrame.

    Optional ability to invert selection of rows available as well.


    !!! info "New in version 0.24.0"

    !!!note

        The preferred option when selecting columns or rows in a Pandas DataFrame
        is with `.loc` or `.iloc` methods, as they are generally performant.
        `select_rows` is primarily for convenience.

    !!!note

        This function will be deprecated in a 1.x release.
        Please use `jn.select` instead.

    Examples:
        >>> import pandas as pd
        >>> import janitor
        >>> df = {"col1": [1, 2], "foo": [3, 4], "col2": [5, 6]}
        >>> df = pd.DataFrame.from_dict(df, orient='index')
        >>> df
              0  1
        col1  1  2
        foo   3  4
        col2  5  6
        >>> df.select_rows("col*")
              0  1
        col1  1  2
        col2  5  6

    More examples can be found in the
    [`select_columns`][janitor.functions.select.select_columns] section.

    Args:
        df: A pandas DataFrame.
        *args: Valid inputs include: an exact index name to look for,
            a shell-style glob string (e.g. `*_thing_*`),
            a regular expression,
            a callable,
            or variable arguments of all the aforementioned.
            A sequence of booleans is also acceptable.
            A dictionary can be used for selection
            on a MultiIndex on different levels.
        invert: Whether or not to invert the selection.
            This will result in the selection
            of the complement of the rows provided.

    Returns:
        A pandas DataFrame with the specified rows selected.
    """  # noqa: E501
    return _select(df, rows=list(args), invert=invert)
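The `"col*"` row selection from the example above has a close plain-pandas analogue: `DataFrame.filter` with `axis=0` selects index labels by substring (`like=`) or regex. A minimal sketch reusing the example frame:

```python
import pandas as pd

df = pd.DataFrame({0: [1, 3, 5], 1: [2, 4, 6]}, index=["col1", "foo", "col2"])

# Keep rows whose index label contains "col", analogous to the
# glob-style `df.select_rows("col*")` shown above.
rows = df.filter(like="col", axis=0)
```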

transform_column(df, column_name, function, dest_column_name=None, elementwise=True)

Transform the given column using the provided function.

Meant to be the method-chaining equivalent of:

df[dest_column_name] = df[column_name].apply(function)

Functions can be applied in one of two ways:

  • Element-wise (default; elementwise=True). Then, the individual column elements will be passed in as the first argument of function.
  • Column-wise (elementwise=False). Then, function is expected to take in a pandas Series and return a sequence that is of identical length to the original.

If dest_column_name is provided, then the transformation result is stored in that column. Otherwise, the transformed result is stored under the name of the original column.

This method does not mutate the original DataFrame.

Examples:

Transform a column in-place with an element-wise function.

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({
...     "a": [2, 3, 4],
...     "b": ["area", "pyjanitor", "grapefruit"],
... })
>>> df
   a           b
0  2        area
1  3   pyjanitor
2  4  grapefruit
>>> df.transform_column(
...     column_name="a",
...     function=lambda x: x**2 - 1,
... )
    a           b
0   3        area
1   8   pyjanitor
2  15  grapefruit

Examples:

Transform a column in place with a column-wise function.

>>> df.transform_column(
...     column_name="b",
...     function=lambda srs: srs.str[:5],
...     elementwise=False,
... )
   a      b
0  2   area
1  3  pyjan
2  4  grape

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `df` | `DataFrame` | A pandas DataFrame. | *required* |
| `column_name` | `Hashable` | The column to transform. | *required* |
| `function` | `Callable` | A function to apply on the column. | *required* |
| `dest_column_name` | `Optional[str]` | The column name to store the transformation result in. Defaults to `None`, which will result in the original column being overwritten. If a name is provided here, then a new column with the transformed values will be created. | `None` |
| `elementwise` | `bool` | Whether to apply the function elementwise. If `True`, the function receives each individual value in the column as its first argument and should return a transformed value. If `False`, the function should expect a pandas Series passed into it, and return a pandas Series. | `True` |

Returns:

| Type | Description |
| --- | --- |
| `DataFrame` | A pandas DataFrame with a transformed column. |

Source code in janitor/functions/transform_columns.py
@pf.register_dataframe_method
@deprecated_alias(col_name="column_name", dest_col_name="dest_column_name")
def transform_column(
    df: pd.DataFrame,
    column_name: Hashable,
    function: Callable,
    dest_column_name: Optional[str] = None,
    elementwise: bool = True,
) -> pd.DataFrame:
    """Transform the given column using the provided function.

    Meant to be the method-chaining equivalent of:
    ```python
    df[dest_column_name] = df[column_name].apply(function)
    ```

    Functions can be applied in one of two ways:

    - **Element-wise** (default; `elementwise=True`). Then, the individual
    column elements will be passed in as the first argument of `function`.
    - **Column-wise** (`elementwise=False`). Then, `function` is expected to
    take in a pandas Series and return a sequence that is of identical length
    to the original.

    If `dest_column_name` is provided, then the transformation result is stored
    in that column. Otherwise, the transformed result is stored under the name
    of the original column.

    This method does not mutate the original DataFrame.

    Examples:
        Transform a column in-place with an element-wise function.

        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame({
        ...     "a": [2, 3, 4],
        ...     "b": ["area", "pyjanitor", "grapefruit"],
        ... })
        >>> df
           a           b
        0  2        area
        1  3   pyjanitor
        2  4  grapefruit
        >>> df.transform_column(
        ...     column_name="a",
        ...     function=lambda x: x**2 - 1,
        ... )
            a           b
        0   3        area
        1   8   pyjanitor
        2  15  grapefruit

    Examples:
        Transform a column in place with a column-wise function.

        >>> df.transform_column(
        ...     column_name="b",
        ...     function=lambda srs: srs.str[:5],
        ...     elementwise=False,
        ... )
           a      b
        0  2   area
        1  3  pyjan
        2  4  grape

    Args:
        df: A pandas DataFrame.
        column_name: The column to transform.
        function: A function to apply on the column.
        dest_column_name: The column name to store the transformation result
            in. Defaults to None, which will result in the original column
            name being overwritten. If a name is provided here, then a new
            column with the transformed values will be created.
        elementwise: Whether to apply the function elementwise or not.
            If `elementwise` is True, then the function's first argument
            should be the data type of each datum in the column of data,
            and should return a transformed datum.
            If `elementwise` is False, then the function should expect
            a pandas Series passed into it, and return a pandas Series.

    Returns:
        A pandas DataFrame with a transformed column.
    """
    check_column(df, column_name)

    if dest_column_name is None:
        dest_column_name = column_name
    elif dest_column_name != column_name:
        # If `dest_column_name` is provided and equals `column_name`, then we
        # assume that the user's intent is to perform an in-place
        # transformation (Same behaviour as when `dest_column_name` = None).
        # Otherwise we throw an error if `dest_column_name` already exists in
        # df.
        check_column(df, dest_column_name, present=False)

    result = _get_transform_column_result(
        df[column_name],
        function,
        elementwise,
    )

    return df.assign(**{dest_column_name: result})
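
The two application modes can be seen in plain pandas, outside the library; a minimal sketch (not part of pyjanitor) of what each mode amounts to:

```python
import pandas as pd

df = pd.DataFrame({"a": [2, 3, 4]})

# elementwise=True: each scalar is passed to the function, as with Series.apply
elementwise = df["a"].apply(lambda x: x**2 - 1)

# elementwise=False: the whole Series is passed in once (vectorized)
columnwise = df["a"] ** 2 - 1

# Both produce the same values; the column-wise path is usually faster
assert elementwise.tolist() == columnwise.tolist() == [3, 8, 15]
```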

truncate_datetime_dataframe(df, datepart)

Truncate times down to a user-specified precision of year, month, day, hour, minute, or second.

This method does not mutate the original DataFrame.

Examples:

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({
...     "foo": ["xxxx", "yyyy", "zzzz"],
...     "dt": pd.date_range("2020-03-11", periods=3, freq="15H"),
... })
>>> df
    foo                  dt
0  xxxx 2020-03-11 00:00:00
1  yyyy 2020-03-11 15:00:00
2  zzzz 2020-03-12 06:00:00
>>> df.truncate_datetime_dataframe("day")
    foo         dt
0  xxxx 2020-03-11
1  yyyy 2020-03-11
2  zzzz 2020-03-12

Parameters:

Name Type Description Default
df DataFrame

The pandas DataFrame on which to truncate datetime.

required
datepart str

Truncation precision, YEAR, MONTH, DAY, HOUR, MINUTE, SECOND. (String is automagically capitalized)

required

Raises:

Type Description
ValueError

If an invalid datepart precision is passed in.

Returns:

Type Description
DataFrame

A pandas DataFrame with all valid datetimes truncated down to the specified precision.

Source code in janitor/functions/truncate_datetime.py
@pf.register_dataframe_method
def truncate_datetime_dataframe(
    df: pd.DataFrame,
    datepart: str,
) -> pd.DataFrame:
    """Truncate times down to a user-specified precision of
    year, month, day, hour, minute, or second.

    This method does not mutate the original DataFrame.

    Examples:
        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame({
        ...     "foo": ["xxxx", "yyyy", "zzzz"],
        ...     "dt": pd.date_range("2020-03-11", periods=3, freq="15H"),
        ... })
        >>> df
            foo                  dt
        0  xxxx 2020-03-11 00:00:00
        1  yyyy 2020-03-11 15:00:00
        2  zzzz 2020-03-12 06:00:00
        >>> df.truncate_datetime_dataframe("day")
            foo         dt
        0  xxxx 2020-03-11
        1  yyyy 2020-03-11
        2  zzzz 2020-03-12

    Args:
        df: The pandas DataFrame on which to truncate datetime.
        datepart: Truncation precision, YEAR, MONTH, DAY,
            HOUR, MINUTE, SECOND. (String is automagically
            capitalized)

    Raises:
        ValueError: If an invalid `datepart` precision is passed in.

    Returns:
        A pandas DataFrame with all valid datetimes truncated down
            to the specified precision.
    """
    # idea from Stack Overflow
    # https://stackoverflow.com/a/28783971/7175713
    # https://numpy.org/doc/stable/reference/arrays.datetime.html
    ACCEPTABLE_DATEPARTS = {
        "YEAR": "datetime64[Y]",
        "MONTH": "datetime64[M]",
        "DAY": "datetime64[D]",
        "HOUR": "datetime64[h]",
        "MINUTE": "datetime64[m]",
        "SECOND": "datetime64[s]",
    }
    datepart = datepart.upper()
    if datepart not in ACCEPTABLE_DATEPARTS:
        raise ValueError(
            "Received an invalid `datepart` precision. "
            f"Please enter any one of {ACCEPTABLE_DATEPARTS}."
        )

    dictionary = {}

    for label, series in df.items():
        if is_datetime64_any_dtype(series):
            dtype = ACCEPTABLE_DATEPARTS[datepart]
            # TODO: add branch for pyarrow arrays
            series = np.array(series._values, dtype=dtype)
        dictionary[label] = series

    return pd.DataFrame(dictionary)
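
The casting trick the loop relies on can be shown in isolation; converting values to a coarser `datetime64` unit drops everything below that precision (a minimal sketch, not part of the library):

```python
import numpy as np
import pandas as pd

stamps = pd.to_datetime(["2020-03-11 15:42:07", "2020-03-12 06:01:59"])

# Casting datetime64[ns] down to datetime64[D] truncates to day precision
truncated = np.array(stamps.values, dtype="datetime64[D]")

assert [str(t) for t in truncated] == ["2020-03-11", "2020-03-12"]
```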

unionize_dataframe_categories(*dataframes, column_names=None)

Given a group of dataframes which contain some categorical columns, for each categorical column present, find all the possible categories across all the dataframes which have that column. Update each dataframe's corresponding column with a new categorical object that contains the original data but has labels for all the possible categories from all dataframes. This is useful when concatenating a list of dataframes which all have the same categorical columns into one dataframe.

If, for a given categorical column, all input dataframes do not have at least one instance of all the possible categories, Pandas will change the output dtype of that column from category to object, losing out on dramatic speed gains you get from the former format.

Examples:

Usage example for concatenation of categorical column-containing dataframes:

Instead of:

concatenated_df = pd.concat([df1, df2, df3], ignore_index=True)

which can result in a category -> object conversion, use:

unionized_dataframes = unionize_dataframe_categories(df1, df2, df2)
concatenated_df = pd.concat(unionized_dataframes, ignore_index=True)

Parameters:

Name Type Description Default
*dataframes Any

The dataframes you wish to unionize the categorical objects for.

()
column_names Optional[Iterable[CategoricalDtype]]

If supplied, only unionize this subset of columns.

None

Raises:

Type Description
TypeError

If any of the inputs are not pandas DataFrames.

Returns:

Type Description
List[DataFrame]

A list of the category-unioned dataframes in the same order they were provided.

Source code in janitor/functions/utils.py
def unionize_dataframe_categories(
    *dataframes: Any,
    column_names: Optional[Iterable[pd.CategoricalDtype]] = None,
) -> List[pd.DataFrame]:
    """
    Given a group of dataframes which contain some categorical columns, for
    each categorical column present, find all the possible categories across
    all the dataframes which have that column.
    Update each dataframe's corresponding column with a new categorical object
    that contains the original data
    but has labels for all the possible categories from all dataframes.
    This is useful when concatenating a list of dataframes which all have the
    same categorical columns into one dataframe.

    If, for a given categorical column, all input dataframes do not have at
    least one instance of all the possible categories,
    Pandas will change the output dtype of that column from `category` to
    `object`, losing out on dramatic speed gains you get from the former
    format.

    Examples:
        Usage example for concatenation of categorical column-containing
        dataframes:

        Instead of:

        ```python
        concatenated_df = pd.concat([df1, df2, df3], ignore_index=True)
        ```

        which can result in a `category` -> `object` conversion,
        use:

        ```python
        unionized_dataframes = unionize_dataframe_categories(df1, df2, df2)
        concatenated_df = pd.concat(unionized_dataframes, ignore_index=True)
        ```

    Args:
        *dataframes: The dataframes you wish to unionize the categorical
            objects for.
        column_names: If supplied, only unionize this subset of columns.

    Raises:
        TypeError: If any of the inputs are not pandas DataFrames.

    Returns:
        A list of the category-unioned dataframes in the same order they
            were provided.
    """

    if any(not isinstance(df, pd.DataFrame) for df in dataframes):
        raise TypeError("Inputs must all be dataframes.")

    if column_names is None:
        # Find all columns across all dataframes that are categorical

        column_names = set()

        for dataframe in dataframes:
            column_names = column_names.union(
                [
                    column_name
                    for column_name in dataframe.columns
                    if isinstance(
                        dataframe[column_name].dtype, pd.CategoricalDtype
                    )
                ]
            )

    else:
        column_names = [column_names]
    # For each categorical column, find all possible values across the DFs

    category_unions = {
        column_name: union_categoricals(
            [df[column_name] for df in dataframes if column_name in df.columns]
        )
        for column_name in column_names
    }

    # Make a shallow copy of all DFs and modify the categorical columns
    # such that they can encode the union of all possible categories for each.

    refactored_dfs = []

    for df in dataframes:
        df = df.copy(deep=False)

        for column_name, categorical in category_unions.items():
            if column_name in df.columns:
                df[column_name] = pd.Categorical(
                    df[column_name], categories=categorical.categories
                )

        refactored_dfs.append(df)

    return refactored_dfs
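
The dtype fallback described above is easy to reproduce; a minimal sketch of the problem and of the category-union fix, using `Index.union` instead of the library's `union_categoricals` helper:

```python
import pandas as pd

df1 = pd.DataFrame({"col": pd.Categorical(["a", "b"])})
df2 = pd.DataFrame({"col": pd.Categorical(["b", "c"])})

# Mismatched category sets: pd.concat silently falls back to object dtype
naive = pd.concat([df1, df2], ignore_index=True)
assert naive["col"].dtype == object

# Re-encode both columns over the union of categories first
cats = df1["col"].cat.categories.union(df2["col"].cat.categories)
aligned = [
    d.assign(col=pd.Categorical(d["col"], categories=cats))
    for d in (df1, df2)
]
fixed = pd.concat(aligned, ignore_index=True)
assert str(fixed["col"].dtype) == "category"
```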

add_columns

add_column(df, column_name, value, fill_remaining=False)

Add a column to the dataframe.

Intended to be the method-chaining alternative to:

df[column_name] = value

Note

This function will be deprecated in a 1.x release. Please use pd.DataFrame.assign instead.

Examples:

Add a column of constant values to the dataframe.

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({"a": list(range(3)), "b": list("abc")})
>>> df.add_column(column_name="c", value=1)
   a  b  c
0  0  a  1
1  1  b  1
2  2  c  1

Add a column of different values to the dataframe.

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({"a": list(range(3)), "b": list("abc")})
>>> df.add_column(column_name="c", value=list("efg"))
   a  b  c
0  0  a  e
1  1  b  f
2  2  c  g

Add a column using an iterator.

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({"a": list(range(3)), "b": list("abc")})
>>> df.add_column(column_name="c", value=range(4, 7))
   a  b  c
0  0  a  4
1  1  b  5
2  2  c  6

Parameters:

Name Type Description Default
df DataFrame

A pandas DataFrame.

required
column_name str

Name of the new column. Should be a string, in order for the column name to be compatible with the Feather binary format (this is a useful thing to have).

required
value Union[List[Any], Tuple[Any], Any]

Either a single value, or a list/tuple of values.

required
fill_remaining bool

If value is a tuple or list that is smaller than the number of rows in the DataFrame, repeat the list or tuple (R-style) to the end of the DataFrame.

False

Raises:

Type Description
ValueError

If attempting to add a column that already exists.

ValueError

If value has more elements than the number of rows in the DataFrame.

ValueError

If attempting to add an iterable of values with a length not equal to the number of DataFrame rows.

ValueError

If value has length of 0.

Returns:

Type Description
DataFrame

A pandas DataFrame with an added column.

Source code in janitor/functions/add_columns.py
@pf.register_dataframe_method
@refactored_function(
    message=(
        "This function will be deprecated in a 1.x release. "
        "Please use `pd.DataFrame.assign` instead."
    )
)
@deprecated_alias(col_name="column_name")
def add_column(
    df: pd.DataFrame,
    column_name: str,
    value: Union[List[Any], Tuple[Any], Any],
    fill_remaining: bool = False,
) -> pd.DataFrame:
    """Add a column to the dataframe.

    Intended to be the method-chaining alternative to:

    ```python
    df[column_name] = value
    ```

    !!!note

        This function will be deprecated in a 1.x release.
        Please use `pd.DataFrame.assign` instead.

    Examples:
        Add a column of constant values to the dataframe.

        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame({"a": list(range(3)), "b": list("abc")})
        >>> df.add_column(column_name="c", value=1)
           a  b  c
        0  0  a  1
        1  1  b  1
        2  2  c  1

        Add a column of different values to the dataframe.

        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame({"a": list(range(3)), "b": list("abc")})
        >>> df.add_column(column_name="c", value=list("efg"))
           a  b  c
        0  0  a  e
        1  1  b  f
        2  2  c  g

        Add a column using an iterator.

        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame({"a": list(range(3)), "b": list("abc")})
        >>> df.add_column(column_name="c", value=range(4, 7))
           a  b  c
        0  0  a  4
        1  1  b  5
        2  2  c  6

    Args:
        df: A pandas DataFrame.
        column_name: Name of the new column. Should be a string, in order
            for the column name to be compatible with the Feather binary
            format (this is a useful thing to have).
        value: Either a single value, or a list/tuple of values.
        fill_remaining: If value is a tuple or list that is smaller than
            the number of rows in the DataFrame, repeat the list or tuple
            (R-style) to the end of the DataFrame.

    Raises:
        ValueError: If attempting to add a column that already exists.
        ValueError: If `value` has more elements than the number of
            rows in the DataFrame.
        ValueError: If attempting to add an iterable of values with
            a length not equal to the number of DataFrame rows.
        ValueError: If `value` has length of `0`.

    Returns:
        A pandas DataFrame with an added column.
    """
    check("column_name", column_name, [str])

    if column_name in df.columns:
        raise ValueError(
            f"Attempted to add column that already exists: " f"{column_name}."
        )

    nrows = len(df)

    if hasattr(value, "__len__") and not isinstance(
        value, (str, bytes, bytearray)
    ):
        len_value = len(value)

        # if `value` is a list, ndarray, etc.
        if len_value > nrows:
            raise ValueError(
                "`value` has more elements than number of rows "
                f"in your `DataFrame`. vals: {len_value}, "
                f"df: {nrows}"
            )
        if len_value != nrows and not fill_remaining:
            raise ValueError(
                "Attempted to add iterable of values with length"
                " not equal to number of DataFrame rows"
            )
        if not len_value:
            raise ValueError(
                "`value` has to be an iterable of minimum length 1"
            )

    elif fill_remaining:
        # relevant if a scalar val was passed, yet fill_remaining == True
        len_value = 1
        value = [value]

    df = df.copy()
    if fill_remaining:
        times_to_loop = int(np.ceil(nrows / len_value))
        fill_values = list(value) * times_to_loop
        df[column_name] = fill_values[:nrows]
    else:
        df[column_name] = value

    return df
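
The `fill_remaining` recycling at the end of the function can be sketched on its own; a minimal illustration of the R-style repeat-then-trim behavior:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": range(5)})
value = ["x", "y"]

# Repeat the values R-style, then trim to the number of rows
times_to_loop = int(np.ceil(len(df) / len(value)))
df["c"] = (list(value) * times_to_loop)[: len(df)]

assert df["c"].tolist() == ["x", "y", "x", "y", "x"]
```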

add_columns(df, fill_remaining=False, **kwargs)

Add multiple columns to the dataframe.

This method does not mutate the original DataFrame.

Method to augment add_column with the ability to add multiple columns in one go. This replaces the need for multiple add_column calls.

Usage is through supplying kwargs where the key is the col name and the values correspond to the values of the new DataFrame column.

Values passed can be scalar or iterable (list, ndarray, etc.)

Note

This function will be deprecated in a 1.x release. Please use pd.DataFrame.assign instead.

Examples:

Inserting two more columns into a dataframe.

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({"a": list(range(3)), "b": list("abc")})
>>> df.add_columns(x=4, y=list("def"))
   a  b  x  y
0  0  a  4  d
1  1  b  4  e
2  2  c  4  f

Parameters:

Name Type Description Default
df DataFrame

A pandas DataFrame.

required
fill_remaining bool

If value is a tuple or list that is smaller than the number of rows in the DataFrame, repeat the list or tuple (R-style) to the end of the DataFrame. (Passed to add_column)

False
**kwargs Any

Column, value pairs which are looped through in add_column calls.

{}

Returns:

Type Description
DataFrame

A pandas DataFrame with added columns.

Source code in janitor/functions/add_columns.py
@pf.register_dataframe_method
@refactored_function(
    message=(
        "This function will be deprecated in a 1.x release. "
        "Please use `pd.DataFrame.assign` instead."
    )
)
def add_columns(
    df: pd.DataFrame,
    fill_remaining: bool = False,
    **kwargs: Any,
) -> pd.DataFrame:
    """Add multiple columns to the dataframe.

    This method does not mutate the original DataFrame.

    Method to augment
    [`add_column`][janitor.functions.add_columns.add_column]
    with the ability to add multiple columns in
    one go. This replaces the need for multiple
    [`add_column`][janitor.functions.add_columns.add_column] calls.

    Usage is through supplying kwargs where the key is the col name and the
    values correspond to the values of the new DataFrame column.

    Values passed can be scalar or iterable (list, ndarray, etc.)

    !!!note

        This function will be deprecated in a 1.x release.
        Please use `pd.DataFrame.assign` instead.

    Examples:
        Inserting two more columns into a dataframe.

        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame({"a": list(range(3)), "b": list("abc")})
        >>> df.add_columns(x=4, y=list("def"))
           a  b  x  y
        0  0  a  4  d
        1  1  b  4  e
        2  2  c  4  f

    Args:
        df: A pandas DataFrame.
        fill_remaining: If value is a tuple or list that is smaller than
            the number of rows in the DataFrame, repeat the list or tuple
            (R-style) to the end of the DataFrame. (Passed to
            [`add_column`][janitor.functions.add_columns.add_column])
        **kwargs: Column, value pairs which are looped through in
            [`add_column`][janitor.functions.add_columns.add_column] calls.

    Returns:
        A pandas DataFrame with added columns.
    """
    # Note: error checking can pretty much be handled in `add_column`

    for col_name, values in kwargs.items():
        df = df.add_column(col_name, values, fill_remaining=fill_remaining)

    return df

also

Implementation source for chainable function also.

also(df, func, *args, **kwargs)

Run a function with side effects.

This function allows you to run an arbitrary function in the pyjanitor method chain. Doing so will let you do things like save the dataframe to disk midway while continuing to modify the dataframe afterwards.

Examples:

>>> import pandas as pd
>>> import janitor
>>> df = (
...     pd.DataFrame({"a": [1, 2, 3], "b": list("abc")})
...     .query("a > 1")
...     .also(lambda df: print(f"DataFrame shape is: {df.shape}"))
...     .rename_column(old_column_name="a", new_column_name="a_new")
...     .also(lambda df: df.to_csv("midpoint.csv"))
...     .also(
...         lambda df: print(f"Columns: {df.columns}")
...     )
... )
DataFrame shape is: (2, 2)
Columns: Index(['a_new', 'b'], dtype='object')

Parameters:

Name Type Description Default
df DataFrame

A pandas DataFrame.

required
func Callable

A function you would like to run in the method chain. It should take one DataFrame object as a parameter and have no return. If there is a return, it will be ignored.

required
*args Any

Optional arguments for func.

()
**kwargs Any

Optional keyword arguments for func.

{}

Returns:

Type Description
DataFrame

The input pandas DataFrame, unmodified.

Source code in janitor/functions/also.py
@pf.register_dataframe_method
def also(
    df: pd.DataFrame, func: Callable, *args: Any, **kwargs: Any
) -> pd.DataFrame:
    """Run a function with side effects.

    This function allows you to run an arbitrary function
    in the `pyjanitor` method chain.
    Doing so will let you do things like save the dataframe to disk midway
    while continuing to modify the dataframe afterwards.

    Examples:
        >>> import pandas as pd
        >>> import janitor
        >>> df = (
        ...     pd.DataFrame({"a": [1, 2, 3], "b": list("abc")})
        ...     .query("a > 1")
        ...     .also(lambda df: print(f"DataFrame shape is: {df.shape}"))
        ...     .rename_column(old_column_name="a", new_column_name="a_new")
        ...     .also(lambda df: df.to_csv("midpoint.csv"))
        ...     .also(
        ...         lambda df: print(f"Columns: {df.columns}")
        ...     )
        ... )
        DataFrame shape is: (2, 2)
        Columns: Index(['a_new', 'b'], dtype='object')

    Args:
        df: A pandas DataFrame.
        func: A function you would like to run in the method chain.
            It should take one DataFrame object as a parameter and have no return.
            If there is a return, it will be ignored.
        *args: Optional arguments for `func`.
        **kwargs: Optional keyword arguments for `func`.

    Returns:
        The input pandas DataFrame, unmodified.
    """  # noqa: E501
    func(df.copy(), *args, **kwargs)
    return df
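
Without pyjanitor, the same pattern can be approximated with `pd.DataFrame.pipe`; a minimal sketch (`run_also` and `log_shape` are hypothetical names, not library functions):

```python
import pandas as pd

shapes = []

def log_shape(df):
    # Side effect only; any return value would be ignored
    shapes.append(df.shape)

def run_also(df, func):
    # Same idea as `also`: call func on a copy, pass df through unchanged
    func(df.copy())
    return df

out = (
    pd.DataFrame({"a": [1, 2, 3]})
    .query("a > 1")
    .pipe(run_also, log_shape)
)

assert shapes == [(2, 1)]
assert out["a"].tolist() == [2, 3]
```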

bin_numeric

Implementation source for bin_numeric.

bin_numeric(df, from_column_name, to_column_name, bins=5, **kwargs)

Generate a new column that labels bins for a specified numeric column.

This method does not mutate the original DataFrame.

A wrapper around the pandas cut() function to bin data of one column, generating a new column with the results.

Examples:

Binning a numeric column with specific bin edges.

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({"a": [3, 6, 9, 12, 15]})
>>> df.bin_numeric(
...     from_column_name="a", to_column_name="a_binned",
...     bins=[0, 5, 11, 15],
... )
    a  a_binned
0   3    (0, 5]
1   6   (5, 11]
2   9   (5, 11]
3  12  (11, 15]
4  15  (11, 15]

Parameters:

Name Type Description Default
df DataFrame

A pandas DataFrame.

required
from_column_name str

The column whose data you want binned.

required
to_column_name str

The new column to be created with the binned data.

required
bins Optional[Union[int, ScalarSequence, IntervalIndex]]

The binning strategy to be utilized. Read the pd.cut documentation for more details.

5
**kwargs Any

Additional kwargs to pass to pd.cut, except retbins.

{}

Raises:

Type Description
ValueError

If retbins is passed in as a kwarg.

Returns:

Type Description
DataFrame

A pandas DataFrame.

Source code in janitor/functions/bin_numeric.py
@pf.register_dataframe_method
@deprecated_alias(
    from_column="from_column_name",
    to_column="to_column_name",
    num_bins="bins",
)
def bin_numeric(
    df: pd.DataFrame,
    from_column_name: str,
    to_column_name: str,
    bins: Optional[Union[int, ScalarSequence, pd.IntervalIndex]] = 5,
    **kwargs: Any,
) -> pd.DataFrame:
    """Generate a new column that labels bins for a specified numeric column.

    This method does not mutate the original DataFrame.

    A wrapper around the pandas [`cut()`][pd_cut_docs] function to bin data of
    one column, generating a new column with the results.

    [pd_cut_docs]: https://pandas.pydata.org/docs/reference/api/pandas.cut.html

    Examples:
        Binning a numeric column with specific bin edges.

        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame({"a": [3, 6, 9, 12, 15]})
        >>> df.bin_numeric(
        ...     from_column_name="a", to_column_name="a_binned",
        ...     bins=[0, 5, 11, 15],
        ... )
            a  a_binned
        0   3    (0, 5]
        1   6   (5, 11]
        2   9   (5, 11]
        3  12  (11, 15]
        4  15  (11, 15]

    Args:
        df: A pandas DataFrame.
        from_column_name: The column whose data you want binned.
        to_column_name: The new column to be created with the binned data.
        bins: The binning strategy to be utilized. Read the `pd.cut`
            documentation for more details.
        **kwargs: Additional kwargs to pass to `pd.cut`, except `retbins`.

    Raises:
        ValueError: If `retbins` is passed in as a kwarg.

    Returns:
        A pandas DataFrame.
    """
    if "retbins" in kwargs:
        raise ValueError("`retbins` is not an acceptable keyword argument.")

    check("from_column_name", from_column_name, [str])
    check("to_column_name", to_column_name, [str])
    check_column(df, from_column_name)

    df = df.assign(
        **{
            to_column_name: pd.cut(df[from_column_name], bins=bins, **kwargs),
        }
    )

    return df
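
Since `**kwargs` are forwarded to `pd.cut`, bin labels can be supplied as well; a minimal sketch using plain `pd.cut` with the same edges as the docstring example (the `labels` argument is standard pandas):

```python
import pandas as pd

df = pd.DataFrame({"a": [3, 6, 9, 12, 15]})

# Human-readable labels instead of interval notation
df["a_binned"] = pd.cut(
    df["a"], bins=[0, 5, 11, 15], labels=["low", "mid", "high"]
)

assert df["a_binned"].tolist() == ["low", "mid", "mid", "high", "high"]
```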

case_when

Implementation source for case_when.

case_when(df, *args, default=None, column_name)

Create a column based on a condition or multiple conditions.

Similar to SQL and dplyr's case_when with inspiration from pydatatable if_else function.

If your scenario requires direct replacement of values, pandas' replace method or map method should be better suited and more efficient; if the conditions check if a value is within a range of values, pandas' cut or qcut should be more efficient; np.where/np.select are also performant options.

This function relies on pd.Series.mask method.

When multiple conditions are satisfied, the first one is used.

The variadic *args parameter takes arguments of the form condition0, value0, condition1, value1, ..., default. If condition0 evaluates to True, then assign value0 to column_name; if condition1 evaluates to True, then assign value1 to column_name, and so on. If none of the conditions evaluate to True, assign default to column_name.

This function can be likened to SQL's case_when:

CASE WHEN condition0 THEN value0
    WHEN condition1 THEN value1
    --- more conditions
    ELSE default
    END AS column_name

compared to python's if-elif-else:

if condition0:
    value0
elif condition1:
    value1
# more elifs
else:
    default

Examples:

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame(
...     {
...         "a": [0, 0, 1, 2, "hi"],
...         "b": [0, 3, 4, 5, "bye"],
...         "c": [6, 7, 8, 9, "wait"],
...     }
... )
>>> df
    a    b     c
0   0    0     6
1   0    3     7
2   1    4     8
3   2    5     9
4  hi  bye  wait
>>> df.case_when(
...     ((df.a == 0) & (df.b != 0)) | (df.c == "wait"), df.a,
...     (df.b == 0) & (df.a == 0), "x",
...     default = df.c,
...     column_name = "value",
... )
    a    b     c value
0   0    0     6     x
1   0    3     7     0
2   1    4     8     8
3   2    5     9     9
4  hi  bye  wait    hi
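
The `np.select` alternative mentioned above uses the same condition/value pairing; a minimal sketch (column and label names are illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1, 5, 10]})

# Parallel lists of conditions and choices; the first matching
# condition wins, and `default` covers everything else
df["size"] = np.select(
    [df["a"] < 3, df["a"] < 8],
    ["small", "medium"],
    default="large",
)

assert df["size"].tolist() == ["small", "medium", "large"]
```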

Version Changed

0.24.0: Added the default parameter.

Parameters:

Name Type Description Default
df DataFrame

A pandas DataFrame.

required
*args Any

Variable argument of conditions and expected values. Takes the form condition0, value0, condition1, value1, ... . condition can be a 1-D boolean array, a callable, or a string. If condition is a callable, it should evaluate to a 1-D boolean array. The array should have the same length as the DataFrame. If it is a string, it is computed on the dataframe, via df.eval, and should return a 1-D boolean array. result can be a scalar, a 1-D array, or a callable. If result is a callable, it should evaluate to a 1-D array. For a 1-D array, it should have the same length as the DataFrame.

()
default Any

This is the element inserted in the output when all conditions evaluate to False. Can be scalar, 1-D array or callable. If callable, it should evaluate to a 1-D array. The 1-D array should be the same length as the DataFrame.

None
column_name str

Name of column to assign results to. A new column is created if it does not already exist in the DataFrame.

required

Raises:

Type Description
ValueError

If condition/value fails to evaluate.

Returns:

Type Description
DataFrame

A pandas DataFrame.

Source code in janitor/functions/case_when.py
@pf.register_dataframe_method
@refactored_function(
    message=(
        "This function will be deprecated in a 1.x release. "
        "Please use `pd.Series.case_when` instead."
    )
)
def case_when(
    df: pd.DataFrame, *args: Any, default: Any = None, column_name: str
) -> pd.DataFrame:
    """Create a column based on a condition or multiple conditions.

    Similar to SQL and dplyr's case_when
    with inspiration from `pydatatable` if_else function.

    If your scenario requires direct replacement of values,
    pandas' `replace` method or `map` method should be better
    suited and more efficient; if the conditions check
    if a value is within a range of values, pandas' `cut` or `qcut`
    should be more efficient; `np.where/np.select` are also
    performant options.

    This function relies on `pd.Series.mask` method.

    When multiple conditions are satisfied, the first one is used.

    The variadic `*args` parameter takes arguments of the form:
    `condition0`, `value0`, `condition1`, `value1`, ..., `default`.
    If `condition0` evaluates to `True`, then assign `value0` to
    `column_name`, if `condition1` evaluates to `True`, then
    assign `value1` to `column_name`, and so on. If none of the
    conditions evaluate to `True`, assign `default` to
    `column_name`.

    This function can be likened to SQL's `case_when`:

    ```sql
    CASE WHEN condition0 THEN value0
        WHEN condition1 THEN value1
        --- more conditions
        ELSE default
        END AS column_name
    ```

    compared to python's `if-elif-else`:

    ```python
    if condition0:
        value0
    elif condition1:
        value1
    # more elifs
    else:
        default
    ```

    Examples:
        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame(
        ...     {
        ...         "a": [0, 0, 1, 2, "hi"],
        ...         "b": [0, 3, 4, 5, "bye"],
        ...         "c": [6, 7, 8, 9, "wait"],
        ...     }
        ... )
        >>> df
            a    b     c
        0   0    0     6
        1   0    3     7
        2   1    4     8
        3   2    5     9
        4  hi  bye  wait
        >>> df.case_when(
        ...     ((df.a == 0) & (df.b != 0)) | (df.c == "wait"), df.a,
        ...     (df.b == 0) & (df.a == 0), "x",
        ...     default = df.c,
        ...     column_name = "value",
        ... )
            a    b     c value
        0   0    0     6     x
        1   0    3     7     0
        2   1    4     8     8
        3   2    5     9     9
        4  hi  bye  wait    hi

    !!! abstract "Version Changed"

        - 0.24.0
            - Added `default` parameter.

    Args:
        df: A pandas DataFrame.
        *args: Variable argument of conditions and expected values.
            Takes the form
            `condition0`, `value0`, `condition1`, `value1`, ... .
            `condition` can be a 1-D boolean array, a callable, or a string.
            If `condition` is a callable, it should evaluate
            to a 1-D boolean array. The array should have the same length
            as the DataFrame. If it is a string, it is computed on the dataframe,
            via `df.eval`, and should return a 1-D boolean array.
            `value` can be a scalar, a 1-D array, or a callable.
            If `value` is a callable, it should evaluate to a 1-D array.
            For a 1-D array, it should have the same length as the DataFrame.
        default: This is the element inserted in the output
            when all conditions evaluate to False.
            Can be scalar, 1-D array or callable.
            If callable, it should evaluate to a 1-D array.
            The 1-D array should be the same length as the DataFrame.
        column_name: Name of column to assign results to. A new column
            is created if it does not already exist in the DataFrame.

    Raises:
        ValueError: If condition/value fails to evaluate.

    Returns:
        A pandas DataFrame.
    """  # noqa: E501
    # Preliminary checks on the case_when function.
    # The bare minimum checks are done; the remaining checks
    # are done within `pd.Series.mask`.
    check("column_name", column_name, [str])
    len_args = len(args)
    if len_args < 2:
        raise ValueError(
            "At least two arguments are required for the `args` parameter"
        )

    if len_args % 2:
        if default is None:
            warnings.warn(
                "The last argument in the variable arguments "
                "has been assigned as the default. "
                "Note however that this will be deprecated "
                "in a future release; use an even number "
                "of boolean conditions and values, "
                "and pass the default argument to the `default` "
                "parameter instead.",
                DeprecationWarning,
                stacklevel=find_stack_level(),
            )
            *args, default = args
        else:
            raise ValueError(
                "The number of conditions and values do not match. "
                f"There are {len_args - len_args//2} conditions "
                f"and {len_args//2} values."
            )

    booleans = []
    replacements = []

    for index, value in enumerate(args):
        if index % 2:
            if callable(value):
                value = apply_if_callable(value, df)
            replacements.append(value)
        else:
            if callable(value):
                value = apply_if_callable(value, df)
            elif isinstance(value, str):
                value = df.eval(value)
            booleans.append(value)

    if callable(default):
        default = apply_if_callable(default, df)
    if is_scalar(default):
        default = pd.Series([default]).repeat(len(df))
    if not hasattr(default, "shape"):
        default = pd.Series([*default])
    if isinstance(default, pd.Index):
        arr_ndim = default.nlevels
    else:
        arr_ndim = default.ndim
    if arr_ndim != 1:
        raise ValueError(
            "The argument for the `default` parameter "
            "should either be a 1-D array, a scalar, "
            "or a callable that can evaluate to a 1-D array."
        )
    if not isinstance(default, pd.Series):
        default = pd.Series(default)
    default.index = df.index
    # actual computation
    # ensures value assignment is on a first come basis
    booleans = booleans[::-1]
    replacements = replacements[::-1]
    for index, (condition, value) in enumerate(zip(booleans, replacements)):
        try:
            default = default.mask(condition, value)
        # error `feedoff` idea from SO
        # https://stackoverflow.com/a/46091127/7175713
        except Exception as error:
            raise ValueError(
                f"condition{index} and value{index} failed to evaluate. "
                f"Original error message: {error}"
            ) from error

    return df.assign(**{column_name: default})

change_index_dtype

Implementation of the change_index_dtype function.

change_index_dtype(df, dtype, axis='index')

Cast an index to a specified dtype dtype.

This method does not mutate the original DataFrame.

Examples:

>>> import pandas as pd
>>> import numpy as np
>>> import janitor
>>> rng = np.random.default_rng(seed=0)
>>> np.random.seed(0)
>>> tuples = list(zip(*[['bar', 'bar', 'baz', 'baz',
...             'foo', 'foo', 'qux', 'qux'],
...              [1.0, 2.0, 1.0, 2.0,
...               1.0, 2.0, 1.0, 2.0]]))
>>> idx = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
>>> df = pd.DataFrame(np.random.randn(8, 2), index=idx, columns=['A', 'B'])
>>> df
                     A         B
first second
bar   1.0     1.764052  0.400157
      2.0     0.978738  2.240893
baz   1.0     1.867558 -0.977278
      2.0     0.950088 -0.151357
foo   1.0    -0.103219  0.410599
      2.0     0.144044  1.454274
qux   1.0     0.761038  0.121675
      2.0     0.443863  0.333674
>>> outcome=df.change_index_dtype(dtype=str)
>>> outcome
                     A         B
first second
bar   1.0     1.764052  0.400157
      2.0     0.978738  2.240893
baz   1.0     1.867558 -0.977278
      2.0     0.950088 -0.151357
foo   1.0    -0.103219  0.410599
      2.0     0.144044  1.454274
qux   1.0     0.761038  0.121675
      2.0     0.443863  0.333674
>>> outcome.index.dtypes
first     object
second    object
dtype: object
>>> outcome=df.change_index_dtype(dtype={'second':int})
>>> outcome
                     A         B
first second
bar   1       1.764052  0.400157
      2       0.978738  2.240893
baz   1       1.867558 -0.977278
      2       0.950088 -0.151357
foo   1      -0.103219  0.410599
      2       0.144044  1.454274
qux   1       0.761038  0.121675
      2       0.443863  0.333674
>>> outcome.index.dtypes
first     object
second     int64
dtype: object
>>> outcome=df.change_index_dtype(dtype={0:'category',1:int})
>>> outcome
                     A         B
first second
bar   1       1.764052  0.400157
      2       0.978738  2.240893
baz   1       1.867558 -0.977278
      2       0.950088 -0.151357
foo   1      -0.103219  0.410599
      2       0.144044  1.454274
qux   1       0.761038  0.121675
      2       0.443863  0.333674
>>> outcome.index.dtypes
first     category
second       int64
dtype: object

Parameters:

Name Type Description Default
df DataFrame

A pandas DataFrame.

required
dtype

Use a str or dtype to cast the entire Index to the same type. Alternatively, use a dictionary to change the MultiIndex to new dtypes.

required
axis str

Determines which axis to change the dtype(s). Should be either 'index' or 'columns'.

'index'

Returns:

Type Description
DataFrame

A pandas DataFrame with new Index.
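Under the hood this function rebuilds the MultiIndex levels. For a single level, plain pandas can perform the same cast via `MultiIndex.set_levels`; a minimal sketch (data shortened from the example above):

```python
import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [("bar", 1.0), ("bar", 2.0), ("baz", 1.0)],
    names=["first", "second"],
)
df = pd.DataFrame({"A": [1, 2, 3]}, index=idx)

# Cast only the 'second' level to int; other levels are untouched.
new_level = df.index.levels[1].astype(int)
df.index = df.index.set_levels(new_level, level="second")
```

Replacing a level leaves the codes intact, so only the dtype of the affected level changes.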

Source code in janitor/functions/change_index_dtype.py
@pf.register_dataframe_method
def change_index_dtype(
    df: pd.DataFrame, dtype: Union[str, dict], axis: str = "index"
) -> pd.DataFrame:
    """Cast an index to a specified dtype ``dtype``.

    This method does not mutate the original DataFrame.

    Examples:
        >>> import pandas as pd
        >>> import numpy as np
        >>> import janitor
        >>> rng = np.random.default_rng(seed=0)
        >>> np.random.seed(0)
        >>> tuples = list(zip(*[['bar', 'bar', 'baz', 'baz',
        ...             'foo', 'foo', 'qux', 'qux'],
        ...              [1.0, 2.0, 1.0, 2.0,
        ...               1.0, 2.0, 1.0, 2.0]]))
        >>> idx = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
        >>> df = pd.DataFrame(np.random.randn(8, 2), index=idx, columns=['A', 'B'])
        >>> df
                             A         B
        first second
        bar   1.0     1.764052  0.400157
              2.0     0.978738  2.240893
        baz   1.0     1.867558 -0.977278
              2.0     0.950088 -0.151357
        foo   1.0    -0.103219  0.410599
              2.0     0.144044  1.454274
        qux   1.0     0.761038  0.121675
              2.0     0.443863  0.333674
        >>> outcome=df.change_index_dtype(dtype=str)
        >>> outcome
                             A         B
        first second
        bar   1.0     1.764052  0.400157
              2.0     0.978738  2.240893
        baz   1.0     1.867558 -0.977278
              2.0     0.950088 -0.151357
        foo   1.0    -0.103219  0.410599
              2.0     0.144044  1.454274
        qux   1.0     0.761038  0.121675
              2.0     0.443863  0.333674
        >>> outcome.index.dtypes
        first     object
        second    object
        dtype: object
        >>> outcome=df.change_index_dtype(dtype={'second':int})
        >>> outcome
                             A         B
        first second
        bar   1       1.764052  0.400157
              2       0.978738  2.240893
        baz   1       1.867558 -0.977278
              2       0.950088 -0.151357
        foo   1      -0.103219  0.410599
              2       0.144044  1.454274
        qux   1       0.761038  0.121675
              2       0.443863  0.333674
        >>> outcome.index.dtypes
        first     object
        second     int64
        dtype: object
        >>> outcome=df.change_index_dtype(dtype={0:'category',1:int})
        >>> outcome
                             A         B
        first second
        bar   1       1.764052  0.400157
              2       0.978738  2.240893
        baz   1       1.867558 -0.977278
              2       0.950088 -0.151357
        foo   1      -0.103219  0.410599
              2       0.144044  1.454274
        qux   1       0.761038  0.121675
              2       0.443863  0.333674
        >>> outcome.index.dtypes
        first     category
        second       int64
        dtype: object

    Args:
        df: A pandas DataFrame.
        dtype: Use a str or dtype to cast the entire Index
            to the same type.
            Alternatively, use a dictionary to change the MultiIndex
            to new dtypes.
        axis: Determines which axis to change the dtype(s).
            Should be either 'index' or 'columns'.

    Returns:
        A pandas DataFrame with new Index.
    """  # noqa: E501

    check("axis", axis, [str])
    if axis not in {"index", "columns"}:
        raise ValueError("axis should be either index or columns.")

    df = df[:]
    current_index = getattr(df, axis)
    if not isinstance(current_index, pd.MultiIndex):
        if isinstance(dtype, dict):
            raise TypeError(
                "Changing the dtype via a dictionary "
                "is not supported for a single index."
            )
        current_index = current_index.astype(dtype)
        setattr(df, axis, current_index)
        return df

    if not isinstance(dtype, dict):
        dtype = {
            level_number: dtype
            for level_number in range(current_index.nlevels)
        }

    all_str = all(isinstance(level, str) for level in dtype)
    all_int = all(isinstance(level, int) for level in dtype)
    if not all_str | all_int:
        raise TypeError(
            "The levels in the dictionary "
            "should be either all strings or all integers."
        )

    dtype = {
        current_index._get_level_number(label): _dtype
        for label, _dtype in dtype.items()
    }

    new_levels = []
    codes = current_index.codes
    levels = current_index.levels

    for level_number in range(current_index.nlevels):
        _index = levels[level_number]
        if level_number in dtype:
            _dtype = dtype[level_number]
            _index = _index.astype(_dtype)
        new_levels.append(_index)

    current_index = pd.MultiIndex(
        levels=new_levels,
        codes=codes,
        names=current_index.names,
        copy=False,
        verify_integrity=False,
    )
    setattr(df, axis, current_index)
    return df

change_type

change_type(df, column_name, dtype, ignore_exception=False)

Change the type of a column.

This method does not mutate the original DataFrame.

Exceptions that are raised can be ignored. For example, if one has a mixed dtype column that has non-integer strings and integers, and you want to coerce everything to integers, you can optionally ignore the non-integer strings and replace them with NaN or keep the original value.

Intended to be the method-chaining alternative to:

df[col] = df[col].astype(dtype)

Note

This function will be deprecated in a 1.x release. Please use pd.DataFrame.astype instead.

Examples:

Change the type of a column.

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({"col1": range(3), "col2": ["m", 5, True]})
>>> df
   col1  col2
0     0     m
1     1     5
2     2  True
>>> df.change_type(
...     "col1", dtype=str,
... ).change_type(
...     "col2", dtype=float, ignore_exception="fillna",
... )
  col1  col2
0    0   NaN
1    1   5.0
2    2   1.0

Change the type of multiple columns. To change the type of all columns, please use DataFrame.astype instead.

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({"col1": range(3), "col2": ["m", 5, True]})
>>> df.change_type(['col1', 'col2'], str)
  col1  col2
0    0     m
1    1     5
2    2  True

Parameters:

Name Type Description Default
df DataFrame

A pandas DataFrame.

required
column_name Hashable | list[Hashable] | Index

The column(s) in the dataframe.

required
dtype type

The datatype to convert to. Should be one of the standard Python types, or a numpy datatype.

required
ignore_exception bool

One of {False, "fillna", "keep_values"}.

False

Raises:

Type Description
ValueError

If unknown option provided for ignore_exception.

Returns:

Type Description
DataFrame

A pandas DataFrame with changed column types.
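Since this function is slated for deprecation in favour of `pd.DataFrame.astype`, the same pattern can be reproduced with plain pandas; the `ignore_exception="fillna"` behaviour is approximated here with `pd.to_numeric(errors="coerce")` (data simplified to avoid the mixed-bool case):

```python
import pandas as pd

df = pd.DataFrame({"col1": range(3), "col2": ["m", 5, 7]})

out = df.astype({"col1": str})
# "m" cannot be parsed as a number, so errors="coerce"
# replaces it with NaN, akin to ignore_exception="fillna".
out["col2"] = pd.to_numeric(out["col2"], errors="coerce")
```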

Source code in janitor/functions/change_type.py
@pf.register_dataframe_method
@refactored_function(
    message=(
        "This function will be deprecated in a 1.x release. "
        "Please use `pd.DataFrame.astype` instead."
    )
)
@deprecated_alias(column="column_name")
def change_type(
    df: pd.DataFrame,
    column_name: Hashable | list[Hashable] | pd.Index,
    dtype: type,
    ignore_exception: bool = False,
) -> pd.DataFrame:
    """Change the type of a column.

    This method does not mutate the original DataFrame.

    Exceptions that are raised can be ignored. For example, if one has a mixed
    dtype column that has non-integer strings and integers, and you want to
    coerce everything to integers, you can optionally ignore the non-integer
    strings and replace them with `NaN` or keep the original value.

    Intended to be the method-chaining alternative to:

    ```python
    df[col] = df[col].astype(dtype)
    ```

    !!! note

        This function will be deprecated in a 1.x release.
        Please use `pd.DataFrame.astype` instead.

    Examples:
        Change the type of a column.

        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame({"col1": range(3), "col2": ["m", 5, True]})
        >>> df
           col1  col2
        0     0     m
        1     1     5
        2     2  True
        >>> df.change_type(
        ...     "col1", dtype=str,
        ... ).change_type(
        ...     "col2", dtype=float, ignore_exception="fillna",
        ... )
          col1  col2
        0    0   NaN
        1    1   5.0
        2    2   1.0

        Change the type of multiple columns. To change the type of all columns,
        please use `DataFrame.astype` instead.

        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame({"col1": range(3), "col2": ["m", 5, True]})
        >>> df.change_type(['col1', 'col2'], str)
          col1  col2
        0    0     m
        1    1     5
        2    2  True

    Args:
        df: A pandas DataFrame.
        column_name: The column(s) in the dataframe.
        dtype: The datatype to convert to. Should be one of the standard
            Python types, or a numpy datatype.
        ignore_exception: One of `{False, "fillna", "keep_values"}`.

    Raises:
        ValueError: If unknown option provided for `ignore_exception`.

    Returns:
        A pandas DataFrame with changed column types.
    """  # noqa: E501

    df = df.copy()  # avoid mutating the original DataFrame
    if not ignore_exception:
        df[column_name] = df[column_name].astype(dtype)
    elif ignore_exception == "keep_values":
        df[column_name] = df[column_name].astype(dtype, errors="ignore")
    elif ignore_exception == "fillna":
        if isinstance(column_name, Hashable):
            column_name = [column_name]
        df[column_name] = df[column_name].map(_convert, dtype=dtype)
    else:
        raise ValueError("Unknown option for ignore_exception")

    return df

clean_names

Functions for cleaning columns names.

clean_names(df, axis='columns', column_names=None, strip_underscores=None, case_type='lower', remove_special=False, strip_accents=True, preserve_original_labels=True, enforce_string=True, truncate_limit=None)

Clean column/index names. It can also be applied to column values.

Takes all column names, converts them to lowercase, then replaces all spaces with underscores.

By default, column names are converted to string types. This can be switched off by passing in enforce_string=False.

This method does not mutate the original DataFrame.

Examples:

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame(
...     {
...         "Aloha": range(3),
...         "Bell Chart": range(3),
...         "Animals@#$%^": range(3)
...     }
... )
>>> df
   Aloha  Bell Chart  Animals@#$%^
0      0           0             0
1      1           1             1
2      2           2             2
>>> df.clean_names()
   aloha  bell_chart  animals@#$%^
0      0           0             0
1      1           1             1
2      2           2             2
>>> df.clean_names(remove_special=True)
   aloha  bell_chart  animals
0      0           0        0
1      1           1        1
2      2           2        2

Version Changed

  • 0.26.0
    • Added axis and column_names parameters.

Parameters:

Name Type Description Default
df DataFrame

The pandas DataFrame object.

required
axis Union[str, None]

Whether to clean the labels on the index or columns. If None, applies to a defined column or columns in column_names.

'columns'
column_names Union[str, list]

Clean the values in a column. axis should be None. Column selection is possible using the select syntax.

None
strip_underscores Optional[Union[str, bool]]

Removes the outer underscores from all column names. Default None keeps outer underscores. Values can be either 'left', 'right' or 'both' or the respective shorthand 'l', 'r' and True.

None
case_type str

Whether to make columns lower or uppercase. Current case may be preserved with 'preserve', while snake case conversion (from CamelCase or camelCase only) can be turned on using "snake". Default 'lower' makes all characters lowercase.

'lower'
remove_special bool

Remove special characters from columns. Only letters, numbers and underscores are preserved.

False
strip_accents bool

Whether or not to remove accents from columns names.

True
preserve_original_labels bool

Preserve original names. This is later retrievable using df.original_labels. Applies if axis is not None.

True
enforce_string bool

Whether or not to convert all column names to string type. Defaults to True, but can be turned off. Columns with >1 levels will not be converted by default.

True
truncate_limit int

Truncates formatted column names to the specified length. Default None does not truncate.

None

Raises:

Type Description
ValueError

If axis=None and column_names=None.

Returns:

Type Description
DataFrame

A pandas DataFrame.
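For the common default case (lowercase names, spaces to underscores), a rough pandas-only approximation uses the string methods on the columns Index; a minimal sketch, without the special-character or accent handling:

```python
import pandas as pd

df = pd.DataFrame({"Aloha": range(3), "Bell Chart": range(3)})

# Rough equivalent of clean_names() with default settings:
df.columns = (
    df.columns.astype(str)
    .str.lower()
    .str.replace(" ", "_", regex=False)
)
```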

Source code in janitor/functions/clean_names.py
@pf.register_dataframe_method
@deprecated_alias(preserve_original_columns="preserve_original_labels")
def clean_names(
    df: pd.DataFrame,
    axis: Union[str, None] = "columns",
    column_names: Union[str, list] = None,
    strip_underscores: Optional[Union[str, bool]] = None,
    case_type: str = "lower",
    remove_special: bool = False,
    strip_accents: bool = True,
    preserve_original_labels: bool = True,
    enforce_string: bool = True,
    truncate_limit: int = None,
) -> pd.DataFrame:
    """Clean column/index names. It can also be applied to column values.

    Takes all column names, converts them to lowercase,
    then replaces all spaces with underscores.

    By default, column names are converted to string types.
    This can be switched off by passing in `enforce_string=False`.

    This method does not mutate the original DataFrame.

    Examples:
        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame(
        ...     {
        ...         "Aloha": range(3),
        ...         "Bell Chart": range(3),
        ...         "Animals@#$%^": range(3)
        ...     }
        ... )
        >>> df
           Aloha  Bell Chart  Animals@#$%^
        0      0           0             0
        1      1           1             1
        2      2           2             2
        >>> df.clean_names()
           aloha  bell_chart  animals@#$%^
        0      0           0             0
        1      1           1             1
        2      2           2             2
        >>> df.clean_names(remove_special=True)
           aloha  bell_chart  animals
        0      0           0        0
        1      1           1        1
        2      2           2        2

    !!! summary "Version Changed"

        - 0.26.0
             - Added `axis` and `column_names` parameters.

    Args:
        df: The pandas DataFrame object.
        axis: Whether to clean the labels on the index or columns.
            If `None`, applies to a defined column
            or columns in `column_names`.
        column_names: Clean the values in a column.
            `axis` should be `None`.
            Column selection is possible using the
            [`select`][janitor.functions.select.select] syntax.
        strip_underscores: Removes the outer underscores from all
            column names. Default None keeps outer underscores. Values can be
            either 'left', 'right' or 'both' or the respective shorthand 'l',
            'r' and True.
        case_type: Whether to make columns lower or uppercase.
            Current case may be preserved with 'preserve',
            while snake case conversion (from CamelCase or camelCase only)
            can be turned on using "snake".
            Default 'lower' makes all characters lowercase.
        remove_special: Remove special characters from columns.
            Only letters, numbers and underscores are preserved.
        strip_accents: Whether or not to remove accents from
            columns names.
        preserve_original_labels: Preserve original names.
            This is later retrievable using `df.original_labels`.
            Applies if `axis` is not None.
        enforce_string: Whether or not to convert all column names
            to string type. Defaults to True, but can be turned off.
            Columns with >1 levels will not be converted by default.
        truncate_limit: Truncates formatted column names to
            the specified length. Default None does not truncate.

    Raises:
        ValueError: If `axis=None` and `column_names=None`.

    Returns:
        A pandas DataFrame.
    """
    if not axis and not column_names:
        raise ValueError(
            "Kindly provide an argument to `column_names`, if axis is None."
        )
    if axis is None:
        column_names = get_index_labels(
            arg=column_names, df=df, axis="columns"
        )
        if is_scalar(column_names):
            column_names = [column_names]
        df = df.copy()
        for column_name in column_names:
            df[column_name] = _clean_names_single_object(
                obj=df[column_name],
                enforce_string=enforce_string,
                case_type=case_type,
                remove_special=remove_special,
                strip_accents=strip_accents,
                strip_underscores=strip_underscores,
                truncate_limit=truncate_limit,
            )
        return df

    assert axis in {"index", "columns"}
    df = df[:]
    target_axis = getattr(df, axis)
    if isinstance(target_axis, pd.MultiIndex):
        target_axis = [
            target_axis.get_level_values(number)
            for number in range(target_axis.nlevels)
        ]
        target_axis = [
            _clean_names_single_object(
                obj=obj,
                enforce_string=enforce_string,
                case_type=case_type,
                remove_special=remove_special,
                strip_accents=strip_accents,
                strip_underscores=strip_underscores,
                truncate_limit=truncate_limit,
            )
            for obj in target_axis
        ]
    else:
        target_axis = _clean_names_single_object(
            obj=target_axis,
            enforce_string=enforce_string,
            case_type=case_type,
            remove_special=remove_special,
            strip_accents=strip_accents,
            strip_underscores=strip_underscores,
            truncate_limit=truncate_limit,
        )
    # Store the original column names, if enabled by user
    if preserve_original_labels:
        df.__dict__["original_labels"] = getattr(df, axis)
    setattr(df, axis, target_axis)
    return df

coalesce

Function for performing coalesce.

coalesce(df, *column_names, target_column_name=None, default_value=None)

Coalesce two or more columns of data in order of column names provided.

Given the variable arguments of column names, coalesce finds and returns the first non-missing value from these columns, for every row in the input dataframe. If all the column values are null for a particular row, then the default_value will be filled in.

If target_column_name is not provided, then the first column is coalesced.

This method does not mutate the original DataFrame.

The select syntax can be used in column_names.

Examples:

Use coalesce with 3 columns, "a", "b" and "c".

>>> import pandas as pd
>>> import numpy as np
>>> import janitor
>>> df = pd.DataFrame({
...     "a": [np.nan, 1, np.nan],
...     "b": [2, 3, np.nan],
...     "c": [4, np.nan, np.nan],
... })
>>> df.coalesce("a", "b", "c")
     a    b    c
0  2.0  2.0  4.0
1  1.0  3.0  NaN
2  NaN  NaN  NaN

Provide a target_column_name.

>>> df.coalesce("a", "b", "c", target_column_name="new_col")
     a    b    c  new_col
0  NaN  2.0  4.0      2.0
1  1.0  3.0  NaN      1.0
2  NaN  NaN  NaN      NaN

Provide a default value.

>>> import pandas as pd
>>> import numpy as np
>>> import janitor
>>> df = pd.DataFrame({
...     "a": [1, np.nan, np.nan],
...     "b": [2, 3, np.nan],
... })
>>> df.coalesce(
...     "a", "b",
...     target_column_name="new_col",
...     default_value=-1,
... )
     a    b  new_col
0  1.0  2.0      1.0
1  NaN  3.0      3.0
2  NaN  NaN     -1.0

This is more syntactic diabetes! For R users, this should look familiar to dplyr's coalesce function; for Python users, the interface should be more intuitive than the pandas.Series.combine_first method.

Parameters:

Name Type Description Default
df DataFrame

A pandas DataFrame.

required
column_names Any

A list of column names.

()
target_column_name Optional[str]

The new column name after combining. If None, then the first column in column_names is updated, with the Null values replaced.

None
default_value Optional[Union[int, float, str]]

A scalar to replace any remaining nulls after coalescing.

None

Raises:

Type Description
ValueError

If length of column_names is less than 2.

Returns:

Type Description
DataFrame

A pandas DataFrame with coalesced columns.
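A pandas-only equivalent of row-wise coalescing is a backfill across columns followed by taking the first column; a minimal sketch reproducing the `default_value` example:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "a": [np.nan, 1, np.nan],
    "b": [2, 3, np.nan],
    "c": [4, np.nan, np.nan],
})

# bfill(axis=1) pulls each row's first non-missing value leftwards,
# so the first column then holds the coalesced result.
df["new_col"] = df[["a", "b", "c"]].bfill(axis=1).iloc[:, 0].fillna(-1)
```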

Source code in janitor/functions/coalesce.py
@pf.register_dataframe_method
@deprecated_alias(columns="column_names", new_column_name="target_column_name")
def coalesce(
    df: pd.DataFrame,
    *column_names: Any,
    target_column_name: Optional[str] = None,
    default_value: Optional[Union[int, float, str]] = None,
) -> pd.DataFrame:
    """Coalesce two or more columns of data in order of column names provided.

    Given the variable arguments of column names,
    `coalesce` finds and returns the first non-missing value
    from these columns, for every row in the input dataframe.
    If all the column values are null for a particular row,
    then the `default_value` will be filled in.

    If `target_column_name` is not provided,
    then the first column is coalesced.

    This method does not mutate the original DataFrame.

    The [`select`][janitor.functions.select.select] syntax
    can be used in `column_names`.

    Examples:
        Use `coalesce` with 3 columns, "a", "b" and "c".

        >>> import pandas as pd
        >>> import numpy as np
        >>> import janitor
        >>> df = pd.DataFrame({
        ...     "a": [np.nan, 1, np.nan],
        ...     "b": [2, 3, np.nan],
        ...     "c": [4, np.nan, np.nan],
        ... })
        >>> df.coalesce("a", "b", "c")
             a    b    c
        0  2.0  2.0  4.0
        1  1.0  3.0  NaN
        2  NaN  NaN  NaN

        Provide a target_column_name.

        >>> df.coalesce("a", "b", "c", target_column_name="new_col")
             a    b    c  new_col
        0  NaN  2.0  4.0      2.0
        1  1.0  3.0  NaN      1.0
        2  NaN  NaN  NaN      NaN

        Provide a default value.

        >>> import pandas as pd
        >>> import numpy as np
        >>> import janitor
        >>> df = pd.DataFrame({
        ...     "a": [1, np.nan, np.nan],
        ...     "b": [2, 3, np.nan],
        ... })
        >>> df.coalesce(
        ...     "a", "b",
        ...     target_column_name="new_col",
        ...     default_value=-1,
        ... )
             a    b  new_col
        0  1.0  2.0      1.0
        1  NaN  3.0      3.0
        2  NaN  NaN     -1.0

    This is more syntactic diabetes! For R users, this should look similar to
    `dplyr`'s `coalesce` function; for Python users, the interface
    should be more intuitive than the `pandas.Series.combine_first`
    method.

    Args:
        df: A pandas DataFrame.
        column_names: A list of column names.
        target_column_name: The new column name after combining.
            If `None`, then the first column in `column_names` is updated,
            with the Null values replaced.
        default_value: A scalar to replace any remaining nulls
            after coalescing.

    Raises:
        ValueError: If length of `column_names` is less than 2.

    Returns:
        A pandas DataFrame with coalesced columns.
    """

    if not column_names:
        return df

    indexers = _select_index([*column_names], df, axis="columns")

    if len(indexers) < 2:
        raise ValueError(
            "The number of columns to coalesce should be a minimum of 2."
        )

    if target_column_name:
        check("target_column_name", target_column_name, [str])

    if default_value is not None:
        check("default_value", default_value, [int, float, str])

    df = df.copy()

    outcome = df.iloc[:, indexers[0]]

    for num in range(1, len(indexers)):
        position = indexers[num]
        replacement = df.iloc[:, position]
        outcome = outcome.fillna(replacement)

    if outcome.hasnans and (default_value is not None):
        outcome = outcome.fillna(default_value)

    if target_column_name is None:
        df.iloc[:, indexers[0]] = outcome
    else:
        df[target_column_name] = outcome

    return df

collapse_levels

Implementation of the collapse_levels function.

collapse_levels(df, sep=None, glue=None, axis='columns')

Flatten multi-level index/column dataframe to a single level.

This method does not mutate the original DataFrame.

Given a DataFrame containing multi-level index/columns, flatten to single-level by string-joining the labels in each level.

After a groupby / aggregate operation where .agg() is passed a list of multiple aggregation functions, a multi-level DataFrame is returned with the name of the function applied in the second level.

It is sometimes convenient for later indexing to flatten out this multi-level configuration back into a single level. This function does this through a simple string-joining of all the names across different levels in a single column.

Examples:

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({
...     "class": ["bird", "bird", "bird", "mammal", "mammal"],
...     "max_speed": [389, 389, 24, 80, 21],
...     "type": ["falcon", "falcon", "parrot", "Lion", "Monkey"],
... })
>>> df
    class  max_speed    type
0    bird        389  falcon
1    bird        389  falcon
2    bird         24  parrot
3  mammal         80    Lion
4  mammal         21  Monkey
>>> grouped_df = df.groupby("class")[['max_speed']].agg(["mean", "median"])
>>> grouped_df
         max_speed
              mean median
class
bird    267.333333  389.0
mammal   50.500000   50.5
>>> grouped_df.collapse_levels(sep="_")
        max_speed_mean  max_speed_median
class
bird        267.333333             389.0
mammal       50.500000              50.5

Before applying .collapse_levels, the .agg operation returns a multi-level column DataFrame whose columns are (level 1, level 2):

[("max_speed", "mean"), ("max_speed", "median")]

.collapse_levels then flattens the column MultiIndex into a single level index with names:

["max_speed_mean", "max_speed_median"]

For more control, a glue specification can be passed, where the names of the levels are used to control the output of the flattened index:

>>> (grouped_df
...  .rename_axis(columns=['column_name', 'agg_name'])
...  .collapse_levels(glue="{agg_name}_{column_name}")
... )
        mean_max_speed  median_max_speed
class
bird        267.333333             389.0
mammal       50.500000              50.5

Note that for glue to work, the keyword arguments in the glue specification should be the names of the levels in the MultiIndex.
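Under the hood, the sep behaviour amounts to string-joining the tuples of the column MultiIndex. A plain-pandas sketch of the same effect (illustrative, not the library's code):

```python
import pandas as pd

df = pd.DataFrame({
    "class": ["bird", "bird", "mammal"],
    "max_speed": [389, 24, 80],
})
grouped = df.groupby("class")[["max_speed"]].agg(["mean", "median"])

# each column label is a (level 1, level 2) tuple; joining the
# tuple with "_" mirrors collapse_levels(sep="_")
grouped.columns = ["_".join(pair) for pair in grouped.columns]
print(grouped.columns.tolist())  # ['max_speed_mean', 'max_speed_median']
```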

Version Changed

  • 0.27.0
    • Added glue and axis parameters.

Parameters:

Name Type Description Default
df DataFrame

A pandas DataFrame.

required
sep Union[str, None]

String separator used to join the column level names.

None
glue Union[str, None]

A specification on how the column levels should be combined. It allows for a more granular composition, and serves as an alternative to sep.

None
axis

Determines whether to collapse the levels on the index or columns.

'columns'

Returns:

Type Description
DataFrame

A pandas DataFrame with single-level column index.

Source code in janitor/functions/collapse_levels.py
@pf.register_dataframe_method
def collapse_levels(
    df: pd.DataFrame,
    sep: Union[str, None] = None,
    glue: Union[str, None] = None,
    axis="columns",
) -> pd.DataFrame:
    """Flatten multi-level index/column dataframe to a single level.

    This method does not mutate the original DataFrame.

    Given a DataFrame containing multi-level index/columns, flatten to single-level
    by string-joining the labels in each level.

    After a `groupby` / `aggregate` operation where `.agg()` is passed a
    list of multiple aggregation functions, a multi-level DataFrame is
    returned with the name of the function applied in the second level.

    It is sometimes convenient for later indexing to flatten out this
    multi-level configuration back into a single level. This function does
    this through a simple string-joining of all the names across different
    levels in a single column.

    Examples:
        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame({
        ...     "class": ["bird", "bird", "bird", "mammal", "mammal"],
        ...     "max_speed": [389, 389, 24, 80, 21],
        ...     "type": ["falcon", "falcon", "parrot", "Lion", "Monkey"],
        ... })
        >>> df
            class  max_speed    type
        0    bird        389  falcon
        1    bird        389  falcon
        2    bird         24  parrot
        3  mammal         80    Lion
        4  mammal         21  Monkey
        >>> grouped_df = df.groupby("class")[['max_speed']].agg(["mean", "median"])
        >>> grouped_df  # doctest: +NORMALIZE_WHITESPACE
                 max_speed
                      mean median
        class
        bird    267.333333  389.0
        mammal   50.500000   50.5
        >>> grouped_df.collapse_levels(sep="_")  # doctest: +NORMALIZE_WHITESPACE
                max_speed_mean  max_speed_median
        class
        bird        267.333333             389.0
        mammal       50.500000              50.5

        Before applying `.collapse_levels`, the `.agg` operation returns a
        multi-level column DataFrame whose columns are `(level 1, level 2)`:

        ```python
        [("max_speed", "mean"), ("max_speed", "median")]
        ```

        `.collapse_levels` then flattens the column MultiIndex into a single
        level index with names:

        ```python
        ["max_speed_mean", "max_speed_median"]
        ```

        For more control, a `glue` specification can be passed,
        where the names of the levels are used to control the output of the
        flattened index:
        >>> (grouped_df
        ...  .rename_axis(columns=['column_name', 'agg_name'])
        ...  .collapse_levels(glue="{agg_name}_{column_name}")
        ... )
                mean_max_speed  median_max_speed
        class
        bird        267.333333             389.0
        mammal       50.500000              50.5

        Note that for `glue` to work, the keyword arguments
        in the glue specification
        should be the names of the levels in the MultiIndex.

    !!! abstract "Version Changed"

        - 0.27.0
            - Added `glue` and `axis` parameters.

    Args:
        df: A pandas DataFrame.
        sep: String separator used to join the column level names.
        glue: A specification on how the column levels should be combined.
            It allows for a more granular composition,
            and serves as an alternative to `sep`.
        axis: Determines whether to collapse the
            levels on the index or columns.

    Returns:
        A pandas DataFrame with single-level column index.
    """  # noqa: E501
    if (sep is not None) and (glue is not None):
        raise ValueError("Only one of sep or glue should be provided.")
    if sep is not None:
        check("sep", sep, [str])
    if glue is not None:
        check("glue", glue, [str])
    check("axis", axis, [str])
    if axis not in {"index", "columns"}:
        raise ValueError(
            "axis argument should be either 'index' or 'columns'."
        )

    if not isinstance(getattr(df, axis), pd.MultiIndex):
        return df

    # TODO: Pyarrow offers faster string computations
    # future work should take this into consideration,
    # which would require a different route from python's string.join
    # since work is only on the columns
    # it is safe, and more efficient to slice/view the dataframe
    # plus Pandas creates a new Index altogether
    # as such, the original dataframe is not modified
    df = df[:]
    new_index = getattr(df, axis)
    if glue is not None:
        new_index = [dict(zip(new_index.names, entry)) for entry in new_index]
        new_index = [glue.format_map(mapping) for mapping in new_index]
        setattr(df, axis, new_index)
        return df
    sep = "_" if sep is None else sep
    levels = [level for level in new_index.levels]
    all_strings = all(map(is_string_dtype, levels))
    if all_strings:
        no_empty_string = all((entry != "").all() for entry in levels)
        if no_empty_string:
            new_index = new_index.map(sep.join)
            setattr(df, axis, new_index)
            return df
    new_index = (map(str, entry) for entry in new_index)
    new_index = [
        # faster to use a list comprehension within string.join
        # compared to a generator
        # https://stackoverflow.com/a/37782238
        sep.join([entry for entry in word if entry])
        for word in new_index
    ]
    setattr(df, axis, new_index)
    return df

complete

complete(df, *columns, sort=False, by=None, fill_value=None, explicit=True)

Complete a data frame with missing combinations of data.

It is modeled after tidyr's complete function, and is a wrapper around expand_grid, pd.merge and pd.fillna. In a way, it is the inverse of pd.dropna, as it exposes implicitly missing rows.

Combinations of column names or a list/tuple of column names, or even a dictionary of column names and new values are possible. If a dictionary is passed, the user is required to ensure that the values are unique 1-D arrays. The keys in a dictionary must be present in the dataframe.

Examples:

>>> import pandas as pd
>>> import janitor
>>> import numpy as np
>>> df = pd.DataFrame(
...     {
...         "Year": [1999, 2000, 2004, 1999, 2004],
...         "Taxon": [
...             "Saccharina",
...             "Saccharina",
...             "Saccharina",
...             "Agarum",
...             "Agarum",
...         ],
...         "Abundance": [4, 5, 2, 1, 8],
...     }
... )
>>> df
   Year       Taxon  Abundance
0  1999  Saccharina          4
1  2000  Saccharina          5
2  2004  Saccharina          2
3  1999      Agarum          1
4  2004      Agarum          8

Expose missing pairings of Year and Taxon:

>>> df.complete("Year", "Taxon", sort=True)
   Year       Taxon  Abundance
0  1999      Agarum        1.0
1  1999  Saccharina        4.0
2  2000      Agarum        NaN
3  2000  Saccharina        5.0
4  2004      Agarum        8.0
5  2004  Saccharina        2.0

Expose missing years from 1999 to 2004:

>>> df.complete(
...     {"Year": range(df.Year.min(), df.Year.max() + 1)},
...     "Taxon",
...     sort=True
... )
    Year       Taxon  Abundance
0   1999      Agarum        1.0
1   1999  Saccharina        4.0
2   2000      Agarum        NaN
3   2000  Saccharina        5.0
4   2001      Agarum        NaN
5   2001  Saccharina        NaN
6   2002      Agarum        NaN
7   2002  Saccharina        NaN
8   2003      Agarum        NaN
9   2003  Saccharina        NaN
10  2004      Agarum        8.0
11  2004  Saccharina        2.0

Fill missing values:

>>> df = pd.DataFrame(
...     dict(
...         group=(1, 2, 1, 2),
...         item_id=(1, 2, 2, 3),
...         item_name=("a", "a", "b", "b"),
...         value1=(1, np.nan, 3, 4),
...         value2=range(4, 8),
...     )
... )
>>> df
   group  item_id item_name  value1  value2
0      1        1         a     1.0       4
1      2        2         a     NaN       5
2      1        2         b     3.0       6
3      2        3         b     4.0       7
>>> df.complete(
...     "group",
...     ("item_id", "item_name"),
...     fill_value={"value1": 0, "value2": 99},
...     sort=True
... )
   group  item_id item_name  value1  value2
0      1        1         a     1.0     4.0
1      1        2         a     0.0    99.0
2      1        2         b     3.0     6.0
3      1        3         b     0.0    99.0
4      2        1         a     0.0    99.0
5      2        2         a     0.0     5.0
6      2        2         b     0.0    99.0
7      2        3         b     4.0     7.0

Limit the fill to only implicit missing values by setting explicit to False:

>>> df.complete(
...     "group",
...     ("item_id", "item_name"),
...     fill_value={"value1": 0, "value2": 99},
...     explicit=False,
...     sort=True
... )
   group  item_id item_name  value1  value2
0      1        1         a     1.0     4.0
1      1        2         a     0.0    99.0
2      1        2         b     3.0     6.0
3      1        3         b     0.0    99.0
4      2        1         a     0.0    99.0
5      2        2         a     NaN     5.0
6      2        2         b     0.0    99.0
7      2        3         b     4.0     7.0
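The mechanics of complete can be approximated in plain pandas by reindexing against the cartesian product of the completing columns. Here is an unsorted sketch of the first example above (not the library's implementation):

```python
import pandas as pd

df = pd.DataFrame({
    "Year": [1999, 2000, 2004, 1999, 2004],
    "Taxon": ["Saccharina", "Saccharina", "Saccharina",
              "Agarum", "Agarum"],
    "Abundance": [4, 5, 2, 1, 8],
})

# cartesian product of the observed Year and Taxon values
full = pd.MultiIndex.from_product(
    [df["Year"].unique(), df["Taxon"].unique()],
    names=["Year", "Taxon"],
)
# reindexing exposes the implicitly missing (2000, "Agarum") row
out = df.set_index(["Year", "Taxon"]).reindex(full).reset_index()
print(len(out))  # 6
```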

Parameters:

Name Type Description Default
df DataFrame

A pandas DataFrame.

required
*columns Any

This refers to the columns to be completed. It could be column labels (string type), a list/tuple of column labels, or a dictionary that pairs column labels with new values.

()
sort bool

Sort DataFrame based on *columns.

False
by Optional[Union[list, str]]

Label or list of labels to group by. The explicit missing rows are returned per group.

None
fill_value Optional[Union[Dict, Any]]

Scalar value to use instead of NaN for missing combinations. A dictionary, mapping columns names to a scalar value is also accepted.

None
explicit bool

Determines if only implicitly missing values should be filled (False), or all nulls existing in the dataframe (True). explicit is applicable only if fill_value is not None.

True

Returns:

Type Description
DataFrame

A pandas DataFrame with explicit missing rows, if any.

Source code in janitor/functions/complete.py
@pf.register_dataframe_method
def complete(
    df: pd.DataFrame,
    *columns: Any,
    sort: bool = False,
    by: Optional[Union[list, str]] = None,
    fill_value: Optional[Union[Dict, Any]] = None,
    explicit: bool = True,
) -> pd.DataFrame:
    """Complete a data frame with missing combinations of data.

    It is modeled after tidyr's `complete` function, and is a wrapper around
    [`expand_grid`][janitor.functions.expand_grid.expand_grid], `pd.merge`
    and `pd.fillna`. In a way, it is the inverse of `pd.dropna`, as it exposes
    implicitly missing rows.

    Combinations of column names or a list/tuple of column names, or even a
    dictionary of column names and new values are possible.
    If a dictionary is passed,
    the user is required to ensure that the values are unique 1-D arrays.
    The keys in a dictionary must be present in the dataframe.

    Examples:
        >>> import pandas as pd
        >>> import janitor
        >>> import numpy as np
        >>> df = pd.DataFrame(
        ...     {
        ...         "Year": [1999, 2000, 2004, 1999, 2004],
        ...         "Taxon": [
        ...             "Saccharina",
        ...             "Saccharina",
        ...             "Saccharina",
        ...             "Agarum",
        ...             "Agarum",
        ...         ],
        ...         "Abundance": [4, 5, 2, 1, 8],
        ...     }
        ... )
        >>> df
           Year       Taxon  Abundance
        0  1999  Saccharina          4
        1  2000  Saccharina          5
        2  2004  Saccharina          2
        3  1999      Agarum          1
        4  2004      Agarum          8

        Expose missing pairings of `Year` and `Taxon`:
        >>> df.complete("Year", "Taxon", sort=True)
           Year       Taxon  Abundance
        0  1999      Agarum        1.0
        1  1999  Saccharina        4.0
        2  2000      Agarum        NaN
        3  2000  Saccharina        5.0
        4  2004      Agarum        8.0
        5  2004  Saccharina        2.0

        Expose missing years from 1999 to 2004:
        >>> df.complete(
        ...     {"Year": range(df.Year.min(), df.Year.max() + 1)},
        ...     "Taxon",
        ...     sort=True
        ... )
            Year       Taxon  Abundance
        0   1999      Agarum        1.0
        1   1999  Saccharina        4.0
        2   2000      Agarum        NaN
        3   2000  Saccharina        5.0
        4   2001      Agarum        NaN
        5   2001  Saccharina        NaN
        6   2002      Agarum        NaN
        7   2002  Saccharina        NaN
        8   2003      Agarum        NaN
        9   2003  Saccharina        NaN
        10  2004      Agarum        8.0
        11  2004  Saccharina        2.0

        Fill missing values:
        >>> df = pd.DataFrame(
        ...     dict(
        ...         group=(1, 2, 1, 2),
        ...         item_id=(1, 2, 2, 3),
        ...         item_name=("a", "a", "b", "b"),
        ...         value1=(1, np.nan, 3, 4),
        ...         value2=range(4, 8),
        ...     )
        ... )
        >>> df
           group  item_id item_name  value1  value2
        0      1        1         a     1.0       4
        1      2        2         a     NaN       5
        2      1        2         b     3.0       6
        3      2        3         b     4.0       7
        >>> df.complete(
        ...     "group",
        ...     ("item_id", "item_name"),
        ...     fill_value={"value1": 0, "value2": 99},
        ...     sort=True
        ... )
           group  item_id item_name  value1  value2
        0      1        1         a     1.0     4.0
        1      1        2         a     0.0    99.0
        2      1        2         b     3.0     6.0
        3      1        3         b     0.0    99.0
        4      2        1         a     0.0    99.0
        5      2        2         a     0.0     5.0
        6      2        2         b     0.0    99.0
        7      2        3         b     4.0     7.0

        Limit the fill to only implicit missing values
        by setting explicit to `False`:
        >>> df.complete(
        ...     "group",
        ...     ("item_id", "item_name"),
        ...     fill_value={"value1": 0, "value2": 99},
        ...     explicit=False,
        ...     sort=True
        ... )
           group  item_id item_name  value1  value2
        0      1        1         a     1.0     4.0
        1      1        2         a     0.0    99.0
        2      1        2         b     3.0     6.0
        3      1        3         b     0.0    99.0
        4      2        1         a     0.0    99.0
        5      2        2         a     NaN     5.0
        6      2        2         b     0.0    99.0
        7      2        3         b     4.0     7.0

    Args:
        df: A pandas DataFrame.
        *columns: This refers to the columns to be completed.
            It could be column labels (string type),
            a list/tuple of column labels, or a dictionary that pairs
            column labels with new values.
        sort: Sort DataFrame based on *columns.
        by: Label or list of labels to group by.
            The explicit missing rows are returned per group.
        fill_value: Scalar value to use instead of NaN
            for missing combinations. A dictionary, mapping columns names
            to a scalar value is also accepted.
        explicit: Determines if only implicitly missing values
            should be filled (`False`), or all nulls existing in the dataframe
            (`True`). `explicit` is applicable only
            if `fill_value` is not `None`.

    Returns:
        A pandas DataFrame with explicit missing rows, if any.
    """  # noqa: E501

    if not columns:
        return df

    # no copy made of the original dataframe
    # since pd.merge (computed some lines below)
    # makes a new object - essentially a copy
    return _computations_complete(df, columns, sort, by, fill_value, explicit)

concatenate_columns

concatenate_columns(df, column_names, new_column_name, sep='-', ignore_empty=True)

Concatenates the set of columns into a single column.

Used to quickly generate an index based on a group of columns.

This method mutates the original DataFrame.

Examples:

Concatenate two columns row-wise.

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({"a": [1, 3, 5], "b": list("xyz")})
>>> df
   a  b
0  1  x
1  3  y
2  5  z
>>> df.concatenate_columns(
...     column_names=["a", "b"], new_column_name="m",
... )
   a  b    m
0  1  x  1-x
1  3  y  3-y
2  5  z  5-z
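The example above boils down to casting the selected columns to strings and aggregating each row with the separator. A plain-pandas sketch of the same effect (not the library's exact implementation):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 3, 5], "b": list("xyz")})
# cast to string, then join each row's values with "-"
df["m"] = df[["a", "b"]].astype(str).agg("-".join, axis=1)
print(df["m"].tolist())  # ['1-x', '3-y', '5-z']
```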

Parameters:

Name Type Description Default
df DataFrame

A pandas DataFrame.

required
column_names List[Hashable]

A list of columns to concatenate together.

required
new_column_name Hashable

The name of the new column.

required
sep str

The separator between each column's data.

'-'
ignore_empty bool

Ignore null values if they exist.

True

Raises:

Type Description
JanitorError

If fewer than two columns are provided in column_names.

Returns:

Type Description
DataFrame

A pandas DataFrame with concatenated columns.

Source code in janitor/functions/concatenate_columns.py
@pf.register_dataframe_method
@deprecated_alias(columns="column_names")
def concatenate_columns(
    df: pd.DataFrame,
    column_names: List[Hashable],
    new_column_name: Hashable,
    sep: str = "-",
    ignore_empty: bool = True,
) -> pd.DataFrame:
    """Concatenates the set of columns into a single column.

    Used to quickly generate an index based on a group of columns.

    This method mutates the original DataFrame.

    Examples:
        Concatenate two columns row-wise.

        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame({"a": [1, 3, 5], "b": list("xyz")})
        >>> df
           a  b
        0  1  x
        1  3  y
        2  5  z
        >>> df.concatenate_columns(
        ...     column_names=["a", "b"], new_column_name="m",
        ... )
           a  b    m
        0  1  x  1-x
        1  3  y  3-y
        2  5  z  5-z

    Args:
        df: A pandas DataFrame.
        column_names: A list of columns to concatenate together.
        new_column_name: The name of the new column.
        sep: The separator between each column's data.
        ignore_empty: Ignore null values if they exist.

    Raises:
        JanitorError: If fewer than two columns are provided
            in `column_names`.

    Returns:
        A pandas DataFrame with concatenated columns.
    """
    if len(column_names) < 2:
        raise JanitorError("At least two columns must be specified")

    df[new_column_name] = (
        df[column_names].astype(str).fillna("").agg(sep.join, axis=1)
    )

    if ignore_empty:

        def remove_empty_string(x):
            """Ignore empty/null string values from the concatenated output."""
            return sep.join(x for x in x.split(sep) if x)

        df[new_column_name] = df[new_column_name].transform(
            remove_empty_string
        )

    return df

conditional_join

conditional_join(df, right, *conditions, how='inner', sort_by_appearance=False, df_columns=slice(None), right_columns=slice(None), keep='all', use_numba=False, indicator=False, force=False)

The conditional_join function operates similarly to pd.merge, but supports joins on inequality operators, or a combination of equi and non-equi joins.

Joins solely on equality are not supported.

If the join is solely on equality, the pd.merge function covers that; if you are interested in nearest joins, asof joins, or rolling joins, then pd.merge_asof covers that. There is also pandas' IntervalIndex, which is efficient for range joins, especially if the intervals do not overlap.

Column selection in df_columns and right_columns is possible using the select syntax.

Performance might be improved by setting use_numba to True. This assumes that numba is installed.

This function returns rows, if any, where values from df meet the condition(s) for values from right. The conditions are passed in as a variable argument of tuples, where the tuple is of the form (left_on, right_on, op); left_on is the column label from df, right_on is the column label from right, while op is the operator.

The col class is also supported in the conditional_join syntax.

For multiple conditions, the and (&) operator is used to combine the results of the individual conditions.

In some scenarios there might be performance gains if the less than join, or the greater than join condition, or the range condition is executed before the equi join - pass force=True to force this.

The operator can be any of ==, !=, <=, <, >=, >.

The join is done only on the columns.

For non-equi joins, only numeric, timedelta and date columns are supported.

inner, left, right and outer joins are supported.

If the columns from df and right have nothing in common, a single index column is returned; else, a MultiIndex column is returned.

Examples:

>>> import pandas as pd
>>> import janitor
>>> df1 = pd.DataFrame({"value_1": [2, 5, 7, 1, 3, 4]})
>>> df2 = pd.DataFrame({"value_2A": [0, 3, 7, 12, 0, 2, 3, 1],
...                     "value_2B": [1, 5, 9, 15, 1, 4, 6, 3],
...                    })
>>> df1
   value_1
0        2
1        5
2        7
3        1
4        3
5        4
>>> df2
   value_2A  value_2B
0         0         1
1         3         5
2         7         9
3        12        15
4         0         1
5         2         4
6         3         6
7         1         3
>>> df1.conditional_join(
...     df2,
...     ("value_1", "value_2A", ">"),
...     ("value_1", "value_2B", "<")
... )
   value_1  value_2A  value_2B
0        2         1         3
1        5         3         6
2        3         2         4
3        4         3         5
4        4         3         6
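The non-equi join above can be emulated, inefficiently but transparently, with a cross merge followed by a filter. conditional_join avoids materialising the full cross product, but this sketch (not the library's algorithm) produces the same rows:

```python
import pandas as pd

df1 = pd.DataFrame({"value_1": [2, 5, 7, 1, 3, 4]})
df2 = pd.DataFrame({"value_2A": [0, 3, 7, 12, 0, 2, 3, 1],
                    "value_2B": [1, 5, 9, 15, 1, 4, 6, 3]})

# the cross merge builds every row pair; query then keeps pairs
# satisfying both conditions: value_2A < value_1 < value_2B
out = (
    df1.merge(df2, how="cross")
       .query("(value_2A < value_1) and (value_1 < value_2B)")
       .reset_index(drop=True)
)
print(len(out))  # 5
```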

Use the col class:

>>> df1.conditional_join(
...     df2,
...     col("value_1") > col("value_2A"),
...     col("value_1") < col("value_2B")
... )
   value_1  value_2A  value_2B
0        2         1         3
1        5         3         6
2        3         2         4
3        4         3         5
4        4         3         6

Select specific columns, after the join:

>>> df1.conditional_join(
...     df2,
...     col("value_1") > col("value_2A"),
...     col("value_1") < col("value_2B"),
...     right_columns='value_2B',
...     how='left'
... )
   value_1  value_2B
0        2       3.0
1        5       6.0
2        7       NaN
3        1       NaN
4        3       4.0
5        4       5.0
6        4       6.0

Rename columns, before the join:

>>> (df1
...  .rename(columns={'value_1':'left_column'})
...  .conditional_join(
...      df2,
...      ("left_column", "value_2A", ">"),
...      ("left_column", "value_2B", "<"),
...      right_columns='value_2B',
...      how='outer')
... )
    left_column  value_2B
0           7.0       NaN
1           1.0       NaN
2           2.0       3.0
3           5.0       6.0
4           3.0       4.0
5           4.0       5.0
6           4.0       6.0
7           NaN       1.0
8           NaN       9.0
9           NaN      15.0
10          NaN       1.0

Get the first match:

>>> df1.conditional_join(
...     df2,
...     col("value_1") > col("value_2A"),
...     col("value_1") < col("value_2B"),
...     keep='first'
... )
   value_1  value_2A  value_2B
0        2         1         3
1        5         3         6
2        3         2         4
3        4         3         5

Get the last match:

>>> df1.conditional_join(
...     df2,
...     col("value_1") > col("value_2A"),
...     col("value_1") < col("value_2B"),
...     keep='last'
... )
   value_1  value_2A  value_2B
0        2         1         3
1        5         3         6
2        3         2         4
3        4         3         6

Add an indicator column:

>>> df1.conditional_join(
...     df2,
...     ("value_1", "value_2A", ">"),
...     ("value_1", "value_2B", "<"),
...     how='outer',
...     indicator=True
... )
    value_1      _merge  value_2A  value_2B
0       7.0   left_only       NaN       NaN
1       1.0   left_only       NaN       NaN
2       2.0        both       1.0       3.0
3       5.0        both       3.0       6.0
4       3.0        both       2.0       4.0
5       4.0        both       3.0       5.0
6       4.0        both       3.0       6.0
7       NaN  right_only       0.0       1.0
8       NaN  right_only       7.0       9.0
9       NaN  right_only      12.0      15.0
10      NaN  right_only       0.0       1.0

Version Changed

  • 0.24.0
    • Added df_columns, right_columns, keep and use_numba parameters.
  • 0.24.1
    • Added indicator parameter.
  • 0.25.0
    • col class supported.
    • Outer join supported. sort_by_appearance deprecated.
    • Numba support for equi join
  • 0.27.0
    • Added support for timedelta dtype.

Parameters:

Name Type Description Default
df DataFrame

A pandas DataFrame.

required
right Union[DataFrame, Series]

Named Series or DataFrame to join to.

required
conditions Any

Variable argument of tuple(s) of the form (left_on, right_on, op), where left_on is the column label from df, right_on is the column label from right, while op is the operator. The col class is also supported. The operator can be any of ==, !=, <=, <, >=, >. For multiple conditions, the and(&) operator is used to combine the results of the individual conditions.

()
how Literal['inner', 'left', 'right', 'outer']

Indicates the type of join to be performed. It can be one of inner, left, right or outer.

'inner'
sort_by_appearance bool

If how = inner and sort_by_appearance = False, there is no guarantee that the original order is preserved. Usually, this offers more performance. If how = left, the row order from the left dataframe is preserved; if how = right, the row order from the right dataframe is preserved.

Deprecated in 0.25.0

False
df_columns Optional[Any]

Columns to select from df in the final output dataframe. Column selection is based on the select syntax.

slice(None)
right_columns Optional[Any]

Columns to select from right in the final output dataframe. Column selection is based on the select syntax.

slice(None)
use_numba bool

Use numba, if installed, to accelerate the computation.

False
keep Literal['first', 'last', 'all']

Choose whether to return the first match, last match or all matches.

'all'
indicator Optional[Union[bool, str]]

If True, adds a column to the output DataFrame called _merge with information on the source of each row. The column can be given a different name by providing a string argument. The column will have a Categorical type with the value of left_only for observations whose merge key only appears in the left DataFrame, right_only for observations whose merge key only appears in the right DataFrame, and both if the observation’s merge key is found in both DataFrames.

False
force bool

If True, force the non-equi join conditions to execute before the equi join.

False

Returns:

Type Description
DataFrame

A pandas DataFrame of the two merged pandas objects.

Source code in janitor/functions/conditional_join.py
@pf.register_dataframe_method
def conditional_join(
    df: pd.DataFrame,
    right: Union[pd.DataFrame, pd.Series],
    *conditions: Any,
    how: Literal["inner", "left", "right", "outer"] = "inner",
    sort_by_appearance: bool = False,
    df_columns: Optional[Any] = slice(None),
    right_columns: Optional[Any] = slice(None),
    keep: Literal["first", "last", "all"] = "all",
    use_numba: bool = False,
    indicator: Optional[Union[bool, str]] = False,
    force: bool = False,
) -> pd.DataFrame:
    """The conditional_join function operates similarly to `pd.merge`,
    but supports joins on inequality operators,
    or a combination of equi and non-equi joins.

    Joins solely on equality are not supported.

    If the join is solely on equality, `pd.merge` function
    covers that; if you are interested in nearest joins, asof joins,
    or rolling joins, then `pd.merge_asof` covers that.
    There is also pandas' IntervalIndex, which is efficient for range joins,
    especially if the intervals do not overlap.

    Column selection in `df_columns` and `right_columns` is possible using the
    [`select`][janitor.functions.select.select] syntax.

    Performance might be improved by setting `use_numba` to `True`.
    This assumes that `numba` is installed.

    This function returns rows, if any, where values from `df` meet the
    condition(s) for values from `right`. The conditions are passed in
    as a variable argument of tuples, where the tuple is of
    the form `(left_on, right_on, op)`; `left_on` is the column
    label from `df`, `right_on` is the column label from `right`,
    while `op` is the operator.

    The `col` class is also supported in the `conditional_join` syntax.

    For multiple conditions, the and(`&`)
    operator is used to combine the results of the individual conditions.

    In some scenarios there might be performance gains if the less than join,
    or the greater than join condition, or the range condition
    is executed before the equi join - pass `force=True` to force this.

    The operator can be any of `==`, `!=`, `<=`, `<`, `>=`, `>`.

    The join is performed on columns only, not on the index.

    For non-equi joins, only numeric, timedelta and date columns are supported.

    `inner`, `left`, `right` and `outer` joins are supported.

    If the columns from `df` and `right` have nothing in common,
    a single index column is returned; else, a MultiIndex column
    is returned.

    Examples:
        >>> import pandas as pd
        >>> import janitor
        >>> df1 = pd.DataFrame({"value_1": [2, 5, 7, 1, 3, 4]})
        >>> df2 = pd.DataFrame({"value_2A": [0, 3, 7, 12, 0, 2, 3, 1],
        ...                     "value_2B": [1, 5, 9, 15, 1, 4, 6, 3],
        ...                    })
        >>> df1
           value_1
        0        2
        1        5
        2        7
        3        1
        4        3
        5        4
        >>> df2
           value_2A  value_2B
        0         0         1
        1         3         5
        2         7         9
        3        12        15
        4         0         1
        5         2         4
        6         3         6
        7         1         3

        >>> df1.conditional_join(
        ...     df2,
        ...     ("value_1", "value_2A", ">"),
        ...     ("value_1", "value_2B", "<")
        ... )
           value_1  value_2A  value_2B
        0        2         1         3
        1        5         3         6
        2        3         2         4
        3        4         3         5
        4        4         3         6

        Use the `col` class:
        >>> df1.conditional_join(
        ...     df2,
        ...     col("value_1") > col("value_2A"),
        ...     col("value_1") < col("value_2B")
        ... )
           value_1  value_2A  value_2B
        0        2         1         3
        1        5         3         6
        2        3         2         4
        3        4         3         5
        4        4         3         6

        Select specific columns, after the join:
        >>> df1.conditional_join(
        ...     df2,
        ...     col("value_1") > col("value_2A"),
        ...     col("value_1") < col("value_2B"),
        ...     right_columns='value_2B',
        ...     how='left'
        ... )
           value_1  value_2B
        0        2       3.0
        1        5       6.0
        2        7       NaN
        3        1       NaN
        4        3       4.0
        5        4       5.0
        6        4       6.0

        Rename columns, before the join:
        >>> (df1
        ...  .rename(columns={'value_1':'left_column'})
        ...  .conditional_join(
        ...      df2,
        ...      ("left_column", "value_2A", ">"),
        ...      ("left_column", "value_2B", "<"),
        ...      right_columns='value_2B',
        ...      how='outer')
        ... )
            left_column  value_2B
        0           7.0       NaN
        1           1.0       NaN
        2           2.0       3.0
        3           5.0       6.0
        4           3.0       4.0
        5           4.0       5.0
        6           4.0       6.0
        7           NaN       1.0
        8           NaN       9.0
        9           NaN      15.0
        10          NaN       1.0

        Get the first match:
        >>> df1.conditional_join(
        ...     df2,
        ...     col("value_1") > col("value_2A"),
        ...     col("value_1") < col("value_2B"),
        ...     keep='first'
        ... )
           value_1  value_2A  value_2B
        0        2         1         3
        1        5         3         6
        2        3         2         4
        3        4         3         5

        Get the last match:
        >>> df1.conditional_join(
        ...     df2,
        ...     col("value_1") > col("value_2A"),
        ...     col("value_1") < col("value_2B"),
        ...     keep='last'
        ... )
           value_1  value_2A  value_2B
        0        2         1         3
        1        5         3         6
        2        3         2         4
        3        4         3         6

        Add an indicator column:
        >>> df1.conditional_join(
        ...     df2,
        ...     ("value_1", "value_2A", ">"),
        ...     ("value_1", "value_2B", "<"),
        ...     how='outer',
        ...     indicator=True
        ... )
            value_1      _merge  value_2A  value_2B
        0       7.0   left_only       NaN       NaN
        1       1.0   left_only       NaN       NaN
        2       2.0        both       1.0       3.0
        3       5.0        both       3.0       6.0
        4       3.0        both       2.0       4.0
        5       4.0        both       3.0       5.0
        6       4.0        both       3.0       6.0
        7       NaN  right_only       0.0       1.0
        8       NaN  right_only       7.0       9.0
        9       NaN  right_only      12.0      15.0
        10      NaN  right_only       0.0       1.0

    !!! abstract "Version Changed"

        - 0.24.0
            - Added `df_columns`, `right_columns`, `keep` and `use_numba` parameters.
        - 0.24.1
            - Added `indicator` parameter.
        - 0.25.0
            - `col` class supported.
            - Outer join supported. `sort_by_appearance` deprecated.
            - Numba support for equi join
        - 0.27.0
            - Added support for timedelta dtype.

    Args:
        df: A pandas DataFrame.
        right: Named Series or DataFrame to join to.
        conditions: Variable argument of tuple(s) of the form
            `(left_on, right_on, op)`, where `left_on` is the column
            label from `df`, `right_on` is the column label from `right`,
            while `op` is the operator.
            The `col` class is also supported. The operator can be any of
            `==`, `!=`, `<=`, `<`, `>=`, `>`. For multiple conditions,
            the and(`&`) operator is used to combine the results
            of the individual conditions.
        how: Indicates the type of join to be performed.
            It can be one of `inner`, `left`, `right` or `outer`.
        sort_by_appearance: If `how = inner` and
            `sort_by_appearance = False`, there
            is no guarantee that the original order is preserved.
            Usually, this offers more performance.
            If `how = left`, the row order from the left dataframe
            is preserved; if `how = right`, the row order
            from the right dataframe is preserved.
            !!!warning "Deprecated in 0.25.0"
        df_columns: Columns to select from `df` in the final output dataframe.
            Column selection is based on the
            [`select`][janitor.functions.select.select] syntax.
        right_columns: Columns to select from `right` in the final output dataframe.
            Column selection is based on the
            [`select`][janitor.functions.select.select] syntax.
        use_numba: Use numba, if installed, to accelerate the computation.
        keep: Choose whether to return the first match, last match or all matches.
        indicator: If `True`, adds a column to the output DataFrame
            called `_merge` with information on the source of each row.
            The column can be given a different name by providing a string argument.
            The column will have a Categorical type with the value of `left_only`
            for observations whose merge key only appears in the left DataFrame,
            `right_only` for observations whose merge key
            only appears in the right DataFrame, and `both` if the observation’s
            merge key is found in both DataFrames.
        force: If `True`, force the non-equi join conditions to execute before the equi join.


    Returns:
        A pandas DataFrame of the two merged pandas objects.
    """  # noqa: E501

    return _conditional_join_compute(
        df,
        right,
        conditions,
        how,
        sort_by_appearance,
        df_columns,
        right_columns,
        keep,
        use_numba,
        indicator,
        force,
    )
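The inner non-equi join shown in the docstring examples can be cross-checked with a brute-force nested loop. This sketch uses plain Python lists that mirror `df1` and `df2` above, so it runs without pandas or pyjanitor installed:

```python
# Brute-force check of the inner join with the conditions
# value_1 > value_2A and value_1 < value_2B.
value_1 = [2, 5, 7, 1, 3, 4]
value_2 = [(0, 1), (3, 5), (7, 9), (12, 15), (0, 1), (2, 4), (3, 6), (1, 3)]

matches = [
    (v, lo, hi)
    for v in value_1
    for lo, hi in value_2
    if lo < v < hi
]
print(matches)  # [(2, 1, 3), (5, 3, 6), (3, 2, 4), (4, 3, 5), (4, 3, 6)]
```

The nested loop is O(n*m); `conditional_join` avoids that cost with search-based strategies (optionally numba-accelerated), which is the practical reason to prefer it over a cross join plus filter on large inputs.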

get_join_indices(df, right, conditions, keep='all', use_numba=False, force=False)

Convenience function to return the matching indices from an inner join.

New in version 0.27.0

Parameters:

Name Type Description Default
df DataFrame

A pandas DataFrame.

required
right Union[DataFrame, Series]

Named Series or DataFrame to join to.

required
conditions list[tuple[str]]

List of arguments of tuple(s) of the form (left_on, right_on, op), where left_on is the column label from df, right_on is the column label from right, while op is the operator. The col class is also supported. The operator can be any of ==, !=, <=, <, >=, >. For multiple conditions, the and(&) operator is used to combine the results of the individual conditions.

required
use_numba bool

Use numba, if installed, to accelerate the computation.

False
keep Literal['first', 'last', 'all']

Choose whether to return the first match, last match or all matches.

'all'
force bool

If True, force the non-equi join conditions to execute before the equi join.

False

Returns:

Type Description
tuple[ndarray, ndarray]

A tuple of indices for the rows in the dataframes that match.

Source code in janitor/functions/conditional_join.py
1308
1309
1310
1311
1312
1313
1314
1315
1316
1317
1318
1319
1320
1321
1322
1323
1324
1325
1326
1327
1328
1329
1330
1331
1332
1333
1334
1335
1336
1337
1338
1339
1340
1341
1342
1343
1344
1345
1346
1347
1348
1349
1350
1351
1352
def get_join_indices(
    df: pd.DataFrame,
    right: Union[pd.DataFrame, pd.Series],
    conditions: list[tuple[str]],
    keep: Literal["first", "last", "all"] = "all",
    use_numba: bool = False,
    force: bool = False,
) -> tuple[np.ndarray, np.ndarray]:
    """Convenience function to return the matching indices from an inner join.

    !!! info "New in version 0.27.0"

    Args:
        df: A pandas DataFrame.
        right: Named Series or DataFrame to join to.
        conditions: List of arguments of tuple(s) of the form
            `(left_on, right_on, op)`, where `left_on` is the column
            label from `df`, `right_on` is the column label from `right`,
            while `op` is the operator.
            The `col` class is also supported. The operator can be any of
            `==`, `!=`, `<=`, `<`, `>=`, `>`. For multiple conditions,
            the and(`&`) operator is used to combine the results
            of the individual conditions.
        use_numba: Use numba, if installed, to accelerate the computation.
        keep: Choose whether to return the first match, last match or all matches.
        force: If `True`, force the non-equi join conditions
            to execute before the equi join.

    Returns:
        A tuple of indices for the rows in the dataframes that match.
    """
    return _conditional_join_compute(
        df=df,
        right=right,
        conditions=conditions,
        how="inner",
        sort_by_appearance=False,
        df_columns=None,
        right_columns=None,
        keep=keep,
        use_numba=use_numba,
        indicator=False,
        force=force,
        return_matching_indices=True,
    )
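The matching row positions an inner `conditional_join` relies on — what you would get by zipping the two index arrays `get_join_indices` returns — can likewise be spelled out by brute force for the `df1`/`df2` example above:

```python
# Brute-force (left_index, right_index) pairs for the conditions
# value_1 > value_2A and value_1 < value_2B,
# using plain lists that mirror df1 and df2 from the examples.
value_1 = [2, 5, 7, 1, 3, 4]
value_2 = [(0, 1), (3, 5), (7, 9), (12, 15), (0, 1), (2, 4), (3, 6), (1, 3)]

pairs = [
    (i, j)
    for i, v in enumerate(value_1)
    for j, (lo, hi) in enumerate(value_2)
    if lo < v < hi
]
print(pairs)  # [(0, 7), (1, 6), (4, 5), (5, 1), (5, 6)]
```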

convert_date

convert_excel_date(df, column_name)

Convert Excel's serial date format into Python datetime format.

This method mutates the original DataFrame.

Implementation is also from Stack Overflow.

Examples:

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({"date": [39690, 39690, 37118]})
>>> df
    date
0  39690
1  39690
2  37118
>>> df.convert_excel_date('date')
        date
0 2008-08-30
1 2008-08-30
2 2001-08-15

Parameters:

Name Type Description Default
df DataFrame

A pandas DataFrame.

required
column_name Hashable

A column name.

required

Raises:

Type Description
ValueError

If there are non-numeric values in the column.

Returns:

Type Description
DataFrame

A pandas DataFrame with corrected dates.

Source code in janitor/functions/convert_date.py
@pf.register_dataframe_method
@deprecated_alias(column="column_name")
def convert_excel_date(
    df: pd.DataFrame, column_name: Hashable
) -> pd.DataFrame:
    """Convert Excel's serial date format into Python datetime format.

    This method mutates the original DataFrame.

    Implementation is also from
    [Stack Overflow](https://stackoverflow.com/questions/38454403/convert-excel-style-date-with-pandas).

    Examples:
        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame({"date": [39690, 39690, 37118]})
        >>> df
            date
        0  39690
        1  39690
        2  37118
        >>> df.convert_excel_date('date')
                date
        0 2008-08-30
        1 2008-08-30
        2 2001-08-15

    Args:
        df: A pandas DataFrame.
        column_name: A column name.

    Raises:
        ValueError: If there are non-numeric values in the column.

    Returns:
        A pandas DataFrame with corrected dates.
    """  # noqa: E501

    if not is_numeric_dtype(df[column_name]):
        raise ValueError(
            "There are non-numeric values in the column. "
            "All values must be numeric."
        )

    df[column_name] = pd.TimedeltaIndex(
        df[column_name], unit="d"
    ) + dt.datetime(
        1899, 12, 30
    )  # noqa: W503
    return df
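The conversion above is plain day-offset arithmetic from 1899-12-30, the epoch conventionally used to compensate for Excel's 1900 leap-year bug. A stdlib-only sketch of the same arithmetic for a single serial value:

```python
from datetime import datetime, timedelta

# Day 0 of Excel's serial dates, per the convention used above.
EXCEL_EPOCH = datetime(1899, 12, 30)

def excel_serial_to_datetime(serial: float) -> datetime:
    # A serial date is just a day offset (possibly fractional) from the epoch.
    return EXCEL_EPOCH + timedelta(days=serial)

print(excel_serial_to_datetime(39690))  # 2008-08-30 00:00:00
print(excel_serial_to_datetime(37118))  # 2001-08-15 00:00:00
```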

convert_matlab_date(df, column_name)

Convert Matlab's serial date number into Python datetime format.

Implementation is also from Stack Overflow.

This method mutates the original DataFrame.

Examples:

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({"date": [737125.0, 737124.815863, 737124.4985, 737124]})
>>> df
            date
0  737125.000000
1  737124.815863
2  737124.498500
3  737124.000000
>>> df.convert_matlab_date('date')
                        date
0 2018-03-06 00:00:00.000000
1 2018-03-05 19:34:50.563200
2 2018-03-05 11:57:50.399999
3 2018-03-05 00:00:00.000000

Parameters:

Name Type Description Default
df DataFrame

A pandas DataFrame.

required
column_name Hashable

A column name.

required

Returns:

Type Description
DataFrame

A pandas DataFrame with corrected dates.

Source code in janitor/functions/convert_date.py
@pf.register_dataframe_method
@deprecated_alias(column="column_name")
def convert_matlab_date(
    df: pd.DataFrame, column_name: Hashable
) -> pd.DataFrame:
    """Convert Matlab's serial date number into Python datetime format.

    Implementation is also from
    [Stack Overflow](https://stackoverflow.com/questions/13965740/converting-matlabs-datenum-format-to-python).

    This method mutates the original DataFrame.

    Examples:
        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame({"date": [737125.0, 737124.815863, 737124.4985, 737124]})
        >>> df
                    date
        0  737125.000000
        1  737124.815863
        2  737124.498500
        3  737124.000000
        >>> df.convert_matlab_date('date')
                                date
        0 2018-03-06 00:00:00.000000
        1 2018-03-05 19:34:50.563200
        2 2018-03-05 11:57:50.399999
        3 2018-03-05 00:00:00.000000

    Args:
        df: A pandas DataFrame.
        column_name: A column name.

    Returns:
        A pandas DataFrame with corrected dates.
    """  # noqa: E501
    days = pd.Series([dt.timedelta(v % 1) for v in df[column_name]])
    df[column_name] = (
        df[column_name].astype(int).apply(dt.datetime.fromordinal)
        + days
        - dt.timedelta(days=366)
    )
    return df
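Matlab's `datenum` counts days (with a fractional part for the time of day) from year 0, while Python ordinals count from 0001-01-01, which is where the 366-day correction in the function above comes from. A stdlib-only equivalent for a single value:

```python
from datetime import datetime, timedelta

def matlab_datenum_to_datetime(datenum: float) -> datetime:
    # Whole days go through fromordinal; the fractional part carries
    # the time of day; 366 days bridge the year-0 vs year-1 epochs.
    return (
        datetime.fromordinal(int(datenum))
        + timedelta(days=datenum % 1)
        - timedelta(days=366)
    )

print(matlab_datenum_to_datetime(737125.0))  # 2018-03-06 00:00:00
```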

convert_unix_date(df, column_name)

Convert unix epoch time into Python datetime format.

Note that this ignores the local timezone and converts all timestamps to naive datetimes based on UTC.

This method mutates the original DataFrame.

Examples:

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({"date": [1651510462, 53394822, 1126233195]})
>>> df
         date
0  1651510462
1    53394822
2  1126233195
>>> df.convert_unix_date('date')
                 date
0 2022-05-02 16:54:22
1 1971-09-10 23:53:42
2 2005-09-09 02:33:15

Parameters:

Name Type Description Default
df DataFrame

A pandas DataFrame.

required
column_name Hashable

A column name.

required

Returns:

Type Description
DataFrame

A pandas DataFrame with corrected dates.

Source code in janitor/functions/convert_date.py
@pf.register_dataframe_method
@deprecated_alias(column="column_name")
def convert_unix_date(df: pd.DataFrame, column_name: Hashable) -> pd.DataFrame:
    """Convert unix epoch time into Python datetime format.

    Note that this ignores the local timezone and converts all timestamps
    to naive datetimes based on UTC.

    This method mutates the original DataFrame.

    Examples:
        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame({"date": [1651510462, 53394822, 1126233195]})
        >>> df
                 date
        0  1651510462
        1    53394822
        2  1126233195
        >>> df.convert_unix_date('date')
                         date
        0 2022-05-02 16:54:22
        1 1971-09-10 23:53:42
        2 2005-09-09 02:33:15

    Args:
        df: A pandas DataFrame.
        column_name: A column name.

    Returns:
        A pandas DataFrame with corrected dates.
    """

    try:
        df[column_name] = pd.to_datetime(df[column_name], unit="s")
    except OutOfBoundsDatetime:  # Indicates time is in milliseconds.
        df[column_name] = pd.to_datetime(df[column_name], unit="ms")
    return df
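The seconds-then-milliseconds fallback can be sketched with the standard library: convert as seconds, and if the stamp is out of range, retry as milliseconds. Note the exceptions that signal an out-of-range stamp differ between pandas (`OutOfBoundsDatetime`) and the stdlib, so treat this as an illustration of the idea rather than an exact equivalent:

```python
from datetime import datetime, timezone

def unix_to_naive_utc(stamp: float) -> datetime:
    # Try seconds first; on an out-of-range error, retry as milliseconds.
    try:
        converted = datetime.fromtimestamp(stamp, tz=timezone.utc)
    except (OverflowError, OSError, ValueError):
        converted = datetime.fromtimestamp(stamp / 1000, tz=timezone.utc)
    # Drop tzinfo to get a naive datetime based on UTC, as above.
    return converted.replace(tzinfo=None)

print(unix_to_naive_utc(1651510462))  # 2022-05-02 16:54:22
```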

count_cumulative_unique

Implementation of count_cumulative_unique.

count_cumulative_unique(df, column_name, dest_column_name, case_sensitive=True)

Generates a running total of cumulative unique values in a given column.

A new column will be created containing a running count of unique values in the specified column. If case_sensitive is True, then the case of any letters will matter (i.e., a != A); otherwise, the case of any letters will not matter.

This method does not mutate the original DataFrame.

Examples:

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({
...     "letters": list("aabABb"),
...     "numbers": range(4, 10),
... })
>>> df
  letters  numbers
0       a        4
1       a        5
2       b        6
3       A        7
4       B        8
5       b        9
>>> df.count_cumulative_unique(
...     column_name="letters",
...     dest_column_name="letters_unique_count",
... )
  letters  numbers  letters_unique_count
0       a        4                     1
1       a        5                     1
2       b        6                     2
3       A        7                     3
4       B        8                     4
5       b        9                     4

Cumulative counts, ignoring casing.

>>> df.count_cumulative_unique(
...     column_name="letters",
...     dest_column_name="letters_unique_count",
...     case_sensitive=False,
... )
  letters  numbers  letters_unique_count
0       a        4                     1
1       a        5                     1
2       b        6                     2
3       A        7                     2
4       B        8                     2
5       b        9                     2

Parameters:

Name Type Description Default
df DataFrame

A pandas DataFrame.

required
column_name Hashable

Name of the column containing values from which a running count of unique values will be created.

required
dest_column_name str

The name of the new column containing the cumulative count of unique values that will be created.

required
case_sensitive bool

Whether or not uppercase and lowercase letters will be considered equal. Only valid with string-like columns.

True

Raises:

Type Description
TypeError

If case_sensitive is False when counting a non-string column_name.

Returns:

Type Description
DataFrame

A pandas DataFrame with a new column containing a cumulative count of unique values from another column.

Source code in janitor/functions/count_cumulative_unique.py
@pf.register_dataframe_method
def count_cumulative_unique(
    df: pd.DataFrame,
    column_name: Hashable,
    dest_column_name: str,
    case_sensitive: bool = True,
) -> pd.DataFrame:
    """Generates a running total of cumulative unique values in a given column.

    A new column will be created containing a running
    count of unique values in the specified column.
    If `case_sensitive` is `True`, then the case of
    any letters will matter (i.e., `a != A`);
    otherwise, the case of any letters will not matter.

    This method does not mutate the original DataFrame.

    Examples:
        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame({
        ...     "letters": list("aabABb"),
        ...     "numbers": range(4, 10),
        ... })
        >>> df
          letters  numbers
        0       a        4
        1       a        5
        2       b        6
        3       A        7
        4       B        8
        5       b        9
        >>> df.count_cumulative_unique(
        ...     column_name="letters",
        ...     dest_column_name="letters_unique_count",
        ... )
          letters  numbers  letters_unique_count
        0       a        4                     1
        1       a        5                     1
        2       b        6                     2
        3       A        7                     3
        4       B        8                     4
        5       b        9                     4

        Cumulative counts, ignoring casing.

        >>> df.count_cumulative_unique(
        ...     column_name="letters",
        ...     dest_column_name="letters_unique_count",
        ...     case_sensitive=False,
        ... )
          letters  numbers  letters_unique_count
        0       a        4                     1
        1       a        5                     1
        2       b        6                     2
        3       A        7                     2
        4       B        8                     2
        5       b        9                     2

    Args:
        df: A pandas DataFrame.
        column_name: Name of the column containing values from which a
            running count of unique values will be created.
        dest_column_name: The name of the new column containing the
            cumulative count of unique values that will be created.
        case_sensitive: Whether or not uppercase and lowercase letters
            will be considered equal. Only valid with string-like columns.

    Raises:
        TypeError: If `case_sensitive` is False when counting a non-string
            `column_name`.

    Returns:
        A pandas DataFrame with a new column containing a cumulative
            count of unique values from another column.
    """
    check_column(df, column_name)
    check_column(df, dest_column_name, present=False)

    counter = df[column_name]
    if not case_sensitive:
        try:
            # Make it so that the same uppercase and lowercase
            # letter are treated as one unique value
            counter = counter.str.lower()
        except (AttributeError, TypeError) as e:
            # AttributeError is raised by pandas when .str is used on
            # non-string types, e.g. int.
            # TypeError is raised by pandas when .str.lower is used on a
            # forbidden string type, e.g. bytes.
            raise TypeError(
                "case_sensitive=False can only be used with a string-like "
                f"type. Column {column_name} is {counter.dtype} type."
            ) from e

    counter = (
        counter.groupby(counter, sort=False).cumcount().to_numpy(copy=False)
    )
    counter = np.cumsum(counter == 0)

    return df.assign(**{dest_column_name: counter})
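The groupby/cumcount/cumsum trick above computes, for each row, how many distinct values have appeared so far. The same logic in plain Python with a set makes the semantics explicit:

```python
def cumulative_unique(values, case_sensitive=True):
    # Running count of distinct values seen so far; lowercasing strings
    # when case_sensitive is False folds 'a' and 'A' into one value.
    seen = set()
    counts = []
    for v in values:
        if not case_sensitive and isinstance(v, str):
            v = v.lower()
        seen.add(v)
        counts.append(len(seen))
    return counts

print(cumulative_unique(list("aabABb")))         # [1, 1, 2, 3, 4, 4]
print(cumulative_unique(list("aabABb"), False))  # [1, 1, 2, 2, 2, 2]
```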

currency_column_to_numeric

currency_column_to_numeric(df, column_name, cleaning_style=None, cast_non_numeric=None, fill_all_non_numeric=None, remove_non_numeric=False)

Convert currency column to numeric.

This method does not mutate the original DataFrame.

This method allows one to take a column containing currency values, inadvertently imported as a string, and cast it as a float. This is usually the case when reading CSV files that were modified in Excel. Empty strings (i.e. '') are retained as NaN values.

Examples:

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({
...     "a_col": [" 24.56", "-", "(12.12)", "1,000,000"],
...     "d_col": ["", "foo", "1.23 dollars", "-1,000 yen"],
... })
>>> df
       a_col         d_col
0      24.56
1          -           foo
2    (12.12)  1.23 dollars
3  1,000,000    -1,000 yen

The default cleaning style.

>>> df.currency_column_to_numeric("d_col")
       a_col    d_col
0      24.56      NaN
1          -      NaN
2    (12.12)     1.23
3  1,000,000 -1000.00

The accounting cleaning style.

>>> df.currency_column_to_numeric("a_col", cleaning_style="accounting")
        a_col         d_col
0       24.56
1        0.00           foo
2      -12.12  1.23 dollars
3  1000000.00    -1,000 yen

Valid cleaning styles are:

  • None: Default cleaning is applied. Empty strings are always retained as NaN. Numbers, -, . are extracted and the resulting string is cast to a float.
  • 'accounting': Replaces numbers in parentheses with negatives, removes commas.
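The default style can be approximated in plain pandas. This is a sketch only, not the library's implementation (which delegates to internal helpers); the regex used here is an assumption about which characters are kept:

```python
import pandas as pd

s = pd.Series(["1,000,000", " 24.56", "", "foo"])

# Keep only digits, minus signs, and decimal points, then coerce.
cleaned = s.str.replace(r"[^\d\-.]", "", regex=True)
numeric = pd.to_numeric(cleaned, errors="coerce")
print(numeric.tolist())  # [1000000.0, 24.56, nan, nan]
```

Strings with no numeric content (like `"foo"`) reduce to the empty string and coerce to `NaN`.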

Parameters:

Name Type Description Default
df DataFrame

The pandas DataFrame.

required
column_name str

The column containing currency values to modify.

required
cleaning_style Optional[str]

What style of cleaning to perform.

None
cast_non_numeric Optional[dict]

A dict of how to coerce certain strings to numeric type. For example, if there are values of 'REORDER' in the DataFrame, {'REORDER': 0} will cast all instances of 'REORDER' to 0. Only takes effect in the default cleaning style.

None
fill_all_non_numeric Optional[Union[float, int]]

Similar to cast_non_numeric, but fills all strings to the same value. For example, fill_all_non_numeric=1, will make everything that doesn't coerce to a currency 1. Only takes effect in the default cleaning style.

None
remove_non_numeric bool

If set to True, rows of df that contain non-numeric values in the column_name column will be removed. Only takes effect in the default cleaning style.

False

Raises:

Type Description
ValueError

If cleaning_style is not one of the accepted styles.

Returns:

Type Description
DataFrame

A pandas DataFrame.

Source code in janitor/functions/currency_column_to_numeric.py
@pf.register_dataframe_method
@deprecated_alias(col_name="column_name", type="cleaning_style")
def currency_column_to_numeric(
    df: pd.DataFrame,
    column_name: str,
    cleaning_style: Optional[str] = None,
    cast_non_numeric: Optional[dict] = None,
    fill_all_non_numeric: Optional[Union[float, int]] = None,
    remove_non_numeric: bool = False,
) -> pd.DataFrame:
    """Convert currency column to numeric.

    This method does not mutate the original DataFrame.

    This method allows one to take a column containing currency values,
    inadvertently imported as a string, and cast it as a float. This is
    usually the case when reading CSV files that were modified in Excel.
    Empty strings (i.e. `''`) are retained as `NaN` values.

    Examples:
        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame({
        ...     "a_col": [" 24.56", "-", "(12.12)", "1,000,000"],
        ...     "d_col": ["", "foo", "1.23 dollars", "-1,000 yen"],
        ... })
        >>> df  # doctest: +NORMALIZE_WHITESPACE
               a_col         d_col
        0      24.56
        1          -           foo
        2    (12.12)  1.23 dollars
        3  1,000,000    -1,000 yen

        The default cleaning style.

        >>> df.currency_column_to_numeric("d_col")
               a_col    d_col
        0      24.56      NaN
        1          -      NaN
        2    (12.12)     1.23
        3  1,000,000 -1000.00

        The accounting cleaning style.

        >>> df.currency_column_to_numeric("a_col", cleaning_style="accounting")  # doctest: +NORMALIZE_WHITESPACE
                a_col         d_col
        0       24.56
        1        0.00           foo
        2      -12.12  1.23 dollars
        3  1000000.00    -1,000 yen

    Valid cleaning styles are:

    - `None`: Default cleaning is applied. Empty strings are always retained as
        `NaN`. Numbers, `-`, `.` are extracted and the resulting string
        is cast to a float.
    - `'accounting'`: Replaces numbers in parentheses with negatives, removes commas.

    Args:
        df: The pandas DataFrame.
        column_name: The column containing currency values to modify.
        cleaning_style: What style of cleaning to perform.
        cast_non_numeric: A dict of how to coerce certain strings to numeric
            type. For example, if there are values of 'REORDER' in the DataFrame,
            `{'REORDER': 0}` will cast all instances of 'REORDER' to 0.
            Only takes effect in the default cleaning style.
        fill_all_non_numeric: Similar to `cast_non_numeric`, but fills all
            strings to the same value. For example, `fill_all_non_numeric=1`, will
            make everything that doesn't coerce to a currency `1`.
            Only takes effect in the default cleaning style.
        remove_non_numeric: If set to True, rows of `df` that contain
            non-numeric values in the `column_name` column will be removed.
            Only takes effect in the default cleaning style.

    Raises:
        ValueError: If `cleaning_style` is not one of the accepted styles.

    Returns:
        A pandas DataFrame.
    """  # noqa: E501

    check("column_name", column_name, [str])
    check_column(df, column_name)

    column_series = df[column_name]
    if cleaning_style == "accounting":
        outcome = (
            df[column_name]
            .str.strip()
            .str.replace(",", "", regex=False)
            .str.replace(")", "", regex=False)
            .str.replace("(", "-", regex=False)
            .replace({"-": 0.0})
            .astype(float)
        )
        return df.assign(**{column_name: outcome})
    if cleaning_style is not None:
        raise ValueError(
            "`cleaning_style` is expected to be one of ('accounting', None). "
            f"Got {cleaning_style!r} instead."
        )

    if cast_non_numeric:
        check("cast_non_numeric", cast_non_numeric, [dict])

    _make_cc_partial = partial(
        _currency_column_to_numeric,
        cast_non_numeric=cast_non_numeric,
    )
    column_series = column_series.apply(_make_cc_partial)

    if remove_non_numeric:
        df = df.loc[column_series != "", :]

    # _replace_empty_string_with_none is applied here after the check on
    # remove_non_numeric since "" is our indicator that a string was coerced
    # in the original column
    column_series = _replace_empty_string_with_none(column_series)

    if fill_all_non_numeric is not None:
        check("fill_all_non_numeric", fill_all_non_numeric, [int, float])
        column_series = column_series.fillna(fill_all_non_numeric)

    column_series = _replace_original_empty_string_with_none(column_series)

    df = df.assign(**{column_name: pd.to_numeric(column_series)})

    return df

deconcatenate_column

Implementation of deconcatenating columns.

deconcatenate_column(df, column_name, sep=None, new_column_names=None, autoname=None, preserve_position=False)

De-concatenates a single column into multiple columns.

The column to de-concatenate can be either a collection (list, tuple, ...) which can be separated out with pd.Series.tolist(), or a string to slice based on sep.

To determine this behaviour automatically, the first element in the column specified is inspected.

If it is a string, then sep must be specified. Else, the function assumes that it is an iterable type (e.g. list or tuple), and will attempt to deconcatenate by splitting the list.

Given a column with string values, this is the inverse of the concatenate_columns function.

Used to quickly split columns out of a single column.

Examples:

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({"m": ["1-x", "2-y", "3-z"]})
>>> df
     m
0  1-x
1  2-y
2  3-z
>>> df.deconcatenate_column("m", sep="-", autoname="col")
     m col1 col2
0  1-x    1    x
1  2-y    2    y
2  3-z    3    z

The keyword argument preserve_position takes a boolean (True or False) that controls whether the new_column_names will take the original position of the to-be-deconcatenated column_name:

  • When preserve_position=False (default), df.columns changes from [..., column_name, ...] to [..., column_name, ..., new_column_names]. In other words, the deconcatenated new columns are appended to the right of the original dataframe and the original column_name is NOT dropped.
  • When preserve_position=True, df.columns changes from [..., column_name, ...] to [..., new_column_names, ...]. In other words, the deconcatenated new columns will REPLACE the original column_name at its original position, and column_name itself is dropped.

The keyword argument autoname accepts a base string and then automatically creates numbered column names based off the base string. For example, if col is passed in as the argument to autoname, and 4 columns are created, then the resulting columns will be named col1, col2, col3, col4. Numbering is always 1-indexed, not 0-indexed, in order to make the column names human-friendly.

This method does not mutate the original DataFrame.
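The core splitting step can be reproduced in plain pandas. A minimal sketch of the `autoname` behaviour, not the library implementation itself:

```python
import pandas as pd

df = pd.DataFrame({"m": ["1-x", "2-y", "3-z"]})

# Split on the separator; expand=True gives one column per piece.
parts = df["m"].str.split("-", expand=True)

# Name the pieces with a 1-indexed suffix, as autoname="col" would.
parts.columns = [f"col{i}" for i in range(1, parts.shape[1] + 1)]

out = pd.concat([df, parts], axis=1)
print(out.columns.tolist())  # ['m', 'col1', 'col2']
```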

Parameters:

Name Type Description Default
df DataFrame

A pandas DataFrame.

required
column_name Hashable

The column to split.

required
sep Optional[str]

The separator delimiting the column's data.

None
new_column_names Optional[Union[List[str], Tuple[str]]]

A list of new column names post-splitting.

None
autoname str

A base name for automatically naming the new columns. Takes precedence over new_column_names if both are provided.

None
preserve_position bool

Boolean for whether or not to preserve original position of the column upon de-concatenation.

False

Raises:

Type Description
ValueError

If column_name is not present in the DataFrame.

ValueError

If sep is not provided and the column values are of type str.

ValueError

If either new_column_names or autoname is not supplied.

JanitorError

If incorrect number of names is provided within new_column_names.

Returns:

Type Description
DataFrame

A pandas DataFrame with a deconcatenated column.

Source code in janitor/functions/deconcatenate_column.py
@pf.register_dataframe_method
@deprecated_alias(column="column_name")
def deconcatenate_column(
    df: pd.DataFrame,
    column_name: Hashable,
    sep: Optional[str] = None,
    new_column_names: Optional[Union[List[str], Tuple[str]]] = None,
    autoname: str = None,
    preserve_position: bool = False,
) -> pd.DataFrame:
    """De-concatenates a single column into multiple columns.

    The column to de-concatenate can be either a collection (list, tuple, ...)
    which can be separated out with `pd.Series.tolist()`,
    or a string to slice based on `sep`.

    To determine this behaviour automatically,
    the first element in the column specified is inspected.

    If it is a string, then `sep` must be specified.
    Else, the function assumes that it is an iterable type
    (e.g. `list` or `tuple`),
    and will attempt to deconcatenate by splitting the list.

    Given a column with string values, this is the inverse of the
    [`concatenate_columns`][janitor.functions.concatenate_columns.concatenate_columns]
    function.

    Used to quickly split columns out of a single column.

    Examples:
        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame({"m": ["1-x", "2-y", "3-z"]})
        >>> df
             m
        0  1-x
        1  2-y
        2  3-z
        >>> df.deconcatenate_column("m", sep="-", autoname="col")
             m col1 col2
        0  1-x    1    x
        1  2-y    2    y
        2  3-z    3    z

    The keyword argument `preserve_position`
    takes a boolean (`True` or `False`)
    that controls whether the `new_column_names`
    will take the original position
    of the to-be-deconcatenated `column_name`:

    - When `preserve_position=False` (default), `df.columns` changes from
      `[..., column_name, ...]` to `[..., column_name, ..., new_column_names]`.
      In other words, the deconcatenated new columns are appended to the right
      of the original dataframe and the original `column_name` is NOT dropped.
    - When `preserve_position=True`, `df.columns` changes from
      `[..., column_name, ...]` to `[..., new_column_names, ...]`.
      In other words, the deconcatenated new columns will REPLACE the original
      `column_name` at its original position, and `column_name` itself
      is dropped.

    The keyword argument `autoname` accepts a base string
    and then automatically creates numbered column names
    based off the base string.
    For example, if `col` is passed in as the argument to `autoname`,
    and 4 columns are created, then the resulting columns will be named
    `col1, col2, col3, col4`.
    Numbering is always 1-indexed, not 0-indexed,
    in order to make the column names human-friendly.

    This method does not mutate the original DataFrame.

    Args:
        df: A pandas DataFrame.
        column_name: The column to split.
        sep: The separator delimiting the column's data.
        new_column_names: A list of new column names post-splitting.
        autoname: A base name for automatically naming the new columns.
            Takes precedence over `new_column_names` if both are provided.
        preserve_position: Boolean for whether or not to preserve original
            position of the column upon de-concatenation.

    Raises:
        ValueError: If `column_name` is not present in the DataFrame.
        ValueError: If `sep` is not provided and the column values
            are of type `str`.
        ValueError: If either `new_column_names` or `autoname`
            is not supplied.
        JanitorError: If incorrect number of names is provided
            within `new_column_names`.

    Returns:
        A pandas DataFrame with a deconcatenated column.
    """  # noqa: E501

    if column_name not in df.columns:
        raise ValueError(f"column name {column_name} not present in DataFrame")

    if isinstance(df[column_name].iloc[0], str):
        if sep is None:
            raise ValueError(
                "`sep` must be specified if the column values "
                "are of type `str`."
            )
        df_deconcat = df[column_name].str.split(sep, expand=True)
    else:
        df_deconcat = pd.DataFrame(
            df[column_name].to_list(), columns=new_column_names, index=df.index
        )

    if new_column_names is None and autoname is None:
        raise ValueError(
            "One of `new_column_names` or `autoname` must be supplied."
        )

    if autoname:
        new_column_names = [
            f"{autoname}{i}" for i in range(1, df_deconcat.shape[1] + 1)
        ]

    if not len(new_column_names) == df_deconcat.shape[1]:
        raise JanitorError(
            f"You need to provide {df_deconcat.shape[1]} names "
            "to `new_column_names`"
        )

    df_deconcat.columns = new_column_names
    df_new = pd.concat([df, df_deconcat], axis=1)

    if preserve_position:
        df_original = df.copy()
        cols = list(df_original.columns)
        index_original = cols.index(column_name)

        for i, col_new in enumerate(new_column_names):
            cols.insert(index_original + i, col_new)

        df_new = df_new.select(cols, axis="columns").drop(columns=column_name)

    return df_new

drop_constant_columns

Implementation of drop_constant_columns.

drop_constant_columns(df)

Finds and drops the constant columns from a Pandas DataFrame.

Examples:

>>> import pandas as pd
>>> import janitor
>>> data_dict = {
...     "a": [1, 1, 1],
...     "b": [1, 2, 3],
...     "c": [1, 1, 1],
...     "d": ["rabbit", "leopard", "lion"],
...     "e": ["Cambridge", "Shanghai", "Basel"]
... }
>>> df = pd.DataFrame(data_dict)
>>> df
   a  b  c        d          e
0  1  1  1   rabbit  Cambridge
1  1  2  1  leopard   Shanghai
2  1  3  1     lion      Basel
>>> df.drop_constant_columns()
   b        d          e
0  1   rabbit  Cambridge
1  2  leopard   Shanghai
2  3     lion      Basel

Parameters:

Name Type Description Default
df DataFrame

Input Pandas DataFrame

required

Returns:

Type Description
DataFrame

The Pandas DataFrame with the constant columns dropped.

Source code in janitor/functions/drop_constant_columns.py
@pf.register_dataframe_method
def drop_constant_columns(df: pd.DataFrame) -> pd.DataFrame:
    """Finds and drops the constant columns from a Pandas DataFrame.

    Examples:
        >>> import pandas as pd
        >>> import janitor
        >>> data_dict = {
        ...     "a": [1, 1, 1],
        ...     "b": [1, 2, 3],
        ...     "c": [1, 1, 1],
        ...     "d": ["rabbit", "leopard", "lion"],
        ...     "e": ["Cambridge", "Shanghai", "Basel"]
        ... }
        >>> df = pd.DataFrame(data_dict)
        >>> df
           a  b  c        d          e
        0  1  1  1   rabbit  Cambridge
        1  1  2  1  leopard   Shanghai
        2  1  3  1     lion      Basel
        >>> df.drop_constant_columns()
           b        d          e
        0  1   rabbit  Cambridge
        1  2  leopard   Shanghai
        2  3     lion      Basel

    Args:
        df: Input Pandas DataFrame

    Returns:
        The Pandas DataFrame with the constant columns dropped.
    """
    return df.loc[:, df.nunique().ne(1)]
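Note that this one-liner relies on `nunique()`, which by default ignores `NaN`: an all-`NaN` column has `nunique() == 0`, so `.ne(1)` does not drop it. A small demonstration of the idiom, with illustrative column names:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "const": [1, 1, 1],        # one unique value -> dropped
    "varies": [1, 2, 3],       # kept
    "all_nan": [np.nan] * 3,   # nunique() == 0, so .ne(1) keeps it
})

result = df.loc[:, df.nunique().ne(1)]
print(result.columns.tolist())  # ['varies', 'all_nan']
```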

drop_duplicate_columns

Implementation for drop_duplicate_columns.

drop_duplicate_columns(df, column_name, nth_index=0)

Remove a duplicated column specified by column_name.

Specifying nth_index=0 will remove the first column, nth_index=1 will remove the second column, and so on and so forth.

The corresponding operation in R's tidyverse is: select(-<column_name>_<nth_index + 1>)

Examples:

>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({
...     "a": range(2, 5),
...     "b": range(3, 6),
...     "A": range(4, 7),
...     "a*": range(6, 9),
... }).clean_names(remove_special=True)
>>> df
   a  b  a  a
0  2  3  4  6
1  3  4  5  7
2  4  5  6  8
>>> df.drop_duplicate_columns(column_name="a", nth_index=1)
   a  b  a
0  2  3  6
1  3  4  7
2  4  5  8

Parameters:

Name Type Description Default
df DataFrame

A pandas DataFrame

required
column_name Hashable

Name of duplicated columns.

required
nth_index int

Among the duplicated columns, select the nth column to drop.

0

Returns:

Type Description
DataFrame

A pandas DataFrame

Source code in janitor/functions/drop_duplicate_columns.py
@pf.register_dataframe_method
def drop_duplicate_columns(
    df: pd.DataFrame, column_name: Hashable, nth_index: int = 0
) -> pd.DataFrame:
    """Remove a duplicated column specified by `column_name`.

    Specifying `nth_index=0` will remove the first column,
    `nth_index=1` will remove the second column,
    and so on and so forth.

    The corresponding operation in R's tidyverse is:
    `select(-<column_name>_<nth_index + 1>)`

    Examples:
        >>> import pandas as pd
        >>> import janitor
        >>> df = pd.DataFrame({
        ...     "a": range(2, 5),
        ...     "b": range(3, 6),
        ...     "A": range(4, 7),
        ...     "a*": range(6, 9),
        ... }).clean_names(remove_special=True)
        >>> df
           a  b  a  a
        0  2  3  4  6
        1  3  4  5  7
        2  4  5  6  8
        >>> df.drop_duplicate_columns(column_name="a", nth_index=1)
           a  b  a
        0  2  3  6
        1  3  4  7
        2  4  5  8

    Args:
        df: A pandas DataFrame
        column_name: Name of duplicated columns.
        nth_index: Among the duplicated columns,
            select the nth column to drop.

    Returns:
        A pandas DataFrame
    """
    col_indexes = [
        col_idx
        for col_idx, col_name in enumerate(df.columns)
        if col_name == column_name
    ]

    # Select the column to remove based on nth_index.
    removed_col_idx = col_indexes[nth_index]
    # Filter out columns except for the one to be removed.
    filtered_cols = [
        c_i for c_i, _ in enumerate(df.columns) if c_i != removed_col_idx
    ]

    return df.iloc[:, filtered_cols]
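The positional logic can be sketched in plain pandas (a minimal sketch, not the pyjanitor method itself):

```python
import pandas as pd

df = pd.DataFrame([[2, 3, 4, 6], [3, 4, 5, 7]], columns=["a", "b", "a", "a"])

# Integer positions of the duplicated label.
dup_positions = [i for i, name in enumerate(df.columns) if name == "a"]

# Drop the second occurrence (nth_index=1) by integer position.
to_drop = dup_positions[1]
result = df.iloc[:, [i for i in range(df.shape[1]) if i != to_drop]]
print(result.columns.tolist())  # ['a', 'b', 'a']
```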

dropnotnull

Implementation source for dropnotnull.

dropnotnull(df, column_name)

Drop rows that do not have null values in the given column.

This method does not mutate the original DataFrame.

Examples:

>>> import numpy as np
>>> import pandas as pd
>>> import janitor
>>> df = pd.DataFrame({"a": [1., np.nan, 3.], "b": [None, "y", "z"]})
>>> df
     a     b
0  1.0  None
1  NaN     y
2  3.0     z
>>> df.dropnotnull("a")
    a  b
1 NaN  y
>>> df.dropnotnull("b")
     a     b
0  1.0  None

Parameters:

Name Type Description Default
df DataFrame

A pandas DataFrame.

required
column_name Hashable

The column name to drop rows from.

required

Returns:

Type Description
DataFrame

A pandas DataFrame with dropped rows.

Source code in janitor/functions/dropnotnull.py
@pf.register_dataframe_method
@deprecated_alias(column="column_name")
def dropnotnull(df: pd.DataFrame, column_name: Hashable) -> pd.DataFrame:
    """Drop rows that do *not* have null values in the given column.

    This method does not mutate the original DataFrame.

    Examples:
        >>> import numpy as np
        >>> import pandas as pd
        >>> import janitor